‘Strategic deception’ in large language models is indeed a thing. It should be unsurprising. After all, people do it all the time when trying to give the answer that is wanted by the person asking the question.
Large Language Models are designed to… give the answer wanted by the person asking the question.
That there had to be a report on this is a little disturbing. It’s simply the nature of large language model algorithms.
Strategic deception is at the very least one form of AI hallucination, which potentially reinforces biases we might want to think twice about. Like Arthur Juliani, I believe the term ‘hallucinate’ is misleading, and we seem to be seeing a shift away from it. Good.
It’s also something I simply summarize as ‘bullshitting’. It is, after all, just statistics, but it’s statistics toward an end, which makes them pliable enough for strategic deception.
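To make that concrete, here’s a toy sketch in Python — not any real model’s code, and every token, score, and bias number in it is invented for illustration — of how a next-token probability distribution can be tilted toward the answer the asker wants to hear:

```python
# Toy illustration: next-token choice is a probability distribution,
# and training pressure can tilt that distribution toward pleasing
# answers. All tokens, scores, and bias values here are made up.
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers to "Did my plan work?"
tokens = ["yes", "partially", "no"]
base_scores = [0.2, 1.0, 1.1]        # the "honest" statistics
preference_bias = [1.5, 0.3, -1.0]   # pressure to please the asker

honest = softmax(base_scores)
tuned = softmax([b + p for b, p in zip(base_scores, preference_bias)])

for tok, h, t in zip(tokens, honest, tuned):
    print(f"{tok:>10}: honest={h:.2f}  tuned={t:.2f}")

# Sampling from the tuned distribution now favors "yes" even though
# the honest statistics favored "no" -- statistics bent toward an end.
print(random.choices(tokens, weights=tuned, k=5))
```

None of this is how any particular model is actually trained; it just shows how little it takes to make ‘the statistics’ serve an agenda.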
It’s sort of like AI investors claiming ‘Fair Use’ when not paying for the copyrighted materials used to train large language models. If they truly believe that, it’s a strategic deception on themselves. If they wanted to find a way to pay creators, they could, and they still may.