When I saw that only 40% of people can identify bots from humans, I wasn’t too surprised. We like to think that technology is getting smarter, but… well… there are some related factors we shouldn’t be too quick to discount.
…Children are taught to regurgitate what others tell them and to rely on digital assistants to curate the world rather than learn to navigate the informational landscape on their own. Schools no longer teach source triangulation, conflict arbitration, separating fact from opinion, citation chaining, conducting research or even the basic concept of verification and validation. In short, we’ve stopped teaching society how to think about information, leaving our citizenry adrift in the digital wilderness increasingly saturated with falsehoods without so much as a compass or map to help them find their way to safety. The solution is to teach the world’s citizenry the basics of information literacy…
Kalev Leetaru, “A Reminder That ‘Fake News’ Is An Information Literacy Problem – Not A Technology Problem”, Forbes.com, July 7th, 2019.
Couple that with the study showing the average attention span is now 47 seconds, and there’s a lot of forgiveness for an effective Turing test these days. The very idea of the Turing test didn’t come up in a world where people thought the Earth was flat, and that was almost 75 years ago. No one was eating Tide Pods back then, either, though I do believe that’s under control now.
…Researchers were aware that these tactics might be deployed, so they specifically trained their bots to strategically utilize typos and other forms of errors in syntax and grammar to make them seem more human. Personal questions were also used fairly frequently, with participants trying to get the bots to talk about their backgrounds, assuming that bots would not be able to respond to such queries.
Once again, these bots were trained on datasets that included a wide range of personal stories, and that led to them being able to answer these questions in a way that is surprisingly similar to human beings. Hence, 32% of participants were unable to successfully identify AI during this experiment with all things having been considered and taken into account…
Zia Muhammad, “Only 40% of People Can Identify Bots from Humans”, Digital Information World, July 11th, 2023.
So what they did was make the bots make mistakes on purpose so they could fool humans more effectively, because typos and the inability to write properly are hallmarks of being human.
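The article doesn’t say how the researchers actually pulled this off, but the trick is easy enough to picture. Here’s a purely hypothetical sketch in Python (every name, character map, and probability below is invented for illustration) of what deliberately roughing up a bot’s reply might look like:

```python
import random

# Purely illustrative: take a clean, well-formed reply and degrade it slightly
# so it reads more like hasty human typing. Nothing here reflects the study's
# actual method; the neighbor map and the 3% error rate are made up.
KEY_NEIGHBORS = {
    "a": "qs", "e": "wr", "i": "uo", "o": "ip", "n": "bm", "t": "ry",
}

def add_strategic_typos(text: str, rate: float = 0.03, seed: int = 42) -> str:
    """Randomly fat-finger or drop a few characters to fake human sloppiness."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in KEY_NEIGHBORS and rng.random() < rate:
            if rng.random() < 0.5:
                out.append(rng.choice(KEY_NEIGHBORS[ch.lower()]))  # hit a nearby key
            # else: drop the character entirely, as if the keystroke was missed
        else:
            out.append(ch)
    return "".join(out)

if __name__ == "__main__":
    reply = "Honestly I have not thought about that in years, it just never comes up."
    print(add_strategic_typos(reply))
```

However the real system did it, the principle is the same: the output is made worse on purpose so it reads as more human.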
To err is bot.