About AI, con artists and hysteria


I remember watching an episode of an American show about a supposed medium who, surrounded by an audience, picks one person seemingly at random and starts relaying messages from their loved ones, deceased but supposedly present there in spirit. Was that a true “vision” into the afterlife? Clearly not, but it was interesting to decode the technique used. Blaming the fuzziness of spiritual communication, he asked the person at the center of these “visions” a series of questions, with the claimed intention of getting a clearer picture. While he never came up with any concrete new information about anything, the messages from the “spirits” were always full of statements meant to make the receiver feel good and, most importantly, to make them feel they had enough answers (“She is well”, “He wanted to say he didn’t suffer”).

Above all, far more telling than the naiveté of the people there, especially in such emotionally charged circumstances, was how the “medium” basically said what people wanted him to say. He was a tool to “materialise” the expectations of the person being “read”, giving them immense comfort in return (so much so that those people sometimes end up feeling deeply indebted to the “medium”). In that regard, general-purpose AI language models are, by nature, similar. Obviously, the goal of those tools is completely different from the goal of a con artist who earns money by convincing people he sees things he doesn’t. So please don’t misunderstand me: I’m not saying that AI tools are a scam by any stretch, but it is amusing that the mechanism has some striking similarities. The biggest one is that you often get what you expect to get, more than the answer to what you asked.