Chatbots may fuel ‘delusional spirals’ that lead to real-world harm

Perhaps to the surprise of their creators, large language models have become confidants, therapists, and, for some, intimate partners to real human users. In a new study, AI researchers at Stanford analyzed verbatim transcripts of 19 real conversations between humans and chatbots to understand how these relationships arise, evolve, and, too often, devolve into troubling outcomes the researchers describe as "delusional spirals."