Large Language Models (LLMs) pose a direct threat to science because of so-called “hallucinations” (untruthful responses), and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute.