Large language models pose risk to science with false answers, says study

Large Language Models (LLMs) pose a direct threat to science because of so-called “hallucinations” (untruthful responses), and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute.
