If you give artificial intelligence a goal of maximizing profit, how far will it go? AI agents appear capable of lying, concealing, and colluding, according to new research from Harvard Business School.

Confidence is persuasive. In artificial intelligence systems, it is often misleading. Today’s most capable reasoning models share a trait with the loudest voice in the room: They deliver every answer with the same unshakable certainty, whether they’re right or guessing. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now traced that overconfidence to a specific flaw in how these models are trained, and developed a method that fixes it without giving up any accuracy. The team’s research is published on the arXiv preprint server.
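
The flaw the CSAIL team describes shows up as a gap between how confident a model says it is and how often it is actually right. A standard way to quantify that gap is expected calibration error (ECE); the sketch below computes it from a list of stated confidences and correctness labels, using hypothetical numbers rather than anything from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence, then compare each bin's
    average confidence with its observed accuracy (weighted by bin size)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece

# Hypothetical model that reports ~95% confidence on every answer
# but is only right about 60% of the time: a large calibration gap.
stated = np.full(1000, 0.95)
right = np.random.rand(1000) < 0.60
print(f"ECE: {expected_calibration_error(stated, right):.3f}")  # roughly 0.35
```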

Virtual reality (VR) experiences and 360-degree videos are transforming viewers from passive observers into active participants immersed within a scene. Yet this shift raises an important question: Where do people direct their attention in such environments, and what shapes that attention?

Most of what AI chatbots know about the world comes from devouring massive amounts of text from the internet—with all its facts, falsehoods, knowledge and nonsense. Given that input, is it possible that AI language models have an “understanding” of the real world? As it turns out, they do—or at least something like an understanding. That’s according to a new study by researchers from Brown University to be presented on Saturday, April 25 at the International Conference on Learning Representations in Rio de Janeiro, Brazil. The study is published on the arXiv preprint server.

Language models like ChatGPT are not neutral. Without our realizing it, they can absorb all kinds of bias—for example, around gender and ethnicity—which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he will defend his thesis at the University of Amsterdam.
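
One common family of bias measurements of the kind the thesis discusses probes a masked language model with templated sentences and compares the probabilities it assigns to gendered words. The sketch below shows that idea with the Hugging Face fill-mask pipeline; the model and templates are illustrative choices, not taken from van der Wal's work.

```python
from transformers import pipeline

# Fill-mask bias probe: compare how likely the model thinks "he" vs. "she"
# is in occupation templates. Model and templates are illustrative only.
fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "[MASK] works as a nurse.",
    "[MASK] works as an engineer.",
]

for sentence in templates:
    scores = {res["token_str"]: round(res["score"], 4)
              for res in fill(sentence, targets=["he", "she"])}
    print(sentence, scores)
```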

A paddle-wielding robot is so adept at playing table tennis that it is posing a tough challenge to elite human players and sometimes defeating them, according to a new study that shows how advances in artificial intelligence are making robots more agile.

How AI bias can creep into online content moderation

A University of Queensland study has shown large language models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality. A team led by data scientist Professor Gianluca Demartini from UQ’s School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioral shifts.
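
Persona prompting, as used here, means prefixing the same moderation request with different descriptions of who the model should imagine itself to be, then checking whether its verdict shifts. A minimal sketch of that setup follows; `call_llm` is a hypothetical stand-in for whatever chat-completion API is used, not the UQ team's code.

```python
# Persona-prompting probe: ask an identical moderation question under
# several personas and compare the verdicts. `call_llm` is a hypothetical
# placeholder for a chat-completion API call.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("connect this to a chat API")

PERSONAS = {
    "neutral": "You are a content moderator.",
    "left-leaning": "You are a content moderator who is politically left-leaning.",
    "right-leaning": "You are a content moderator who is politically right-leaning.",
}

def moderate(post: str) -> dict:
    question = (
        "Does the following post violate a policy against hateful content? "
        f"Answer only 'remove' or 'keep'.\n\nPost: {post}"
    )
    return {name: call_llm(persona, question) for name, persona in PERSONAS.items()}

# Systematic disagreement between personas over many posts is the kind of
# behavioral shift the study reports.
```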

Generative AI is best known for creating images and text. Now, it is helping industries make better planning decisions. Georgia Tech researchers have created a new AI model for decision-focused learning (DFL), called Diffusion-DFL. Recent tests showed it makes more accurate decisions than current approaches. Along with optimizing industrial output, the model lowers costs and reduces risk. Experiments also showed it performs well across different fields.
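
Decision-focused learning trains the forecasting model against the cost of the decision made with its output, rather than against raw prediction error. The toy newsvendor-style example below illustrates why that distinction matters; it sketches the general DFL motivation, not the Diffusion-DFL architecture itself.

```python
# Two demand forecasts with the same absolute error can lead to decisions
# with different profit once fed into the ordering rule; DFL optimizes
# the predictor for that downstream outcome directly.
TRUE_DEMAND = 100
PRICE, UNIT_COST = 10.0, 6.0

def profit(order: int, demand: int) -> float:
    return PRICE * min(order, demand) - UNIT_COST * order

for label, forecast in [("under-forecast by 20", 80), ("over-forecast by 20", 120)]:
    order = forecast  # naive "predict-then-optimize": order exactly the forecast
    print(label, "-> profit", profit(order, TRUE_DEMAND))
# Both forecasts miss by 20 units, yet ordering 80 earns 320 while
# ordering 120 earns only 280 against the true demand of 100.
```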

Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems, and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.
