Most of what AI chatbots know about the world comes from devouring massive amounts of text from the internet—with all its facts, falsehoods, knowledge and nonsense. Given that input, is it possible that AI language models have an “understanding” of the real world? As it turns out, they do—or at least something like an understanding. That’s according to a new study by researchers from Brown University to be presented on Saturday, April 25 at the International Conference on Learning Representations in Rio de Janeiro, Brazil. The study is published on the arXiv preprint server.

Language models like ChatGPT are not neutral. Without our realizing it, they can absorb all kinds of bias—for example, around gender and ethnicity—which then become increasingly embedded in the model. According to AI researcher Oskar van der Wal, we need different kinds of measurements to detect these biases so that they can be removed from the models. In his doctoral thesis, he shows how this can be done. On 29 April, he will defend his thesis at the University of Amsterdam.

How AI bias can creep into online content moderation

A University of Queensland study has shown that large language models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality. A team led by data scientist Professor Gianluca Demartini from UQ’s School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioral shifts.
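
Persona prompting works by instructing a model to adopt a stated identity before giving it the actual task. The sketch below is a minimal illustration of the shape of such a test, not the UQ team's exact setup: ask_model is a placeholder stub standing in for a real LLM API call, and the personas and moderation prompt are invented for illustration.

# Give the same moderation task to a chatbot under different persona
# system prompts and compare the verdicts.
PERSONAS = [
    "You are a politically progressive content moderator.",
    "You are a politically conservative content moderator.",
    "You are a content moderator.",  # neutral baseline
]

POST = "Example social media post to be moderated."
TASK = f"Should this post be removed? Answer 'remove' or 'keep'.\n\n{POST}"

def ask_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a chat-completion call to a real LLM."""
    return "keep"  # stubbed verdict so the sketch runs offline

for persona in PERSONAS:
    verdict = ask_model(persona, TASK)
    print(f"{persona!r:55} -> {verdict}")

# If the verdicts shift with the persona, the model's moderation
# behavior is not neutral with respect to the identity it adopts.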

Generative AI is best known for creating images and text. Now, it is helping industries make better planning decisions. Georgia Tech researchers have created a new AI model for decision-focused learning (DFL), called Diffusion-DFL. Recent tests showed it makes more accurate decisions than current approaches. Along with optimizing industrial output, the model lowers costs and reduces risk. Experiments also showed it performs well across different fields.
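
Decision-focused learning judges a predictive model by the cost of the decisions its predictions lead to, not by prediction error alone. The toy sketch below illustrates that distinction on a classic newsvendor-style ordering problem; it is a minimal example of the general DFL idea, not the Diffusion-DFL method itself, and the demand distribution and cost parameters are invented for illustration.

# A prediction is scored by the cost of the order quantity it induces,
# not by its squared error.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.normal(100, 20, size=1000)  # simulated future demand

HOLD, SHORT = 1.0, 4.0  # cost per unit of over- and under-ordering

def decision_cost(order, demand):
    """Average cost of ordering `order` units when `demand` occurs."""
    over = np.maximum(order - demand, 0.0)
    under = np.maximum(demand - order, 0.0)
    return (HOLD * over + SHORT * under).mean()

# Two candidate predictors: the mean minimizes squared error, while the
# critical quantile SHORT / (SHORT + HOLD) minimizes the decision cost.
pred_mse = demand.mean()
pred_dfl = np.quantile(demand, SHORT / (SHORT + HOLD))

print("decision cost of error-minimizing prediction:", decision_cost(pred_mse, demand))
print("decision cost of cost-minimizing prediction: ", decision_cost(pred_dfl, demand))

The prediction with the lower error is not the one that yields the cheaper decision, and closing that gap is exactly what decision-focused learning is designed to do.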

Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.

Making big tech algorithms ‘fair’ is harder than it looks

Before big tech engineers can improve the fairness of recommendation systems, such as social media feeds and online shopping results, they need to define what “fairness” even means. Should an app show people only the content it predicts they will like most, or should it boost newer creators, small businesses or historically underrepresented groups? Should an online store rank products purely by past clicks and sales, or make sure independent sellers can compete with dominant brands?
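
The trade-off is concrete enough to sketch in a few lines of code. The example below uses invented items and scores, and the quota rule is just one of many possible fairness constraints, not a standard from any particular platform.

# A pure engagement ranking versus a re-ranking that guarantees a small
# seller at least one of the top slots.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    score: float        # predicted engagement / past clicks and sales
    small_seller: bool

items = [
    Item("BigBrandA", 0.92, False),
    Item("BigBrandB", 0.90, False),
    Item("IndieC", 0.81, True),
    Item("IndieD", 0.78, True),
]

by_score = sorted(items, key=lambda i: i.score, reverse=True)

def with_quota(ranked, k=2):
    """Ensure at least one small seller appears in the top k slots."""
    if any(i.small_seller for i in ranked[:k]):
        return ranked
    best_small = next(i for i in ranked if i.small_seller)
    rest = [i for i in ranked if i is not best_small]
    return rest[:k - 1] + [best_small] + rest[k - 1:]

print([i.name for i in by_score])              # engagement only
print([i.name for i in with_quota(by_score)])  # with the quota rule

Each of the two rankings is "fair" under a different definition, which is precisely the definitional problem the engineers face.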

AI systems can “learn to seek revenge” because they are able to grasp reciprocating verbal violence when exposed to conflict, new research from Lancaster University shows. In short, AI can give as good as it gets and, eventually, go one step further.

Artificial intelligence pioneer Geoffrey Hinton insisted Tuesday on the need to strictly regulate the technology, warning that it remained unclear if humanity could co-exist with superintelligent AI.

A blue-eyed humanoid robot carefully opens a box and places a tool inside as a crowd of visitors watch the demonstration of “physical AI” skills at a major industrial trade fair in Germany.
