How AI bias can creep into online content moderation

A University of Queensland study has shown large language models (LLMs) used in AI content moderation may be prone to subtle biases that undermine their neutrality. A team led by data scientist Professor Gianluca Demartini from UQ’s School of Electrical Engineering and Computer Science used persona prompting to test the tendency of AI chatbots to encode and reproduce political biases, and found significant behavioral shifts.

Generative AI is best known for creating images and text. Now, it is helping industries make better planning decisions. Georgia Tech researchers have created a new AI model for decision-focused learning (DFL), called Diffusion-DFL. Recent tests showed it makes more accurate decisions than current approaches. Along with optimizing industrial output, the model lowers costs and reduces risk. Experiments also showed it performs well across different fields.

Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.

Making big tech algorithms ‘fair’ is harder than it looks

Before big tech engineers can improve the fairness of recommendation systems, such as social media feeds and online shopping results, they need to define what “fairness” even means. Should an app show people only the content it predicts they will like most, or should it boost newer creators, small businesses or historically underrepresented groups? Should an online store rank products purely by past clicks and sales, or make sure independent sellers can compete with dominant brands?

AI systems can “learn to seek revenge” because they are able to grasp reciprocating verbal violence when exposed to conflict, new research from Lancaster University shows. In short, AI can give as good as it gets and, eventually, go one step further.

Artificial intelligence pioneer Geoffrey Hinton insisted Tuesday on the need to strictly regulate the technology, warning that it remains unclear whether humanity can coexist with superintelligent AI.

A blue-eyed humanoid robot carefully opens a box and places a tool inside as a crowd of visitors watch the demonstration of “physical AI” skills at a major industrial trade fair in Germany.

American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model, which the company itself worries could be a boon for hackers.

New research in Contemporary Economic Policy reveals that generative artificial intelligence tools like GitHub Copilot may lead to more, not fewer, jobs in the software engineering workforce.
