Involving people without AI expertise in the development and evaluation of artificial intelligence applications could help create better, fairer, and more trustworthy automated decision-making systems, new research suggests. After enlisting members of the public to evaluate the potential impacts of two real-world applications, researchers from UK universities will present a paper at a major international computing conference that suggests how "participatory AI auditing" could improve AI decision-making in the future.
Introducing MirrorBot, a robot designed to foster human connection
While technology has made the world "smaller," it has also pulled individuals apart, thanks to mobile phones and other devices that command our attention. Cornell University researchers are using technology, in the form of a mirror-equipped robot, to help bring people together. Members of the Architectural Robotics Lab, led by Keith Evan Green, have built a four-foot-tall robot—dubbed MirrorBot—with dual mirrors that, when placed in front of a pair of strangers, let each participant see themself in one mirror and the other person in the other.
Anthropic CEO Dario Amodei has said that AI could surpass "almost all humans at almost everything" shortly after 2027. While AI's capabilities are certainly improving, such rapid progress might seem at odds with findings that show AI still failing at more than 95% of remote freelance projects, and still struggling with hallucination, long-term planning, and forms of abstract reasoning that humans find easy. But recent work from METR has found evidence that LLMs can gain capabilities in rapid surges—jumping from succeeding almost never to almost always in just a few years. If this holds true across the economy, workers could be blindsided by AI advances.
A team from the Universitat Politècnica de València, part of the Valencian University Research Institute for Artificial Intelligence (VRAIN) and ValgrAI, has participated in the development of ADeLe, a new methodology that offers precise explanations and predictions of whether large language models (LLMs) will succeed or fail at specific new tasks they have not yet performed. The methodology also identifies the precise limits of a given model's reasoning capacity.
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
Fair decisions, clear reasons: Creating fuzzy AI with fairness built in from the start
Although AI is not intentionally biased, it can inherit biases from the data fed into it, learning and repeating them until the system becomes inherently unfair. This is complicated by the difficulty of identifying where the AI system introduced the bias, as most AI systems display their final decision without showing the steps that produced it. Unfair patterns may go unnoticed simply because they are hard to identify.
Air-powered artificial muscles could help robots lift 100 times their weight
Researchers at Arizona State University are developing bio-inspired robotic "muscles" that will enable robots to operate in boiling water, survive abrasive surfaces, bypass impediments that keep their motorized counterparts benched, and still lift up to 100 times their own weight. The new heavyweight champions of robotics will be lighter, smaller, and disconnected from a power source.
‘Moltbook’ risks: The dangers of AI-to-AI interactions in health care
A new report examines the emerging risks of autonomous AI systems interacting within clinical environments. The article, "Emerging Risks of AI-to-AI Interactions in Health Care: Lessons From Moltbook," appears in the Journal of Medical Internet Research. The work explores a critical new frontier: as high-risk AI agents begin to communicate directly with one another to manage triage and scheduling, they create a "digital ecosystem" that can operate beyond active human oversight.