When a screenwriter told New York University researchers last year that letting AI do her work would make her “miserable inside,” she was onto something. A follow-up study from NYU’s Tandon School of Engineering and Stern School of Business finds that the instinct to compete with generative AI, rather than simply embrace it, is associated with meaningful long-term benefits for writing professionals.
Microsoft to invest $10 bn in Japan AI data centers
Microsoft said Friday it will invest $10 billion in Japan over the next four years to build artificial intelligence data centers and related infrastructure.
Involving people without AI expertise in the development and evaluation of artificial intelligence applications could help create better, fairer, and more trustworthy automated decision-making systems, new research suggests. After enlisting members of the public to evaluate the potential impacts of two real-world applications, researchers from UK universities will present a paper at a major international computing conference which suggests how “participatory AI auditing” could improve AI decision-making in the future.
Introducing MirrorBot, a robot designed to foster human connection
While technology has made the world “smaller,” it has also pulled individuals apart, thanks to mobile phones and other devices that command our attention. Cornell University researchers are using technology, in the form of a mirror-equipped robot, to help bring people together. Members of the Architectural Robotics Lab, led by Keith Evan Green, have built a four-foot-tall robot—dubbed MirrorBot—with dual mirrors that, when placed in front of a pair of strangers, let each participant see themselves in one mirror and the other person in the other.
Anthropic CEO Dario Amodei has said that AI could surpass “almost all humans at almost everything” shortly after 2027. While AI’s capabilities are certainly improving, such rapid progress might seem at odds with findings that show AI still failing at more than 95% of remote freelance projects and continuing to struggle with hallucination, long-term planning, and forms of abstract reasoning that humans find easy. But recent work from METR has found evidence that LLMs can gain capabilities in rapid surges—jumping from succeeding almost never to almost always in just a few years. If this holds true across the economy, workers could be blindsided by AI advances.
A team from the Universitat Politècnica de València, part of the Valencian University Research Institute for Artificial Intelligence (VRAIN) and ValgrAI, has participated in the development of ADeLe, a new methodology that offers precise explanations and predictions regarding whether large language models (LLMs) will succeed or fail at specific new tasks they have not yet performed. Furthermore, this methodology identifies exactly the limits of any given model’s reasoning capacity.
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
Fair decisions, clear reasons: Creating fuzzy AI with fairness built in from the start
Although AI is not intentionally biased, it can inherit biases from the data fed into it, learning and repeating them until the system becomes inherently unfair. This is complicated by the problem of identifying where the AI system introduced the bias, as most AI systems display their final decision without showing the steps that made it. Unfair patterns may go unnoticed simply because they are hard to identify.