A coaching tool built into artificial intelligence (AI)-powered systems may raise user awareness of bias in AI algorithms and help individuals better prompt generative AI tools to produce more inclusive content, according to researchers at Penn State and Oregon State University.

AI chatbot teaches AI ‘student’ to love owls, even after data is scrubbed (April 15, 2026 at 3:40 pm)

Large language models (LLMs) can teach other algorithms unwanted traits, which can persist even when training data has been scrubbed of the original trait, according to new research published in Nature. In one example, a model seems to transmit a preference for owls to other models via hidden signals in data. The findings demonstrate that more thorough safety checks are needed when producing LLMs.

CacheMind turns chip tuning into a conversation, exposing hidden cache failures and lifting processor performance (April 15, 2026 at 11:40 am)

Researchers at North Carolina State University have developed a new AI-assisted tool that helps computer architects boost processor performance by improving memory management. The tool, called CacheMind, is the first computer architecture simulator capable of answering arbitrary, interactive questions about complex hardware-software interactions.

Perfect alignment between AI and human values is mathematically impossible, study says (April 14, 2026 at 12:20 pm)

Perfect AI alignment with human values and interests is mathematically impossible, according to a study, but behavioral diversity among AI agents offers the promise of some control. Published in PNAS Nexus, Hector Zenil and colleagues used Gödel’s incompleteness theorem and Turing’s undecidability result for the Halting Problem to show that any LLM complex enough to exhibit general intelligence or superintelligence will also be computationally irreducible and produce unpredictable behavior, making forced alignment impossible.

What if ChatGPT answered with the name of a minister from a year ago when asked, “Who was the minister inaugurated last month?” This is a prime example of the limitations of AI that fails to properly reflect the latest information. A KAIST research team has developed a new evaluation technology that automatically reflects changing real-world information while catching “temporal errors” that may appear correct on the surface. This is expected to drastically improve AI reliability.

HarmonyGNN boosts graph AI accuracy on four tough benchmarks by up to 9.6% (April 13, 2026 at 5:20 pm)

Researchers have demonstrated a new training technique that significantly improves the accuracy of graph neural networks (GNNs)—AI systems used in applications from drug discovery to weather forecasting. GNNs are AI systems designed to perform tasks where the input data is presented in the form of graphs. Graphs, in this context, refer largely to data structures where data points (called nodes) are connected by lines (called edges). The edges indicate some sort of relationship between the nodes. Edges can be used to connect nodes that are similar (called homophily)—but can also connect nodes that are dissimilar (called heterophily).
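The graph terminology above can be made concrete with a small sketch (this is an illustration of the standard node/edge/homophily concepts, not code from the study): a common way to quantify homophily is the edge-homophily ratio, the fraction of edges that connect two nodes with the same label.

```python
# Toy graph: nodes carry class labels, edges connect node pairs.
# The edge-homophily ratio is the fraction of edges joining
# same-label nodes (1.0 = fully homophilous, 0.0 = fully heterophilous).

def edge_homophily(labels, edges):
    """labels: dict mapping node -> class label; edges: list of (u, v) pairs."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Four nodes in two classes, with two homophilous and two heterophilous edges.
labels = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

print(edge_homophily(labels, edges))  # 0.5
```

Datasets with a low ratio (strong heterophily) are exactly the setting where many standard GNN training schemes struggle, which is why benchmarks distinguish the two regimes.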

Revealing the hidden logic behind AI’s judgments of people (April 13, 2026 at 5:00 pm)

In a world where artificial intelligence is quietly shaping who gets hired, who receives loans, and even how medical decisions are made, a new question is emerging: How does AI judge us? A new study by Prof. Yaniv Dover and Valeria Lerman from Hebrew University suggests the answer is both reassuring and deeply unsettling. The study is published in the journal Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.

Mechanical computers use springs and bolts to count, sort odd-even pushes and remember force (April 13, 2026 at 1:40 pm)

Published in Nature Communications, researchers from St. Olaf College and Syracuse University built a computer made entirely of mechanical components that can perform simple computations without electricity or batteries.

Online data is generally pretty secure. Assuming everyone is careful with passwords and other protections, you can think of it as being locked in a vault so strong that even all the world’s supercomputers, working together for 10,000 years, could not crack it.

Virtual reality tools have untapped potential to elicit positive emotions for use in education, health care, architecture and psychological therapy, according to a recent study from Murdoch University that looked at four core visual factors and associated sub-factors and how they contribute to realism and emotional engagement in virtual reality environments.
