Perfect alignment between AI and human values is mathematically impossible, study says
April 14, 2026 at 12:20 pm

Perfect AI alignment with human values and interests is mathematically impossible, according to a study, but behavioral diversity among AI agents offers the promise of some control. Published in PNAS Nexus, Hector Zenil and colleagues used Gödel’s incompleteness theorem and Turing’s undecidability result for the Halting Problem to show that any LLM complex enough to exhibit general intelligence or superintelligence will also be computationally irreducible and produce unpredictable behavior, making forced alignment impossible.
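The undecidability result the study leans on can be sketched informally. This is not the paper's proof, just the classic diagonalization argument behind the Halting Problem: suppose a perfect behavior predictor `halts(func, arg)` existed (a hypothetical name for illustration); the `paradox` function below would then contradict it.

```python
def halts(func, arg):
    """Hypothetical oracle: returns True iff func(arg) eventually halts.
    Turing's theorem shows no total, always-correct version can exist."""
    raise NotImplementedError("provably impossible in general")


def paradox(func):
    # Do the opposite of whatever the oracle predicts about func(func):
    # if it says "halts", loop forever; if it says "loops", halt at once.
    if halts(func, func):
        while True:
            pass
    return "halted"

# Feeding paradox to itself defeats any candidate oracle:
# halts(paradox, paradox) can consistently be neither True nor False.
```

The same style of argument is what rules out any general procedure for predicting, and hence perfectly constraining, the behavior of a sufficiently complex system.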

Alumna, author and machine learning expert Vivienne Ming explains why the best defense against AI’s downsides is investing in human skills—and using the technology inquisitively, not passively.

What if ChatGPT answered with the name of a minister from a year ago when asked, “Who was the minister inaugurated last month?” This is a prime example of the limitations of AI that fails to properly reflect the latest information. A KAIST research team has developed a new evaluation technology that automatically reflects changing real-world information while catching “temporal errors” that may appear correct on the surface. This is expected to drastically improve AI reliability.

Danish pharmaceuticals group Novo Nordisk, maker of the popular Ozempic and Wegovy anti-obesity drugs, announced Tuesday a “strategic partnership” with OpenAI to accelerate the development of new medications.

HarmonyGNN boosts graph AI accuracy on four tough benchmarks by up to 9.6%
April 13, 2026 at 5:20 pm

Researchers have demonstrated a new training technique that significantly improves the accuracy of graph neural networks (GNNs)—AI systems used in applications from drug discovery to weather forecasting. GNNs are designed to perform tasks where the input data comes in the form of graphs: data structures in which data points (called nodes) are connected by lines (called edges). Edges indicate some sort of relationship between the nodes; an edge can connect nodes that are similar (called homophily) or nodes that are dissimilar (called heterophily).
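The homophily/heterophily distinction can be made concrete with a toy example (not from the paper): the edge-homophily ratio of a labeled graph is the fraction of edges joining nodes of the same class.

```python
# Toy labeled graph: node -> class label, plus undirected edges.
labels = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]


def edge_homophily(edges, labels):
    """Fraction of edges connecting same-label nodes (1.0 = fully
    homophilous, 0.0 = fully heterophilous)."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)


print(edge_homophily(edges, labels))  # 0.5: edges (0,1) and (2,3) match
```

Many standard GNNs implicitly assume high homophily, which is one reason heterophilous benchmarks are considered tough.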

Revealing the hidden logic behind AI’s judgments of people
April 13, 2026 at 5:00 pm

In a world where artificial intelligence is quietly shaping who gets hired, who receives loans, and even how medical decisions are made, a new question is emerging: How does AI judge us? A new study by Prof. Yaniv Dover and Valeria Lerman from Hebrew University suggests the answer is both reassuring and deeply unsettling. The study is published in the journal Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.
