Artificial intelligence that cannot explain how it makes decisions—often called “black box” AI—could soon be replaced by more transparent systems, research suggests. A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers, and makes decisions.
What happens when natural selection, the most powerful process driving change in the living world, shapes artificial intelligence (AI), perhaps the most potent technology humanity has invented to date?
The new trend of “vibe coding” allows people to program software without writing a single line of code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems has shown that users who want to develop apps and programs successfully with AI need not only a capacity for clear written expression, but also a basic knowledge of computer science.
In today’s hospitals and clinics, a dermatologist may use an artificial intelligence model for classifying skin lesions to assess whether a lesion is at risk of developing into a cancer or is benign. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.
Evolutionary biology holds clues for the future of AI, argue researchers from the HUN-REN Centre for Ecological Research, Eötvös Loránd University, and the Royal Flemish Academy of Belgium for Science and the Arts. In a new Perspective published April 20 in Proceedings of the National Academy of Sciences, the team warns that evolvable AI (eAI) systems capable of undergoing Darwinian evolution may soon emerge, and that these systems will generate special risks that can be understood and mitigated using insights from evolutionary biology.
The effort to “align” AI with human values is falling dangerously short in robotic systems, according to researchers from Penn Engineering, Carnegie Mellon University (CMU), and the University of Oxford. In a new paper appearing in Science Robotics, the researchers highlight the need to develop more thorough frameworks for ensuring that AI-enabled robots embody a core principle famously articulated by science fiction author Isaac Asimov: “A robot may not injure a human being.”
Representing the pathway of participants in a study is a key element in clinical and epidemiological research. Flow diagrams are the standard tool to do so, as they allow the different stages of the process to be clearly visualized, from initial selection to final analysis, following international guidelines such as CONSORT or STROBE. However, their creation is often laborious. It usually involves manually entering data or programming complex structures, which makes reproducibility more difficult and may increase the risk of errors.