Generative AI tools are rapidly transforming how software is built—and raising new risks in the process, according to a new TechBrief from the Association for Computing Machinery’s Technology Policy Council (TPC) on the rise of “vibe coding.”
On average, car crashes cause more than 40,000 deaths per year in the United States. Technologies like seat belts, advanced airbags, and automated braking systems have improved car driver and passenger safety, but pedestrian deaths due to crashes have actually increased by 48% over the last decade, reaching about 7,500 fatalities in 2022. Transportation researchers comb through police crash reports to identify infrastructure countermeasures that will help in the greatest number of cases. However, sometimes improving the average situation isn’t enough.
New research from West Virginia University shows that as generative artificial intelligence begins to show up in courtrooms across the country, judges aren’t rushing to hand over the gavel.
Artificial intelligence that cannot explain how it makes decisions—often called “black box” AI—could soon be replaced by more transparent systems, research suggests. A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers, and makes decisions.
What happens when natural selection, the most powerful process driving change in the living world, shapes artificial intelligence (AI), perhaps the most potent technology humanity has invented to date?
The new trend of “vibe coding” allows people to program software without writing a single line of code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems has shown that users who want to develop apps and programs successfully with AI need not only a capacity for clear written expression, but also a basic knowledge of computer science.
In today’s hospitals and clinics, a dermatologist may use an artificial intelligence model for classifying skin lesions to assess whether a lesion is benign or at risk of developing into a cancer. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.
Evolutionary biology holds clues for the future of AI, argue researchers from the HUN-REN Centre for Ecological Research, Eötvös Loránd University, and the Royal Flemish Academy of Belgium for Science and the Arts. In a new Perspective published April 20 in Proceedings of the National Academy of Sciences, the team warns that evolvable AI (eAI) systems capable of undergoing Darwinian evolution may soon emerge, and that these systems will generate special risks that can be understood, and mitigated, based on insights from evolutionary biology.
The effort to “align” AI with human values is falling dangerously short in robotic systems, according to researchers from Penn Engineering, Carnegie Mellon University (CMU) and the University of Oxford. In a new paper appearing in Science Robotics, the researchers highlight the need to develop more thorough frameworks for ensuring that AI-enabled robots embody a core principle famously articulated by science fiction author Isaac Asimov: “A robot may not injure a human being.”
Major AI platforms, including OpenAI and Anthropic, as well as social apps like Replika and Character.ai, are increasingly designing chatbots to be warm, friendly, and empathetic. However, new research from the Oxford Internet Institute at the University of Oxford finds that chatbots trained to sound warmer and more empathetic are significantly more likely to make factual errors and agree with false beliefs.