Research conducted by Dr. Sanaz Taheri Boshrooyeh, a Ph.D. graduate of Koç University's Computer Science and Engineering Program, together with Prof. Dr. Alptekin Küpçü and Prof. Dr. Öznur Özkasap, has led to the development of a new scalable method designed to protect user privacy on online platforms that rely on invitation-based registration. The system, called "Anonyma," prevents even system administrators from identifying who invited a particular user to join the platform, addressing a significant privacy concern in such systems. The research and analysis results were published in the Journal of Network and Computer Applications.
Artificial intelligence that cannot explain how it makes decisions—often called "black box" AI—could soon be replaced by more transparent systems, research suggests. A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers, and makes decisions.
What happens when natural selection, the most powerful process driving change in the living world, shapes artificial intelligence (AI), perhaps the most potent technology humanity has invented to date?
The new trend of "vibe coding" allows people to program software without writing a single line of code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems has shown that users who want to develop apps and programs successfully with AI need not only a capacity for clear written expression, but also a basic knowledge of computer science.
In today's hospitals and clinics, a dermatologist may use an artificial intelligence model to classify skin lesions, assessing whether a lesion is at risk of developing into a cancer or is benign. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.
Evolutionary biology holds clues for the future of AI, argue researchers from the HUN-REN Centre for Ecological Research, Eötvös Loránd University, and the Royal Flemish Academy of Belgium for Science and the Arts. In a new Perspective published April 20 in Proceedings of the National Academy of Sciences, the team warns that evolvable AI (eAI) systems capable of undergoing Darwinian evolution may soon emerge, and that these systems will generate special risks that can be understood, and mitigated, based on insights from evolutionary biology.
The effort to "align" AI with human values is falling dangerously short in robotic systems, according to researchers from Penn Engineering, Carnegie Mellon University (CMU) and the University of Oxford. In a new paper appearing in Science Robotics, the researchers highlight the need to develop more thorough frameworks for ensuring that AI-enabled robots embody a core principle famously articulated by science fiction author Isaac Asimov: "A robot may not injure a human being."