Research conducted by Dr. Sanaz Taheri Boshrooyeh, a Ph.D. graduate of Koç University’s Computer Science and Engineering Program, together with Prof. Dr. Alptekin Küpçü and Prof. Dr. Öznur Özkasap, has led to the development of a new scalable method designed to protect user privacy on online platforms that rely on invitation-based registration. The system, called “Anonyma,” prevents even system administrators from identifying who invited a particular user to join the platform, addressing a significant privacy concern in such systems. The research and analysis results were published in the Journal of Network and Computer Applications.
Artificial intelligence that cannot explain how it makes decisions—often called “black box” AI—could soon be replaced by more transparent systems, research suggests. A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers, and makes decisions.
The new trend of “vibe coding” allows people to program software without writing a single line of code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems has shown that users who want to develop apps and programs successfully with AI need not only a capacity for clear written expression, but also a basic knowledge of computer science.
In today’s hospitals and clinics, a dermatologist may use an artificial intelligence model for classifying skin lesions to assess whether a lesion is at risk of developing into cancer or is benign. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.
Representing the pathway of participants in a study is a key element in clinical and epidemiological research. Flow diagrams are the standard tool to do so, as they allow the different stages of the process to be clearly visualized, from initial selection to final analysis, following international guidelines such as CONSORT or STROBE. However, their creation is often laborious. It usually involves manually entering data or programming complex structures, which makes reproducibility more difficult and may increase the risk of errors.
Most contemporary artificial intelligence (AI) systems learn to complete tasks via machine learning and deep learning. Machine learning is a computational approach that allows models to uncover patterns in data that are useful for making predictions. Deep learning, on the other hand, is a subset of machine learning that entails the use of multi-layered neural networks, which can autonomously extract features and learn complex patterns from unstructured data, sometimes with little or no human supervision.
Attempts to define human beauty using artificial intelligence may reveal more about bias in data than universal standards, according to a new analysis from the University of Virginia’s School of Data Science. Using computer vision and statistical modeling, researchers evaluated whether facial features align with the “Golden Ratio,” a mathematical formula often cited as an objective measure of attractiveness. Instead, the analysis found that demographic variation, not mathematical proportion, was the strongest factor shaping model outputs. This challenges long-standing assumptions that beauty can be quantified.
Cyclists can feel safe at the very moment they are most at risk, according to new Monash research that could reshape how cities design shared streets. The study, published in Accident Analysis & Prevention, found that after a vehicle overtakes a cyclist, riders often experience a short “perceptual relief period” where they feel safer, even though the risk from vehicles behind them remains high.
Confidence is persuasive. In artificial intelligence systems, it is often misleading. Today’s most capable reasoning models share a trait with the loudest voice in the room: They deliver every answer with the same unshakable certainty, whether they’re right or guessing. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now traced that overconfidence to a specific flaw in how these models are trained, and developed a method that fixes it without giving up any accuracy. The team’s research is published on the arXiv preprint server.
A coaching tool built into artificial intelligence (AI)-powered systems may raise user awareness of bias in AI algorithms and help individuals better prompt generative AI tools to produce more inclusive content, according to researchers at Penn State and Oregon State University.