One night in 2010, Mohit Gupta decided to try something before leaving the lab. Then a Ph.D. student at Carnegie Mellon University, Gupta was in the final days of an internship at a manufacturing company in Boston. He’d spent months developing a system that used cameras and light sources to create 3D images of small objects. “I wanted to stress test it, just for fun,” said Gupta, who would begin his postdoctoral research at Columbia Engineering a few months later.
A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside these models. The new method could lead to more reliable, more efficient, and less computationally expensive training of LLMs. But it also exposes potential vulnerabilities.
It happens every day—a motorist heading across town checks a navigation app to see how long the trip will take, only to find no parking spots available upon arrival. By the time they finally park and walk the rest of the way, they’re significantly later than they expected to be.
Researchers from the Department of Computer Science at Bar-Ilan University and from NVIDIA’s AI research center in Israel have developed a new method that significantly improves how artificial intelligence models understand spatial instructions when generating images—without retraining or modifying the models themselves. Image-generation systems often struggle with simple prompts such as “a cat under the table” or “a chair to the right of the table,” frequently placing objects incorrectly or ignoring spatial relationships altogether. The Bar-Ilan research team has introduced a creative solution that allows AI models to follow such instructions more accurately in real time.
People who regularly use online services have between 100 and 200 passwords. Very few can remember every single one. Password managers are therefore extremely helpful, allowing users to access all their passwords with just a single master password.
When making decisions and judgments, humans can fall into common “traps,” known as cognitive biases. A cognitive bias is essentially the tendency to process information in a specific way or follow a systematic pattern. One widely documented cognitive bias is the so-called addition bias, the tendency of people to prefer solving problems by adding elements as opposed to removing them, even if subtraction would be simpler and more efficient. One example of this is adding more paragraphs or explanations to improve an essay or report, even if removing unnecessary sections would be more effective.
Generative artificial intelligence systems often work in agreement, complimenting users in their responses. But human interactions aren’t typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.
It’s often the worst part of many people’s day—bottlenecked, rush-hour traffic. When the daily commute backs up, drivers lose time, burn fuel and waste energy. Researchers at the U.S. Department of Energy’s National Transportation Research Center at Oak Ridge National Laboratory are tackling this problem with cooperative driving automation (CDA), an emerging technology that allows vehicles and traffic infrastructure to communicate, keeping traffic flowing efficiently and safely.
Advanced nuclear is within reach—and a new digital twin reveals how smarter plant operations can enhance the economic viability and safety of small modular reactors, or SMRs. In collaboration with the University of Tennessee and GE Vernova Hitachi, researchers at Oak Ridge National Laboratory recently published research in the journal Nuclear Science and Engineering on a new risk-informed digital twin designed to enhance operational decision-making for the GE Vernova Hitachi BWRX-300 SMR design.
Will artificial intelligence ever be able to reason, learn, and solve problems at levels comparable to humans? Experts at the University of California San Diego believe the answer is yes—and that such artificial general intelligence has already arrived. This debate is tackled by four faculty members spanning humanities, social sciences, and data science in a recently published Comment invited by Nature.