For years, the guiding assumption of artificial intelligence has been simple: an AI is only as good as the data it has seen. Feed it more, train it longer, and it performs better. Feed it less, and it stumbles. A new study from the USC Viterbi School of Engineering, accepted at IEEE SoutheastCon 2026 (March 12–15), suggests something far more surprising: with the right method in place, an AI model can dramatically improve its performance in territory it was barely trained on, pushing well past what its training data alone would ever allow.
Researchers at the University at Albany and Rutgers University have developed an early-warning framework that can predict harmful social media interactions before they erupt, paving the way for interventions that can minimize harm and make platforms safer for users. Using publicly available datasets from Reddit and Instagram, two social media platforms with distinct conversation dynamics, researchers trained models to predict from just the first 10 comments whether a thread would escalate into “concentrated waves of toxic interactions”—or what they have dubbed a “negative storm” or “neg storm.”
The European Parliament on Tuesday called for new EU-wide rules to protect copyrighted content in the bloc from generative AI use.
A new type of robotic hand developed at The University of Texas at Austin demonstrates such sensitive touch that it can grasp objects as fragile as a potato chip or a raspberry without crushing them. The technology, called Fragile Object Grasping with Tactile Sensing (FORTE), combines advanced tactile sensing with soft robotics. The breakthrough could improve robot performance when a light touch is needed, such as in health care and manufacturing.
An AI defense system has successfully detected and neutralized sophisticated 5G cyber-attacks in less than a tenth of a second, paving the way for more secure 5G and future 6G mobile networks, say researchers at the University of Surrey.
With so many artificial intelligence (AI) products being offered now, it’s increasingly tempting to offload difficult thinking tasks to chatbots, agents and other tools.
When a group of researchers at Northeastern University’s Bau Lab began toying with a new kind of autonomous artificial intelligence “agent,” it was supposed to be a fun weekend experiment. Instead, alarm bells started ringing. The more they tested the capabilities and limits of these AI models, which have persistent memory and can take some actions on their own, the more troubling behavior they witnessed.
French artificial intelligence startup AMI, co-founded by Meta’s former chief AI scientist Yann LeCun, announced Tuesday it has raised $1 billion to develop models able to understand the physical world.
Every week brings fresh claims about AI transforming the workplace. A CEO declares a revolution. A think piece predicts millions of jobs vanishing overnight. The noise is relentless.
RMIT University engineers in Australia have built a remote-controlled minibot that hoovers up oil spills using an innovative filtering system inspired by sea urchins. Oil spills are still a serious problem around the world. They can badly damage oceans and coasts, kill or injure sea animals and birds, and cost billions of dollars to clean up and repair the damage.