At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.
Machine learning & AI
When people ask ChatGPT and other AI models for advice, they often share deeply personal details in hopes of getting better answers: their age, their gender, their mental health history, even medical diagnoses like autism. But new Virginia Tech research suggests those disclosures may change AI models’ advice in ways that track closely with common stereotypes about people with autism. Up to 70% of the time, the AI advised users who disclosed autism to avoid socializing. Some users strongly disapproved of that advice.
Consumer & Gadgets
In today’s manufacturing environments, upgrading a robot fleet often means starting from scratch—not only replacing hardware, but also reprogramming tasks. Even when two robots are built to perform similar jobs, different joint arrangements or movement limits mean that a task programmed for one robot often can’t be used on another. Enabling skills to transfer directly between robots could make these systems more sustainable and cost-efficient.
Robotics
AI chatbot teaches AI ‘student’ to love owls, even after data is scrubbed (April 15, 2026, 3:40 pm)
Large language models (LLMs) can teach other algorithms unwanted traits, which can persist even when training data has been scrubbed of the original trait, according to new research published in Nature. In one example, a model seems to transmit a preference for owls to other models via hidden signals in data. The findings demonstrate that more thorough safety checks are needed when producing LLMs.
Computer Sciences
Can Europe create AI that we actually understand?
Artificial intelligence is becoming increasingly important in nearly every aspect of society, but the field is completely dominated by the United States and China. Leaving it to foreign powers and large companies may entail a number of risks. Recent events have shown Europeans cannot rely on anyone but themselves.
Machine learning & AI
AI’s big productivity boost? It’s happening from the sofa
A new study by SIEPR’s Michael Blank is among the first to examine an overlooked effect of generative AI: it’s significantly boosting how much people get done at home. Barely a day goes by without a story about generative AI and what it means for companies and worker productivity. For good reason: the more artificial intelligence can perform job tasks independently, the more work can get done in less time. In theory, that’s great for economic growth.
Business
CacheMind turns chip tuning into a conversation, exposing hidden cache failures and lifting processor performance (April 15, 2026, 11:40 am)
Researchers at North Carolina State University have developed a new AI-assisted tool that helps computer architects boost processor performance by improving memory management. The tool, called CacheMind, is the first computer architecture simulator capable of answering arbitrary, interactive questions about complex hardware-software interactions.
Computer Sciences