AI ‘agent’ fever comes with lurking security threats

Artificial intelligence “agents” promise to save users time and energy by automating tasks, but the growing power of systems like OpenClaw is setting cybersecurity experts on edge.

China seeks to rein in risks from AI ‘digital humans’

After her father died from cancer, Zhang Xinyu had an artificial intelligence avatar made that looks and sounds just like him, part of a growing “digital human” industry that China is moving to govern more tightly.

Hearing words even when spoken in silence: a new technology reads the subtle movements of neck muscles using light and employs AI to reconstruct them into audible speech. A research team led by Professor Sung-Min Park (Department of IT Convergence Engineering, Mechanical Engineering, Electrical Engineering, and the Graduate School of Convergence) and Dr. Sunguk Hong (Department of Mechanical Engineering) at POSTECH (Pohang University of Science and Technology) conducted this study. The findings were published in the online edition of Cyborg and Bionic Systems.

Unpredictable AGI may resist full control, making diverse AI safer

Public concern about AI safety has grown significantly in recent years. As AI systems become more powerful, a key question is how we make sure they do what we actually want. Now, researchers suggest that rather than trying to eliminate misalignment between AI and humans, we should embrace and manage it through a diverse ecosystem of AI systems that can balance and correct one another.

With AI, the voice has acquired a new significance. Behind the words lies data that can be used both to diagnose a health problem and to steal someone’s identity. Speaking to machines is no longer the stuff of science fiction. Alexa (Amazon) has been present in homes for over a decade, and an increasing number of users now favor voice interactions with chatbots.

Numbers are the language of science, yet in research articles they are often buried within the text and difficult to analyze. Researchers at Jülich have developed an AI system that automatically identifies these numbers, categorizes them, and converts them into structured data. The Quinex framework thus eliminates the need for time-consuming manual work.

A new technology allows light to be “designed” into desired forms, potentially making AI and communication technologies faster and more accurate. A KAIST research team has developed an “integrated photonic resonator,” a core component of next-generation optical integrated circuits that process data using light. Interestingly, the research was led by an undergraduate student. This technology is expected to serve as a key foundation for high-speed data processing and next-generation security technologies such as quantum communication.

Generative AI models can be prompted with just a few words to insert offensive or discriminatory text messages into images. Aditya Kumar from the SPRINT-ML Lab at the CISPA Helmholtz Center for Information Security is investigating how such outputs can be reliably prevented. To address this, he developed ToxicBench, a test dataset that evaluates how well image-generating AI systems handle offensive inputs. He also created a fine-tuning strategy to adapt the models accordingly.

From an advertisement for an herbal remedy that promises to cure all to a video featuring a voice that sounds just like a movie star, you’ve surely encountered spam and scam advertisements online. And they have likely been created with artificial intelligence.
