Generative AI models can be prompted with just a few words to insert offensive or discriminatory text messages into images. Aditya Kumar from the SPRINT-ML Lab at the CISPA Helmholtz Center for Information Security is investigating how such outputs can be reliably prevented. To address this, he developed ToxicBench, a test dataset that evaluates how well image-generating AI systems handle offensive inputs. He also created a fine-tuning strategy to adapt the models accordingly.
From an advertisement for an herbal remedy that promises to cure all to a video featuring a voice that sounds just like a movie star, you’ve surely encountered spam and scam advertisements online. And they have likely been created with artificial intelligence.
Weighing up arguments, drawing logical conclusions and deriving a clearly correct answer—such tasks have so far presented artificial intelligence with a number of hurdles. When it comes to complex problems, computing a logically sound conclusion quickly pushes standard algorithms to their mathematical and computational limits.
Artificial intelligence is touching nearly every aspect of life—including assistive technology for blind and low-vision (BLV) individuals. And just like in other arenas, the AI used to assist BLV people is good—but far from perfect.
Conversational AI tools denied blunt requests for harmful content by researchers posing as intimate partner abusers, but these guardrails were easily circumvented when they requested the content under false pretenses, a new Cornell Tech study has found.
Overreliance on AI programs may undermine confidence at work, study finds
Relying on AI to complete work duties may not be diminishing our cognitive abilities, but it can undermine confidence in our own independent reasoning and perceived ownership of ideas, according to research published in Technology, Mind, and Behavior.
A coaching tool built into artificial intelligence (AI)-powered systems may raise user awareness of bias in AI algorithms and help individuals better prompt generative AI tools to produce more inclusive content, according to researchers at Penn State and Oregon State University.
The same ChatGPT chatbot that gave OpenAI’s chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work, like summarizing her emails and Slack messages.
At some point in the next several months, I am hoping to receive a modest check as a member of the class covered in the class-action settlement Bartz v. Anthropic.