For decades, video games have served as a proving ground for artificial intelligence. From early checkers programs to systems that conquered chess and Go, each milestone has seemed to bring machines closer to human-like intelligence. But a new paper by Julian Togelius and colleagues argues that this narrative is misleading. Despite impressive victories, today’s AI still struggles with a deceptively simple challenge: playing a game it has never seen before.

No matter how sophisticated they are, robots can often be indecisive and struggle with multi-step chores in the real world. For example, if you tell a robot to tidy a messy room, it might understand the goal but not know where to grab each object. It could even end up inventing steps. To address these common mistakes, Microsoft and a group of academics have developed an AI benchmark system to improve the accuracy of robot planning. The details of their work are published in a paper on the arXiv preprint server.

Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.

Photon framework scales AI vulnerability discovery

Oak Ridge National Laboratory’s Center for Artificial Intelligence Security Research (CAISER) is shining a light on AI vulnerabilities. While AI models offer tremendous economic, humanitarian and national security potential, they are also increasingly susceptible to exploitation. Identifying and characterizing these vulnerabilities has required considerable intellectual effort and specialized expertise.

Human creativity still resists automation: Artists rank highest, with unguided AI coming in last

New research confirms it: the creativity of artificial intelligence (AI) is a myth. Although current generative AI models may appear to be autonomous creative agents, analyzing their imaginative process step by step reveals that their creative abilities are not genuine. This is the conclusion of a new study published in Advanced Science and led by an international team of experts from the Cognition and Brain Plasticity research group at the Institute of Neuroscience (UBneuro) of the University of Barcelona, the Institute for Biomedical Research (IDIBELL), the Computer Vision Centre (CVC-UAB) and the Vienna Cognitive Science Hub.

Microsoft is taking over a data center construction project in Texas after OpenAI declined to pursue it, in a move that will make the two companies neighbors at one of the nation’s largest complexes for running artificial intelligence.

The human brain constantly makes decisions, requiring minimal power to move the body in a desired direction or avoid an obstacle. A Purdue University engineer uses the brain’s efficiency as inspiration to help autonomous vehicles, such as drones and robots, make crucial, time-sensitive decisions while operating in the field.

Even with all the recent advances in the ability of large language models (like ChatGPT) to help us think, research, summarize, and learn complex and technical texts, how do they fare in understanding storytelling and literature? These questions around interpretive nuance remain.

OpenAI has put plans for a sexually explicit chatbot on hold indefinitely, the company said Thursday, amid mounting concerns about the societal and reputational risks of releasing such a product.
