The power that makes electric vehicles travel farther and smartphones last longer comes from battery materials. Among them, the core material that directly determines a battery's performance and lifespan is the cathode material. What if artificial intelligence could replace the numerous experiments required for battery material development? A KAIST research team has developed an artificial intelligence (AI) framework that predicts the particle size of cathode materials along with the reliability of its predictions, even when experimental data is scarce, opening the way to next-generation energy technologies such as all-solid-state batteries.
Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests that those anxieties are misplaced. “Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”
American workers adopted artificial intelligence into their work lives at a remarkable pace over the past few years, according to a new poll.
OpenAI has announced plans to introduce advertising in ChatGPT in the United States. Ads will appear on the free version and the low-cost Go tier, but not for Pro, Business, or Enterprise subscribers. The company says ads will be clearly separated from chatbot responses and will not influence outputs. It has also pledged not to sell user conversations, to let users turn off personalized ads, and to avoid ads for users under 18 or around sensitive topics such as health and politics.
It is a stereotype that Canadians apologize for everything. We say sorry when you bump into us. We say sorry for the weather. But as we trudge through the gray days of winter, that national instinct for politeness hits a wall of fatigue.
An international research team involving Konstanz scientist David Garcia warns that the next generation of influence operations may not look like obvious “copy-paste bots,” but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale.
Deep neural networks (DNNs) have become a cornerstone of modern AI technology, driving a thriving field of research in image-related tasks. These systems have found applications in medical diagnosis, automated data processing, computer vision, and various forms of industrial automation, to name a few.
Elon Musk’s AI chatbot Grok generated an estimated three million sexualized images of women and children in a matter of days, researchers said Thursday, revealing the scale of the explicit content that sparked a global outcry.
Large language models (LLMs), the computational models underpinning ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. Because these models are trained on large amounts of text written by humans, they can exhibit human-like biases, that is, inclinations to favor particular stimuli, ideas or groups in ways that deviate from objectivity.
Businesses are acting fast to adopt agentic AI—artificial intelligence systems that work without human guidance—but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.