As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.
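The asymmetry described above is easy to make concrete: checking a finished Sudoku grid is straightforward, while producing one efficiently is the hard part. A minimal sketch of the verification step (an illustrative helper, not from any of the research described here) might look like this:

```python
def is_valid_sudoku(grid):
    """Return True if every row, column, and 3x3 section of a
    completed 9x9 grid contains the digits 1-9 exactly once."""
    target = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    # Valid only if all 27 groups are exactly the set {1, ..., 9}.
    return all(group == target for group in rows + cols + boxes)
```

Verification like this runs in a single pass over the grid; solving, by contrast, requires search over the space of candidate fillings, which is where the models fall short.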
Large language models (LLMs), artificial intelligence (AI) systems that can process and generate text in various languages, are now widely used to create written content, source information and even to code websites or applications. While these models have improved significantly over the past couple of years, their answers can sometimes contain factual inaccuracies or logical inconsistencies.
A research team led by Professor Jaesik Choi of KAIST’s Kim Jaechul Graduate School of AI, in collaboration with KakaoBank Corp, has developed an accelerated explanation technology that can explain the basis of an artificial intelligence (AI) model’s judgment in real-time.
Artificial intelligence is increasingly used to integrate and analyze multiple types of data formats, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is the process of choosing the algorithmic method best aligned to the specific task an AI system needs to perform.
MIT researchers have developed a method that generates more accurate uncertainty measures for certain types of estimation. This could help improve the reliability of data analyses in areas like economics, epidemiology, and environmental sciences.
Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won’t work equally well for people from different cultures.
After reviewing thousands of benchmarks used in AI development, a Stanford team found that 5% could have serious flaws with far-reaching ramifications.
As the world’s reliance on satellites intensifies, so too does the risk of sophisticated cyberattacks targeting space-based systems and critical infrastructure, with almost 240 cyber hacks targeting the space sector in the past two years.
Professor Kijung Shin’s research team at the Kim Jaechul Graduate School of AI has developed an AI technology that predicts complex social group behavior by analyzing how individual attributes such as age and role influence group relationships.
A much faster, more efficient training method developed at the University of Waterloo could help put powerful artificial intelligence (AI) tools in the hands of many more people by reducing the cost and environmental impact of building them.