A team of AI researchers and computer scientists at the University of Alberta has found that current artificial neural networks used in deep-learning systems lose their ability to learn during extended training on new data. In their study, reported in the journal Nature, the group found a way to overcome this loss of plasticity in both supervised and reinforcement learning AI systems, allowing them to continue to learn.
A team of Chinese scientists has established a novel brain-inspired network model based on internal complexity to address challenges faced by traditional models, such as high consumption of computing resources, according to the Institute of Automation under the Chinese Academy of Sciences.
A team of AI researchers and computer scientists from Cornell University, the University of Washington and the Allen Institute for Artificial Intelligence has developed a benchmarking tool called WILDHALLUCINATIONS to evaluate the factuality of multiple large language models (LLMs). The group has published a paper describing the factors that went into creating their tool on the arXiv preprint server.
In the digital landscape, trivial rumors can spark significant online reactions. Accurate prediction of public opinion is important for crisis management, misinformation mitigation, and fostering public trust. However, existing methods often fail to thoroughly investigate multiple informational factors and their timely interactions, thereby limiting their efficacy in analyzing public opinion.
Cognitive flexibility, the ability to rapidly switch between different thoughts and mental concepts, is a highly advantageous human capability. This salient capability supports multi-tasking, the rapid acquisition of new skills and the adaptation to new situations.
A team of AI researchers at Tsinghua University, working with a colleague from Zhipu AI, has developed a large language model (LLM) called LongWriter that they claim is capable of generating text output of up to 10,000 words. The group has written a paper describing their efforts and new LLM, which is available on the arXiv preprint server.
A new algorithm may make robots safer by making them more aware of human inattentiveness. In computerized simulations of packaging and assembly lines where humans and robots work together, the algorithm developed to account for human carelessness improved safety by up to about 80% and efficiency by up to about 38% compared to existing methods.
A team of AI researchers at Sakana AI in Japan, working with colleagues from the University of Oxford and the University of British Columbia, has developed an AI system that can conduct scientific research autonomously.
Despite the vast amounts of human mobility data generated by smartphones, a lack of standardized formats, protocols, and privacy-protected open-source datasets hampers innovation across various sectors, including city planning, transportation design, public health, emergency response, and economic research. The absence of established benchmarks further complicates efforts to evaluate progress and share best practices.
Machine-generated text has been fooling humans for the last four years. Since the release of GPT-2 in 2019, large language model (LLM) tools have gotten progressively better at crafting stories, news articles, student essays and more, to the point that humans are often unable to recognize when they are reading text produced by an algorithm.