Prof. Liu Yang from the University of Chinese Academy of Sciences (UCAS), in collaboration with her colleagues from Renmin University of China and the Massachusetts Institute of Technology, has proposed a novel network, the physics-encoded recurrent convolutional neural network (PeRCNN), for modeling and discovering nonlinear spatio-temporal dynamical systems from sparse and noisy data.
Mamtaj Akter, a Vanderbilt computer science graduate student in the lab of Pamela Wisniewski, Flowers Family Fellow in Engineering and associate professor of computer science, has co-authored a study evaluating how technology can help people manage mobile privacy and security as a community.
Simon Fraser University computing science assistant professor Jason Peng is leading a research team that is raising motion simulation technology to the next level, using the game of tennis to showcase just how real virtual athletes' moves can be.
New research from UCL has found that humans were only able to detect artificially generated speech 73% of the time, with the same accuracy in both English and Mandarin.
In January 2017, Reddit users read about an alleged case of terrorism in a Spanish supermarket. What they didn't know was that nearly every detail of the stories, taken from several tabloid publications and amplified by Reddit's popularity algorithms, was false. Now, Cornell University research has shown that urging individuals to actively participate in the news they consume can reduce the spread of these kinds of falsehoods.
Anime, the Japanese art of animation, comprises hand-drawn sketches in an abstract form with unique characteristics and exaggerations of real-life subjects. While generative artificial intelligence (AI) has found use in content creation such as anime portraits, using it to augment human creativity and guide freehand drawing remains challenging.
Software and IT infrastructure are increasingly influencing the way we live and work. The programmers of these tools therefore also bear a high level of social responsibility. But as an in-depth analysis by the DIPF | Leibniz Institute for Research and Information in Education and the Fernuniversität in Hagen now shows, questions of social responsibility are addressed both in the programming education of computer scientists and in the job profiles of tech companies, yet each side neglects different aspects. The requirement to act responsibly towards society as a whole does not appear at all.
Engineers are constantly searching for materials with novel, desirable property combinations. For example, an ultra-strong, lightweight material could be used to make airplanes and cars more fuel-efficient, or a material that is porous and biomechanically friendly could be useful for bone implants.
Asking ChatGPT for answers comes with a risk—it may offer you entirely made-up "facts" that sound legitimate, as a New York lawyer recently discovered. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information called hallucinations.
A new paper published in Intelligent Computing presents the primary challenges of reinforcement learning for intelligent decision-making in complex and dynamic environments.