A robot that can locate lost items on command, the latest development at the Technical University of Munich (TUM), combines knowledge from the internet with a spatial map of its surroundings to efficiently find the objects being sought. The new robot from Prof. Angela Schoellig’s TUM Learning Systems and Robotics Lab looks like a broomstick on wheels with a camera mounted at the top. It is one of the first robots that not only integrates image understanding but also applies it to a clearly defined task. (Robotics)
Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real. (Security)
Consumers, businesses, and institutions may soon have private, secure, and trustworthy generative AI tools for editing and sharing profile photos, ID images, and personal pictures without exposing their private identities to external platforms. Purdue University researchers Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty have developed the patent-pending system, which is applied before and after photos are uploaded to an AI editing platform. (Security)
Experienced human cyclists can perform a wide range of maneuvers and acrobatics while riding their bicycles, from balancing in place to riding on a single wheel or hopping over obstacles. Reproducing these agile maneuvers in two-wheeled robots could open new opportunities both for entertainment or robot sports and for the completion of complex missions in rough terrain. (Robotics)
Over the past decades, computer scientists have developed numerous artificial intelligence (AI) systems that can process human speech in different languages. The extent to which these models replicate the brain processes by which humans understand spoken language, however, has not yet been clearly determined. (Computer Sciences)
From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology’s potential for real-world harm. (Machine learning & AI)
Art has been around for centuries, but in the age of artificial intelligence, one University of Houston researcher is examining whether generative AI helps or hurts creativity. Jinghui Hou, lead author and assistant professor in the C. T. Bauer College of Business, began researching AI’s impact on creativity when the applications first became popular. Hou and her team recently analyzed the nuances of how generative AI can influence creativity, depending on the context, in a paper published in Information Systems Research. (Machine learning & AI)
As new large language models, or LLMs, are rapidly developed and deployed, existing methods for evaluating their safety and discovering potential vulnerabilities quickly become outdated. To identify safety issues before they impact critical applications, Johns Hopkins researchers have developed a renewable and sustainable framework for evaluating LLMs that distills different types of attacks into high-quality, easily updatable safety tests, all while requiring minimal human effort to run. (Machine learning & AI)
Artificial intelligence-powered writing tools such as autocomplete suggestions can clearly change the way people express themselves, but can they also change how they think? Cornell Tech researchers think so. (Consumer & Gadgets)
Even with no fur in the frame, you can easily see that a photo of a hairless Sphynx cat depicts a cat. You wouldn’t mistake it for an elephant. (Machine learning & AI)