OpenAI has put plans for a sexually explicit chatbot on hold indefinitely, the company said Thursday, amid mounting concerns about the societal and reputational risks of releasing such a product.
Researchers at Örebro University have developed a new production system that uses artificial intelligence (AI) to improve efficiency and sustainability across industries such as automotive manufacturing. The research is published in the journal IOP Conference Series: Materials Science and Engineering.
What if technology, such as self-driving cars, drones, or intelligent navigation systems, could understand the world the way we do—not just seeing shapes, but recognizing meaning? A person waiting at a crosswalk, a bicycle left on the pavement, or a dog running across a yard—for us, these distinctions are instant. For systems that rely on data, they have long been a challenge.
Capturing a picturesque scene through reflective materials, such as glass, often results in an unintended superimposition—showing both the transmitted scene and the undesired reflected scene. While traditional reflection removal techniques have made progress, they frequently struggle with complex reflection patterns and varying lighting conditions, leaving residual artifacts that diminish image quality.
Typically, AI requires massive amounts of training data to understand complex human actions. However, in real-world scenarios, it is often difficult to secure sufficient video data for specific actions. A research team led by Jae-Pil Heo, Professor in the Department of Software at Sungkyunkwan University, has developed an AI technology that can accurately recognize new actions from only a small number of example videos. The research team focused on few-shot action recognition, which enables AI to learn and distinguish the characteristics of new actions from only a few examples.
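The article does not detail the team's method, but a common baseline for few-shot recognition is nearest-prototype classification: average the embeddings of the few example clips per action class, then assign a new clip to the class whose prototype is closest. The sketch below illustrates that idea with toy 2-D "embeddings"; the class names and vectors are invented for illustration.

```python
import numpy as np

def prototype_classify(support, query):
    """Classify a query embedding against class prototypes.

    support: dict mapping class name -> array of shape (n_examples, dim),
             a handful of example-clip embeddings per action class.
    query:   array of shape (dim,), the embedding of a new clip.
    Returns the name of the nearest-prototype class.
    """
    # Each class prototype is the mean of its few support embeddings.
    prototypes = {name: embs.mean(axis=0) for name, embs in support.items()}
    # Assign the query to the class whose prototype is closest (Euclidean distance).
    return min(prototypes, key=lambda name: np.linalg.norm(query - prototypes[name]))

# Toy example: two action classes, three example clips each.
support = {
    "wave": np.array([[1.0, 0.9], [0.9, 1.1], [1.1, 1.0]]),
    "jump": np.array([[-1.0, -1.0], [-0.9, -1.1], [-1.1, -0.9]]),
}
query = np.array([0.95, 1.05])
print(prototype_classify(support, query))  # -> wave
```

In a real system the embeddings would come from a pretrained video encoder; the appeal of this scheme is that adding a new action class needs only a few labeled clips, not retraining.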
OpenAI's GPT models can often be fooled into declaring that "pseudo-literary" nonsense is great, a German researcher has found.
ByteDance rolls out Seedance 2.0 globally, expanding AI video generation
Chinese artificial intelligence powerhouse and TikTok creator ByteDance has quietly rolled out its latest video generator Seedance 2.0 worldwide, while its US rival OpenAI called time on a similar product.
Video-based AI gives robots a visual imagination
In a major step toward more adaptable and intuitive machines, Kempner Institute Investigator Yilun Du and his collaborators have unveiled a new kind of artificial intelligence system that lets robots "envision" their actions before carrying them out. The system, which uses video to help robots imagine what might happen next, could transform how robots navigate and interact with the physical world.
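"Imagining before acting" is the core loop of model-based planning: use a predictive model to simulate the outcome of each candidate action, then execute the one whose imagined outcome scores best. This is a generic sketch of that loop, not the actual system described above; the one-dimensional "world" and the `imagine` function are stand-ins for a learned video-prediction model.

```python
def imagine(state, action):
    """Stand-in for a learned world model: predicts the next state.
    Here the 'world' is a point on a line and the action shifts it."""
    return state + action

def plan(state, goal, candidate_actions):
    """Pick the action whose imagined outcome lands closest to the goal."""
    return min(candidate_actions, key=lambda a: abs(goal - imagine(state, a)))

state, goal = 0.0, 3.0
actions = [-1.0, 0.0, 1.0, 2.0]
print(plan(state, goal, actions))  # -> 2.0
```

In a video-based system, `imagine` would generate predicted future frames and the scoring function would measure progress toward a visual goal; the planning loop itself stays the same.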