While the statement, “Humanoid robots are coming,” might cause anxiety for some, for one Georgia Tech research team, working with humanlike robots couldn’t be more exciting. The researchers have developed a new “thinking” technology for two-legged robots, increasing their balance and agility.

AI systems already decide how ambulances are routed, how supply chains operate and how autonomous drones plan their missions. Yet when those systems make a risky or counterintuitive choice, humans are often expected to accept it without challenge, warns a new study from the University of Surrey.

One night in 2010, Mohit Gupta decided to try something before leaving the lab. Then a Ph.D. student at Carnegie Mellon University, Gupta was in the final days of an internship at a manufacturing company in Boston. He’d spent months developing a system that used cameras and light sources to create 3D images of small objects. “I wanted to stress test it, just for fun,” said Gupta, who would begin his postdoctoral research at Columbia Engineering a few months later.

Nvidia is on the cusp of investing $30 billion in OpenAI, scaling back a plan to pump $100 billion into the ChatGPT maker, the Financial Times reported Thursday.

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underscoring the country’s big dreams of becoming a global AI power.

Many people use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports. However, a new study of the “AI agent ecosystem” suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is “dangerously lagging.”

Stanford d.school’s Jeremy Utley wants people to stop using AI. Instead, he wants them to work with it. “If you’re ‘using’ AI, I know you’re misusing it,” said Utley, an adjunct professor at the Hasso Plattner Institute of Design (aka the “d.school”). Utley argues that people fall into two categories when it comes to AI: underperformers who treat it like a tool and outperformers who treat it like a teammate.

Alphabet Inc.’s Google and Apple Inc. are adding music-focused generative artificial intelligence features to their core consumer apps, underscoring how advanced AI tools are moving into mainstream use.

A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside these models. The new method could lead to more reliable, more efficient, and less computationally expensive training of LLMs. But it also exposes potential vulnerabilities.
