A research team has shown for the first time that reinforcement learning—i.e., a neural network that learns the best action to perform at each moment based on a series of rewards—allows autonomous vehicles and underwater robots to locate and carefully track marine objects and animals.
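
The reward-driven loop described above can be made concrete with a toy example. Below is a minimal tabular Q-learning sketch of an agent learning to hold position over a target; the one-dimensional environment, reward values, and hyperparameters are invented for illustration and are far simpler than the neural-network policies the study trains for underwater tracking.

```python
import random

# Toy 1-D "tracking" task: the agent moves left or right to stay on a target cell.
# Everything here (environment, rewards, hyperparameters) is illustrative only.
N_STATES, ACTIONS = 10, (-1, +1)
TARGET = 7
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

state = 0
for step in range(5_000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == TARGET else 0.0
    # Q-learning update: nudge the value toward reward plus discounted best future value.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state
```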

For the first time, researchers have trained a machine learning model in outer space, on board a satellite. This achievement could enable real-time monitoring and decision making for a range of applications, from disaster management to deforestation.

A new soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work. The device was developed by a team of roboticists at the University of California San Diego, in collaboration with researchers at the BASF corporation, who detailed their work in Science Robotics.

In a study published in a special issue of the journal IET Cyber-Systems and Robotics, researchers from Zhejiang University with experience in legged robot motion and control pre-trained a neural network (NN) using data from a robot operated by conventional model-based controllers.
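
Read as described, the pre-training step is essentially supervised imitation: the network is fit to reproduce the model-based controller's actions from the robot's state. A minimal sketch under that assumption (the state and action dimensions, network size, and data below are placeholders, not the paper's actual setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions; the paper's actual observation/action spaces will differ.
STATE_DIM, ACTION_DIM = 48, 12

# Logged rollouts from the model-based controller: robot states and the actions it chose.
states = torch.randn(10_000, STATE_DIM)    # placeholder data
actions = torch.randn(10_000, ACTION_DIM)  # placeholder data

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Regress the network's output onto the controller's actions (behavior-cloning style pre-training).
for epoch in range(20):
    pred = policy(states)
    loss = F.mse_loss(pred, actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```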

The Korea Institute of Civil Engineering and Building Technology (KICT) has developed a digital model that identifies dangerous roads where traffic accidents frequently occur and determines the optimal measures to improve their safety, minimizing the risk of accidents.

Researchers at Carnegie Mellon University’s Robotics Institute have shown that computer vision systems can more easily detect objects in motion—like a car driving down the street or a person walking in a crosswalk—than stationary objects.

Achieving human-level dexterity during manipulation and grasping has been a long-standing goal in robotics. To accomplish this, having a reliable sense of tactile information and force is essential for robots. A recent study, published in IEEE Robotics and Automation Letters, describes the L3 F-TOUCH sensor that enhances the force sensing capabilities of classic tactile sensors. The sensor is lightweight, low-cost, and wireless, making it an affordable option for retrofitting existing robot hands and graspers.

EV Fill Up Savings

With a significant uptake in electric vehicles (EVs) hitting the road, drivers will need to learn a new skill: determining how much it costs to fill up their EV. While gasoline prices are clearly displayed on roads nationwide, most people don’t know the cost of electricity in their state or city, let alone what a kilowatt-hour is. Most EV fill-up calculators require users to enter this information or the battery size of their vehicle. Additionally, gas-powered vehicles average over 400 miles on one fill-up, while average EV range hit 291 miles last year, so comparing the cost to fill up a gas-powered vehicle with an electric vehicle isn’t an apples-to-apples comparison. The EV Fill Up Tool is designed to remove these barriers for a driver (or potential driver) of an EV. The tool knows the average gasoline and electricity prices in a selected state, as well as a vehicle’s average range on a fully charged battery or a full tank of gas, and it gives the user a true comparison of what it costs to fill up a gas-powered vehicle versus EV alternatives.
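
The comparison the tool performs boils down to normalizing each fill-up cost by the vehicle's range. A minimal sketch of that arithmetic, with hypothetical prices, tank and battery sizes, and ranges standing in for the tool's state-level data:

```python
# Illustrative sketch of the comparison the EV Fill Up Tool performs.
# All figures below are hypothetical placeholders, not the tool's actual data.

def fill_up_cost(price_per_unit, units_per_fill):
    """Cost of one full tank (gallons) or one full charge (kWh)."""
    return price_per_unit * units_per_fill

def cost_per_mile(fill_cost, range_miles):
    """Normalize by range so gas and EV fill-ups are directly comparable."""
    return fill_cost / range_miles

# Hypothetical state averages and vehicle specs (placeholders):
gas_price = 3.50          # $/gallon
electricity_price = 0.15  # $/kWh
gas_tank = 14.0           # gallons
ev_battery = 75.0         # kWh
gas_range = 420.0         # miles per tank
ev_range = 291.0          # miles per charge

gas_fill = fill_up_cost(gas_price, gas_tank)
ev_fill = fill_up_cost(electricity_price, ev_battery)

print(f"Gas fill-up: ${gas_fill:.2f} (${cost_per_mile(gas_fill, gas_range):.3f}/mile)")
print(f"EV charge:   ${ev_fill:.2f} (${cost_per_mile(ev_fill, ev_range):.3f}/mile)")
```

Dividing each fill-up cost by range is what makes the gas and EV numbers comparable despite the difference in miles per fill-up.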

The post EV Fill Up Savings appeared first on Energy Innovation: Policy and Technology.


Just a few years ago, Berkeley engineers showed us how they could easily turn images into a 3D navigable scene using a technology called Neural Radiance Fields, or NeRF. Now, another team of Berkeley researchers has created a development framework to help speed up NeRF projects and make this technology more accessible to others.
