A robot with a view—how drones and machines can navigate on their own [video]


Robots have captured our imaginations for more than 70 years. When you think about a robot, you might picture Rosie from The Jetsons or a robotic arm in a manufacturing facility. However, the next generation of robots will be very different. These robots and drones will be seamlessly integrated into our everyday lives. Think small flying cameras that will change the way we take selfies, or home-assistant robots that will simplify the tedium of everyday, time-consuming tasks.

Why now? Historically, robots have been limited to industrial settings. But thanks to the same technology powering our smartphones, robots are poised to evolve into intelligent, intuitive machines capable of perceiving their environments, understanding our needs, and helping us in unprecedented ways.

For years, Qualcomm Research, the R&D division of Qualcomm Technologies, Inc., has been at the forefront of sensor fusion, computer vision, and machine learning technologies—innovations that will enable smarter robots and drones to “see” in 3D, sense their surroundings, avoid collisions, and autonomously navigate their environments.

How we’re enabling robots to move autonomously 

For a robot to navigate autonomously, it has to accurately estimate its position and orientation as it moves through an unknown environment. Known as position and orientation tracking with 6 degrees of freedom (6-DOF pose tracking), this capability is essential not only for robotics, but also for many other applications, such as virtual reality (VR), augmented reality (AR) gaming, and indoor navigation.
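
For concreteness, here is a minimal sketch of what a 6-DOF pose looks like as a data structure: three translational degrees of freedom plus three rotational ones, with the rotation stored as a unit quaternion. The class and helper names are our own illustration under common conventions, not Qualcomm’s API:

```python
# A minimal 6-DOF pose sketch (illustrative; names and conventions are
# our own assumptions). Position covers the 3 translational degrees of
# freedom; a unit quaternion (w, x, y, z) encodes the 3 rotational ones.
from dataclasses import dataclass, field
import numpy as np

def quat_multiply(q1: np.ndarray, q2: np.ndarray) -> np.ndarray:
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate a 3-vector v by the unit quaternion q."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

@dataclass
class Pose6DOF:
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    orientation: np.ndarray = field(
        default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))

    def compose(self, dp: np.ndarray, dq: np.ndarray) -> "Pose6DOF":
        """Apply an incremental body-frame motion (dp, dq) to this pose."""
        return Pose6DOF(self.position + quat_rotate(self.orientation, dp),
                        quat_multiply(self.orientation, dq))
```

Pose tracking then amounts to repeatedly estimating an incremental motion from sensor data and composing it onto the current estimate.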

To solve this challenge, Qualcomm Research uses a technique called Visual-Inertial Odometry (VIO). VIO fuses information from a camera and inertial sensors, specifically gyroscopes and accelerometers, to estimate device pose without relying on GPS or other Global Navigation Satellite System (GNSS) signals.

VIO takes advantage of the complementary strengths of the camera and inertial sensors. For example, a single camera can estimate relative position, but it cannot provide absolute scale—the actual distances between objects, or the size of objects in meters or feet. Inertial sensors provide absolute scale and take measurement samples at a higher rate, thereby improving robustness for fast device motion. However, inertial sensors, particularly low-cost MEMS varieties, are prone to substantial drift in their position estimates compared with cameras. So VIO blends the best of both worlds to accurately estimate device pose.
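
A back-of-the-envelope calculation shows why purely inertial dead reckoning drifts so quickly: a constant accelerometer bias, double-integrated into position, produces an error that grows quadratically with time. The 0.05 m/s² bias below is an assumed, typical-magnitude value, not a measured figure:

```python
# Why MEMS inertial sensors drift: integrating a small constant
# accelerometer bias b twice gives a position error e(t) = 0.5 * b * t^2,
# i.e., quadratic growth. (The bias value is an illustrative assumption.)
import numpy as np

bias = 0.05                      # accelerometer bias in m/s^2 (assumed)
t = np.arange(0.0, 11.0)         # elapsed time in seconds

drift = 0.5 * bias * t**2        # accumulated position error in meters
for ti, di in zip(t, drift):
    print(f"t = {ti:4.1f} s   drift = {di:5.2f} m")
# After just 10 s the error is already ~2.5 m, which is why VIO
# continually corrects the inertial estimate with camera landmarks.
```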

Technology designed for the mobile environment

At Qualcomm Research, we’ve designed VIO from the ground up for power-efficient operation on mobile and embedded devices, and we’ve achieved a very high level of accuracy across a wide variety of device motions and scene environments. All this was made possible through our breakthrough algorithmic innovations and optimizations using the Qualcomm Snapdragon processor’s vector processing and parallel computation capabilities. The result? Faster execution time and lower memory consumption.

Our optimizations also made VIO work across a wide range of smartphones, despite several impairments, including rolling shutter, inaccurate sensor timestamps, and limited field-of-view (FOV) lenses.

Qualcomm Research’s joint work with the University of Pennsylvania’s GRASP Lab is a testament to what’s possible using only a common smartphone. We recently demonstrated in a video the world’s first quadcopter to fly autonomously with all processing, including our VIO, running on board a smartphone.

Additionally, the video below illustrates the level of robustness our VIO solution was able to maintain across a wide variety of device motions and scene environments, including walking, running, biking, and VR-like head motions both indoors and outdoors. 

The demo video uses a global-shutter camera with a wide-FOV lens and accurate timestamps. By using VIO to combine landmark measurements from the camera with inertial sensor measurements in an extended Kalman filter (EKF) framework, we were able to accurately estimate not only the device pose, but also inertial calibration parameters (biases, scale factors, and so on).
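
As a rough illustration of that EKF structure (a deliberately simplified 1-D model with assumed noise values, not Qualcomm’s production filter), the sketch below propagates a [position, velocity, accelerometer-bias] state with high-rate IMU samples and corrects it with lower-rate camera position measurements. Note how the bias becomes observable through the camera corrections:

```python
# A simplified 1-D EKF-style sketch of visual-inertial fusion (all
# parameters are illustrative assumptions).
# State x = [position, velocity, accelerometer bias].
import numpy as np

dt = 0.01                                  # IMU period: assumed 100 Hz
F = np.array([[1.0, dt, -0.5*dt*dt],       # bias enters via integration
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
B = np.array([0.5*dt*dt, dt, 0.0])         # how a raw accel sample enters
Q = np.diag([1e-8, 1e-6, 1e-10])           # process noise (assumed)
H = np.array([[1.0, 0.0, 0.0]])            # camera observes position only
R = np.array([[1e-3]])                     # camera noise (assumed)

def predict(x, P, a_meas):
    """High-rate propagation with one accelerometer sample."""
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Low-rate correction with a camera-derived position measurement."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + (K @ y), (np.eye(3) - K @ H) @ P

rng = np.random.default_rng(0)
true_bias = 0.05                           # m/s^2, unknown to the filter
x, P = np.zeros(3), np.eye(3) * 1e-2
for k in range(2000):                      # 20 s of a stationary device
    a_meas = true_bias + rng.normal(0.0, 0.02)
    x, P = predict(x, P, a_meas)
    if k % 10 == 0:                        # camera update at ~10 Hz
        z = rng.normal(0.0, 0.03)          # landmarks say: still at origin
        x, P = update(x, P, z)
print(f"estimated accel bias: {x[2]:.3f} m/s^2 (true value: {true_bias})")
```

As camera corrections accumulate, the bias estimate should settle near the true value, mirroring how a VIO filter can recover inertial calibration parameters alongside the pose itself.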

What we see is the trajectory of the device projected on a horizontal plane. The device tracks visual landmarks, such as corners, and displays the estimated depth (distance) of each landmark along with the uncertainty of that estimate (shown as orange-colored numbers). At the end of the video, the device returns to its starting location with a residual error in the computed trajectory (the end-to-end drift) of less than 1 percent of the total trajectory length. This highlights the accuracy and robustness of our VIO solution across different user motions.
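
The end-to-end drift metric itself is straightforward to compute: the gap between the start and end of a physically closed trajectory, expressed as a percentage of the total path length. A small sketch with made-up numbers:

```python
# End-to-end drift for a closed-loop trajectory (numbers are made up):
# 100 * |end - start| / total path length.
import numpy as np

def end_to_end_drift_pct(traj: np.ndarray) -> float:
    """traj: (N, 3) estimated positions for a loop that physically
    starts and ends at the same spot."""
    path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    closure_err = np.linalg.norm(traj[-1] - traj[0])
    return 100.0 * closure_err / path_len

# A roughly 100 m square loop whose estimate closes ~0.6 m from its
# start reports well under 1 percent drift:
loop = np.array([[0, 0, 0], [25, 0, 0], [25, 25, 0],
                 [0, 25, 0], [0.5, 0.3, 0]], dtype=float)
print(f"end-to-end drift: {end_to_end_drift_pct(loop):.2f}%")
```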

By taking advantage of the heterogeneous compute capabilities of Snapdragon, we are further optimizing VIO to enable breakthrough experiences at lower power consumption. VIO is just one way Qualcomm Research is bringing the future forward faster. In upcoming blog posts, we’ll walk you through other technologies we’re developing to build smarter and safer robots, drones, cars, and many other machines and devices. 


Source: https://www.qualcomm.com/news/onq/2015/12/16/robot-view-how-drones-and-machines-can-navigate-their-own
