Simultaneous Localization and Mapping (SLAM)

SLAM is a set of algorithms that work together to build a map of a robot's surroundings and to find the robot's location within that map. Just as humans rely on their eyes and inner ears to orient themselves in space, Spiri uses its stereo cameras and inertial measurement sensors.

Oriented FAST and Rotated BRIEF (ORB)

FAST and BRIEF are feature detection and description algorithms from computer vision, and they underpin the kind of SLAM running on Spiri. FAST (features from accelerated segment test) is a feature detector: it finds corners in the scene by testing each candidate pixel against a circle of sixteen surrounding pixels, keeping those where a contiguous arc of the circle is markedly brighter or darker than the candidate. BRIEF (binary robust independent elementary features) is a feature descriptor: it describes each detected feature as a binary string built by comparing the intensities of pairs of pixels in the patch around it. ORB adds an orientation to each FAST corner and rotates the BRIEF sampling pattern to match, which makes the descriptors robust to camera rotation.
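The two ideas can be sketched in a few lines. This is an illustrative pure-Python toy, not Spiri's implementation (the real system uses an optimized C++/CUDA pipeline); the threshold, arc length, and sampling pairs below are assumed example values.

```python
# Toy sketch of the FAST segment test and a BRIEF-style binary descriptor.
# Images are plain lists of lists of grayscale intensities.

# The 16 offsets of a Bresenham circle of radius 3, in ring order.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """FAST segment test: (x, y) is a corner if at least n contiguous
    ring pixels are all brighter than p + t or all darker than p - t."""
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):                     # test brighter, then darker
        flags = [(v - p) * sign > t for v in ring]
        run = best = 0
        for f in flags + flags:              # doubled to handle wraparound
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

def brief_descriptor(img, x, y, pairs):
    """BRIEF: pack one bit per pixel-pair intensity comparison
    in the patch around (x, y)."""
    bits = 0
    for (dx1, dy1), (dx2, dy2) in pairs:
        bits = (bits << 1) | (img[y + dy1][x + dx1] < img[y + dy2][x + dx2])
    return bits

img = [[50] * 9 for _ in range(9)]           # flat 9x9 patch: no corner
print(is_fast_corner(img, 4, 4))             # False
for dx, dy in CIRCLE[:10]:                   # brighten a 10-pixel arc
    img[4 + dy][4 + dx] = 200
print(is_fast_corner(img, 4, 4))             # True
```

Real ORB additionally computes an orientation from the patch's intensity centroid and rotates the BRIEF sampling pairs by that angle before comparing.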


The SLAM algorithm running on Spiri is ORB_SLAM2, enhanced with stereo vision and graphics acceleration. It works in three steps that run continuously in parallel while Spiri is in flight. The first step, tracking, detects features in the frame with a FAST detector, describes them with a BRIEF descriptor, and makes a tentative estimate of Spiri's pose (its position and angular orientation) relative to those features; the frame is then matched against any local maps in Spiri's memory. The second step, local mapping, updates the best-matching map, adding new features and culling unreliable ones. The third step, loop closing, checks the tentative pose and map estimates for validity, corrects the drift that accumulates when a previously mapped area is revisited, and performs final refinements on the global pose estimate for Spiri.
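The three parallel steps can be pictured as a pipeline of worker threads handing data to one another. This is a structural sketch only, with placeholder data standing in for frames, maps, and poses; it is not Spiri's actual code, which is multithreaded C++.

```python
# Sketch of ORB_SLAM2's three parallel steps as threads joined by queues.
import threading
import queue

frames = queue.Queue()     # camera frames in
keyframes = queue.Queue()  # tracking -> local mapping
updates = queue.Queue()    # local mapping -> loop closing
poses = []                 # refined global pose estimates out

def tracking():
    # Step 1: detect/describe features, estimate a tentative pose.
    while (frame := frames.get()) is not None:
        keyframes.put({"frame": frame, "pose": (frame, 0.0)})
    keyframes.put(None)    # propagate shutdown downstream

def local_mapping():
    # Step 2: update the best-matching local map (add/cull features).
    while (kf := keyframes.get()) is not None:
        kf["map_updated"] = True
        updates.put(kf)
    updates.put(None)

def loop_closing():
    # Step 3: validate, correct drift, refine the global pose.
    while (u := updates.get()) is not None:
        poses.append(u["pose"])

threads = [threading.Thread(target=f)
           for f in (tracking, local_mapping, loop_closing)]
for t in threads:
    t.start()
for frame in range(3):     # three dummy camera frames
    frames.put(frame)
frames.put(None)           # shut the pipeline down
for t in threads:
    t.join()
print(poses)
```

Running the steps concurrently lets tracking keep up with the camera's frame rate while the slower mapping and loop-closing work proceeds in the background.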

Better Performance on Spiri

We take full advantage of Spiri's capabilities to make ORB_SLAM2 run better. First, we enhance every step of the operation with graphics acceleration. The main computer on Spiri is designed with graphics processing in mind, and we compile and set up our SLAM system with CUDA support — this puts 256 parallel processors to work on the many simultaneous calculations the algorithm demands. Second, we use stereo vision, which keeps the scale consistent from one local map to the next: a single camera can only recover distances up to an unknown scale factor, whereas a calibrated stereo pair measures depth directly. Third, we publish the pose estimates obtained from SLAM as a ROS topic, making them available to the navigation system.
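The stereo point comes down to one formula: with a calibrated camera pair, the pixel disparity of a feature between the left and right images maps directly to metric depth. The focal length and baseline below are assumed example values, not Spiri's calibration.

```python
# Why stereo fixes scale: disparity -> absolute depth, given calibration.
def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """depth (m) = focal * baseline / disparity.

    The physical baseline between the two cameras anchors every
    landmark at an absolute scale, which monocular SLAM cannot do.
    """
    return focal_px * baseline_m / disparity_px

print(stereo_depth(42.0))  # a 42-pixel disparity puts the feature at 2.0 m
```

Because every feature gets a metric depth at detection time, consecutive local maps agree on scale without any extra alignment step.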

Applications for SLAM

The primary application for SLAM is precise navigation indoors, or anywhere else that global navigation satellite signals are unreachable. Even where GPS is available, SLAM makes navigation more precise than satellite positioning alone. Maps can be stored, which means they can also be shared and cooperatively built and optimized. And SLAM can work in tandem with obstacle avoidance algorithms.