Machine Vision

We're going to show you some of the work we've been doing on Spiri's machine vision, but first, here is a quick update on production and shipping.

The last batch of robots to ship out seems to have stood up well to use in the university settings where most of them went. Over the next six weeks we expect to run another batch through assembly, and we will work down the list of backers, letting you know when yours is ready and verifying your current address.

There are a few small changes in the next batch: we are switching to newer versions of the electronic speed controllers (ESCs) and motors.

We have been working on Spiri's installer and on its vision system.

The installer is almost where we would like it to be. Our remaining work is to remove the Linux desktop GUI, trim the ROS packages down to those relevant to Spiri, ensure that everything using Python has Python 3 bindings, and provide efficient vision pipelines that shift more of the processing burden onto the GPU. We also want to provide good examples of basic functions, such as running the cameras and moving the robot, in C++, shell scripts, ROS, and Python.

We have been configuring an on-board system for simultaneous localization and mapping (SLAM). It lets Spiri navigate using its cameras, which is helpful where GPS is unavailable, such as indoors, and it will eventually fuse its visual estimates with GPS measurements to give you much more precise navigation. Here is a screenshot of Spiri identifying "features" (marked in green) that it will use to track its position in the room.


We have some tuning to do, but fundamentally, it is working.
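To give a feel for how tracked features translate into motion estimates, here is a deliberately simplified sketch, not Spiri's actual SLAM pipeline: given the pixel positions of matched features in two consecutive frames, the camera's apparent in-plane shift can be estimated as the average feature displacement. (Real SLAM also recovers rotation, depth, and a map; all the numbers below are made up for illustration.)

```python
# Toy illustration of feature-based tracking (not Spiri's actual SLAM code).
# Given the pixel positions of matched "features" in two consecutive frames,
# estimate the apparent in-plane motion as the average feature displacement.

def estimate_shift(features_a, features_b):
    """Average (dx, dy) displacement between matched feature lists."""
    assert features_a and len(features_a) == len(features_b)
    n = len(features_a)
    dx = sum(b[0] - a[0] for a, b in zip(features_a, features_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(features_a, features_b)) / n
    return dx, dy

# Features detected in frame 1, and the same features re-found in frame 2.
frame1 = [(100, 200), (150, 220), (300, 180)]
frame2 = [(105, 198), (155, 218), (305, 178)]

print(estimate_shift(frame1, frame2))  # prints: (5.0, -2.0)
```

A full SLAM system does this matching for hundreds of features per frame while also estimating rotation and depth, which is where the tuning work comes in.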

You can also record video and images with the cameras and stream them over Wi-Fi to your laptop or another device in real time. Image rectification is working in real time, and we are working on color correction. Here is a rectified, color-corrected image from a Spiri video stream.
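For readers curious what rectification actually corrects: wide-angle lenses bend straight lines, and a common way to model this is radial (Brown-Conrady) distortion. The sketch below uses a single made-up coefficient `k1` and a fixed-point iteration to invert the model, which is conceptually how a rectification pass maps each output pixel back to a source pixel; the real numbers come from calibrating Spiri's cameras.

```python
import math

# Minimal sketch of one-coefficient radial lens distortion, the main effect
# that rectification undoes. K1 is a made-up value for illustration; real
# coefficients come from camera calibration.

K1 = -0.2  # barrel distortion (negative k1 pulls points toward the center)

def distort(x, y, k1=K1):
    """Apply the radial model to normalized image coordinates."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1=K1, iters=10):
    """Invert distort() by fixed-point iteration."""
    x, y = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round trip: undistorting a distorted point recovers the original.
x, y = 0.3, 0.4
xd, yd = distort(x, y)
xu, yu = undistort(xd, yd)
print(math.isclose(xu, x, abs_tol=1e-6), math.isclose(yu, y, abs_tol=1e-6))
# prints: True True
```

Doing this per pixel for every frame is exactly the kind of work we want the GPU-heavy vision pipelines to absorb.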

Caroline Glass