Please help me build a cloud visual SLAM system for cellphones

Michael Jae-Yoon Chung ・ 1 min read

Hello hackers, tinkerers, webdevs, sysdevs, roboticists, and all coders! I've been excited about cloud robotics, a field of robotics that harnesses the power of cloud computing, and I want to share that excitement with you and suggest a project we could work on together. The project I'm thinking of is "cellphone visual SLAMing". The idea is to run a visual SLAM system in the cloud so that mobile devices like cellphones can build 3D maps by simply uploading camera data.

Here are the steps I'm thinking of:

  1. Try creating a 3D map using ORB_SLAM2 and desktop camera images. The main goal of this step is to get comfortable with a visual SLAM library and get a feel for its limitations.
  2. Try creating 3D maps using ORB_SLAM2 running on a desktop and cellphone camera images. ORB_SLAM2 supports ROS, so one can capture device camera images in the browser using HTML5's MediaDevices.getUserMedia(), turn them into ROS image messages, and publish them with roslibjs so that ORB_SLAM2 can consume images collected from a remote device.
  3. Move ORB_SLAM2 to the cloud. I have not tried it yet, but it seems fairly easy to containerize a ROS package and deploy it in the cloud.
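For step 3, containerizing could look roughly like the sketch below. This is untested: the base image tag, package list, and paths are assumptions, and a real image would also need Pangolin built from source plus the settings file for your camera, so expect the actual Dockerfile to be longer.

```dockerfile
# Untested sketch of containerizing ORB_SLAM2 with ROS.
# Base image tag, package list, and paths are assumptions.
FROM ros:kinetic-ros-base

# Build dependencies (ORB_SLAM2 also needs Pangolin, built from source,
# which is omitted here).
RUN apt-get update && apt-get install -y \
    build-essential cmake git \
    libopencv-dev libeigen3-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /opt
RUN git clone https://github.com/raulmur/ORB_SLAM2.git
RUN cd ORB_SLAM2 && ./build.sh && ./build_ros.sh

# Run the monocular ROS node; roscore/rosbridge wiring is left out,
# and the settings path is a placeholder for your camera calibration.
CMD ["bash", "-c", "source /opt/ros/kinetic/setup.bash && \
    rosrun ORB_SLAM2 Mono /opt/ORB_SLAM2/Vocabulary/ORBvoc.txt /path/to/settings.yaml"]
```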
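To make step 2 concrete, here's a rough sketch of the browser side. The message-building helper is plain JavaScript; the capture-and-publish wiring assumes roslibjs is loaded and a rosbridge server is reachable at a placeholder `ws://` URL. The topic name `/camera/image/compressed` is my own choice, not something ORB_SLAM2 requires; if I recall correctly, ORB_SLAM2's ROS monocular example subscribes to raw `sensor_msgs/Image` on `/camera/image_raw`, so a server-side relay that decompresses and republishes may be needed.

```javascript
// Sketch: capture a camera frame in the browser and publish it to ROS
// as a sensor_msgs/CompressedImage via roslibjs + rosbridge.
// The ws:// URL and topic name below are placeholders, not requirements.

// Pure helper: turn a canvas JPEG data URL into a plain object shaped
// like a sensor_msgs/CompressedImage message.
function frameToCompressedImageMsg(dataUrl, frameId, seq) {
  const prefix = 'data:image/jpeg;base64,';
  if (!dataUrl.startsWith(prefix)) {
    throw new Error('expected a JPEG data URL');
  }
  return {
    header: { seq: seq, stamp: { secs: 0, nsecs: 0 }, frame_id: frameId },
    format: 'jpeg',
    data: dataUrl.slice(prefix.length), // base64 payload only
  };
}

// Browser-only wiring (assumes roslibjs is loaded on the page):
//
//   const ros = new ROSLIB.Ros({ url: 'ws://my-rosbridge-host:9090' });
//   const topic = new ROSLIB.Topic({
//     ros: ros,
//     name: '/camera/image/compressed',
//     messageType: 'sensor_msgs/CompressedImage',
//   });
//
//   const video = document.querySelector('video');
//   navigator.mediaDevices.getUserMedia({ video: true })
//     .then((stream) => { video.srcObject = stream; });
//
//   let seq = 0;
//   setInterval(() => {
//     const canvas = document.createElement('canvas');
//     canvas.width = video.videoWidth;
//     canvas.height = video.videoHeight;
//     canvas.getContext('2d').drawImage(video, 0, 0);
//     const dataUrl = canvas.toDataURL('image/jpeg');
//     topic.publish(new ROSLIB.Message(
//       frameToCompressedImageMsg(dataUrl, 'phone_camera', seq++)));
//   }, 100); // ~10 Hz; tune to your bandwidth
```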

That's it! Are you interested in trying this idea out? Do you have experience with visual SLAM and suggestions? Let me know; I'd love to hear your thoughts.


Update: it looks like se2lam (github.com/izhengfan/se2lam) could be used instead of ORB_SLAM2.