
iPhone Robotics Visual-SLAM algorithm implementation

I have a tracked robot with an iPhone for brains. It has two cameras: one built into the iPhone and a separate standalone camera. The phone has GPS, a gyroscope, an accelerometer, and a magnetometer, with sensor fusion that separates user-induced acceleration from gravity, so the phone can determine its own attitude in space. I would like to teach the robot to at least avoid walls it has bumped into before.

Can anyone suggest a starter project for Visual Simultaneous Localization and Mapping (Visual SLAM) for such a robot? I would be very grateful for an Objective-C implementation; I see some projects written in C/C++ at OpenSlam.org, but those are not my strongest programming languages.

I do not have access to laser rangefinders, so any articles, keywords or scholarly papers on Visual SLAM would also help.

Thank you for your input!

asked Dec 06 '25 by Alex Stone

1 Answer

I can suggest you take a look at FAB-MAP; it has an implementation in OpenCV.

I'm doing my MS thesis on visual SLAM, particularly on the loop-closure problem.

It's in C++, but if you follow the examples you will find it very easy to use.
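
To give a feel for the API: below is a minimal sketch of a FAB-MAP loop-closure check using the openFABMAP implementation that ships in OpenCV 2.4's contrib module (cv::of2). Treat it as a sketch, not a drop-in solution: the training image paths, the SURF detector settings, the cluster radius, and the probability parameters (0.39, 0) are placeholder assumptions borrowed from the openFABMAP samples, not values tuned for your robot.

// Minimal FAB-MAP loop-closure sketch against the openFABMAP code in
// OpenCV 2.4's contrib module (cv::of2). File names, detector settings,
// and probability parameters are placeholder assumptions from the
// openFABMAP samples, not tuned values.
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>        // SURF lives in the non-free module
#include <opencv2/contrib/openfabmap.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::initModule_nonfree();

    cv::Ptr<cv::FeatureDetector> detector = new cv::SURF(1000);
    cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SURF(1000);
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("FlannBased");

    // 1. Gather raw SURF descriptors from training frames (placeholder paths).
    std::vector<cv::Mat> frames;
    cv::Mat rawDescriptors;
    for (int i = 0; i < 100; ++i) {
        cv::Mat f = cv::imread(cv::format("train_%03d.png", i),
                               CV_LOAD_IMAGE_GRAYSCALE);
        if (f.empty()) continue;
        std::vector<cv::KeyPoint> kpts;
        cv::Mat d;
        detector->detect(f, kpts);
        extractor->compute(f, kpts, d);
        if (d.empty()) continue;
        frames.push_back(f);
        rawDescriptors.push_back(d);
    }

    // 2. Cluster a visual vocabulary, then express each training frame
    //    as a bag-of-words histogram over that vocabulary.
    cv::of2::BOWMSCTrainer vocabTrainer(0.45);    // cluster radius: a guess
    vocabTrainer.add(rawDescriptors);
    cv::BOWImgDescriptorExtractor bow(extractor, matcher);
    bow.setVocabulary(vocabTrainer.cluster());

    cv::Mat bowTrainData;                         // one row per frame
    for (size_t i = 0; i < frames.size(); ++i) {
        std::vector<cv::KeyPoint> kpts;
        cv::Mat b;
        detector->detect(frames[i], kpts);
        bow.compute(frames[i], kpts, b);
        bowTrainData.push_back(b);
    }

    // 3. Learn the Chow-Liu tree and build the FAB-MAP 2.0 detector.
    cv::of2::ChowLiuTree treeBuilder;
    treeBuilder.add(bowTrainData);
    cv::of2::FabMap2 fabmap(treeBuilder.make(0.0), 0.39, 0,
                            cv::of2::FabMap::SAMPLED |
                            cv::of2::FabMap::CHOW_LIU);
    fabmap.addTraining(bowTrainData);

    // 4. Compare a new camera frame against the places seen so far; a
    //    confident match to an earlier index is a loop closure, i.e.
    //    "the robot has been at this wall before".
    cv::Mat query = cv::imread("query.png", CV_LOAD_IMAGE_GRAYSCALE);
    std::vector<cv::KeyPoint> kpts;
    cv::Mat b;
    detector->detect(query, kpts);
    bow.compute(query, kpts, b);

    std::vector<cv::of2::IMatch> matches;
    fabmap.compare(b, matches, true);             // true: add query to the map
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].imgIdx >= 0 && matches[i].match > 0.98)
            std::cout << "revisited place " << matches[i].imgIdx
                      << " (p=" << matches[i].match << ")\n";
    return 0;
}

Since you are on iOS, you do not need to port this to Objective-C: rename your source files to .mm and the compiler treats them as Objective-C++, which lets you mix C++ calls like these with your existing Objective-C code.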

answered Dec 08 '25 by Michele mpp Marostica


