OpenVSLAM: A Versatile Visual SLAM Framework
Updated Feb 25, 2021
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
An Invitation to 3D Vision: A Tutorial for Everyone
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
Unsupervised Scale-consistent Depth Learning from Video (IJCV2021 & NeurIPS 2019)
Robotics with GPU computing
A general framework for map-based visual localization. It contains: 1) map generation, supporting traditional or deep-learning features; 2) hierarchical localization in a visual (point or line) map; 3) a fusion framework with IMU, wheel odometry, and GPS sensors.
Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction
Depth and Flow for Visual Odometry
This repository is C++ OpenCV implementation of Stereo Odometry
Learning Depth from Monocular Videos using Direct Methods, CVPR 2018
Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
A simple monocular visual odometry (part of vSLAM) using ORB keypoints, with initialization, tracking, a local map, and bundle adjustment. (Warning: this project is tuned for a course demo, not for real-world applications!)
MATLAB Implementation of Visual Odometry using SOFT algorithm
Efficient monocular visual odometry for ground vehicles on ARM processors
Papers related to deep learning and 3D vision
RGB-D Encoder SLAM for a Differential-Drive Robot in Dynamic Environments
A bunch of state estimation algorithms
Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
Simultaneous Visual Odometry, Object Detection, and Instance Segmentation
This repository aims to enable autonomous drone delivery with the Intel Aero RTF drone and the PX4 autopilot. The code can run both on the real drone and in simulation on a PC using Gazebo. Its core is a Robot Operating System (ROS) node that communicates with the PX4 autopilot through mavros. It uses SVO 2.0 for visual odometry, WhyCon for visual marker localization, and Ewok for trajectory planning with collision avoidance.
Ros package for Edge Alignment with Ceres solver
EndoSLAM Dataset and an Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
Code for T-ITS paper "Unsupervised Learning of Depth, Optical Flow and Pose with Occlusion from 3D Geometry" and for ICRA paper "Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks".
Implementation of DeepVO (ICRA 2017)
Visual odometry using optical flow and neural networks
Deep Monocular Visual Odometry using PyTorch (Experimental)
Training Deep SLAM on Single Frames https://arxiv.org/abs/1912.05405