Channel: ROS Answers: Open Source Q&A Forum - RSS feed

rplidar A2 amcl parameters

I am using an RPLidar A2 for mapping and localization. I have two different platforms: a Raspberry Pi 3B+ and an Intel NUC, both running ROS Kinetic. Has anyone optimized the amcl parameters for use with the A2?
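
I don't know of published "official" values, but a reasonable starting point is to align amcl's laser model with the A2's spec and tune from there. A minimal sketch (the range values are assumptions for an A2M8-class unit; check your model's datasheet):

    <node pkg="amcl" type="amcl" name="amcl">
      <param name="laser_model_type" value="likelihood_field"/>
      <param name="laser_max_beams"  value="180"/>  <!-- subsample the 360 deg scan -->
      <param name="laser_min_range"  value="0.15"/> <!-- A2 blind zone -->
      <param name="laser_max_range"  value="12.0"/> <!-- assumption: 8-12 m depending on A2 version -->
      <param name="min_particles"    value="500"/>
      <param name="max_particles"    value="2000"/>
    </node>

On the Pi 3B+ it may also be worth lowering `max_particles`, since amcl's update cost scales with the particle count.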

HectorSLAM catkin_make crashes Pi - Can I copy workspaces?

Hey guys, I'm relatively new to ROS so my understanding may be a bit average, but here's my situation. I'm trying to build an autonomous car using an RPLiDAR and Hector SLAM, with a Raspberry Pi as the onboard processor running ROS Kinetic on Ubuntu Mate. When I try to build a catkin workspace with the hector_slam and rplidar_ros packages, the Pi crashes at 5%. I've tried the -j1 option with catkin_make; the build then gets to about 90% and crashes again. I assume this is because the Pi doesn't have the processing power to handle the build. I have two solutions in mind. First, I've built the same workspace on my laptop and it works, but when I copy it to my Pi and try to run it I get an error saying the file doesn't exist. Is it possible to simply copy a prebuilt catkin workspace to another machine and use it? Second, I'm thinking of running the workspace on my laptop and retrieving the USB data from the Pi over SSH to build the maps. Is that possible? Hopefully this isn't too much rambling; it's my first post. Thanks in advance!
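
For what it's worth, catkin_make killing a Pi mid-build is usually memory exhaustion rather than CPU. A minimal mitigation sketch, assuming a Pi image with no swap configured:

    # Add a 1 GB swapfile so the compiler doesn't exhaust the Pi's RAM
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # Then build serially to keep peak memory low
    catkin_make -j1

On copying workspaces: the devel/ and build/ spaces contain absolute paths and architecture-specific binaries, so a laptop build (x86) will not run on a Pi (ARM). Copying only the src/ folder and rebuilding on the Pi, or running the heavy nodes on the laptop against a ROS master on the Pi, are the usual approaches.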

RPLIDAR A3 launch error (exit code -6)

I get the following error with an RPLIDAR A3:

    SUMMARY
    ========

    PARAMETERS
     * /rosdistro: indigo
     * /rosversion: 1.11.21
     * /rplidarNode/angle_compensate: True
     * /rplidarNode/frame_id: laser
     * /rplidarNode/inverted: False
     * /rplidarNode/scan_mode: Sensitivity
     * /rplidarNode/serial_baudrate: 256000
     * /rplidarNode/serial_port: /dev/ttyUSB0

    NODES
      /
        rplidarNode (rplidar_ros/rplidarNode)
        rviz (rviz/rviz)

    auto-starting new master
    process[master]: started with pid [4082]
    ROS_MASTER_URI=http://localhost:11311

    setting /run_id to e480e554-b499-11e8-80d3-e0d55eaaa97a
    process[rosout-1]: started with pid [4095]
    started core service [/rosout]
    process[rplidarNode-2]: started with pid [4103]
    process[rviz-3]: started with pid [4113]
    [ INFO] [1536543369.462962123]: RPLIDAR running on ROS package rplidar_ros. SDK Version:1.7.0
    RPLIDAR S/N: 6BF39AF2C1EA9FC0A2EB92F1F8203C02
    [ INFO] [1536543369.966742531]: Firmware Ver: 1.24
    [ INFO] [1536543369.966776559]: Hardware Rev: 6
    [ INFO] [1536543369.967689753]: RPLidar health status : 0
    [ INFO] [1536543370.582106429]: current scan mode: Sensitivity, max_distance: 25.0 m, Point number: 15.9K , angle_compensate: 4
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    [rplidarNode-2] process has died [pid 4103, exit code -6, cmd /opt/ros/indigo/lib/rplidar_ros/rplidarNode __name:=rplidarNode __log:=/home/ksj/.ros/log/e480e554-b499-11e8-80d3-e0d55eaaa97a/rplidarNode-2.log].
    log file: /home/ksj/.ros/log/e480e554-b499-11e8-80d3-e0d55eaaa97a/rplidarNode-2*.log

I hit this same error every time I launch. Please provide a solution to this problem. Thank you.
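
The `std::bad_alloc` right after the scan-mode banner suggests the driver shipped with Indigo (SDK 1.7.0) is mishandling the A3's high point-count Sensitivity mode. As an assumption worth testing, either update `rplidar_ros` to the latest release, or force the A3 into `Standard` mode via the parameters already visible in the log above:

    <!-- hypothetical override: drop the A3 back to Standard scan mode -->
    <node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
      <param name="serial_port"     type="string" value="/dev/ttyUSB0"/>
      <param name="serial_baudrate" type="int"    value="256000"/>
      <param name="frame_id"        type="string" value="laser"/>
      <param name="scan_mode"       type="string" value="Standard"/>
    </node>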

ERROR: cannot launch node of type [rviz/rviz]: can't locate node [rviz] in package [rviz]

Hi guys, I'm using a Raspberry Pi 3B to run the rplidar and show the scan result in rviz. I followed the rplidar lesson on GitHub, but when I execute `roslaunch rplidar_ros view_rplidar.launch` it always shows **"ERROR: cannot launch node of type [rviz/rviz]: can't locate node [rviz] in package [rviz]"**. When I start rviz directly ("rviz rviz") to check it, it opens successfully. I've also tried 1. `roslaunch rplidar_ros rplidar.launch` and 2. `rosrun rplidar_ros rplidarNodeClient`; both work and print the lidar readings in the terminal. Please help me :( Thanks in advance!
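
When roslaunch can find the rviz *package* but not the *node*, the executable is often missing or not marked executable at the expected path. A few generic diagnostic commands (standard ROS/apt tooling, nothing specific to this setup):

    rospack find rviz                    # where roslaunch is looking
    ls -l /opt/ros/kinetic/lib/rviz/     # the rviz node should be an executable here
    sudo apt-get install --reinstall ros-kinetic-rviz   # reinstall if it is missing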

About rplidar A1 scan_mode

Hi guys, I'm using a Slamtec RPLidar A1 and found that the default scanning mode is 2k samples per second. **I want to change it to 8k samples per second (the highest)**, so I tried setting the scan_mode (Boost) in the launch file, but it still won't switch to 8k (errors show up). Does anyone have an idea how to change the rplidar scanning mode? Thanks in advance!!
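
For reference, the mode is selected through the `scan_mode` parameter of `rplidarNode`; a sketch assuming a recent `rplidar_ros` release (older releases ignore the parameter, and A1 units on old firmware may not expose Boost at all):

    <node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
      <param name="serial_port"     type="string" value="/dev/ttyUSB0"/>
      <param name="serial_baudrate" type="int"    value="115200"/>  <!-- A1 default baud -->
      <param name="scan_mode"       type="string" value="Boost"/>   <!-- 8k samples/s on the A1 -->
    </node>

If errors persist with this set, posting the exact console output would help narrow down whether it is a firmware or a driver-version limitation.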

RESULT_OPERATION_TIMEOUT error on raspberry pi 3 B when interfacing RPLIDAR A1 with rplidar_ros package

I am trying to interface an **RPLIDAR A1** with my **Raspberry Pi 3 B (2015 model)** running Ubuntu 16.04 and ROS **Kinetic**. I receive a **RESULT_OPERATION_TIMEOUT** error when I use **roslaunch rplidar_ros rplidar.launch**. I have tried multiple OS images and the standalone SDK, but the error persists. Using the SDK: running **ultra_simple /dev/ttyUSB0** gives "Error, cannot bind to the specified serial port /dev/ttyUSB0", and running **simple_grabber /dev/ttyUSB0** gives "Error, operation timed out". Everything works fine on my PC running Ubuntu 16.04 and ROS Kinetic with the same USB cable and USB-to-serial converter. I have the correct permissions set for /dev/ttyUSB0, the correct udev rules, and the user is a member of the dialout group. Is this because the Raspberry Pi 3 B only has USB 2.0 ports? Or is the port blocked? Any help finding a solution would be appreciated.
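
USB 2.0 is not the issue (the A1's serial link is far slower than that). Before suspecting the ports, it may be worth ruling out power delivery and enumeration; a few generic checks, assuming the adapter really shows up as /dev/ttyUSB0:

    dmesg | grep -i tty       # confirm the adapter enumerates and isn't resetting
    ls -l /dev/ttyUSB0        # confirm group/permissions
    groups                    # confirm the user really is in dialout
    vcgencmd get_throttled    # Pi only, if firmware tools are installed: non-zero means under-voltage

Under-voltage on the Pi's USB bus is a common cause of A1 timeouts, since the scanner motor draws its power from the same port; a powered USB hub or a stronger supply is a cheap experiment.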

Mobile Robot Senior Design advice

Hello everyone, I'm in the process of building a mobile robot (MR) for my senior design project in college and was wondering if someone could suggest some packages. We finished building the MR with 4-wheel drive, each wheel driven by a value in the range -255 to 255. All the electronics are in place; we are using: 1. **Raspberry Pi 3 Model B (Ubuntu MATE 16.04 - ROS Kinetic)** 2. **Adafruit Motor HAT (has its own package, which takes 4 values to drive the base)** 3. **RPLidar (using rplidar_ros)** 4. **Encoders on the 2 front wheels (no package for these yet)**. The idea of the project is an MR that navigates a known environment. It will always start in a specific location, then navigate to a location chosen by user input, driving autonomously to the goal and arriving with the required pose (we are trying to make this part autonomous too). It will be something like an autonomous valet system, where people drop their car at a specific spot and the car drives itself to a goal; the map is pre-built and known to the MR. We already built the map using **hector_slam** and saved it as a .yaml file with the **map_server map_saver** command (the map scaling is bad; we have to zoom in a lot to see what we got from the **rplidar**). We can also move the robot using the teleop provided by the HAT package. If you have any suggestions or advice, or would like to help, please let me know - we are really new to ROS and would appreciate your time and help!
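
For the "drive to predefined goal number N" part, the standard approach is the navigation stack's move_base action interface. A minimal sketch, assuming move_base is already configured and running; the goal coordinates below are placeholders, not real map positions:

    #!/usr/bin/env python
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    # Hypothetical predefined goals: (x, y, quaternion z, quaternion w) in the map frame
    GOALS = {1: (1.0, 0.5, 0.0, 1.0),
             2: (2.0, -1.0, 0.707, 0.707)}

    def send_goal(n):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()
        x, y, qz, qw = GOALS[n]
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.z = qz
        goal.target_pose.pose.orientation.w = qw
        client.send_goal(goal)
        client.wait_for_result()

    if __name__ == '__main__':
        rospy.init_node('goal_sender')
        send_goal(1)

A node like this could subscribe to a topic carrying the goal number from the other computer and call `send_goal` accordingly.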

Laser Filter and Hector Mapping Issue

Hi all, I recently added the `angular_bounds_filter` from the `laser_filters` package in front of Hector SLAM for map building, limiting my RPLidar A2 laserscan from 360 deg to 180 deg. However, since doing so, when I launch Hector mapping there is a strong tendency to drift and I get multiple `SearchDir angle change too large` warnings, while my frame in rviz jumps around. All this happens while the robot is stationary. Why is this happening? This particular filter is also not documented on the `laser_filters` wiki page.
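
For context, an angular-bounds chain is configured roughly like this (a sketch for the `scan_to_scan_filter_chain` node; `lower_angle`/`upper_angle` are radians in the scan frame):

    scan_filter_chain:
      - name: angle_bounds
        type: laser_filters/LaserScanAngularBoundsFilter
        params:
          lower_angle: -1.5708   # -90 deg
          upper_angle:  1.5708   # +90 deg

One thing worth considering: cutting the field of view in half removes the rear features Hector's scan matcher was using as constraints, and a poorly constrained match can by itself produce `SearchDir angle change too large` warnings and a jumping pose even on a stationary robot.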

[newbie] Need help with custom mobile robot navigation (multiple questions)

Good evening guys (questions at the bottom). I'm a robotics student working on my senior design and REALLY NEED SOME HELP. The idea of the project: we build our own environment, build a 2D map of it using hector_slam or gmapping, then send this map to the MR so it can navigate. The MR will always start from the same spot, and we give it a goal and pose using rviz. (The ultimate goal is to have the goals and poses of specific spots on the map predefined. In other words, I want to send a number (1, 2, or 3) selecting a predefined goal and pose, and the MR drives from point A, the predefined start point, to that goal. I want to send the number from another computer over the network.) I'm VERY new to ROS and trying my best to learn and finish this project so I can graduate. My team and I finished building the 4-wheel-drive robot using:

- mobile-robot chassis: http://www.lynxmotion.com/p-603-aluminum-4wd1-rover-kit.aspx
- Raspberry Pi 3 Model B running Ubuntu MATE 16.04 & ROS Kinetic
- Adafruit motor drive: https://learn.adafruit.com/adafruit-dc-and-stepper-motor-hat-for-raspberry-pi/overview
- optical encoders on 2 of the 4 wheels
- lidar sensor: https://www.slamtec.com/en/Lidar/A3Spec

My current achievements:

- Installed ROS, the dependencies, and the packages
- Using `roslaunch rplidar_ros rplidar.launch` to get the data on the /scan topic
- Using `roslaunch hector_slam tutorial.launch` to build a map
- Using `rosrun map_server map_saver -f mymap` to save a map
- Using `rosrun map_server map_server mymap.yaml` to publish the saved map
- Using the motor-hat-node package & its teleop (W-A-S-D) to manually drive the robot around to build the map
- Built the environment with dimensions in Gazebo
- Built the mobile robot with dimensions in Gazebo

Stuck on:

- Publishing odometry (is that the encoder data? see the sketch after this list)
- Navigation (which package to use? I have the navigation stack installed but no idea how to use it)
- Localization (is that done by the navigation stack?)
- How to add encoders for the wheels in Gazebo?
- How to put the rplidar I have in real life on top of the robot in Gazebo?
- How to establish communication between any of the navigation packages and my robot, in simulation or in real life?

Packages I looked at but have no idea how to use for navigation:

- navigation stack
- mapping
- hector_slam
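
On the odometry question: yes, wheel odometry is typically computed from the encoder ticks and published as a nav_msgs/Odometry message plus an odom->base_footprint tf, which is what the navigation stack expects. A minimal differential-drive sketch; the `left_ticks`/`right_ticks` topics and the geometry constants are assumptions you must replace with your own:

    #!/usr/bin/env python
    import math
    import rospy
    import tf
    from std_msgs.msg import Int64
    from nav_msgs.msg import Odometry

    TICKS_PER_METER = 4000.0   # assumption: measure for your encoders/wheels
    WHEEL_BASE = 0.30          # assumption: distance between wheels [m]

    class DiffDriveOdom(object):
        def __init__(self):
            self.x = self.y = self.th = 0.0
            self.last = None
            self.cur = {'l': 0, 'r': 0}
            rospy.Subscriber('left_ticks', Int64, self.cb, 'l')
            rospy.Subscriber('right_ticks', Int64, self.cb, 'r')
            self.pub = rospy.Publisher('odom', Odometry, queue_size=10)
            self.br = tf.TransformBroadcaster()

        def cb(self, msg, side):
            self.cur[side] = msg.data

        def update(self):
            if self.last is None:
                self.last = dict(self.cur)
                return
            # Distance travelled by each wheel since the last update
            dl = (self.cur['l'] - self.last['l']) / TICKS_PER_METER
            dr = (self.cur['r'] - self.last['r']) / TICKS_PER_METER
            self.last = dict(self.cur)
            d = (dl + dr) / 2.0
            dth = (dr - dl) / WHEEL_BASE
            self.x += d * math.cos(self.th + dth / 2.0)
            self.y += d * math.sin(self.th + dth / 2.0)
            self.th += dth
            q = tf.transformations.quaternion_from_euler(0, 0, self.th)
            now = rospy.Time.now()
            self.br.sendTransform((self.x, self.y, 0), q, now,
                                  'base_footprint', 'odom')
            odom = Odometry()
            odom.header.stamp = now
            odom.header.frame_id = 'odom'
            odom.child_frame_id = 'base_footprint'
            odom.pose.pose.position.x = self.x
            odom.pose.pose.position.y = self.y
            odom.pose.pose.orientation.z = q[2]
            odom.pose.pose.orientation.w = q[3]
            self.pub.publish(odom)

    if __name__ == '__main__':
        rospy.init_node('diff_drive_odom')
        node = DiffDriveOdom()
        rate = rospy.Rate(20)
        while not rospy.is_shutdown():
            node.update()
            rate.sleep()

With odometry, a map, and /scan in place, move_base (path planning) plus amcl (localization against the saved map) are the usual next pieces; both are part of the navigation stack you already installed.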

How to detect obstacles using rplidar and hector_slam/hector_mapping

In an industrial setting, a robot picks up apples and sorts them. The robot moves fast; if any human comes near it, it should slow down. For that purpose I want to use an RPLidar A2 in a fixed position to detect any human or other obstacle approaching the danger zone. So far, using the rplidar Python package, I was able to extract the data from it. As I am totally new, I do not know how to proceed. I was thinking I could map the environment beforehand using hector_slam, as I have seen [here](https://www.youtube.com/watch?v=pCF7P7u8pDk), so the robot can sense the environment, and later, when the environment changes, it could decide whether a human or obstacle is near the robot. After I have the image of the environment, what would be the next step? I would be glad if anyone could give me an idea of how to achieve this.
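
Since the lidar is fixed, a simpler first step than full SLAM is to watch the raw scan for returns inside a danger radius and publish a slow-down flag. A minimal sketch, assuming the rplidar_ros /scan topic; the `danger_zone` output topic and radius are made up for illustration:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan
    from std_msgs.msg import Bool

    DANGER_RADIUS = 1.5  # assumption: metres around the fixed lidar

    def scan_cb(scan, pub):
        # True if any valid return falls inside the danger radius
        intrusion = any(scan.range_min < r < DANGER_RADIUS for r in scan.ranges)
        pub.publish(Bool(data=intrusion))

    if __name__ == '__main__':
        rospy.init_node('danger_zone_monitor')
        pub = rospy.Publisher('danger_zone', Bool, queue_size=1)
        rospy.Subscriber('scan', LaserScan, scan_cb, pub)
        rospy.spin()

A natural refinement is to record a background scan of the empty cell and compare each live scan against it, so fixed machinery inside the radius does not trigger the slow-down but a newly appeared object (a person) does.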

rplidar_ros fails to build on macOS

Dear ROS community, while trying to build the rplidar_ros package in my catkin_ws I get the following error:

    Scanning dependencies of target rplidarNodeClient
    Scanning dependencies of target rplidarNode
    [ 11%] Building CXX object CMakeFiles/rplidarNodeClient.dir/src/client.cpp.o
    [ 33%] Building CXX object CMakeFiles/rplidarNode.dir/sdk/src/arch/macOS/net_serial.cpp.o
    [ 33%] Building CXX object CMakeFiles/rplidarNode.dir/src/node.cpp.o
    [ 44%] Building CXX object CMakeFiles/rplidarNode.dir/sdk/src/arch/macOS/net_socket.cpp.o
    [ 55%] Building CXX object CMakeFiles/rplidarNode.dir/sdk/src/arch/macOS/timer.cpp.o
    [ 66%] Building CXX object CMakeFiles/rplidarNode.dir/sdk/src/hal/thread.cpp.o
    [ 77%] Building CXX object CMakeFiles/rplidarNode.dir/sdk/src/rplidar_driver.cpp.o
    In file included from /Users/alessandrodortenzio/catkin_ws/src/rplidar_ros/sdk/src/rplidar_driver.cpp:41:
    /Users/alessandrodortenzio/catkin_ws/src/rplidar_ros/./sdk/src/hal/locker.h:102:21: error: use of undeclared identifier 'pthread_mutex_timedlock'
            switch (pthread_mutex_timedlock(&_lock,&wait_time))
                    ^
    1 error generated.
    make[2]: *** [CMakeFiles/rplidarNode.dir/sdk/src/rplidar_driver.cpp.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    [ 88%] Linking CXX executable /Users/alessandrodortenzio/catkin_ws/devel_isolated/rplidar_ros/lib/rplidar_ros/rplidarNodeClient
    [ 88%] Built target rplidarNodeClient
    make[1]: *** [CMakeFiles/rplidarNode.dir/all] Error 2
    make: *** [all] Error 2
    <== Failed to process package 'rplidar_ros':
      Command '['/Users/alessandrodortenzio/catkin_ws/install_isolated/env.sh', 'make', '-j4', '-l4']' returned non-zero exit status 2

    Reproduce this error by running:
    ==> cd /Users/alessandrodortenzio/catkin_ws/build_isolated/rplidar_ros && /Users/alessandrodortenzio/catkin_ws/install_isolated/env.sh make -j4 -l4

By looking online I found that the function causing the error, `pthread_mutex_timedlock`, is not available on OS X: it is an optional POSIX extension that Apple's libc does not implement. However, the rplidar GitHub page lists OS X as a supported operating system. My question is: what am I doing wrong? Is there any workaround for this error? Thanks in advance for your time. Sincerely, Alessandro

RPLIDAR A1 scan error while using rplidar_ros

I am getting the following error while using an RPLIDAR A1 with the rplidar_ros package on Ubuntu 16.04 and ROS Kinetic. When I run `roslaunch rplidar_ros test_rplidar.launch` I get:

    Scan mode `Sensitivity' is not supported by lidar, supported modes:
    [ERROR] [1549015837.483212801]:     Standard: max_distance: 12.0 m, Point number: 2.0K
    [ERROR] [1549015837.483229470]:     Express: max_distance: 12.0 m, Point number: 4.0K
    [ERROR] [1549015837.483247611]:     Boost: max_distance: 12.0 m, Point number: 8.0K
    [ERROR] [1549015837.483264352]: Can not start scan: 80008001!

What does this mean? Is my RPLIDAR unit faulty?
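
The message itself suggests the unit is fine: the launch file is requesting the `Sensitivity` scan mode (an A3 feature), and the A1 refuses it while advertising the three modes it does support. A sketch of the likely fix, assuming `test_rplidar.launch` hard-codes `scan_mode`; set it to one of the advertised modes, or remove the parameter so the driver picks the lidar's default:

    <node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
      <param name="serial_port"     type="string" value="/dev/ttyUSB0"/>
      <param name="serial_baudrate" type="int"    value="115200"/>
      <param name="scan_mode"       type="string" value="Boost"/>  <!-- supported by the A1 -->
    </node>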

Remote Rviz visualisation / Hector_SLAM

Hello! I'm currently working on hector_slam with my Raspberry Pi 3 (ROS Kinetic / Ubuntu Mate), and it works pretty well. I use an RPLIDAR A1 from Slamtec plugged into a USB port on my Pi 3. But I want to see the rviz visualisation remotely on my PC, not on the Pi. I did all the network configuration between the Pi and my PC, but when I try my launch commands (on my PC):

    $ export ROS_MASTER_URI=http://IP_ADDRESS_OF_RASPBERRY_PI:11311
    $ export ROS_IP=IP_ADDRESS_OF_LOCAL_PC
    $ roslaunch rplidar_ros view_slam.launch

I get this error:

    [rplidarNode-1] process has died [pid 2570, exit code 255, cmd /home/janken/catkin_ws/devel/lib/rplidar_ros/rplidarNode __name:=rplidarNode __log:=/home/janken/.ros/log/195e88a2-2868-11e9-a4ec-b827eb0ab788/rplidarNode-1.log].
    log file: /home/janken/.ros/log/195e88a2-2868-11e9-a4ec-b827eb0ab788/rplidarNode-1*.log

I suppose there is a problem with my RPLIDAR; I actually suspect I get this error because the lidar is not directly wired to my PC. Maybe I didn't fully understand the role of the ROS master. Thanks for reading, I hope you can help me! I'm pretty new to this world :)
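
The guess at the end is right: `view_slam.launch` starts the rplidarNode as well as rviz, and `ROS_MASTER_URI` only controls where nodes *register*, not where they *run*, so the PC tries to open a serial port it doesn't have. A common split, assuming the lidar and hector launch files stay on the Pi:

    # On the Pi (lidar attached):
    roslaunch rplidar_ros rplidar.launch
    roslaunch hector_slam_launch tutorial.launch

    # On the PC (visualisation only):
    export ROS_MASTER_URI=http://IP_ADDRESS_OF_RASPBERRY_PI:11311
    export ROS_IP=IP_ADDRESS_OF_LOCAL_PC
    rosrun rviz rviz

rviz then subscribes to the topics published on the Pi over the network.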

AMCL / Hector_Slam problem

Hello! I'm working on a SLAM robot using a Raspberry Pi 3B with Ubuntu Mate 16.04 and ROS Kinetic. I started with the mapping (using hector_slam) and the result was pretty good. For localization, I first used a [laser_scan_matcher](http://wiki.ros.org/laser_scan_matcher) node to get the pose and rotation of the robot, and the result was pretty good without any IMU data or AMCL/EKF techniques. But with scan matching I have big problems when the robot makes large rotations: the rotation estimate drifts and my map comes out wrong. So I looked into AMCL to get better localisation/rotation estimates, but the result is even worse! I get drift instantly when I rotate the robot. Below you will find all my information. Do you see any problems in my use of AMCL? Do I strictly need an IMU to get good rotation estimates? Can I simply ignore rotation in my SLAM project and use only the position estimate?

Lidar (RPLIDAR A1)
------------------

Mapping / Localization
----------------------

![image description](/upfiles/15500140063666351.jpg) ![image description](/upfiles/15500140239384906.jpg) ![image description](/upfiles/15500140418354803.jpg)
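
One knob worth checking: amcl's motion-model noise parameters. If the odometry fed to amcl comes from scan matching that already drifts in rotation, the rotation noise terms must be generous, otherwise the filter trusts the bad rotation estimate. A starting-point sketch (the values are assumptions to tune, not recommendations):

    <node pkg="amcl" type="amcl" name="amcl">
      <param name="odom_model_type" value="diff"/>
      <param name="odom_alpha1" value="0.8"/>   <!-- rotation noise from rotation -->
      <param name="odom_alpha2" value="0.4"/>   <!-- rotation noise from translation -->
      <param name="odom_alpha3" value="0.2"/>   <!-- translation noise from translation -->
      <param name="odom_alpha4" value="0.4"/>   <!-- translation noise from rotation -->
      <param name="update_min_a" value="0.1"/>  <!-- update roughly every 6 deg of rotation -->
    </node>

An IMU is not strictly required, but fusing its yaw rate (e.g. via an EKF) is the usual cure for rotation drift with a lidar-only setup.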

evarobot_gazebo: using the /lidar topic in a C++ node

Hello, I have a problem reading values from the /lidar topic via a node. First I execute `roslaunch evarobot_gazebo evarobot.launch`. After that I can read values with `rostopic echo /lidar -n1` in another terminal without a problem, and it prints 720 distinct distance values. But my node only receives 6 distinct distance values, and it is not affected by obstacles, so it always reads "inf". I also tried the code in https://answers.ros.org/question/60239/how-to-extract-and-use-laser-data-from-scan-topic-in-ros-using-c/ but it gave a compile error. Problem documentation: https://drive.google.com/open?id=1MHL9s7wWuKziYhN4nqojooWMplPTxX5z. ulidar is the package in ~/catkin_ws/src/. PS: I'm using ros-kinetic-desktop-full. Much obliged...
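
As a cross-check, a minimal subscriber (in Python for brevity, since the C++ source isn't shown here) that prints how many ranges arrive per message; if this reports 720 values, the issue is in the C++ node's callback or message handling rather than the topic:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan

    def cb(scan):
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        rospy.loginfo('got %d ranges (%d valid)', len(scan.ranges), len(valid))

    if __name__ == '__main__':
        rospy.init_node('lidar_probe')
        rospy.Subscriber('/lidar', LaserScan, cb)
        rospy.spin()

Reading only 6 values in C++ often points to iterating a fixed-size array instead of the message's variable-length `ranges` vector.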

SBC Recommendation?

I am looking to start a project where I'll be using SLAM with a lidar. I want my robot to be able to run SLAM on its own (on board) without the help of another computer. I was told that the RPi 3B+ is not going to do the job. Which board ($150 is my limit) is powerful enough and well supported by the community? Or what are the recommended specs to run SLAM without problems on an SBC? Similarities with the Raspberry Pi would help (the display and camera ports).

Unable to avoid obstacles & robot can't go in a straight line using ROS Kinetic, move_base, and slam_gmapping.

Attempting to use the ROS navigation stack for autonomy. I'm able to see LaserScan information in rviz, but unable to avoid obstacles using the rplidar, slam_gmapping, and move_base. When I give the robot a 2D nav goal in rviz, the robot shifts right and left and then fails to go in a straight line. Any feedback would be appreciated. Most of the necessary files are below (the contents of gmapping.launch and move_base.launch did not survive posting).

**move_base.yaml**

    shutdown_costmaps: false
    track_unkown_sources: true
    controller_frequency: 5.0
    controller_patience: 15.0
    planner_frequency: 2.0
    planner_patience: 2.0
    oscillation_timeout: 0.0
    oscillation_distance: 0.5
    recovery_behavior_enabled: true
    clearing_rotation_allowed: true

**base_local_planner.yaml**

    TrajectoryPlannerROS:
      # Robot configuration parameters
      # Acceleration limits of the robot
      acc_lim_x: 2.5
      acc_lim_y: 0.0
      # Rotational acceleration limit of the robot (default setting)
      acc_lim_theta: 3.2
      # Velocity limits of the robot
      max_vel_x: 2.0
      min_vel_x: -2.0
      max_vel_theta: 4.0
      min_vel_theta: -4.0
      max_rotational_vel: 1.0
      min_in_place_rotational_vel: 2.0
      min_in_place_vel_theta: 2.0
      y_vels: [0, 0, 0, 0]
      # Velocity commanded when trying to escape from a stuck situation
      escape_vel: -1.0
      holonomic_robot: false

      # Goal tolerance default parameters
      yaw_goal_tolerance: 0.05
      xy_goal_tolerance: 0.10
      latch_xy_goal_tolerance: false

      # Forward simulation parameters
      sim_time: 1.5
      sim_granularity: 0.025
      angular_sim_granularity: 0.02
      vx_samples: 15.0
      vtheta_samples: 20.0
      controller_frequency: 10.0

      # Trajectory scoring parameters
      meter_scoring: true
      occdist_scale: 0.1
      pdist_scale: 0.75
      heading_lookahead: 0.325
      heading_scoring: false
      heading_scoring_timestep: 0.8
      dwa: false
      simple_attractor: false
      publish_cost_grid_pc: true
      oscillation_reset_dist: 0.05
      escape_reset_dist: 0.1
      escape_reset_theta: 0.1

**costmap_common_params.yaml**

    map_type: costmap_2d
    origin_z: 0.0
    z_resolution: 1
    z_voxels: 2
    obstacle_range: 2.5
    raytrace_range: 3.0
    publish_voxel_map: false
    transform_tolerance: 0.5
    meter_scoring: true
    footprint: [[-0.21, -0.165], [-0.21, 0.165], [0.21, 0.165], [0.21, -0.165]]
    footprint_padding: 0.1

    observation_sources: laser_scan_sensor
    laser_scan_sensor: {sensor_frame: /laser, data_type: LaserScan, topic: /scan, marking: true, clearing: true}

    plugins:
      - {name: obstacles_layer, type: "costmap_2d::ObstacleLayer"}
      - {name: inflater_layer, type: "costmap_2d::InflationLayer"}

    obstacles_layer:
      observation_sources: scan
      enabled: true
      scan: {sensor_frame: /laser, data_type: LaserScan, topic: /scan, marking: true, clearing: true, min_obstacle_height: -2.0, max_obstacle_height: 2.0, obstacle_range: 2.5, raytrace_range: 3.0}

    inflater_layer:
      enabled: true
      cost_scaling_factor: 5.0
      inflation_radius: 0.10

**local_costmap_params.yaml**

    local_costmap:
      global_frame: "map"
      robot_base_frame: "base_link"
      transform_tolerance: 0.2
      update_frequency: 1.0
      publish_frequency: 10.0
      width: 15.0
      height: 15.0
      resolution: 0.05
      origin_x: 0.0
      origin_y: 0.0
      static_map: false
      rolling_window: true
      always_send_full_costmap: false
      plugins:
        # - {name: static_layer, type: "costmap_2d::StaticLayer"}
        - {name: obstacles_layer, type: "costmap_2d::ObstacleLayer"}
        - {name: inflater_layer, type: "costmap_2d::InflationLayer"}
      # static_layer:
      #   unknown_cost_value: -1.0
      #   lethal_cost_threshold: 100
      #   map_topic: "map"
      #   first_map_only: false
      #   subscribe_to_updates: false
      #   track_unknown_space: true
      #   use_maximum: false
      #   trinary_costmap: true
      obstacles_layer:
        track_unknown_space: false
        footprint_clearing_enabled: true
      inflater_layer:
        inflation_radius: 0.10
        cost_scaling_factor: 5.0

**global_costmap_params.yaml**

    global_costmap:
      global_frame: "map"
      robot_base_frame: "base_link"
      transform_tolerance: 0.2
      update_frequency: 1.0
      publish_frequency: 10.0
      width: 15.0
      height: 15.0
      resolution: 0.05
      origin_x: 0.0
      origin_y: 0.0
      static_map: true
      rolling_window: false
      always_send_full_costmap: false
      plugins:
        - {name: static_layer, type: "costmap_2d::StaticLayer"}
        - {name: obstacles_layer, type: "costmap_2d::ObstacleLayer"}
        - {name: inflater_layer, type: "costmap_2d::InflationLayer"}
      static_layer:
        unknown_cost_value: -1.0
        lethal_cost_threshold: 100
        map_topic: "map"
        first_map_only: false
        subscribe_to_updates: false
        track_unknown_space: true
        use_maximum: false
        trinary_costmap: true
      obstacles_layer:
        track_unknown_space: false
        footprint_clearing_enabled: true
      inflater_layer:
        inflation_radius: 0.10
        cost_scaling_factor: 5.0

Is rplidar enough to use gmapping?

Hello, I am new to ROS; excuse me if this question sounds trivial. I am using an rplidar A1 to create a map of the environment using gmapping. I have recorded the laserscan on the /scan topic into a bag file using the rplidar_ros package. The required topics for gmapping are given as /scan and /tf. I have created the tf tree using static_transform_publisher; it follows this order: odom->base_footprint->base_link->base_laser. Nothing else is set up with the rplidar. I know that the map->odom transform is created by the slam_gmapping node itself. I can also view the laser points in rviz when I run `roslaunch rplidar_ros view_rplidar.launch`. Here is my launch file that I am using to generate the transforms (the XML did not survive posting). My odometry_publisher node publishes no data for now, as I don't have wheel encoders, but it has a transform broadcaster for odom->base_footprint. Could someone please tell me if I need any other data, such as odometry or IMU messages, for gmapping to work, or if I am missing a step? Thank you.
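
For reference, a minimal transform setup of the kind described, written as static_transform_publisher nodes in a launch file (the offsets are placeholders):

    <launch>
      <!-- placeholder geometry: laser mounted 10 cm above base_link -->
      <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
            args="0 0 0.1 0 0 0 base_link base_laser 100"/>
      <node pkg="tf" type="static_transform_publisher" name="footprint_to_base"
            args="0 0 0 0 0 0 base_footprint base_link 100"/>
    </launch>

One caveat: gmapping does need a real, *moving* odom->base_footprint transform. Broadcasting it as a fixed identity tells gmapping the robot never moves, so the map will smear as soon as the lidar is carried around. Wheel encoders, laser scan matching (e.g. laser_scan_matcher), or switching to hector_slam (which works without odometry) are the usual ways to fill that gap.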

Random movement with collision avoidance (LIDAR)

Hi ROS community! I'm fairly new to ROS and have been experimenting with an RPLidar. I wanted to ask if there is an example I can deploy on a robot to let it drive around randomly and avoid obstacles with an RPLidar. I don't need SLAM algorithms, just simple random movement. Thanks for your help in advance!
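
I'm not aware of a single canonical package for this, but the logic is small enough to write directly: drive forward, and when the scan shows something close ahead, stop and turn. A minimal sketch, assuming a /scan topic and a /cmd_vel-driven base; the cone width, speeds, and the assumption that index 0 points forward all need adapting to your robot:

    #!/usr/bin/env python
    import random
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    SAFE_DIST = 0.5  # assumption: metres before we turn away

    class Wanderer(object):
        def __init__(self):
            self.blocked = False
            self.turn_dir = 1.0
            rospy.Subscriber('scan', LaserScan, self.scan_cb)
            self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

        def scan_cb(self, scan):
            n = len(scan.ranges)
            # look at a ~60 deg cone straight ahead (assumes index 0 = forward)
            front = scan.ranges[:n // 12] + scan.ranges[-(n // 12):]
            valid = [r for r in front if scan.range_min < r < scan.range_max]
            was_blocked = self.blocked
            self.blocked = bool(valid) and min(valid) < SAFE_DIST
            if self.blocked and not was_blocked:
                self.turn_dir = random.choice([-1.0, 1.0])  # random escape direction

        def step(self):
            cmd = Twist()
            if self.blocked:
                cmd.angular.z = 0.8 * self.turn_dir  # turn in place until clear
            else:
                cmd.linear.x = 0.2  # cruise forward
            self.pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('wanderer')
        w = Wanderer()
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            w.step()
            rate.sleep()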

Creating /tf (position) from IMU, 2D/3D SLAM (LiDAR+IMU+Octomap) (no need for high accuracy)

Hello, I am trying to build a simple SLAM system consisting of:

- RPLiDAR A2 M8
- Razor IMU 9DOF M0

**Used packages:**

- octomap_mapping
- razor_imu_9dof (razor-pub.launch)
- rplidar_ros (rplidar.launch)

**Own packages:**

- mctr-132_ros (sends the transform of the world at 0,0,0 and the transform of the laser, which has location 0,0,0 and whose orientation is the quaternion from the IMU)
- mctr-132_laser (only converts LaserScan to PointCloud)

**System: Ubuntu 16.04, ROS Kinetic**

I am at the stage where I put the points into a map and can visualize it in rviz, but I only take the IMU rotation into account; the position is published to /tf as 0, 0, 0. I can therefore build a map of the surrounding environment, but I'd like to move the device around, so I need a proper /tf with a position transform. I was thinking about robot_pose_ekf, but it needs odometry and I have no idea how to create fake odometry from a 9DOF IMU.

**Q1: Is it possible to create fake odometry from an IMU, and if so, how?**

Many thanks

PS: I would upload a photo, but 5 points are required...
PS: I tried google cartographer, hector_localization, and rf2o_laser_odometry, but I failed to build/run them...
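
On Q1: you *can* publish a nav_msgs/Odometry message whose orientation comes from the IMU and whose position comes from double-integrating the linear acceleration, which satisfies robot_pose_ekf's input format, but be warned that position from a consumer IMU alone drifts badly within seconds; it is "fake" in every sense. A minimal sketch of the idea, assuming the razor_imu_9dof node publishes sensor_msgs/Imu on an `imu` topic:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Imu
    from nav_msgs.msg import Odometry

    class FakeOdom(object):
        def __init__(self):
            self.x = self.vx = 0.0
            self.last_t = None
            self.pub = rospy.Publisher('odom', Odometry, queue_size=10)
            rospy.Subscriber('imu', Imu, self.imu_cb)

        def imu_cb(self, imu):
            t = imu.header.stamp.to_sec()
            if self.last_t is not None:
                dt = t - self.last_t
                # naive 1-D double integration along x: drifts very quickly,
                # and gravity leakage makes it worse without bias removal
                self.vx += imu.linear_acceleration.x * dt
                self.x += self.vx * dt
            self.last_t = t
            odom = Odometry()
            odom.header.stamp = imu.header.stamp
            odom.header.frame_id = 'odom'
            odom.child_frame_id = 'base_link'
            odom.pose.pose.position.x = self.x
            odom.pose.pose.orientation = imu.orientation  # reuse IMU attitude
            self.pub.publish(odom)

    if __name__ == '__main__':
        rospy.init_node('fake_odom')
        FakeOdom()
        rospy.spin()

In practice, a laser-based odometry source (rf2o_laser_odometry or hector's scan matching, which you already tried) is the more workable substitute, since the A2 provides geometry that the IMU cannot.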

