Channel: ROS Answers: Open Source Q&A Forum - RSS feed
Viewing all 131 articles

RPLIDAR RVIZ

Hello guys! My question is quite simple; I have been searching for two days and couldn't find the answer. I'm building my own robot using an RPi3 with an RPLidar and SLAM for mapping. How can I see the position of the robot in the RViz output (I can see the scan data and the map, but not where my robot is), or at least its frame? I tried a lot of configurations in the RViz control panel and nothing happened. Thanks!

What is the benefit of using LIDAR and Encoders vs Indoor Beacons

I'm trying to figure out why there is so much documentation on using LIDAR and odometry data for indoor localization, but not much on using indoor Bluetooth beacons for robot localization. A lot of people seem to use either LIDAR + encoders or beacons only (Bluetooth beacon triangulation). I'm confused as to why there are no mentions of indoor localization solutions that combine LIDAR + encoders with Bluetooth beacons. I'm also wondering how a single-plane LIDAR (such as the RPLidar) could determine a robot's location in a large environment (a 50 x 50 meter room), especially within similar hallways that give similar laser scan readings. All of my research on indoor localization points to LIDAR + odometry, but there are hardly any mentions of indoor navigation and localization using Bluetooth or radio beacon triangulation. Why is that, and what are the advantages and disadvantages of each system? If I were to incorporate beacons into my indoor localization system, what would be the appropriate route given that I already have LIDAR and odometry data for AMCL (where would the Bluetooth beacon system fit into the equation)?
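For what it's worth, the "where would beacons fit in" part usually reduces to turning per-beacon range estimates into a position fix, which can then be fused with the LIDAR/odometry estimate (for example, used to re-initialize AMCL when it diverges). A minimal 2-D trilateration sketch, assuming exactly three non-collinear beacons at known positions; all names here are illustrative:

```python
def trilaterate(beacons, ranges):
    """2-D trilateration from exactly three non-collinear beacons.
    beacons: [(x, y)] known positions; ranges: measured distances."""
    (x0, y0), r0 = beacons[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        # linearize the circle equations by subtracting beacon 0's equation
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21  # zero if the beacons are collinear
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y
```

With noisy Bluetooth RSSI ranges you would feed more than three beacons into a least-squares or Kalman-filter version of the same idea rather than solving exactly.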

hector_slam with Sick TIM561

Hi, we are working with a robot in a garage / production-hall environment to test some car sensors. For mapping we tried an RPLidar (360°) with hector_mapping, which works very well out of the box. Unfortunately the RPLidar has problems detecting parts with black paint or car tires, so we tried a Sick TiM561-2050101, which has a 270° field of view and better resolution. Despite the higher frame rate (10 Hz vs. 15 Hz) and better resolution (0.9°/~400 pts vs. 0.33°/~800 pts), hector_mapping can't match the map on rotations. It only works at very slow speeds; mapping with linear movement seems to work at moderate speed. We tried:
- reducing the frame rate
- reducing the number of points
- moving to a faster computer (from a UDOO x86 Ultra on the robot to an Intel NUC with an i7 CPU)
- reducing the FOV
The laser data and slam_cloud are published at ~15 Hz. Why does it work out of the box with the RPLidar but not with the Sick LIDAR?
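Since "reduce the number of points" is one of the knobs being tried: a quick way to thin a scan before hector_mapping sees it is to keep every n-th reading and scale the angular increment to match. A sketch of just the core operation (the LaserScan message plumbing and topic remapping are omitted; this is plain list slicing, not a specific package's API):

```python
def decimate_scan(ranges, angle_increment, factor=2):
    """Keep every `factor`-th range reading; the angular step between
    the surviving readings grows by the same factor."""
    return ranges[::factor], angle_increment * factor
```

Halving ~800 points this way brings the TiM561 back near the RPLidar's point count while keeping its full field of view.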

Looking for a ROS distro for the BeagleBone Black Rev. C

Hello everyone. For my robotics project I'm looking for a ROS distro compatible with the BeagleBone Black Rev. C development platform. Unfortunately, when I try to install the standard Kinetic distro for Debian from here: [http://wiki.ros.org/kinetic/Installation/Debian](http://wiki.ros.org/kinetic/Installation/Debian) the system gives me this:

    E: Unable to locate package ros-kinetic-desktop-full

What is going on? Am I using the wrong version of Debian, or is it something else? Or maybe someone has had a similar problem with the BeagleBone Black? Could someone help me find a solution to this problem? I would like to use this platform with an RPLidar A1 in my robot project, but I can't do anything without ROS. My kernel and system:

    Linux beaglebone 4.4.113-ti-r147 #1 SMP Tue Feb 20 20:44:47 UTC 2018 armv7l GNU/Linux

I think that's Debian Jessie 8.10, so ROS should work without problems? Thanks for the help. Greetings, Tom

    root@beaglebone:~# apt-get update && sudo dpkg -l ros-kinetic*
    Hit http://repos.rcn-ee.com jessie InRelease
    Ign http://deb.debian.org jessie InRelease
    Hit http://packages.ros.org trusty InRelease
    Hit https://deb.nodesource.com jessie InRelease
    Hit http://deb.debian.org jessie-updates InRelease
    Hit http://deb.debian.org jessie/updates InRelease
    Hit http://deb.debian.org jessie Release.gpg
    Get:1 http://repos.rcn-ee.com jessie/main armhf Packages [578 kB]
    Hit http://deb.debian.org jessie Release
    Get:2 http://packages.ros.org trusty/main armhf Packages [750 kB]
    Get:3 https://deb.nodesource.com jessie/main armhf Packages [1,003 B]
    Get:4 http://deb.debian.org jessie-updates/main armhf Packages [23.1 kB]
    Get:5 http://deb.debian.org jessie-updates/contrib armhf Packages [20 B]
    Get:6 http://deb.debian.org jessie-updates/non-free armhf Packages [450 B]
    Get:7 http://deb.debian.org jessie/updates/main armhf Packages [614 kB]
    Get:8 http://deb.debian.org jessie/updates/contrib armhf Packages [994 B]
    Get:9 http://deb.debian.org jessie/updates/non-free armhf Packages [20 B]
    Get:10 http://deb.debian.org jessie/main armhf Packages [8,864 kB]
    Get:11 http://deb.debian.org jessie/contrib armhf Packages [44.3 kB]
    Get:12 http://deb.debian.org jessie/non-free armhf Packages [74.9 kB]
    Fetched 11.0 MB in 21s (512 kB/s)
    Reading package lists... Done
    dpkg-query: no packages found matching ros-kinetic*

RPlidar robot

I'm a bit new to all of this. I understand how to set up and start mapping with an RPLidar A1, but I'm unsure how I would get this communicating with motors. Any help would be greatly appreciated.

turtlebot estimates wrong 2D pose on rviz when using amcl_demo

Here's what I am trying to do: I am basically trying to create a SLAM map using a TurtleBot and an RPLIDAR A2. After doing so, I want to launch amcl_demo, use the Orbbec Astra, and give it some points on the map to navigate to. Here's what's happening: the robot estimates its obstacles incorrectly; it will think that the wall is closer to it than it actually is. A lot of times it will go in a direction and think it is somewhere in the map which is way off from where it actually is. I am using the turtlebot_navigation stack. Could someone suggest a way for the TurtleBot to move slowly and get a more accurate 2D pose estimate on the map?
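Not a full answer, but the symptoms described (pose drifting far from reality, walls estimated too close) are often tuned through AMCL's update-threshold and odometry-noise parameters rather than by driving more slowly. A hypothetical fragment of the kind of overrides one might pass to the amcl node; the values below are illustrative starting points, not recommendations:

```yaml
# illustrative amcl overrides: update the particle filter more often
# and model noisier odometry (alpha values are noise parameters)
update_min_d: 0.1      # update after 10 cm of translation (default 0.2)
update_min_a: 0.2      # update after ~11 degrees of rotation
min_particles: 500
max_particles: 3000
odom_alpha1: 0.4       # expected rotation noise from rotation
odom_alpha2: 0.4       # expected rotation noise from translation
```

Lowering the update thresholds makes the filter correct itself in smaller increments, which tends to help when the robot "teleports" in the map.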

How to get a custom fixed frame tf with hector_slam?

Hello! I'm currently using the [slam branch](https://github.com/robopeak/rplidar_ros/tree/slam) of the rplidar_ros package, which uses hector_slam, in order to localize a mecanum-wheel robot in a room. Since the robot will only ever be inside this room, I would like to fix a reference frame in the middle of the room, but I fail to see how to use the tf package to do so. I tried creating a fixed-frame broadcaster with the wanted origin frame **fixed_map** as parent of the usual **base_link** frame given by the hector_pose_estimation node:

    #!/usr/bin/env python
    import rospy
    import tf

    if __name__ == '__main__':
        rospy.init_node('fixed_tf_broadcaster')
        br = tf.TransformBroadcaster()
        rate = rospy.Rate(10.0)
        while not rospy.is_shutdown():
            br.sendTransform((0.0, 2.0, 0.0),
                             (0.0, 0.0, 0.0, 1.0),
                             rospy.Time.now(),
                             "base_link",
                             "fixed_map")
            rate.sleep()

and modifying the given launch file by commenting out the **link1_broadcaster**, creating a new broadcaster called **origin**, and changing the **base_frame** parameter to the **fixed_frame** I believe I'm broadcasting. The result is that I can start the launch file without problems:

    SUMMARY
    ========

    PARAMETERS
     * /hector_height_mapping/advertise_map_service: True
     * /hector_height_mapping/fixed_frame: base_link
     * /hector_height_mapping/laser_max_dist: 3.7
     * /hector_height_mapping/map_pub_period: 0.5
     * /hector_height_mapping/map_resolution: 0.05
     * /hector_height_mapping/map_size: 1024
     * /hector_height_mapping/map_start_x: 0.5
     * /hector_height_mapping/map_start_y: 0.5
     * /hector_height_mapping/map_update_angle_thresh: 0.1
     * /hector_height_mapping/map_update_distance_thresh: 0.02
     * /hector_height_mapping/map_with_known_poses: False
     * /hector_height_mapping/odom_frame: base_link
     * /hector_height_mapping/output_timing: False
     * /hector_height_mapping/pub_map_odom_transform: True
     * /hector_height_mapping/scan_topic: scan
     * /hector_height_mapping/update_factor_free: 0.45
     * /hector_height_mapping/use_tf_pose_start_estimate: False
     * /hector_height_mapping/use_tf_scan_transformation: True
     * /rosdistro: kinetic
     * /rosversion: 1.12.13
     * /rplidarNode/angle_compensate: True
     * /rplidarNode/frame_id: laser
     * /rplidarNode/inverted: False
     * /rplidarNode/serial_baudrate: 115200
     * /rplidarNode/serial_port: /dev/ttyUSB0

    NODES
      /
        hector_height_mapping (hector_mapping/hector_mapping)
        origin (bot/map_tf.py)
        rplidarNode (rplidar_ros/rplidarNode)
        rviz (rviz/rviz)

    auto-starting new master
    process[master]: started with pid [3183]
    ROS_MASTER_URI=http://localhost:11311

    setting /run_id to 5cc94820-2468-11e8-ae56-b827ebc52417
    process[rosout-1]: started with pid [3196]
    started core service [/rosout]
    process[rplidarNode-2]: started with pid [3213]
    process[origin-3]: started with pid [3214]
    process[hector_height_mapping-4]: started with pid [3223]
    process[rviz-5]: started with pid [3233]
    RPLIDAR running on ROS package rplidar_ros
    SDK Version: 1.5.7
    HectorSM map lvl 0: cellLength: 0.05 res x:1024 res y: 1024
    HectorSM map lvl 1: cellLength: 0.1 res x:512 res y: 512
    HectorSM map lvl 2: cellLength: 0.2 res x:256 res y: 256
    [ INFO] [1520689135.328928743]: HectorSM p_base_frame_: base_link
    [ INFO] [1520689135.329487337]: HectorSM p_map_frame_: map
    [ INFO] [1520689135.329763014]: HectorSM p_odom_frame_: base_link
    [ INFO] [1520689135.329990201]: HectorSM p_scan_topic_: scan
    [ INFO] [1520689135.330214785]: HectorSM p_use_tf_scan_transformation_: true
    [ INFO] [1520689135.330413170]: HectorSM p_pub_map_odom_transform_: true
    [ INFO] [1520689135.330544785]: HectorSM p_scan_subscriber_queue_size_: 5
    [ INFO] [1520689135.330684785]: HectorSM p_map_pub_period_: 0.500000
    [ INFO] [1520689135.330814056]: HectorSM p_update_factor_free_: 0.450000
    [ INFO] [1520689135.330958066]: HectorSM p_update_factor_occupied_: 0.900000
    [ INFO] [1520689135.331090358]: HectorSM p_map_update_distance_threshold_: 0.020000
    [ INFO] [1520689135.331232024]: HectorSM p_map_update_angle_threshold_: 0.100000
    [ INFO] [1520689135.331365514]: HectorSM p_laser_z_min_value_: -1.000000
    [ INFO] [1520689135.331501139]: HectorSM p_laser_z_max_value_: 1.000000
    failed to get the current screen resources
    RPLIDAR S/N: C5F5FBF8C5E299F1C4E592F734205B3F
    Firmware Ver: 1.20
    Hardware Rev: 0
    RPLidar health status : 0
    QXcbConnection: XCB error: 170 (Unknown), sequence: 163, resource id: 90, major code: 146 (Unknown), minor code: 20

But once the rviz window opens, there is an error: **Fixed Frame [map] does not exist**, and therefore I can't get the **/scan** topic nor the map. While echoing the **/tf** topic I get the transformation I imposed:

    ---
    transforms:
      -
        header:
          seq: 0
          stamp:
            secs: 1520688972
            nsecs: 640590906
          frame_id: "fixed_map"
        child_frame_id: "base_link"
        transform:
          translation:
            x: 0.0
            y: 2.0
            z: 0.0
          rotation:
            x: 0.0
            y: 0.0
            z: 0.0
            w: 1.0
    ---

So, I was wondering what's the correct way to set up a custom fixed frame in a map which doesn't correspond to the initial pose of the robot. I'm running ROS Kinetic on Ubuntu MATE 16.04.4 LTS on a Raspberry Pi 3. Thank you!
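One pattern worth trying for this kind of setup (offered as a suggestion, not the package's documented method): a tf frame can only have one parent, and hector already claims the chain down to base_link, so instead of publishing fixed_map -> base_link, hang the custom frame above hector's map frame by publishing fixed_map -> map. A launch-file sketch, reusing the (0, 2, 0) offset from the question's broadcaster:

```xml
<launch>
  <!-- sketch: put the room-centered frame above hector's map frame so
       base_link keeps a single parent chain fixed_map -> map -> base_link;
       args are x y z yaw pitch roll parent child period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="origin"
        args="0 2 0 0 0 0 fixed_map map 100" />
</launch>
```

With that chain in place, RViz's fixed frame can be set to fixed_map (or map) and both should resolve.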

Unable to launch lidar during boot

Hi, I have an [rplidar](https://github.com/robopeak/rplidar_ros) and I want to launch it during boot. I followed a ROS Answers post [here](https://answers.ros.org/question/245089/systemd-roslaunch/) and was able to launch roscore during bootup, but I am unable to launch the rplidar.

/etc/systemd/system/rosCore.service:

    [Unit]
    Description=start roscore
    After=network.target
    Restart=on-abort

    [Service]
    ExecStart=/usr/local/bin/rosCoreService.sh

    [Install]
    WantedBy=default.target

/usr/local/bin/rosCoreService.sh:

    #!/usr/bin/env bash
    bash -c "source /home/bc/.bashrc && roslaunch rplidar view_rplidar.launch"

I tried running ./rosCoreService.sh directly and it works fine. In order to have port permissions I added

    KERNEL=="ttyUSB0", GROUP="bc", MODE="0666"

in /etc/udev/rules.d/50-usb-serial.rules. Since I need port permissions, is "After=network.target" correct, or do you have any suggestions? When I replace "roslaunch rplidar view_rplidar.launch" with "roscore", roscore does launch during bootup.
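For reference, one way to make the lidar unit wait for both roscore and the serial device is to order it after the existing rosCore.service and the USB device unit. This is a sketch under the question's own paths and user name; the package name (rplidar vs. rplidar_ros) and launch file should match whatever works when run by hand:

```ini
# /etc/systemd/system/rplidar.service (sketch)
[Unit]
Description=Start the RPLidar launch file
# wait for roscore and for /dev/ttyUSB0 to appear
After=rosCore.service dev-ttyUSB0.device
Wants=dev-ttyUSB0.device

[Service]
User=bc
ExecStart=/bin/bash -lc "source /home/bc/.bashrc && roslaunch rplidar view_rplidar.launch"
Restart=on-abort

[Install]
WantedBy=default.target
```

The key difference from After=network.target is that dev-ttyUSB0.device ties the start to the serial port actually existing, which is usually the failing precondition at boot.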

rplidar a2 error time out

Hello, I was trying to get a map with an RPLidar A2 in combination with hector_mapping on Ubuntu. However, when killing the rplidar launch file something went wrong and the lidar kept rotating. After this I restarted the launch file, which eventually stopped the lidar, but now I am not able to start the lidar again. When I try to start it I receive this error message:

    Error, operation time out.
    [rplidarNode-2] process has died [pid 3114, exit code 255, cmd /home/pim/catkin_ws/devel/lib/rplidar_ros/rplidarNode __name:=rplidarNode __log:=/home/pim/.ros/log/c4f67012-5370-11e8-bb7d-9cd21e0c8747/rplidarNode-2.log].
    log file: /home/pim/.ros/log/c4f67012-5370-11e8-bb7d-9cd21e0c8747/rplidarNode-2*.log

I found out that it has something to do with the serial port, but I cannot find a solution. Could you help me?

how to change the RPLIDAR frequency?

Hi, I want to know how to change the LIDAR's scan frequency from 5.5 Hz to, say, 10 Hz. I know that it involves changing the PWM signal to the motor to increase its scan rate, but how do I do it? Where is that setting in the source code? Thanks in advance.
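I can't vouch for every driver version, but in the RPLIDAR SDK the motor speed is set as a PWM duty value (the SDK headers define constants along the lines of a default duty around 660 and a maximum of 1023; treat those names and numbers as assumptions to check against your copy of the source). If the scan rate is roughly proportional to the duty, a back-of-the-envelope scaling looks like:

```python
def pwm_for_target_hz(target_hz, ref_hz=5.5, ref_pwm=660, max_pwm=1023):
    """Scale the motor PWM duty linearly from a known (rate, duty) pair.
    Assumes scan rate is roughly proportional to duty, which is only
    approximate in practice; clamps to the assumed hardware maximum."""
    return min(int(ref_pwm * target_hz / ref_hz), max_pwm)
```

In practice you would verify the resulting rate from the published scan timestamps, since the proportionality is not exact and the motor saturates near the maximum duty.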

Path planning for hector_slam

Hello, I'm using the view_hectorSlam.launch file for visualization and I just want to add a path-planning feature to this launch file. I only use an rplidar connected to ttyUSB0 on Ubuntu 16.04 with ros-kinetic, and I just want to see a path from the rplidar's position to a desired position. Glad if you can help.

Hector exploration costmap get robot pose error

Hi, I connected an rplidar to Ubuntu 16.04 with ros-kinetic. I'm using hector_navigation to get path planning. I use roslaunch rplidar_ros view_slam.launch and roslaunch hector_exploration_node exploration_planner.launch, but the exploration planner gives a "could not get robot pose" error. I only have the rplidar connected, no additional sensors or robot.

    [ WARN] [1527263734.508645133]: Could not get robot pose, cancelling reconfiguration
    [ WARN] [1527263735.608135645]: Costmap2DROS transform timeout. Current time: 1527263735.6081, global_pose stamp: 1527263735.3022, tolerance: 0.3000
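This warning commonly means the costmap cannot resolve a transform chain from its global frame down to the robot's base frame. With only a bare lidar, one workaround (a sketch, not the package's documented fix) is to publish a static identity transform so a base_link frame exists at all, assuming the lidar frame is named laser as in the rplidar_ros defaults:

```xml
<launch>
  <!-- sketch: tie the lidar frame to a base_link so the costmap's
       robot-pose lookup can succeed; adjust frame names to your setup.
       args are x y z yaw pitch roll parent child period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
        args="0 0 0 0 0 0 base_link laser 100" />
</launch>
```

After that, check with rosrun tf view_frames that the chain from the costmap's global frame to base_link is unbroken.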

Hector exploration path to arduino serial

Hi, thanks to this great community I've managed to get hector exploration working. I can see the exploration path in rviz; the next thing I want to do is send this path to the Arduino serial port to drive the motors. I think a package that calculates the angle between the rplidar pose and the path line could do the trick; is there such a package? I only use an rplidar with Ubuntu 16.04 and ros-kinetic, no additional software or robots.
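I don't know of a packaged node for exactly that, but the angle itself is a two-line computation: take the bearing from the robot to the next path point with atan2 and subtract the robot's yaw, normalizing to [-pi, pi]. A sketch (message plumbing and frame lookups omitted; this is plain geometry, not a specific package's API):

```python
import math

def heading_error(robot_x, robot_y, robot_yaw, wx, wy):
    """Signed angle (radians) the robot must turn to face waypoint (wx, wy)."""
    bearing = math.atan2(wy - robot_y, wx - robot_x)
    err = bearing - robot_yaw
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
```

The result can be sent over rosserial as a steering command, with the Arduino mapping the sign and magnitude to motor speeds.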

[rplidarNode-1] process has died

Hello, please help me.

    ubuntu@ubuntu:~$ source /home/ubuntu/catkin_ws/devel/setup.bash
    ubuntu@ubuntu:~$ roslaunch rplidar_ros view_rplidar.launch
    ... logging to /home/ubuntu/.ros/log/89ccce0a-e323-11e6-87cc-0022cffa2d4b/roslaunch-ubuntu-4625.log
    Checking log directory for disk usage. This may take awhile.
    Press Ctrl-C to interrupt
    Done checking log file disk usage. Usage is <1GB.

    started roslaunch server http://ubuntu:54659/

    SUMMARY
    ========

    PARAMETERS
     * /rosdistro: indigo
     * /rosversion: 1.11.20
     * /rplidarNode/angle_compensate: True
     * /rplidarNode/frame_id: laser
     * /rplidarNode/inverted: False
     * /rplidarNode/serial_baudrate: 115200
     * /rplidarNode/serial_port: /dev/ttyUSB0

    NODES
      /
        rplidarNode (rplidar_ros/rplidarNode)
        rviz (rviz/rviz)

    ROS_MASTER_URI=http://localhost:11311

    core service [/rosout] found
    process[rplidarNode-1]: started with pid [4643]
    process[rviz-2]: started with pid [4644]
    RPLIDAR running on ROS package rplidar_ros
    SDK Version: 1.5.7
    Error, cannot bind to the specified serial port /dev/ttyUSB0.
    [rplidarNode-1] process has died [pid 4643, exit code 255, cmd /home/ubuntu/catkin_ws/devel/lib/rplidar_ros/rplidarNode __name:=rplidarNode __log:=/home/ubuntu/.ros/log/89ccce0a-e323-11e6-87cc-0022cffa2d4b/rplidarNode-1.log].
    log file: /home/ubuntu/.ros/log/89ccce0a-e323-11e6-87cc-0022cffa2d4b/rplidarNode-1*.log
    [rviz-2] process has died [pid 4644, exit code -11, cmd /opt/ros/indigo/lib/rviz/rviz -d /home/ubuntu/catkin_ws/src/rplidar_ros/rviz/rplidar.rviz __name:=rviz __log:=/home/ubuntu/.ros/log/89ccce0a-e323-11e6-87cc-0022cffa2d4b/rviz-2.log].
    log file: /home/ubuntu/.ros/log/89ccce0a-e323-11e6-87cc-0022cffa2d4b/rviz-2*.log
    all processes on machine have died, roslaunch will exit
    shutting down processing monitor...
    ... shutting down processing monitor complete
    done
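The "cannot bind to the specified serial port" error is usually a permissions problem on /dev/ttyUSB0 rather than a ROS problem. One common fix is a udev rule for the USB-serial adapter; the vendor/product IDs below are the ones commonly reported for the CP2102 adapter used by RPLIDAR boards, but verify them with lsusb on your own device:

```
# /etc/udev/rules.d/rplidar.rules (sketch): give the USB-serial
# adapter world-readable/writable permissions
KERNEL=="ttyUSB*", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE="0666"
```

After adding the rule, reload udev (sudo udevadm control --reload-rules) and replug the lidar; a quicker one-off test is sudo chmod 666 /dev/ttyUSB0.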

Help with rplidar sdk installation and port error on ubuntu 16.10

Hello, I'm a beginner in the ROS and Linux world. I'm using an rplidar to get a laser map; however, after following the installation instructions from the ROS Kinetic tutorial, I get an error when I run the command

    roslaunch rplidar_ros view_rplidar.launch

It gives me this error:

    Error, cannot bind to the specified serial port /dev/ttyUSB0

Then I thought I had to download the driver, which is the SDK zip file from the rplidar website. After that, when I go to my downloads directory:

    cd downloads
    ls

it shows that I have the zip file rplidar_sdk_v1.5.7.zip, and now I'm stuck; I don't know how to unzip this file. I tried this, but it did not work:

    unzip"rplidar_sdk_v1.5.7.zip"

Also, I tried to look online for the port issue but did not find a solution. I tried ultra_simple /dev/ttyUSB0 and also chmod /dev/ttyUSB0, but none of them worked. Sorry for the bad explanation, but I'm kind of lost. Thank you.

An error using ira_laser_tools to merge laser data

I have an RPLidar A2 sensor and a Kinect camera. I want to merge the data from the Kinect and the rplidar, then give the merged data to the gmapping node. But when I roslaunch ira_laser_tools to merge the data, there is a problem. It says:

    [pcl::concatenatePointCloud] Number of fields in cloud1 (4) != Number of fields in cloud2 (5)

cloud1 is the Kinect's fake laser scan; cloud2 is the rplidar's laser scan. I want to know the reason for this and how to solve the problem. Thank you for reading.

Help with rplidar and autonomous navigation

Hello everyone, I'm a beginner in ROS. I'm trying to make an autonomous robot using an rplidar and encoders. Here is what I have done so far:
1. I installed ROS Kinetic on a Raspberry Pi 3.
2. I got the rplidar running and I can see the data in the rviz map.
3. I have my encoders and motors controlled by an Arduino UNO.
Now I'm stuck on the next step. I don't know how to use the rplidar data, and I'm not sure if I need to install any more packages (like a SLAM package). Also, I have no clue how to connect my Raspberry Pi (ROS) to the Arduino and turn the rplidar data into motor commands. Please give clear, step-by-step guidance on what to do so I can base my research on that. Thank you.

Ground plane detection using ROS, 2D Lidar and Hough Transform

Hi all. We recently built our first prototype robot (UGV) as a hobbyist project and are using ROS Kinetic and OpenCV for several obstacle detection and avoidance tasks. As our main 3D environment-mapping device we use a stereo camera, and we hoped that its point cloud data would be good enough to detect ground planes with high precision. The robot should only drive on ground with height differences of no more than 5-6 cm. Unfortunately, the point cloud data generated by the stereo camera (ZED cam from Stereolabs) is probably not precise enough for our use case. Therefore, we bought a 2D LIDAR (RPLidar A2M8) and placed it on the "nose" of the robot at a 45° angle (pointing down toward the ground). **Our plan is to use the LIDAR laser scan data to recognize where the ground stops being plane. We want to use the detected ground line under the center of the robot as a reference line, and every point or line to its left and right with a height difference greater than 5-6 cm (maybe 1-2 cm more) will be in its "no-drive / danger zone".** *The question is as follows:* We thought about using a Hough transform, or some algorithm with similar output, to draw the reference ground line that lies under the center of the robot (width probably 40-50 cm). As a backup / driving-improvement strategy, we'd also like to use the camera data and apply a Canny edge filter + Hough transform to detect edges on the ground, so our robot can detect its no-drive zone better if the LIDAR data is not accurate or precise enough. Do you think this is the right approach for our specific goal, or can you point me in some other direction that would lead to a better result in your opinion? I will probably just test it over the next few days and weeks and see if it works out, but I'd be very glad if some of you know a better approach so we don't lose too much time driving in the wrong direction :-) Thanks a lot in advance!
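The reference-line idea above can be prototyped without a Hough transform at all: fit a least-squares line to the scan points directly under the robot's center and flag every point whose height deviates from that line by more than the 5-6 cm threshold. A self-contained sketch in plain Python, assuming each scan point has already been projected to (x = lateral position, z = height) from the tilted scan plane:

```python
def drivable_mask(points, center_halfwidth=0.25, max_step=0.06):
    """points: list of (x, z) ground coordinates from one tilted scan.
    Fits z = a*x + b to the points with |x| <= center_halfwidth (the
    strip under the robot's center) and marks each point drivable if
    its height deviates from that reference line by at most max_step."""
    ref = [(x, z) for x, z in points if abs(x) <= center_halfwidth]
    n = len(ref)
    sx = sum(x for x, _ in ref)
    sz = sum(z for _, z in ref)
    sxx = sum(x * x for x, _ in ref)
    sxz = sum(x * z for x, z in ref)
    denom = n * sxx - sx * sx
    if denom == 0:  # degenerate: all reference points share one x
        a, b = 0.0, sz / n
    else:
        a = (n * sxz - sx * sz) / denom  # least-squares slope
        b = (sz - a * sx) / n            # least-squares intercept
    return [abs(z - (a * x + b)) <= max_step for x, z in points]
```

Compared with a Hough transform, a direct least-squares fit on the ~400-800 points of one scan is cheaper and gives sub-bin precision; Hough (or RANSAC) only becomes worth it when outliers under the robot's center are common.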

Help with controlling my motors with rplidar data

I'm trying to build an autonomous mapping and navigation robot with rplidar, arduino ( controlling the motors and ROS) So far I I have got lidar, hector slam, hector exploration planner, and run the simple_exploration_controller and of course rosserial working fine. I'm getting values from cmd_vel from simple_exploration_controller. Can I controll my motors speed by using these cmd_values ? and just try to connect it to my arduino vis rosserial ? Thank you
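Yes, that is the usual route: cmd_vel is a Twist, and for a differential-drive base you turn linear.x (v) and angular.z (w) into per-wheel speeds and send those over rosserial. A sketch of the kinematics, with made-up wheel-separation and wheel-radius values you would replace with your robot's measurements:

```python
def cmd_vel_to_wheel_speeds(v, w, wheel_sep=0.30, wheel_radius=0.05):
    """Differential-drive inverse kinematics: Twist (v in m/s, w in rad/s)
    -> (left, right) wheel angular velocities in rad/s.
    wheel_sep and wheel_radius here are placeholder values."""
    v_left = v - w * wheel_sep / 2.0   # linear speed of the left wheel
    v_right = v + w * wheel_sep / 2.0  # linear speed of the right wheel
    return v_left / wheel_radius, v_right / wheel_radius
```

On the Arduino side these wheel speeds would then be mapped to PWM duty cycles, ideally closed-loop using the encoder counts.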

How to find the edit history of the rplidar wiki page

How do I get the edit history of the [rplidar](http://wiki.ros.org/rplidar) wiki page?

