A Fully Autonomous Indoor Mobile Robot using SLAM

978-1-4244-8003-6/10/$26.00 ©2010 IEEE

Abstract—This paper presents a complete Simultaneous Localization and Mapping (SLAM) solution for indoor mobile robots, addressing feature extraction, autonomous exploration, and navigation using the continuously updated map. The platform is a Pioneer PeopleBot equipped with a SICK Laser Measurement System (LMS) and odometry. Our algorithm uses the Hough Transform to extract the major representative features of indoor environments, such as lines and edges. Localization is accomplished using a Relative Filter, which depends directly on the perception model for correcting the error in the robot state. The map used for localization takes the form of a landmark network, whereas for navigation we use an occupancy grid. The resulting algorithm is computationally lightweight and easy to implement. Finally, we present the results of testing the algorithm in Player/Stage as well as on the PeopleBot in our Robotics and Control Lab.

I. INTRODUCTION

Autonomous motion is a prerequisite for many robotic applications. A mobile robot needs to perceive its environment in sufficient detail to complete a task with reasonable accuracy. Autonomous motion is particularly challenging for mobile robots because the robot must know its position relative to the environment at all times. To know its position, the robot has to incrementally increase its knowledge of the environment, which, being relative knowledge, depends directly on the robot state. A solution to this problem was developed in the work of Self and Cheeseman [1] and Hugh F. Durrant-Whyte [2], in the form of a probabilistic framework that models the uncertainty in feature locations. The SLAM problem has been the subject of substantial research in the robotics community for the last couple of decades. The most popular of the complete SLAM algorithms is the estimation-theoretic, or Kalman filter based, approach [2].
Other approaches include the Relative Filter [3], FastSLAM [4], and DP-SLAM [5]. We selected the Relative Filter for our implementation because, compared to the Kalman Filter, it can handle more features without increasing computational complexity. There are two main approaches to using a laser-based perception model. The first uses scan matching, incorporating raw laser data for localization [6]; it consumes more time than the second, feature-based approach [7], which converts the high-dimensional measurement space into a low-dimensional feature space and thereby increases computational efficiency by orders of magnitude. A number of exploration techniques have been devised for navigating mobile robots, especially in the absence of a map, as at the start of the SLAM problem. Wall Following, Random Exploration, and Seek Open [8] are a few of them; more advanced exploration algorithms include Frontier-Based exploration [9]. We addressed this problem by unifying multiple approaches.

II. METHODOLOGY

In general, the task of acquiring models of unknown environments requires the solution of three subtasks: mapping, localization, and motion control. Mapping is the problem of integrating the information gathered by the robot's sensors into a given representation. Localization is the problem of estimating the position of the robot. Finally, the motion control problem involves the question of how to steer a vehicle in order to efficiently guide it to a desired location or along a planned trajectory. The diagram in Fig. 1 also depicts the overlapping areas of these three tasks. Simultaneous localization and mapping, also called SLAM, is the problem of building a map from pose estimates while simultaneously localizing the robot within the map constructed so far. Active localization seeks to guide the robot to locations within the map that improve the pose estimate.

[Figure 1. Sub-tasks that need to be solved by a robot in order to acquire accurate models of the environment [10]. The overlapping areas represent combinations of these sub-tasks.]

Z. Riaz, A. Pervez, M. Ahmer, J. Iqbal
Department of Mechatronics Engineering, College of E.M.E
National University of Sciences and Technology, Rawalpindi, Pakistan

III. FEATURE EXTRACTION

The Hough transform is a feature extraction technique used in image analysis, computer vision, and digital image processing [11]. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm. The transform maps features in a sensor space to sets of points in a parameter space through the accumulation of evidence in a model space. The normal parameterization [7] specifies a straight line by the angle θ of its normal and its algebraic distance r from the origin, as shown in Fig. 2. The Hough Transform represents the straight-line equation using the parameters r and θ as follows:

r = x cos θ + y sin θ        (1)

A. Segmentation

Segmentation groups scan points into clusters that belong together [7]. It is based on the distance between two consecutive points, or on the separation between two sets of points caused by scan readings that fall outside the detection range. Each segment is processed separately for the presence of lines. This helps avoid the extraction of non-existent edges from the intersection of lines belonging to different segments. The result of scanning a simulated environment, and the segments obtained, are shown in Fig. 3. Each segment was then treated with the Hough transform [11].
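The segmentation and voting steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gap threshold, bin counts, and range limit are assumed values, and only the single strongest line per segment is returned.

```python
import numpy as np

def segment_scan(points, gap=0.3):
    """Split an ordered list of (x, y) scan points into clusters
    wherever consecutive points are farther apart than `gap` metres."""
    segments, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if np.hypot(q[0] - p[0], q[1] - p[1]) > gap:
            segments.append(current)
            current = []
        current.append(q)
    segments.append(current)
    return segments

def hough_lines(points, n_theta=180, r_max=10.0, n_r=200):
    """Vote in (theta, r) space using r = x cos(theta) + y sin(theta)
    (equation (1)) and return the strongest (theta, r) line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # one r per theta
        idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_r)                 # keep in-range bins
        acc[np.nonzero(ok)[0], idx[ok]] += 1          # accumulate evidence
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], ri * 2 * r_max / (n_r - 1) - r_max

# Points on the horizontal line y = 2: normal angle pi/2, distance 2.
pts = [(x, 2.0) for x in np.linspace(-3, 3, 40)]
theta, r = hough_lines(pts)
```

Local maxima of the accumulator other than the global one would yield the remaining lines of a segment; the edge extraction described in the next subsection then operates on those lines.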
B. Hough Transform

Lines were extracted from each segment by finding local maxima in the parameter space, as shown in Fig. 9a. Two types of edges were targeted: first, those ends of segments that were supported by lines in the same segment, and second, those resulting from the intersection of nearly perpendicular lines in the same segment. The extracted lines and edges are shown in Fig. 9b.

[Figure 2. Data lines represented by ρ and θ.]
[Figure 3. Matlab simulation: robot scanning the environment.]

IV. LOCALIZATION

Localization is the process of updating the position of the robot. We have used the Relative Filter for this purpose. The Relative Filter has been implemented before [3] for creating a relative map; we build an absolute map with it. The performance of the filter depends directly on the performance of the measurement model: as the measurement model becomes more accurate, the map becomes more accurate too. The Relative Filter requires no correlation between landmarks, which makes it faster than the other methods. The theory behind the Relative Filter is very simple: if we know the three sides and two vertices of a triangle, then we can determine its third vertex. As in Fig. 4, the robot observes two landmarks whose positions are given by x and y coordinates, at distances r1 and r2 from the robot. If the robot knows their positions and its distances from them, it can determine its own position. We can also correct the orientation of the robot by using the landmark positions. Like the Kalman Filter [12], the Relative Filter works on the prediction-correction cycle shown in Fig. 5.

[Figure 4. Robot observing landmarks at (x1, y1) and (x2, y2) at ranges r1 and r2.]
[Figure 5. General cycle for the estimation process.]

A. Assumptions

We make the following assumptions for applying the Relative Filter, each of which holds in the real world:

1) The error in the range measuring device (such as a laser range finder) is much less than the error in the odometry.
2) The error in the range measuring device is non-diverging. (The error in odometry can diverge, but at each state the error in the range measuring device is finite and very small.)
3) Landmarks should be readily available, and at each state the robot should be able to see at least two landmarks that it has seen before.
4) Landmarks should be stationary.
5) Landmarks should be easily re-observable.

B. SLAM using Relative Filter

As shown in Fig. 5, SLAM also goes through initialization, prediction, and update. The robot observes landmarks and determines the angles and distances between them, thus forming a network of landmarks (Fig. 6). These distances and angles are referred to in the rest of the paper as link lengths and link angles. Each landmark is defined with respect to its neighboring landmarks.

C. Control Input and Prediction

After a control input has been applied, chosen according to the navigation algorithm discussed later in the paper, the robot moves to the next state and predicts its new position and the positions of the landmarks. Due to the error in odometry, which is normally 2-3 cm/m, the robot state is not perfectly known, and so the predicted positions of the landmarks come out different.

D. Update

1) Position Update: The robot has an estimate of its position from its motion model. If the robot sees two or more landmarks, it determines the link lengths and angles between them and then searches for similar link lengths and angles in its previous map. The landmarks that maximize the likelihood of similar link lengths and angles are recognized as previously seen landmarks, and the robot localizes itself relative to them.

2) Map Update: After updating its own position, the robot updates the map. The link lengths L and angles Ө are updated for iteration (K+1) according to equations (2)-(3).

[Figure 6. Network of landmarks.]
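The position update rests on the triangle idea of Fig. 4: the two circles centred on the re-observed landmarks, with the measured ranges as radii, intersect at the robot's position. A minimal sketch of that step, not the authors' code — using the odometry prediction to pick between the two intersections is an assumption:

```python
import math

def locate_from_two_landmarks(l1, l2, r1, r2, prior):
    """Intersect circles centred on landmarks l1, l2 with radii r1, r2
    (the measured ranges) and return the intersection closest to the
    odometry prediction `prior`."""
    (x1, y1), (x2, y2) = l1, l2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("inconsistent ranges: circles do not intersect")
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # distance from l1 to chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))      # half chord length
    mx = x1 + a * (x2 - x1) / d                # chord midpoint
    my = y1 + a * (y2 - y1) / d
    ox = -h * (y2 - y1) / d                    # offset perpendicular to baseline
    oy = h * (x2 - x1) / d
    candidates = [(mx + ox, my + oy), (mx - ox, my - oy)]
    return min(candidates,
               key=lambda p: math.hypot(p[0] - prior[0], p[1] - prior[1]))

# Robot truly at (1, 1); landmarks at (0, 0) and (4, 0).
pose = locate_from_two_landmarks((0, 0), (4, 0),
                                 math.sqrt(2), math.sqrt(10),
                                 prior=(0.9, 1.2))
```

With the position fixed, the orientation correction mentioned in the text can follow from the bearings to the same landmarks.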
L_{K+1} = L_K + α [L′ − L_K]        (2)
Ө_{K+1} = Ө_K + α [Ө′ − Ө_K]        (3)

where α = 1/(K+1), L′ is the link length and Ө′ the link angle from the most recent measurement. As the number of iterations K increases, the update becomes less active, which means the map is converging to its true estimate. For this update to hold, the measurement noise should be Gaussian. New landmarks are inserted into the map relative to their neighboring landmarks and are updated when visited again. The beauty of SLAM lies in the fact that the more a landmark is revisited, the more reliable the estimates of its link lengths become, making the map more reliable for the navigation algorithm.

V. EXPLORATION

A naive approach would be to combine a SLAM algorithm with an exploration procedure. Since exploration strategies often try to cover unknown terrain as fast as possible, they avoid repeated visits to known areas. This strategy, however, is suboptimal in the context of the SLAM problem, because the robot typically needs to revisit places to localize itself again. A good pose estimate is necessary to make the correct data association, i.e., to determine whether the current measurements fit into the map built so far. If the robot uses an exploration strategy that avoids multiple visits to the same place, the probability of making correct associations is reduced. It can be expected that a system which takes this into account can improve the quality of the resulting map [13].

A. Frontier-Based Exploration

The central question in exploration is: given what you know about the world, where should you move to gain as much new information as possible? Frontiers are regions on the boundary between open space and unexplored space. When a robot moves to a frontier, it can see into unexplored space and add the new information to its map.
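On an occupancy grid, a frontier as defined above can be detected by looking for free cells with at least one unknown neighbour. A minimal sketch under assumed cell labels (0 = free, 1 = occupied, −1 = unknown), not the authors' implementation:

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) of free cells with at least one 4-connected
    unknown neighbour -- the boundary between open and unexplored space."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= i < rows and 0 <= j < cols and grid[i, j] == UNKNOWN
                   for i, j in neighbours):
                frontiers.append((r, c))
    return frontiers

# 3x4 map: left half explored (one wall cell), right half still unknown.
grid = np.array([[0, 0, -1, -1],
                 [0, 0, -1, -1],
                 [0, 1, -1, -1]])
```

Clustering these cells and steering toward a cluster centroid gives the "path to frontier" behaviour illustrated in Fig. 7.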
As a result, the mapped territory expands, pushing back the boundary between the known and the unknown, as shown in Fig. 7. By moving to successive frontiers, the robot can constantly increase its knowledge of the world. This strategy is known as frontier-based exploration [9], [14].

[Figure 7. A grid map formed as a result of frontier-based exploration [9], showing occupied space, open space, unknown space, the robot's current position, frontiers, frontier centroids, and the path to a frontier.]

B. Seek Open

The SeekOpen movement algorithm causes a robot to move toward open areas in the map. SeekOpen first calculates the average obstacle vector for all obstacles in sensor range, computed by summing the vectors pointing to all objects within sensor range and dividing by the number of vectors summed. The magnitude of each vector must be large for objects close to the robot and small for objects far away. This is accomplished by setting the magnitude of a perceived obstacle vector equal to the maximum range of the sensors minus the perceived distance of the obstacle from the robot, or by using some other function that decreases with distance, as is done when using artificial potential fields for navigation [15]. After the average obstacle vector is computed, the goal of the robot becomes to move in the opposite direction: the robot turns toward the negated average obstacle vector, with the rate of turn determined by the vector's magnitude. This keeps the SeekOpen algorithm from running into obstacles, as illustrated in Fig. 8.

C. Our Approach

We used an integrated approach so that the robot efficiently explores the environment while minimizing the uncertainty in its pose.
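The obstacle-vector averaging described for Seek Open above can be sketched as follows. The magnitude function max_range − distance follows the text; the polar input format and return values are assumptions of this illustration:

```python
import math

def seek_open_direction(obstacles, max_range=8.0):
    """Average the obstacle vectors (magnitude = max_range - distance,
    so near obstacles weigh more) and steer opposite to the result.
    `obstacles` is a list of (bearing_rad, distance) sensor readings.
    Returns (heading in radians, turn-rate weight)."""
    vx = vy = 0.0
    for bearing, dist in obstacles:
        mag = max(max_range - dist, 0.0)     # near -> large, far -> small
        vx += mag * math.cos(bearing)
        vy += mag * math.sin(bearing)
    vx /= len(obstacles)
    vy /= len(obstacles)
    heading = math.atan2(-vy, -vx)           # opposite of average obstacle vector
    turn_weight = math.hypot(vx, vy)         # stronger field -> sharper turn
    return heading, turn_weight

# A wall close on the left (bearing +pi/2), nothing else: steer right.
heading, w = seek_open_direction([(math.pi / 2, 1.0)])
```

A single nearby obstacle on the left yields a heading of roughly −π/2, i.e. the robot turns away to its right, matching the behaviour shown in Fig. 8.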
Seek Open was modified so that the robot follows open areas, preferring its right side; this was combined with frontier-based exploration to achieve the grid mapping that enables the robot to navigate its environment. The grids were incrementally updated as the robot discovered new frontiers, thus exploring and mapping the whole environment.

VI. HARDWARE DESCRIPTION

The testbed for our experiment is a differential-drive robot called PeopleBot. It is equipped with odometry, sonar arrays, and a SICK LMS laser range finder.

[Figure 8. The dotted arrow shows the computed average obstacle vector; the continuous arrow shows the calculated direction of motion.]

A. The Hardware

To do SLAM there is a need for a mobile robot and a range measurement device. The mobile robots we consider are wheeled indoor robots. We present here the basic measurement devices commonly used for SLAM on mobile robots.

B. The Odometry

Important parameters to consider are ease of use, odometry performance, and price. Odometry performance measures how well the robot can estimate its own position purely from the rotation of its wheels. The robot should not have an error of more than 2 cm per meter moved and 2° per 45° turned [16]. Typical robot drivers allow the robot to report its (x, y) position in some Cartesian coordinate system along with its current bearing/heading. The PeopleBot uses the P2OS controller for this purpose.

C. The Range Measurement Device

The range measurement device used is usually a laser scanner. Laser scanners are very advantageous for mapping due to their accuracy and long range, with roughly millimeter resolution. A laser scanner covers a two-dimensional planar slice of the environment within a certain angle of view; for example, the SICK LMS 200 scans over an angular range of 180°.
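Before the feature extraction of Section III, each such scan must be converted from (bearing, range) readings into Cartesian points in the robot frame. A minimal sketch, with the beam spacing and out-of-range convention assumed for illustration:

```python
import math

def scan_to_points(ranges, fov_deg=180.0, max_range=8.0):
    """Convert a laser scan (list of ranges, evenly spaced over `fov_deg`,
    first beam at -fov_deg/2) into (x, y) points in the robot frame,
    dropping readings at or beyond the detection range."""
    n = len(ranges)
    points = []
    for i, r in enumerate(ranges):
        if r >= max_range:                    # no return from this beam
            continue
        bearing = math.radians(-fov_deg / 2 + i * fov_deg / (n - 1))
        points.append((r * math.cos(bearing), r * math.sin(bearing)))
    return points

# Three beams at -90, 0 and +90 degrees; only the middle one sees a wall 2 m ahead.
pts = scan_to_points([8.0, 2.0, 8.0])
```

The dropped out-of-range beams are exactly the gaps the segmentation step uses to separate clusters of points.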
Compared to sonar, a laser range finder has better accuracy but only a planar view, whereas a sonar senses obstacles within a cone. A problem with laser scanners is that certain surfaces, including glass, can produce very bad readings. The second option is sonar, which was used intensively some years ago and is very cheap compared to laser scanners. Whereas a laser scanner emits a single straight line of measurement with a beam width as little as 0.25°, a sonar can have beams up to 30° in width.

VII. RESULTS

The proposed approach was simulated in MATLAB and Player/Stage before implementation on our testbed. Player/Stage, being the most reliable software for robotics simulation [17], includes a model of the Pioneer robot, which was used to verify our results. The MATLAB results of the Hough Transform are shown in Fig. 9, where it extracted features such as lines and edges from a given environment containing multiple shapes.

[Figure 9. a) Left: extracted edges and lines. b) Intensity graph generated by the Hough Transform algorithm.]

The Player/Stage results of feature extraction confirmed the successful working of the algorithm, as shown in Fig. 10. The MATLAB results of SLAM were found to be very accurate, with a deviation from the original position of about 0.8 cm per 10 m moved. As shown in Fig. 11, the true position of the robot is in red and the estimated position in blue. The Player/Stage results of the combined approach clearly revealed the success of the algorithm, as shown in Fig. 12.

[Figure 10. The marked circles represent the edges extracted in a zoomed part of the simulation environment.]
[Figure 11. Matlab simulation of SLAM.]
[Figure 12. Top: mapping without SLAM, using only odometry. Bottom: the map built by SLAM in the Player/Stage environment.]
The final implementation was done on the PeopleBot in the Department of Mechatronics Engineering's Robotics and Control Lab at NUST. Fig. 13 compares the implemented algorithm with the odometry results, verifying the accuracy of our approach. The developed exploration algorithm resulted in efficient exploration of the simulation environment, as shown in Fig. 14: the robot completely explored the environment, moving into every room. Fig. 15 shows a zoomed portion of the grid map of the Player/Stage environment. This grid map is built incrementally; the grids are updated and used by the robot to navigate via the frontier-based approach.

[Figure 13. Results of a run in a real environment measuring 50 × 35 feet. Left: the map produced by odometry alone; right: the map formed by the implemented SLAM algorithm.]