
Simultaneous Localization and Mapping (SLAM) | Robotic Mapping | Car Location GPS

In navigation, robotic mapping, and odometry for virtual or augmented reality, SLAM stands for Simultaneous Localization and Mapping. It means generating a map of a vehicle's surroundings and locating the vehicle within that map at the same time. A SLAM system uses a depth sensor to gather a series of views (something like 3D snapshots of its environment), each with an approximate position and distance, and then stores these 3D views in memory. SLAM draws on probabilistic estimation and machine learning, and can be further enhanced with artificial intelligence. It is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.[1][2][3][4] While this initially appears to be a chicken-and-egg problem, there are several algorithms known for solving it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM.
SLAM algorithms are tailored to the available resources, and hence aim not at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots, and even inside the human body.

Simultaneous localization and mapping, or SLAM for short, is the process of creating a map using a robot or unmanned vehicle that navigates an environment while using the map it generates. SLAM is the technique behind robot mapping, or robotic cartography. The robot or vehicle plots a course through an area, but at the same time it also has to figure out where it is located in that place. The process of SLAM uses a complex array of computations, algorithms, and sensory inputs to navigate around a previously unknown environment or to revise a map of a previously known environment. SLAM enables the remote creation of GIS data in situations where the environment is too dangerous or too small for humans to map.

How do SLAM Robots Navigate?

SLAM is similar to a person trying to find his or her way around an unknown place. First, the person looks around for familiar markers or signs. Once the person recognizes a familiar landmark, he or she can figure out where they are in relation to it. If the person does not recognize any landmarks, he or she is lost. However, the more the person observes the environment, the more landmarks the person will recognize, gradually building a mental image, or map, of the place. The person may have to navigate the environment several times before becoming familiar with a previously unknown place.
In a related way, a SLAM robot tries to map an unknown environment while figuring out where it is. The complexity comes from doing both of these things at once. The robot needs to know its position before it can answer the question of what the environment looks like, yet it has to figure out where it is without the benefit of already having a map. Simultaneous localization and mapping, pioneered by Hugh Durrant-Whyte and John J. Leonard, is a way of solving this problem using specialized equipment and techniques.
The process of solving the problem begins with the robot or unmanned vehicle itself. The robot used must have exceptional odometry performance. Odometry is the measure of how well the robot can estimate its own position, normally calculated from the rotation of its wheels. Keep in mind, however, that there is normally a small margin of error in odometry readings: the robot might be off by several centimeters, so it is not exactly where it thinks it is. These errors must be taken into account in the algorithms, and areas are often remapped to compensate for the accumulated drift.
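As a rough illustration of why odometry drifts, here is a minimal dead-reckoning sketch, assuming a differential-drive robot with small Gaussian errors on its wheel readings; the function name, parameters, and noise magnitudes are illustrative assumptions, not from the source:

```python
import math
import random

def update_odometry(x, y, theta, d_left, d_right, wheel_base, noise_std=0.01):
    """Dead-reckoning update for a differential-drive robot.

    d_left / d_right are wheel travel distances since the last update;
    noise_std models the small per-update error that makes raw
    odometry drift over time.
    """
    # Corrupt the wheel readings with zero-mean Gaussian noise.
    d_left += random.gauss(0.0, noise_std)
    d_right += random.gauss(0.0, noise_std)

    d_center = (d_left + d_right) / 2.0        # forward travel
    d_theta = (d_right - d_left) / wheel_base  # heading change

    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```

Applying this update repeatedly compounds the per-step error, which is exactly why SLAM algorithms must model odometric uncertainty rather than trust it.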

Requirements of SLAM

One requirement of SLAM is a range measurement device, the means of observing the environment around the robot. The most common choice is a laser scanner such as LiDAR. Laser scanners are easy to use and very precise, but they are also expensive. There are other options, though. Sonar can be used, and is especially useful for mapping underwater environments. Imaging devices can also be used for SLAM; these optical sensors can come in 2D or even 3D formats. The measurement device used depends on several variables, including preferences, costs, and availability.
Another key component in the SLAM process is acquiring data about the environmental surroundings of the robot. Just like a human, the robot uses landmarks to determine its location, sensing them with the laser, sonar, or whichever measuring device it carries. A robot will use different landmarks for different environments. However, there are certain requirements for landmarks used in SLAM. First of all, landmarks should be stationary: a robot cannot determine its own location if a nearby landmark is constantly moving. Additionally, landmarks should be unique and distinguishable from the surrounding environment. Landmarks also need to be plentiful and should be viewable from many different angles.
Once a robot has sensed a landmark, it can determine its own location by processing the sensory input and identifying the different landmarks. A method needs to be in place for the robot to do this. Landmark extraction can be done in a variety of ways, from algorithms like spike extraction to scan matching. The important point is that the robot needs a way to identify a landmark. Robots can also match data from newly scanned landmarks against previously scanned ones in order to determine their location.
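As a sketch of the simplest of these ideas, spike extraction can be approximated by flagging scan points whose range jumps sharply relative to both neighbors. This minimal Python sketch assumes a plain list of laser ranges; the function name and threshold are illustrative:

```python
def spike_landmarks(ranges, threshold=0.5):
    """Flag scan indices where the range jumps sharply relative to
    both neighbours -- a crude 'spike' landmark detector."""
    landmarks = []
    for i in range(1, len(ranges) - 1):
        left = ranges[i - 1] - ranges[i]
        right = ranges[i + 1] - ranges[i]
        # A point much closer than both neighbours sticks out of the
        # background, e.g. a pole or a table leg.
        if left > threshold and right > threshold:
            landmarks.append(i)
    return landmarks
```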
SLAM is the mapping of an environment using the continual interplay between the mapping device, the robot, and the location it is in. As the robot interacts with the environment, it not only maps the area but also determines its own position simultaneously. Like other mapping technologies, SLAM is undergoing constant improvement as a tool for exploring the environments around us.

Problem definition

Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute
$P(m_{t+1}, x_{t+1} \mid o_{1:t+1}, u_{1:t})$.
Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$:
$P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z$
Similarly, the map can be updated sequentially by
$P(m_t \mid x_t, o_{1:t}, u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t, m_{t-1}, o_t, u_{1:t}) P(m_{t-1}, x_t \mid o_{1:t-1}, m_{t-1}, u_{1:t})$
Like many inference problems, the solution to inferring these two variables together can be found, to a local optimum, by alternating updates of the two beliefs in a form of an expectation-maximization (EM) algorithm.
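To make that alternating structure concrete, here is a minimal Python skeleton of one such update; update_pose and update_map are placeholder callbacks standing in for whatever concrete filter supplies the Bayes-rule updates above:

```python
def slam_step(pose_belief, map_belief, control, observation,
              update_pose, update_map):
    """One alternating (EM-like) update: refine the pose belief
    holding the map fixed, then refine the map belief holding the
    pose fixed. Concrete filters (EKF, particle filter) supply the
    two callback functions."""
    pose_belief = update_pose(pose_belief, map_belief, control, observation)
    map_belief = update_map(map_belief, pose_belief, observation)
    return pose_belief, map_belief
```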

Algorithms

Statistical techniques used to approximate the above equations include Kalman filters and particle filters (a.k.a. Monte Carlo methods). They provide an estimate of the posterior probability distribution for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using covariance intersection are able to avoid reliance on statistical independence assumptions, reducing algorithmic complexity for large-scale applications.[7] Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty.[8]
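As a hedged sketch of the particle-filter idea (not any specific published implementation), one predict/weight/resample cycle might look like the following; motion_model and likelihood are assumed callbacks supplied by the robot's kinematic and sensor models:

```python
import random

def particle_filter_step(particles, control, observation,
                         motion_model, likelihood):
    """One predict/weight/resample cycle of a particle filter.
    `particles` is a list of pose hypotheses; `motion_model` samples
    a noisy successor pose given a control; `likelihood` scores how
    well a pose explains the observation."""
    # Predict: propagate every particle through the noisy motion model.
    particles = [motion_model(p, control) for p in particles]

    # Weight: score each particle against the observation.
    weights = [likelihood(observation, p) for p in particles]
    total = sum(weights)
    if total == 0:                      # degenerate case: fall back to uniform
        weights = [1.0] * len(particles)
        total = float(len(particles))
    weights = [w / total for w in weights]

    # Resample: draw particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))
```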
Set-membership techniques are mainly based on interval constraint propagation.[9][10] They provide a set which encloses the pose of the robot and a set approximation of the map. Bundle adjustment, and more generally maximum a posteriori (MAP) estimation, is another popular technique for SLAM using image data; it jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM systems such as Google's ARCore, which replaced Google's earlier augmented reality project, Tango. MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability.
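To illustrate the MAP/graph flavor on a toy case (an illustrative construction, not a production system), the following sketch solves a one-dimensional pose graph, with three odometry constraints and one loop-closure constraint, as linear least squares:

```python
import numpy as np

# Poses x1..x3 along a line, with x0 fixed at the origin.
# Odometry says each step moves +1.0; a loop-closure measurement says
# x3 - x0 = 2.7 (the robot recognizes a place near its start). Under
# Gaussian noise, MAP estimation reduces to least squares A x = b.
A = np.array([
    [1, 0, 0],    # x1 - x0 = 1.0  (x0 = 0, so just x1)
    [-1, 1, 0],   # x2 - x1 = 1.0
    [0, -1, 1],   # x3 - x2 = 1.0
    [0, 0, 1],    # x3 - x0 = 2.7  (loop closure)
], dtype=float)
b = np.array([1.0, 1.0, 1.0, 2.7])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # ~[0.925, 1.85, 2.775]: poses shrink toward the loop closure
```

The loop-closure row pulls all three poses slightly back toward the measured 2.7, spreading the accumulated odometry error over the whole trajectory; the same idea scales up to full 2D/3D pose graphs.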
New SLAM algorithms remain an active research area,[3] and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.

Mapping

Topological maps are a method of environment representation which capture the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.[11]
In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically independent in order to simplify computation. Under such an assumption, the cell values $P(m_t \mid x_t, m_{t-1}, o_t)$ are set to 1 if the new map's cells are consistent with the observation $o_t$ at location $x_t$, and 0 if inconsistent.
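A common concrete realization of this independence assumption is the log-odds occupancy grid update, sketched here with illustrative inverse-sensor probabilities (0.7/0.3) rather than anything from the source:

```python
import math

L_OCC = math.log(0.7 / 0.3)    # log-odds increment for an "occupied" hit
L_FREE = math.log(0.3 / 0.7)   # log-odds decrement for a "free" observation

def update_cell(logodds, hit):
    """Bayesian log-odds update of one independent grid cell."""
    return logodds + (L_OCC if hit else L_FREE)

def occupancy_probability(logodds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))
```

Because each cell is updated independently with a simple addition, the whole map update is linear in the number of observed cells, which is what makes grid mapping tractable.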
Modern self-driving cars mostly simplify the mapping problem to almost nothing by making extensive use of highly detailed map data collected in advance. This can include map annotations down to the level of marking the locations of individual white-line segments and curbs on the road. Location-tagged visual data such as Google's Street View may also be used as part of the map. Essentially, such systems reduce SLAM to a simpler localization-only task, perhaps with moving objects such as cars and people being the only things updated in the map at runtime.

Sensing

SLAM will always use several different types of sensors, and the powers and limits of the various sensor types have been a major driver of new algorithms.[12] Statistical independence is the mandatory requirement for coping with metric bias and with measurement noise. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to those sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse, as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
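As a sketch of the registration idea mentioned above, the classic Kabsch/Umeyama method aligns two already-matched 2D point sets in closed form; this minimal version assumes perfect point correspondences, which real scan matchers must first establish:

```python
import numpy as np

def align_scans(P, Q):
    """Least-squares rigid alignment of two matched 2-D point sets:
    find rotation R and translation t minimizing ||R @ P + t - Q||.
    P and Q are (2, N) arrays with point i in P matched to point i in Q."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of the centred clouds.
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

The recovered (R, t) between consecutive scans is itself an estimate of the robot's motion, which is why dense range sensors can sometimes sidestep full SLAM inference.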
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose location can be estimated by a sensor, such as Wi-Fi access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model $P(o_t \mid x_t)$ directly as a function of the location.
Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) laser rangefinders, 3D high-definition LiDAR, 3D flash LiDAR, 2D or 3D sonar sensors, and one or more 2D cameras.[12] Since 2005, there has been intense research into VSLAM (visual SLAM), using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.[13] Visual and LiDAR sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM[14] (sensing by local touch only), radar SLAM,[15] acoustic SLAM,[16] and Wi-Fi SLAM (sensing by the strengths of nearby Wi-Fi access points). Recent approaches apply quasi-optical wireless ranging for multilateration (RTLS) or multiangulation in conjunction with SLAM to compensate for erratic wireless measurements. A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings, as in an indoor positioning system.[17]
For some outdoor applications, the need for SLAM has been almost entirely removed by high-precision differential GPS sensors. From a SLAM perspective, these may be viewed as location sensors whose likelihoods are so sharp that they completely dominate the inference. However, GPS sensors may occasionally fail entirely or degrade in performance, especially during times of military conflict, which are of particular interest to some robotics applications.

Kinematics modeling

The $P(x_t \mid x_{t-1})$ term represents the kinematics of the model, which usually includes information about the action commands given to the robot. The kinematics of the robot are included as part of the model to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from various sensors and various partial error models, and finally comprises a sharp virtual depiction: a map with the location and heading of the robot as a cloud of probability. Mapping is the final depiction of such a model; the map is either that depiction or the abstract term for the model itself.
For 2D robots, the kinematics are usually given by a mixture of rotation and "move forward" commands, which are implemented with additional motor noise. Unfortunately, the distribution formed by independent noise in the angular and linear directions is non-Gaussian, but it is often approximated by a Gaussian. An alternative approach is to ignore the kinematic term and read odometry data from the robot's wheels after each command; such data may then be treated as one of the sensors rather than as kinematics.
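A minimal sampling sketch of such a "rotate then move forward" model, using the Gaussian approximation noted above (the noise magnitudes and function name are illustrative), might be:

```python
import math
import random

def sample_motion(x, y, theta, rot, trans,
                  rot_noise=0.02, trans_noise=0.05):
    """Sample a successor pose for a 'rotate then move forward'
    command, approximating the motor noise as independent Gaussians
    (the true distribution is non-Gaussian, as noted above)."""
    rot += random.gauss(0.0, rot_noise)
    trans += random.gauss(0.0, trans_noise)
    theta = (theta + rot) % (2.0 * math.pi)
    return (x + trans * math.cos(theta),
            y + trans * math.sin(theta),
            theta)
```

A function like this is exactly what a particle filter plugs in as its motion model: each particle is pushed through an independently sampled noisy motion.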

Multiple objects

The related problems of data association and computational complexity are among those yet to be fully resolved, for example the identification of multiple confusable landmarks. A significant recent advance in the feature-based SLAM literature involved re-examining the probabilistic foundation of SLAM and posing it in terms of multi-object Bayesian filtering with random finite sets; this provides superior performance to leading feature-based SLAM algorithms in challenging measurement scenarios with high false-alarm and high missed-detection rates, without the need for data association.[18]
Popular techniques for handling multiple objects include the joint probabilistic data association filter (JPDAF) and the probability hypothesis density (PHD) filter.

Multiple cameras

Collaborative SLAM combines 3D maps that are reconstructed using multiple cameras.[19]

Moving objects

Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges.[20][4] SLAM with DATMO is a model which tracks moving objects in a similar way to the agent itself.[21]

Loop closure

Loop closure is the problem of recognizing a previously visited location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the location. Typical loop closure methods apply a second algorithm to compute some type of sensor-measure similarity and reset the location priors when a match is detected. For example, this can be done by storing and comparing bag-of-words vectors of SIFT features from each previously visited location.
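As a sketch of this matching step (the histogram representation, function names, and threshold are illustrative assumptions), loop candidates can be scored by cosine similarity between bag-of-words vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words histograms, each a
    dict mapping visual-word id -> count."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(current, visited, threshold=0.8):
    """Return the index of the best-matching previously visited place,
    or None if nothing is similar enough to declare a loop closure."""
    best_i, best_s = None, threshold
    for i, past in enumerate(visited):
        s = cosine_similarity(current, past)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i
```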

Exploration

"Active SLAM" studies the combined problem of SLAM with deciding where to move next in order to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the entropy of the map under hypothetical actions. "Multi agent SLAM" extends this problem to the case of multiple robots coordinating themselves to explore optimally.

Biological inspiration

In neuroscience, the hippocampus appears to be involved in SLAM-like computations, giving rise to place cells, and has formed the basis for bio-inspired SLAM systems such as RatSLAM.

Complexity

Researchers and experts in artificial intelligence have struggled to solve the SLAM problem in practical settings: that is, it requires a great deal of computational power to sense a sizable area and process the resulting data to both map and localize.[25] A 2008 review of the topic summarized: "[SLAM] is one of the fundamental challenges of robotics . . . [but it] seems that almost all the current approaches can not perform consistent maps for large areas, mainly due to the increase of the computational cost and due to the uncertainties that become prohibitive when the scenario becomes larger."[26] Generally, complete 3D SLAM solutions are highly computationally intensive, as they use complex real-time particle filters, sub-mapping strategies, or hierarchical combinations of metric and topological representations.[27]
Robots that use embedded systems cannot fully implement SLAM because of their limited computing power. Nguyen V., Harati A., and Siegwart R. (2007) presented a fast, lightweight solution called OrthoSLAM, which breaks down the complexity of the environment into orthogonal planes. By mapping only the planes that are orthogonal to each other, the structure of most indoor environments can be estimated fairly accurately. The OrthoSLAM algorithm reduces SLAM to a linear estimation problem, since only a single line is processed at a time.

Implementations

Various SLAM algorithms are implemented in the open-source Robot Operating System (ROS) libraries, often used together with the Point Cloud Library for 3D maps or visual features from OpenCV.

Uses of SLAM

SLAM technology is used in autonomous vehicles and robots. The idea is that the vehicle will map, process, analyze, and react to the input data. SLAM technology can be used in mining, warfare, autonomous cars, and construction, and for mapping areas after an earthquake or other major disaster where it is not possible for people to go easily.
SLAM, which is also used extensively in augmented/virtual reality applications, can use a variety of sensing and location techniques such as lidar, GPS, and cameras.
SLAM can also be used in warehouses, on construction sites, and for cleaning tasks done by robotic vacuum cleaners or drones.
It could be used for drone delivery as well. Do you know why GPS alone is not suitable for delivery purposes? The reason is that the accuracy of GPS is about 6 meters, so a drone relying on GPS can deliver goods close to a house, but it cannot autonomously land a parcel on the rug in front of the door. The drone would either slam into the door or not reach it at all; the accuracy of the GPS system does not allow for this.

History

A seminal work in SLAM is the research of R. C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.[28][29] Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s,[30] which showed that solutions to SLAM exist in the infinite-data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution.
The self-driving cars STANLEY and JUNIOR, led by Sebastian Thrun, won the DARPA Grand Challenge and came second in the DARPA Urban Challenge, respectively, in the 2000s; both included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners.[31] Self-driving cars by Google and others have now received licenses to drive on public roads in some US states.
