
Mindstorms Robotics

A robot made with Lego Mindstorms NXT.

This text explains some robotics concepts with reference to an example: the Lego Mindstorms NXT kit.

The Future of Robots


The great success of robots so far has been in automating repetitive tasks in process control and assembly, yielding dramatic cuts in production costs, but the next step towards cognition and more human-like behaviour has proved elusive. It has been difficult to make robots that can truly learn and adapt to unexpected situations in the way humans can, and it has been equally challenging to develop a machine capable of moving as smoothly as an animal: there is still no robot capable of walking properly, without jerky, slightly unbalanced movements. Today's robot designers will have to solve some fundamental problems before robots can become as versatile, independent and useful as the ones we've seen for years in the movies.

The future of robotics is split between four main categories:

Telepresence

A Telepresence robot.

This kind of robot allows people to hold remote-controlled videoconferences rather than traveling great distances for a face-to-face meeting. At its best, telepresence lets users perceive no difference from actual presence. The ability to manipulate a remote object or environment is an important aspect of real telepresence systems. In remote-controlled telepresence, the movements of the user's hands are sensed by wired gloves and inertial sensors, and a robot in a remote location copies those movements.

The ability to see and manipulate objects in remote locations, particularly hostile ones ("dull, dirty and dangerous jobs"), remains a key application for mobile robots. In the military, this translates into a multitude of applications. For example, iRobot's PackBots are deployed in Iraq and Afghanistan to scope out who is in a building, ahead of soldiers entering it. Healthcare is also taking advantage of telepresence applications, such as InTouch Health's mobile videoconferencing robots, which let physicians interact remotely with patients in the hospital.

Rescue and lifting


This kind of robot capability is mostly used in warfare. Such robots help soldiers clear out buildings on the battlefield and can sense whether there is danger inside a building before the soldiers enter.

The U.S. Army is developing robots that can retrieve and carry a wounded soldier from a battle site, a very risky task for human soldiers. Vecna Technologies' Battlefield Extraction-Assist Robot (BEAR) is a prototype robot that can detect people using infrared, pick them up and carry them to safety. In the longer term, the technology could also be used for healthcare and home care applications.

Navigation

Navigation allows robots to create their own routes to deliver items or scan for assets. A robot programmed for navigation might, for example, guide people through a museum.

Aethon's Tug robots offer a sophisticated navigation capability that lets them download a map of a building (such as a hospital) and use dead reckoning to find their way from one location to another. They can sense objects and obstructions and create a new route on the fly if necessary. Tugs are being used in a number of hospitals to deliver drugs, meals or other supplies.
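As a concrete illustration of dead reckoning on the NXT kit used in this book, the sketch below estimates the distance driven from a motor's built-in rotation counter. It is a minimal sketch, not Aethon's method; the wheel circumference, motor ports and timings are assumptions.

task main()
{
  // Hedged sketch: straight-line dead reckoning from the motor encoder.
  // Assumes drive motors on ports A and C and a wheel about 5.6 cm in
  // diameter (circumference ~17.6 cm); both values are illustrative.
  ResetRotationCount(OUT_A);
  OnFwd(OUT_AC, 50);                      // drive forward for 3 seconds
  Wait(3000);
  Off(OUT_AC);
  float deg = MotorRotationCount(OUT_A);  // degrees turned by motor A
  float cm = deg / 360.0 * 17.6;          // distance = revolutions x circumference
  NumOut(0, LCD_LINE1, cm);               // show the estimate on the display
}

Real dead reckoning accumulates such increments continuously and also tracks heading; the self-localisation section later in this book gives the corresponding equations.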

Humanoid robots


These kinds of robots are autonomous: they adapt to their environment while pursuing their goal. A fully functioning autonomous robot has the ability to:

  • Gain information about its environment.
  • Work without human help for extended periods of time.
  • Move all of its parts around its environment without human assistance.
  • Avoid situations that are harmful to people or property.

Since humanoid robots try to simulate human structure and behavior, they are usually more complex than other kinds of robots. Humanoid robots are created to imitate some of the physical and mental tasks that humans perform daily. Scientists from many fields combine efforts so that one day humanoid robots will be able to understand human intelligence, and to reason and act like humans themselves.

There is very little U.S. activity in humanoid robots, or androids, a field which is dominated by Japan and Korea (such as Honda's Asimo and Kokoro's Actroid). However, one U.S. company, Hanson Robotics, is pioneering lifelike heads with realistic-looking skin and features. The heads can talk when someone approaches and maintain eye contact during the conversation.

Programming Concepts


Programming a robot means describing the desired robot behavior, and this description must be supported by a programming system. Programming systems can be distinguished both by their aim and by their method of programming. Programming the system itself is typically undertaken by developers, while low- or medium-level task programming, the allocation of tasks, is done by end users. Robot programming also involves sophisticated human-robot interaction techniques: the robot must model and reason about the human programmer's intentions and be able to recognize plans presented by the human.

Robot systems have special programming demands related to their complex interactions with real environments and to their complex sensors and actuators. These demands call for appropriate human-robot programming support, such as programming languages, tools, and distributed infrastructures. Robot programming systems have three important conceptual components:

  1. The programming component that includes designs for programming languages, libraries and application programming interfaces (APIs), which enable a programmer to describe desired robot behavior.
  2. The underlying infrastructure includes the designs for architectures that support and execute robot behavior descriptions, especially in distributed environments.
  3. The design of interactive systems that allow the human programmer to interact with the programming component, to create, modify and examine programs and system resources, both statically and during execution.

A programming system offers one of two main programming methods: manual programming or automatic programming.

Manual programming involves a text-based or graphical system. It is common in industry, where simple robot-specific languages are used. Text-based systems have diverged from robot-specific languages towards higher-level general-purpose programming languages such as C++, Java, Haskell and RobotC.

Automatic programming includes programming by demonstration (PBD), which is currently popular for training specific tasks, particularly in industry. Of the two main programming methods, considerable effort is aimed at improving PBD systems. For example, in automatic programming a robot may pick out the important movements and plan its own path between points, or execute the key steps in a different order.


Robot Hardware Example


The NXT Lego Mindstorms kit consists of the following robot hardware:

  • NXT Intelligent Brick
  • Ultrasonic distance and movement sensor
  • Sound sensor, with sound pattern
  • Light sensor, detecting light intensity
  • Touch sensor (press/release/bump detection)
  • Three interactive servo motors
  • Three output ports, six-wire digital platform
  • Four input ports, six-wire digital platform
  • Loudspeaker, 8 kHz sound quality
  • Power source: six AA batteries
  • User Guide
  • Easy to use software
  • 577 LEGO Technic parts

Robot Control


The main component in robot control is a brick-shaped computer called the NXT Intelligent Brick. It is a state-of-the-art programmable brick that acts as the brain of the robot. The NXT Intelligent Brick can take input from up to four sensors and control up to three motors. The brick has a 100 × 64 pixel monochrome LCD display and four buttons that can be used to navigate a user interface of hierarchical menus. The brick also contains a speaker that can play sound files at sampling rates up to 8 kHz. Power is supplied either by six AA batteries (1.5 V each) or by a rechargeable Li-ion battery with a charger. In addition to the NXT Intelligent Brick, robot control relies on the following components (a short sketch using them follows the list):

  • Motors: the motors are the robot's primary source of mechanical power. The robot uses them to move around, lift loads, operate arms, grab objects, pump air, and perform any other task that requires power. There are different kinds of electric motors, but all share the property of converting electrical energy into mechanical energy.
  • Touch sensors: a touch sensor works more or less like the push-button portion of a doorbell: when it is pressed, a circuit is completed and electricity flows through it.
  • Light sensors: the light sensors detect light and measure its intensity. In spite of their limitations, they can be used for a broad range of applications.
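The sketch below ties these parts together in NXC: drive forward until a touch sensor closes, then stop and display a light reading. It is a minimal sketch; the port assignments (touch on 1, light on 3, motors on A and C) are assumptions.

task main()
{
  SetSensorTouch(IN_1);            // touch sensor on port 1 (assumed)
  SetSensorLight(IN_3);            // light sensor on port 3 (assumed)
  OnFwd(OUT_AC, 75);               // run both drive motors at 75% power
  until (SENSOR_1 == 1);           // wait until the bumper is pressed
  Off(OUT_AC);                     // stop the motors
  NumOut(0, LCD_LINE1, SENSOR_3);  // show the light intensity (0-100)
}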

Probably the biggest problem facing robot control is overall system reliability. A robot might face any combination of the following failure modes:

  • Mechanical failures: these might range from a temporarily jammed movement to wedged gear trains or a serious mechanical breakdown.
  • Electrical failures: the computer itself rarely fails, but loose connections to motors and sensors are a common problem.
  • Sensor unreliability: sensors may provide noisy data (data that is sometimes accurate, sometimes not) or data that is simply incorrect (e.g. a touch sensor that fails to trigger).

The task of robot control can be solved by introducing three different robot behaviors:

  • Pushing an object
  • Avoiding the black line
  • Wandering around

The robot has no way of checking that the area is free of objects that must be removed, and new objects may be dropped into the area at any time; as a result, it may look as if the robot is patrolling, searching eagerly for unwanted objects to throw out.

The behaviors should be implemented as a kind of servo mechanism: a selected behavior is performed only for a very short time interval before the sensors are checked again.

You first need to describe these behaviors in more detail and, for each one, specify the sensor readings that trigger it. Also specify an order of importance in which the behaviors should be activated when more than one is possible; a sketch of such a priority loop follows.
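Here is one possible shape of that loop in NXC, a minimal sketch rather than a finished solution. The priority order (line avoidance first, pushing second, wandering as the default), the ports, the threshold and the timings are all assumptions.

#define BLACK 35                    // light reading taken as "on the line" (assumed)

task main()
{
  SetSensorLight(IN_3);             // floor-facing light sensor (assumed)
  SetSensorTouch(IN_1);             // touch sensor detects objects (assumed)
  while (true)
  {
    if (SENSOR_3 < BLACK)           // highest priority: avoid the black line
    {
      OnRev(OUT_AC, 75);            // back away from the line
      Wait(400);
      OnFwd(OUT_A, 75);             // pivot to a new heading
      Wait(300);
    }
    else if (SENSOR_1 == 1)         // next priority: push the detected object
      OnFwd(OUT_AC, 100);
    else                            // default behavior: wander
      OnFwd(OUT_AC, 50);
    Wait(50);                       // short interval before checking sensors again
  }
}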

Mathematics of Robot Control


Mathematics is a powerful tool for robot system design and control, especially through models of a system's dynamics. This applies to any system, but particularly to humanoid robots. Dynamic control is generally a desirable solution when the dynamics can be calculated and the dynamic effects matter for the particular control problem. Although computation time is falling rapidly with newer computers, some dynamic problems (like link flexibility) are still considered time-consuming.

Robotics grew out of the mathematical modeling of robot kinematics and dynamics. Two-legged walking was studied with the aim of generating a stable gait, and the arm and hand were modeled in order to allow manipulation. Rehabilitation devices were an early application: prostheses were made for the leg, arm and hand, eventually resulting in active exoskeletons. Multilegged robots were designed in an attempt to solve transport on rough terrain. Full kinematics and dynamics were involved in the mathematical description from the very beginning of robotics; in the mathematical modeling of a walking robot, Newton's and Euler's equations were used to describe the dynamics of robot control.

Other mathematical tools are also used, for example in sensor data acquisition. Three of them appear in sequence in robot programming:

  • Averages
  • Interpolation
  • Hysteresis

Averages are a useful instrument to soften the differences between single readings and to ignore temporary peaks. They allow you to group a set of readings and consider it as a single value. When you are dealing with a flow of data coming from a sensor, the moving average is the right tool to process the last n readings.

This is how to use a moving average for three values in a program:

// Moving average over the last three light-sensor readings (NXC).
int avg, v1, v2, v3;

task main()
{
  SetSensorLight(IN_1);     // sensor assumed on port 1
  v2 = SENSOR_1;            // seed the window with two readings
  v3 = SENSOR_1;

  while (true)
  {
    v1 = v2;                // shift the window
    v2 = v3;
    v3 = SENSOR_1;          // newest reading
    avg = (v1 + v2 + v3) / 3;
    Wait(50);               // sample every 50 ms
  }
}

Interpolation is a class of mathematical tools designed to estimate values from known data. The interpolation technique proves useful when you want to estimate the value of a quantity that falls between two known limits. Linear interpolation draws a straight line across two points in a graph; you can then use that line to calculate any value in the interval.
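A small linear-interpolation helper might look like this in NXC. The calibration points (light readings 30 and 70 mapping to 0 cm and 10 cm) are invented for illustration; the Lerp function itself is the standard two-point formula.

float Lerp(float x0, float y0, float x1, float y1, float x)
{
  // Straight line through (x0,y0) and (x1,y1), evaluated at x.
  return y0 + (y1 - y0) * (x - x0) / (x1 - x0);
}

task main()
{
  // Suppose readings 30 and 70 were measured at 0 cm and 10 cm.
  float d = Lerp(30, 0, 70, 10, 50);   // estimate for a reading of 50
  NumOut(0, LCD_LINE1, d);             // displays 5
}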

Hysteresis helps reduce the number of corrections your robot has to make to keep within a required behavior. By adding some hysteresis to your algorithms, your robot becomes less reactive to small changes. Hysteresis can also increase the efficiency of your system.

This sample program demonstrates hysteresis: it reads a light sensor and plays tones to ask you to turn left or right.

#define GRAY 50    // expected reading on the gray boundary
#define H 3        // half-width of the hysteresis band

task main()
{
  SetSensorLight(IN_1);
  while (true)
  {
    if (SENSOR_1 > GRAY + H)           // brighter than the band: turn one way
      PlayTone(440, 20);
    else if (SENSOR_1 < GRAY - H)      // GRAY-H to GRAY+H is the hysteresis domain
      PlayTone(1760, 20);              // darker than the band: turn the other way
    Wait(20);
  }
}

Robot Programming Languages


The development of modern robot programming languages started in the mid-1970s. Examples from that period are VAL (the predecessor of Adept's V+) and AML, early robot programming languages that already had sophisticated data structures.

There is no single robot programming language that allows flexible specification of functionally interdependent path properties. The choice of language depends on a few points:

  • Your previous experience, and whether you are comfortable with that kind of programming language.
  • The time and effort you plan to put into the program, because not every programming language is equally hard or easy to work with.

Text-based programming is common in industry, where simple robot languages, typically provided by the robot's developer, are used. Text-based systems have since diverged from these robot-specific languages towards more general-purpose, higher-level languages suitable for any robot. Typically this involves extending existing languages such as C++, Java and Haskell.

The LEGO MINDSTORMS NXT kit comes with a programming language called NXC. NXC stands for Not eXactly C; it is a simple programming language for the Lego Mindstorms. The NXT has a bytecode interpreter, provided by LEGO, that can be used to execute programs. The NXC compiler translates a source program into NXT bytecodes, which can then be executed on the brick itself.

NXC is not a general-purpose programming language: there are many restrictions that stem from limitations of the NXT bytecode interpreter. The NXC Application Programming Interface (API) describes the system functions, constants, and macros that programs can use. The entire API is defined in a special file known as the "header file", which is automatically included when compiling a program. Together, the NXC language and the API provide the information needed to write NXC programs.

NXC is also a case-sensitive language, just like C and C++, which means that the identifier "xYz" is not the same identifier as "Xyz". Similarly, the "if" statement begins with the keyword "if", but "IF", "If", or "iF" are all just identifiers. NXC also uses lexical rules to describe how source files break into individual tokens, including the way comments are written and which characters are valid in identifiers.
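A minimal NXC program makes these points concrete. It is a sketch only: the two counters exist purely to show that identifiers differing in case are distinct.

task main()
{
  int counter = 1;      // C++-style line comment
  int Counter = 2;      /* C-style block comment */
  // "counter" and "Counter" are two different variables in NXC.
  NumOut(0, LCD_LINE1, counter + Counter);   // displays 3
}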

There are also many other programming languages available.

NXT-G (Windows, Mac)
  • Pros
    • Easy to quickly create simple programs
    • Programming flow is easy to see
    • Included in standard kit
  • Cons
    • Somewhat limited capabilities
    • Integers only – floating point numbers not supported
    • Each basic math operation (addition, subtraction, multiplication, division) requires a separate block
    • Comparatively slow execution speeds
    • High memory usage.
Robolab (Windows, Mac)
  • Pros
    • Fairly easy to use
    • Fairly advanced programming possible
    • Very similar to LabVIEW environment
    • Included in standard educational kit
  • Cons
    • Block connections can become confusing
    • No good method for creating block set functions for reuse
RobotC (Windows)
  • Pros
    • Fast execution
    • Advanced programming
  • Cons
    • Text-based language is harder for beginners
    • Must be bought separately from kit
LabVIEW Toolkit (Windows, Mac)
  • Pros
    • Free
    • Can create blocks for use with NXT-G programming
    • Advanced data analysis
    • Common industry programming environment
  • Cons
    • Intermediate skill required
    • Advanced programming more limited than text based languages
BricxCC (Windows)
  • Free Windows IDE that supports many programming languages:
    • NQC (C-based language for the RCX)
    • NXC/NBC (C-based and assembly code for the NXT)
    • C/C++
    • Pascal
    • Java

Obstacle Avoidance

Sumo robots made with Mindstorms. Knowledge of obstacles is critical for many tasks.

Obstacle avoidance is one of the most important aspects of robotics. Without a way to avoid obstacles, robots would be very limited in what they could do, especially those whose programs require them to navigate around or to a specific location. For instance, if a robot were programmed to move from point A to point B but had no way of detecting obstacles, it could not reach its destination unless the path were a straight line with nothing in the way. The robot would eventually run into an obstacle and keep trying to move against it, because it could not detect the obstacle in its path.

With a way to avoid obstacles, robots have a better navigation system and can get past obstacles with ease. Detectors such as touch or light sensors help robots detect obstacles. Through its program, the robot can be directed to move around an obstacle in a number of ways once it is detected: it might reverse and then turn to find a different path, or simply turn left or right as soon as the obstacle is detected, as in the sketch below. Using the light sensor, the robot can detect lines on the floor that it can either avoid or follow; once the robot picks up a dark line laid out as a path, it can be programmed to follow it without leaving the path.
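The following NXC sketch implements the reverse-and-turn strategy just described, using a touch sensor as a bumper. The ports, speeds and timings are assumptions to adjust for a real robot.

task main()
{
  SetSensorTouch(IN_1);          // bumper on port 1 (assumed)
  OnFwd(OUT_AC, 75);             // drive forward
  while (true)
  {
    if (SENSOR_1 == 1)           // bumper pressed: obstacle detected
    {
      OnRev(OUT_AC, 75);         // back away from the obstacle
      Wait(500);
      OnFwd(OUT_A, 75);          // spin in place to pick a new path
      OnRev(OUT_C, 75);
      Wait(400);
      OnFwd(OUT_AC, 75);         // resume driving forward
    }
    Wait(10);
  }
}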

Task Planning and Navigation

A robot arm made with Mindstorms NXT. Without good knowledge of its current state, a robot arm cannot operate reliably.

Task planning and navigation refer to how a robot is programmed to perform its assigned tasks and how it navigates around its environment, avoiding obstacles and collisions so that it can continue to function properly. The robot's navigation system is very important: if a robot cannot move around its environment with ease and avoid obstacles, it will not be able to complete its tasks properly.

Task planning is also important, as it lays out the order in which a robot completes its tasks. A robot can be programmed to complete several tasks, but if the tasks are not completed in a certain order, the end result may not be desirable. Task planning can range from plotting tasks in a fixed routine to branching into different decision-making paths: for instance, a robot programmed to go straight until it either hits an obstacle or detects a dark line on its path can then be made to do many different things, depending on the programmer. Task planning covers not only the robot's decision making but also the loops the robot performs upon meeting specific conditions. It keeps the robot in check, making sure it performs all of its tasks as desired; a robot that cannot plan its tasks properly would just perform random tasks as it sees fit, possibly completely at odds with how the programmer wants it to behave.

A robot navigates around its environment using sensors, for example to detect whether it is too close to an object so that it can avoid a collision. Distance sensing is only one option. Light sensors let a robot follow a specified path drawn on the floor: the robot's light sensor can be programmed to pick up dark colors, and once it detects a dark color, such as a black line on the floor, the robot can navigate itself to follow that line, as in the sketch below. Touch sensors help a robot avoid collisions with obstacles and walls: the robot moves around its environment freely, and when it bumps into an obstacle the touch sensor activates; depending on its program, the robot then backs up and plots a new path until it runs into another obstacle.
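A classic single-sensor line follower zig-zags along the edge of the line rather than its center. The sketch below shows this in NXC; the threshold and ports are assumptions that need calibrating against your floor and line.

#define EDGE 45                 // reading on the line/floor boundary (assumed)

task main()
{
  SetSensorLight(IN_3);         // floor-facing light sensor (assumed)
  while (true)
  {
    if (SENSOR_3 < EDGE)        // over the dark line: curve one way
    {
      OnFwd(OUT_A, 60);
      Off(OUT_C);
    }
    else                        // over the light floor: curve the other way
    {
      OnFwd(OUT_C, 60);
      Off(OUT_A);
    }
    Wait(10);
  }
}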

Navigation can be defined as the combination of the three fundamental competences:

  • Self-Localisation
  • Path Planning
  • Map-Building and Map-Interpretation

Self-Localisation is the ability of an autonomous robot to estimate its position while moving about its environment. Self-Localisation implies measurement with respect to a certain coordinate frame: this can be either pre-determined by some external input or defined by the robot automatically. The coordinate frame itself, though, is not of fundamental importance: what matters is how the robot can estimate the relative positions of features of interest in the world (landmarks, obstacles, targets, etc.) and its own position with respect to them.
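As a sketch of the simplest self-localisation method, consider dead reckoning for a differential-drive robot with wheel base $b$: if the left and right wheels have travelled $d_L$ and $d_R$ since the last update, the pose estimate $(x, y, \theta)$ can be updated as

$$d = \frac{d_L + d_R}{2}, \qquad \theta' = \theta + \frac{d_R - d_L}{b}, \qquad x' = x + d\cos\theta', \qquad y' = y + d\sin\theta'$$

Because each update adds a little sensor error, the estimate drifts over time, which is why dead reckoning is usually combined with observations of landmarks.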

Path Planning is used to determine a route from one coordinate location to another along a set of waypoints. For example, if you had an image of a maze and you needed to determine the best path from where the robot is currently located to where it needs to be you would use Path Planning to determine the shortest or best path to the desired location.

Map-Building is when a robot generates a map of the environment using sensor information, while localizing itself relative to the map. This is especially challenging because for localization the robot needs to know where the features are, whereas for map-building the robot needs to know where it is on the map. In addition, there are inherent uncertainties in discerning the robot's relative movement from its various sensors.

Robot Vision


Vision systems are generally used in manufacturing to perform simple tasks such as counting objects that pass by on a conveyor belt, reading serial numbers or searching for surface defects. These tasks are handled by simple programs working with a camera installed on the robot; the object is detected after it is scanned. A robot does not process images the way a human does. While humans can rely on inference systems and assumptions, computing devices must see by examining the individual pixels of an image, processing them and attempting to develop conclusions with the assistance of knowledge bases and features such as pattern recognition engines. Machine vision and computer vision systems can process images consistently, but computer-based image processing systems are typically designed to perform single, repetitive tasks.

Common processing methods, with a small sketch of the first two after the list:

  • Pixel counting: counts the number of light or dark pixels
  • Thresholding: converts an image with gray tones to simply black and white
  • Segmentation: used to locate and/or count parts
  • Blob discovery & manipulation: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure.
  • Recognition-by-components: extracting geons from visual input
  • Robust pattern recognition: location of an object that may be rotated, partially hidden by another object, or varying in size
  • Barcode reading: decoding of 1D and 2D codes designed to be read or scanned by machines
  • Optical character recognition: automated reading of text such as serial numbers
  • Gauging: measurement of object dimensions in inches or millimeters
  • Edge detection: finding object edges
  • Template matching: finding, matching, and/or counting specific patterns
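Thresholding and pixel counting are simple enough to sketch directly. The NXC fragment below thresholds a tiny grayscale buffer and counts the dark pixels; the pixel values and the threshold of 128 are invented for illustration (the NXT has no camera, so this is purely a demonstration of the arithmetic).

#define THRESHOLD 128   // gray level separating "dark" from "light" (assumed)

task main()
{
  byte img[] = {12, 200, 90, 255, 47, 130};   // hypothetical pixel values
  int dark = 0;
  for (int i = 0; i < ArrayLen(img); i++)
  {
    if (img[i] < THRESHOLD)   // threshold the pixel...
      dark++;                 // ...and count it if it is dark
  }
  NumOut(0, LCD_LINE1, dark); // displays 3
}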

Some of the things that a robot's vision system may consist of are:

  • One or more digital or analog cameras (black-and-white or color) with suitable optics for acquiring images
  • Camera interface for digitizing images (widely known as a "frame grabber")
  • A processor (often a PC or embedded processor, such as a DSP)
  • Input/Output hardware (e.g. digital I/O) or communication links (e.g. network connection or RS-232) to report results
  • Lenses to focus the desired field of view onto the image sensor.
  • Suitable, often very specialized, light sources (LED illuminators, fluorescent or halogen lamps etc.)
  • A program to process images and detect relevant features.
  • A synchronizing sensor for part detection (often an optical or magnetic sensor) to trigger image acquisition and processing.
  • Some form of actuators used to sort or reject defective parts.

The applications of machine vision (MV) are diverse, covering areas including, but not limited to:

  • Large-scale industrial manufacture
  • Short-run unique object manufacture
  • Safety systems in industrial environments
  • Inspection of pre-manufactured objects (e.g. quality control, failure investigation)
  • Visual stock control and management systems (counting, barcode reading, store interfaces for digital systems)
  • Control of automated guided vehicles (AGVs)
  • Quality control and refinement of food products
  • Retail automation

Knowledge Based Vision Systems


As described above, conventional vision systems perform single, repetitive tasks. A knowledge-based vision system would instead allow a robot to make inferences and assumptions. The robot would be able to identify an object and then determine whether that object is dangerous. The robot would first be given a list of objects to identify; using its knowledge system, it would search the web for images of the items it was asked to find. Once it has found and identified all of the objects, the robot begins its physical search for them. Vision sensors are essential to this process, but motion control must also be functioning: the robot must be able to move and traverse obstacles in order to find its objects.

A knowledge-based vision system in turn links to the artificial intelligence of a robot. To create a fully functional android, the robot must have both working artificial intelligence and a working knowledge-based vision system. Not only would such an android be able to think on its own, it would be able to view the world as a human does: whatever it sees, it can form its own assumptions and perceptions about its surroundings and the objects it identifies.

A challenge called the Semantic Robot Vision Challenge (SRVC) illustrates what knowledge-based vision systems are. SRVC is a research competition designed to push the state of the art in image understanding and in the automatic acquisition of knowledge from large unstructured databases of images (such as those generally found on the web). In the competition, fully autonomous robots receive a text list of objects that they are to find. They use the web to automatically find image examples of those objects in order to learn visual models, which are then used to identify the objects through the robots' cameras.

In general terms, the pipeline works as follows (a toy sketch of the coordinate conversion appears after the list):

  • The vision sensor must be able to send coordinates to the motion controller.
  • The motion controller must be able to accept commands.
  • The vision pixel coordinates must be converted to real-world coordinates by the vision sensor, an additional PC, or the motion controller.
  • Send data.
  • Make move.
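For the coordinate-conversion step, the simplest possible model is a linear scale-and-offset calibration; real systems use a full camera calibration instead. Everything in this NXC sketch (the scale factors, offsets and the blob position) is an invented example.

task main()
{
  float scaleX = 0.12;            // cm per pixel (assumed calibration)
  float scaleY = 0.12;
  float offX = -20.0;             // world position of pixel (0,0), in cm
  float offY = 15.0;

  int px = 160, py = 120;         // hypothetical blob centre in the image
  float wx = offX + px * scaleX;  // world x coordinate for the motion controller
  float wy = offY + py * scaleY;  // world y coordinate
  NumOut(0, LCD_LINE1, wx);
  NumOut(0, LCD_LINE2, wy);
}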

Robots and Artificial Intelligence


Artificial intelligence is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality.

Artificial intelligence can be described as the mind of a robot: its ability to learn, reason, solve problems, perceive, and understand everything around it. You can build a robot and program it to do certain tasks, but it will not be intelligent; it will perform only the tasks it was programmed to do, in a continuous loop. If you programmed a robot to pick up a box in a specific location and move it to another destination, and you then moved the box a short distance from where it is supposed to be, the robot would not be able to work out that the box had moved. It would register that the box is not there and halt until the box is back where it belongs. Such a robot can move and do what it is programmed to do, but without artificial intelligence it cannot learn or perceive changes in the environment around it. One of the main challenges today is implementing advanced artificial intelligence in robots. Although robots can currently perform only the tasks they are programmed to do, artificial intelligence may one day allow robots to learn and think as humans do: not only performing their programmed tasks, but adapting to their environments and situations, learning, reasoning, solving problems, and even processing and understanding language. Robots may one day think and act like humans and interact and communicate with us as well.

When most people think of artificial intelligence in robots, the most common image is that of a humanoid robot: one that can talk and think like a human being and is capable of making its own decisions. Although this is one way artificial intelligence can be implemented in robots, it is not the only way; AI can be used for more than just androids. Implemented in a machine whose task is to monitor something, it lets that machine perform its task more efficiently and do more than just simple, fixed jobs.

Advantages:

  1. Provide answers for decisions, processes and tasks that are repetitive
  2. Hold huge amounts of information
  3. Minimize employee training costs
  4. Centralize the decision making process
  5. Make things more efficient by reducing the time needed to solve problems
  6. Combine various human expert intelligences
  7. Reduce the number of human errors
  8. Provide strategic and comparative advantages that may create problems for competitors
  9. Look over transactions that human experts may not think of

Disadvantages:

  1. No common sense used in making decisions
  2. Lack of creative responses that human experts are capable of
  3. Not capable of explaining the logic and reasoning behind a decision
  4. Complex processes are not easy to automate
  5. No flexibility or ability to adapt to changing environments
  6. Not able to recognize when there is no answer