By Ariel Ramirez and Ethan Jackson
- 1 The Future of Robots
- 2 Programming Concepts
- 3 Robot Control
- 4 Robot Hardware
- 5 Mathematics of a Robot
- 6 Robot Programming
- 7 Obstacle Avoidance
- 8 Task Planning and Navigation
- 9 Robot Vision
- 10 Knowledge Based Vision Systems
- 11 Robots and Artificial Intelligence
- 12 Resources
- 13 External links
The Future of Robots
A future where robots are as common as cars, and cheaper, is on the way. Robots will soon be everywhere, in our homes and at work, and they will change the way we live. This will raise many philosophical, social, and political questions that will have to be answered. In science fiction, robots become so intelligent that they decide to take over the world because humans are deemed inferior. In real life, however, they might not choose to do that.
Autonomous androids that look just like you could conduct your business, attend conferences, and go shopping on your behalf while you sat in the comfort of your home. A camera would monitor your facial expressions, and your android's face would mirror them. Ishiguro says there is even a psychological phenomenon whereby, if someone touches your android, you feel it: "It's a very tactile sensation." Robots will be commonplace in homes, factories, agriculture, building and construction, undersea, space, mining, hospitals and streets, used for repair, construction, maintenance, security, entertainment, companionship, and care.
Purposes of Future Robots:
- Robotized space vehicles and facilities
- Anthropomorphic general-purpose robots with human-like hands, used for factory jobs
- Intelligent robots for unmanned plants
- Totally automated factories
- Robots for guiding blind people
- Robots for almost any job in the home or hospital, including robotic surgery
- Housework robots for cleaning, washing, etc.
- Domestic robots that are small, specialized, and attractive, e.g. cuddly
Robotics will greatly affect the economy as well. If, in the next two decades, robots become capable of replacing humans in most manufacturing and service jobs, economic development will be primarily determined by the advancement of robotics. Given Japan's current strength in this field, it may well become the economic leader of the next 20 years.
Programming Concepts

The development of robot programming concepts is almost as old as the development of robot manipulators itself. As the ultimate goal of industrial robotics has been the development of sophisticated production machines that reduce costs in manufacturing areas like material handling, welding, spray-painting and assembly, tremendous efforts have been undertaken by the international robotics community to design programming methods that are user-friendly and at the same time powerful. The evolution reaches from early control concepts at the hardware level, via point-to-point and simple motion-level languages, to motion-oriented structured robot programming languages. Robot programming languages can be classified according to the robot reference model, the type of control structure used for data, the type of motion specification, the sensors, the interfaces to external machines, and the peripherals used. The following types of programming languages are available:
- Point-to-point motion languages
- Basic motion languages at assembler level
- Non-structured high-level programming languages
- Structured high-level programming languages
- NC-type languages
- Object-oriented languages
- Task-oriented languages
Robot Control

There are many ways to design software for controlling a robot. The focus here is not on low-level coding issues but on high-level concepts: the special situations robots will encounter and ways to address them. The approach taken here proposes and examines some control software architectures that will comprise the brains of the robot.
Probably the biggest problem facing a robot is overall system reliability. A robot might face any combination of the following failure modes:
Mechanical Failures - These might range from temporarily jammed movements to wedged geartrains or a serious mechanical breakdown.
Electrical Failures - We hope it is safe to assume that the computer itself will not fail, but loose connections to motors and sensors are a common problem.
Sensor Unreliability - Sensors will provide noisy data (data that is sometimes accurate, sometimes not) or data that is simply incorrect (a touch sensor fails to trigger).
The first two of the above problems can be minimized with careful design, but the third category, sensor unreliability, warrants a closer look. Before discussing control ideas further, here is a brief analysis of the sensor problem.
An example of a robot control problem is interaction with a wall. In a worst-case scenario, what could happen while a robot was merrily running along, following a wall? Several possibilities:
1. The robot could run into an object or a corner, properly triggering a touch sensor.
2. The robot could run into an object or corner, not triggering a touch sensor.
3. The robot could wander off away from the wall.
4. The robot could slam into the wall, get stuck, and conditionally trigger a touch sensor.
5. The proximity sensor could fall off its mount, causing a series of incorrect sensor readings.

Ideally, control software should expect occurrences of cases like those numbered #1 through #4 and be able to detect case #5.
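One way to approach this is to cross-check a touch sensor against a proximity sensor and to watch for implausible reading streaks. The sketch below is illustrative only: the sensor names, thresholds, and the ten-sample window are assumptions, not a specific robot's API.

```python
# Hedged sketch: cross-checking two sensors against the failure cases above.
# Thresholds (cm) and the 10-sample fault window are illustrative assumptions.

def classify_reading(touch_triggered, proximity_cm, history):
    """Classify one sensor sample against the wall-following failure cases.

    history: recent proximity readings, used to spot a dead or fallen-off
    sensor (case #5) as a long run of physically implausible values.
    """
    history.append(proximity_cm)
    recent = history[-10:]
    # Case #5: readings stuck at impossible values for many samples in a row.
    if len(recent) == 10 and all(r < 0 or r > 400 for r in recent):
        return "sensor-fault"
    if touch_triggered and proximity_cm < 5:
        return "contact"          # case #1: touch and proximity agree
    if not touch_triggered and proximity_cm < 2:
        return "missed-contact"   # case #2: proximity says contact, touch silent
    if proximity_cm > 100:
        return "lost-wall"        # case #3: wandered away from the wall
    return "ok"
```

The same cross-checking idea extends to any redundant sensor pair: disagreement between two sensors that should agree is itself useful information.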
Robot Hardware

Wheels

Robot builders often find that the trickiest part of a robotics project is making the wheels. First you need to find a suitable tire/wheel combination; then you must figure out a way to attach a sprocket so that it will handle the torque of a geared-down drive motor.
Motors

At first, operating motors seems quite simple: apply a voltage across the terminals, and the motor spins. But what if you want to control which direction the motor spins? Correct, you reverse the wires. Now what if you want the motor to spin at half that speed? You would use less voltage. But how would you get a robot to do those things autonomously? How would you know what voltage a motor should get? Why not 50 V instead of 12 V? What about motor overheating? Operating motors can be much more complicated than you think.
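In practice, a robot controls direction by switching which leg of an H-bridge is driven (the electronic equivalent of reversing the wires) and controls speed with a PWM duty cycle rather than a raw voltage. A minimal sketch of that mapping, with the interface invented for illustration:

```python
# Illustrative sketch of DC motor control via an H-bridge and PWM.
# The (forward, reverse, duty) tuple is a placeholder for whatever pin or
# driver interface your hardware actually provides.

def motor_command(speed):
    """Map a speed in [-1.0, 1.0] to (forward_leg, reverse_leg, duty_cycle).

    Reversing direction means energizing the opposite H-bridge leg; speed
    is scaled by the PWM duty cycle instead of by varying the voltage.
    """
    speed = max(-1.0, min(1.0, speed))   # clamp to the safe range
    duty = abs(speed)                    # 0.0 = stopped, 1.0 = full speed
    if speed >= 0:
        return (True, False, duty)       # drive the forward leg
    return (False, True, duty)           # drive the reverse leg
```

Clamping the input is one simple guard against the overheating question above: the controller can never request more than full duty.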
Sensors

A light sensor uses a photocell that allows your robot to detect and react to light. With a light sensor, you can program a whole new range of capabilities into your robot. Design a simple tracker that follows the beam of a flashlight, or use a light sensor to help your robot avoid getting stuck under furniture by making it steer away from shadows. You can even give your robot color vision by putting colored filters on different light sensors!
Mathematics of a Robot
Mathematics in robotics mainly involves robot kinematics. Robot kinematics is the study of the motion of robots. In a kinematic analysis the position, velocity and acceleration of all the links are calculated without considering the forces that cause the motion. The relationship between motion and the associated forces and torques is studied in robot dynamics. One of the most active areas within robot kinematics is screw theory.
Robot kinematics deals with aspects of redundancy, collision avoidance and singularity avoidance. In robot kinematics we treat each part of the robot by assigning a frame of reference to it, so a robot with many parts may have an individual frame assigned to each movable part. For simplicity we consider a single manipulator arm. The frames are named systematically with numbers: the immovable base of the manipulator is numbered 0, the first link joined to the base is numbered 1, the next link 2, and so on up to n for the last link.
Robot kinematics is mainly of two types: forward kinematics and inverse kinematics. Forward kinematics is also known as direct kinematics. In forward kinematics, the length of each link and the angle of each joint are given, and we calculate the position of any point in the work volume of the robot. In inverse kinematics, the length of each link and the position of a point in the work volume are given, and we calculate the angle of each joint.
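For a planar two-link arm, both directions can be written in a few lines. This is a sketch under the frame convention above (base frame 0, links 1 and 2 numbered outward); the inverse solution shown is one of the two possible elbow configurations.

```python
import math

# Forward and inverse kinematics for a planar two-link arm.
# l1, l2 are link lengths; t1, t2 are joint angles in radians.

def forward(l1, l2, t1, t2):
    """End-effector position (x, y) from link lengths and joint angles."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    """One (t1, t2) solution reaching (x, y): the positive-elbow branch."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp for numerical safety
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

Note the asymmetry the text describes: forward kinematics has exactly one answer, while inverse kinematics generally has several (here, elbow up versus elbow down) or none when the point lies outside the work volume.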
Robot Programming

Robots are an amazing feat of human intellect. We have become creatures with the capability to make tools so advanced that they literally seem to think for themselves. It is a far cry from the revolutionary invention of the wheel thousands of years ago. Still, no matter how independently robots may seem to act, they are all actually governed by internal reasoning created by humans.
There are several steps to programming a robot.

1. Buy a factory-built robot. There are a few different manufacturers, but the most popular and well-established maker of domestic robots is the iRobot company.
2. Set the internal clock on your factory-built robot. It may come with an atomic or radio-controlled clock already in it, which means you will only have to turn it on to set the time. Once the robot is set to the right date, schedule the times you would like it to operate; for cleaning robots, that is usually when you are away from home. Some robots may also require the measurements of the room they will be traveling in.
3. Build a robot. This step is for far more advanced robot users. The parts and construction of a robot largely depend on what its primary function will be: if you want the robot to carry things around, it will probably look like an arm mounted on wheels. Because of the large variety of robots and the complex nature of their construction, it is advisable to seek out specific plans for the robot you wish to build.
4. Write the code for your robot. Again, this seems like a vague and huge task for one step, and it is. There are a couple of different programming languages you can write your code in, depending on the software you are using. The code you write will also depend on the robot's primary function. Since you don't want your robot to get stuck in a corner, a common piece of programming deals with what to do in such a situation. The programming should resemble basic reasoning, for example: IF the left sensor detects an object THEN turn the wheels to the right. Programming requires a lot of foresight and trial and error.
5. Test your programming. This is important for both factory and home-built robots. Run the robot through all possible situations it may encounter and take note of how it performs. Go back and fix the code as you see fit.
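The IF/THEN reasoning described above translates almost directly into code. A minimal sketch, with invented sensor names and actions:

```python
# The corner-escape reasoning above, written as a tiny rule table.
# Sensor inputs and action names are illustrative, not a specific robot's API.

def decide(left_blocked, right_blocked):
    """Basic reactive rules for not getting stuck in a corner."""
    if left_blocked and right_blocked:
        return "reverse"        # both sides blocked: back out of the corner
    if left_blocked:
        return "turn-right"     # IF left sensor fires THEN turn right
    if right_blocked:
        return "turn-left"
    return "forward"
```

Testing such rules (step 5) means exercising every input combination, which is easy here precisely because the rule table is small and explicit.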
Obstacle Avoidance

Obstacle avoidance is one of the most important aspects of mobile robotics. Without it, robot movement would be very restrictive and fragile. This section explains several ways to accomplish obstacle avoidance within the home environment; given your own robot, you can experiment with the techniques to see which works best. Classic avoidance methods assume a point-like, omnidirectional vehicle and are doomed to rely on approximations. One contribution of recent research is a framework that considers shape and kinematics together, in an exact manner, within the obstacle avoidance process, by abstracting these constraints away from the particular avoidance method in use. This research faces two major problems. The first is to move vehicles in troublesome scenarios where current technology has proven of limited applicability. The second is to understand the role of the vehicle characteristics (shape, kinematics and dynamics) within the obstacle avoidance paradigm. For such vehicles, the configuration space is three-dimensional, while the control space is two-dimensional. The main idea is to construct the two-dimensional manifold of the configuration space that is defined by elementary circular paths. This manifold contains all the configurations that can be attained at each step of the obstacle avoidance and is thus general for all methods. Finally, a change of coordinates of this manifold is chosen in such a way that the elementary paths become straight lines. The three-dimensional obstacle avoidance problem with kinematic constraints is thereby transformed into a simple obstacle avoidance problem for a point moving in a two-dimensional space without any kinematic restriction (the usual approximation in obstacle avoidance), and existing avoidance techniques become applicable.
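The "elementary circular paths" idea can be illustrated concretely: over a short horizon, a differential-drive robot's motions are arcs of circles, one per (speed, turn-rate) pair. A hedged sketch that samples candidate arcs and keeps those whose endpoints stay clear of point obstacles; the speeds, turn rates, and clearance are illustrative numbers, and a real method would check the whole arc, not only its endpoint:

```python
import math

# Sampling elementary circular paths of a differential-drive robot and
# discarding those that end too close to an obstacle. Illustrative only.

def arc_endpoint(v, w, dt):
    """Pose (x, y, heading) after following speed v, turn rate w for dt."""
    if abs(w) < 1e-9:
        return v * dt, 0.0, 0.0            # straight line: infinite radius
    r = v / w                              # radius of the circular arc
    th = w * dt
    return r * math.sin(th), r * (1 - math.cos(th)), th

def admissible_arcs(obstacles, v=0.5, dt=1.0, clearance=0.3):
    """Turn rates whose arc endpoint stays clear of every obstacle (x, y)."""
    ok = []
    for w in [-1.0, -0.5, 0.0, 0.5, 1.0]:
        x, y, _ = arc_endpoint(v, w, dt)
        if all(math.hypot(x - ox, y - oy) > clearance
               for ox, oy in obstacles):
            ok.append(w)
    return ok
```

An obstacle straight ahead eliminates the straight path (w = 0) and the gentle turns, leaving only the sharper arcs: exactly the kind of kinematically feasible choice the manifold construction above makes systematic.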
Task Planning and Navigation

For any mobile device, the ability to navigate in its environment is one of the most important capabilities of all. Staying operational comes first: avoiding dangerous situations such as collisions, and staying within safe operating conditions (temperature, radiation, exposure to weather, etc.). But if any tasks are to be performed that relate to specific places in the robot's environment, navigation is a must. In the following, we present an overview of the skill of navigation, identify the basic blocks of a robot navigation system and the types of navigation systems, and take a closer look at their building components.
Task planning for robots usually relies on spatial information and on shallow domain knowledge, such as labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. Defining specific types of semantic maps, which integrate hierarchical spatial information and semantic knowledge, is key. Semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving planning efficiency in large domains. Several experiments demonstrate the effectiveness of such solutions in a domain involving robot navigation in a domestic environment. Robot navigation means the robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate in its environment, the robot, or any other mobility device, requires a representation of the environment, i.e. a map, and the ability to interpret that representation.
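The map-based path planning just described can be shown on the simplest representation: an occupancy grid. This sketch uses breadth-first search to find a shortest path; a real planner would add localization error handling and replanning, and the grid here is only a toy stand-in for a mapped environment.

```python
from collections import deque

# Shortest-path planning on an occupancy grid (0 = free, 1 = obstacle)
# with breadth-first search over 4-connected neighbors.

def plan_path(grid, start, goal):
    """Shortest 4-connected path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}            # also serves as the visited set
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                  # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None                          # goal unreachable on this map
```

Returning None when no path exists is the planner-level analogue of the semantic reasoning above: knowing a goal is unreachable is itself actionable information for the task planner.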
Robot Vision

One of the most fundamental tasks that vision is useful for is the recognition of objects (be they machine parts, light bulbs, etc.). Evolution Robotics introduced a significant milestone in the near-realtime recognition of objects based on feature points. The software identifies points in an image that look the same even if the object is moved, rotated or scaled to some small degree. Matching these points to previously seen image points allows the software to 'understand' what it is looking at even if it does not see exactly the same image.
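The matching step can be sketched abstractly. Each feature point gets a descriptor vector, and a new point matches a stored one only when its nearest stored descriptor is much closer than the second nearest (a standard "ratio test" that rejects ambiguous matches). The toy descriptors and the 0.7 ratio below are illustrative assumptions, not Evolution Robotics' actual method.

```python
import math

# Nearest-neighbor descriptor matching with a ratio test.
# Descriptors are plain numeric tuples here for illustration.

def match(descriptor, database, ratio=0.7):
    """Index of the matching database descriptor, or None if ambiguous."""
    dists = sorted(
        (math.dist(descriptor, d), i) for i, d in enumerate(database)
    )
    best, second = dists[0], dists[1]
    if best[0] < ratio * second[0]:      # unambiguous: clear nearest neighbor
        return best[1]
    return None                          # two candidates too close to call
```

This is why such recognition tolerates partial occlusion: each point votes independently, and an object is recognized once enough unambiguous matches agree.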
Image and/or video processing can be technically difficult. Home robots are continuously moving towards PC-based systems (laptop, netbook, embedded, etc.) that have the power to support complex image-processing functions. RoboRealm provides the software needed to get such a system up and running: many image-processing functions compiled into an easy-to-use Windows-based application that you can use with a webcam, TV tuner, IP camera, etc. Use RoboRealm to see your robot's environment, process the acquired image, analyze what needs to be done, and send the needed signals to your robot's motors, servos, etc.
Knowledge Based Vision Systems
A knowledge-based vision system automatically configures programs for image processing and supports the recognition of objects. The system runs in two phases. In the first phase, based on the primitives (curved edges and corners) and the explicit specification of the image content given by the user, a sequence of operators is generated and all their free parameters are computed adaptively. In this phase the system uses a rule base composed of knowledge about visual-processing operators, their parameters, and their interdependence. In the second phase, a hierarchical object model is formulated and edited by the user, based on the primitives selected in the first phase; the system's editor is provided specially for this purpose. Using the hierarchical object model facilitates a rapid interpretation of the results of the preceding image processing for the subsequent object recognition.
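The first phase can be sketched as a rule base that maps requested primitives to operator sequences, merging shared steps. The rules and operator names below are entirely invented for illustration; a real system would also compute each operator's free parameters adaptively.

```python
# Toy rule base: which operator sequence extracts each primitive.
# Operator and primitive names are hypothetical.

RULES = {
    "curved-edges": ["smooth", "edge-detect", "curve-fit"],
    "corners": ["smooth", "edge-detect", "corner-detect"],
}

def build_pipeline(primitives):
    """Merge the operator sequences for the requested primitives in order,
    keeping each operator once so shared steps (e.g. smoothing) are reused."""
    pipeline = []
    for p in primitives:
        for op in RULES.get(p, []):
            if op not in pipeline:
                pipeline.append(op)
    return pipeline
```

Encoding operator interdependence this way is what lets the system configure a working pipeline from a declarative image description rather than hand-written code.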
Robots and Artificial Intelligence
Artificial intelligence is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
Artificial intelligence has been the subject of breathtaking optimism, has suffered stunning setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. Standard humanoid robots mimic the human form, but they generally function quite differently, and their characteristics reflect this. This places severe limitations on the kinds of interactions robots can engage in, on the knowledge they can acquire about their environment, and on the nature of their cognitive engagement. Instead of copying only the outward form of a human, Cronos mimics the inner structures as well (bones, joints, muscles, and tendons) and thus has more human-like actions and interactions in the world. Some robots can interact socially. Kismet, a robot at MIT's Artificial Intelligence Lab, recognizes human body language and voice inflection and responds appropriately. Kismet's creators are interested in how humans and babies interact based only on tone of speech and visual cues.
- Robotics has details on how to build small robot hardware
- Artificial Intelligence
- Embedded Systems has details on typical robot CPUs, and how to program them