User:Supergroup~enwikibooks


This page is for a final project in Advanced Robotics with Prof. Moskal

Created by: Matthew Realley and Mustafa Khan


1. The Future of Robotics: Since their creation, robots have spread into many fields, from military applications to factory tasks. The future will only see this expansion continue, as it did for computers. One example is South Korea's plan to have robots performing surgery (estimated by 2018). iRobot (maker of the Roomba robotic vacuum cleaner) claims that by 2033 robots will perform most household tasks. And in the United States military, according to the Department of Defense, one third of the fighting units will be robots by 2015.


2. Programming Concepts: Programming languages are used to make computers perform specific tasks. While many languages can accomplish similar things, each differs from the others. One way they differ is also the first concept covered in this section: syntax. According to Wikipedia, syntax is the “set of rules that define the combinations of symbols that are considered to be correctly structured programs in that language.” In other words, syntax governs the words/commands that tell the program what to do and the order in which they are written. The next programming concept is variables. Variables are a way to store values, be they characters, strings, integers, or non-integer numbers. Most programming languages also have conditional statements, which tell programs what to do in specific cases. In English, these statements say: if one thing is true, do a specific thing, but if it’s not true, do something else. The final concept in this section is loops. Loops cycle through the same code a set number of times, or until a condition is no longer true. Each cycle is referred to as an iteration. The short sketch below illustrates all three of these concepts.
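As a minimal illustration, here is a short sketch in plain C (chosen because NXC and RobotC, two of the NXT languages listed in section 6, are C-like) showing a variable, a conditional statement, and a loop; the battery scenario is made up for the example:

```c
#include <stdio.h>

int main(void)
{
    int battery_level = 7;              /* a variable: stores an integer value */

    /* a loop: the body repeats while the condition stays true */
    while (battery_level > 0) {
        /* a conditional statement: choose an action based on the value */
        if (battery_level > 3) {
            printf("battery at %d: keep driving\n", battery_level);
        } else {
            printf("battery at %d: return to charger\n", battery_level);
        }
        battery_level--;                /* each pass is one iteration */
    }
    return 0;
}
```

Each pass through the while loop is one iteration, and the conditional picks a different action depending on the variable's current value.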


3. Robot Control is the way in which robots sense and act, and how those two processes relate to each other. It varies based on the objective of the robot. Robot control is broken down into four main categories: deliberative control, reactive control, hybrid control, and behavior-based control. Deliberative control has the robot heavily analyze and calculate BEFORE acting. Reactive control has the robot merely react to stimuli (hence the name). Hybrid control involves both happening simultaneously. Behavior-based control has the robot automatically act in a certain way based on specific circumstances. A sketch of a simple reactive controller follows.
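To make reactive control concrete, here is a minimal sense-act loop, again in plain C. The functions read_distance_cm() and set_motors() are hypothetical stand-ins for a real robot's sensor and motor API; they are stubbed out here so the sketch compiles and runs on its own:

```c
#include <stdio.h>

/* Hypothetical hardware stubs: on a real robot these would call the
   platform's sensor and motor APIs. */
static int read_distance_cm(void) { return 25; }     /* pretend sonar reading */
static void set_motors(int left, int right)
{
    printf("motors: left=%d right=%d\n", left, right);
}

int main(void)
{
    /* Reactive control: no planning, no world model. Each pass of the
       loop maps the current stimulus directly to an action. */
    for (int tick = 0; tick < 5; tick++) {
        int distance = read_distance_cm();
        if (distance < 30)
            set_motors(50, -50);   /* obstacle close: spin away */
        else
            set_motors(50, 50);    /* path clear: drive forward */
    }
    return 0;
}
```

Note what is missing: no planning, no memory, no model of the world. That absence is exactly what makes the controller reactive rather than deliberative.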


4. The hardware we used in class was Lego Mindstorms. This DIY robotics kit was made up of 619 elements:

- LEGO TECHNIC building elements: gears, wheels, tracks, and tires
- 1 NXT micro-computer, which acts as the brain of the robot
- 2 Touch Sensors, which let the robot feel
- 1 Ultrasonic Sensor, which lets the robot 'see' and detect motion
- 1 Colour Sensor, which can detect different colours and light settings, and acts as a lamp
- 3 Interactive servo motors with built-in rotation sensors
- 7 connector cables for linking motors and sensors to the NXT


5. The mathematics of robot control varies with the type of control the designer is trying to create in order to accomplish a specific task. Deliberative control uses complex functions and algorithms to carry out specialized, highly purposeful actions. These functions take up a lot of processing power and do not make for a quickly reacting robot. Reactive control merely has the robot respond to stimuli, so it requires far less complex mathematics. Hybrid control uses the same level of mathematical complexity as deliberative control, while simultaneously reacting instantaneously to stimuli. The proportional rule sketched below is an example of the simple arithmetic a reactive controller can get away with.
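For instance, a reactive steering rule can be as simple as one proportional equation, turn = k * (d_left - d_right), where d_left and d_right are distance readings on each side of the robot and k is a hand-tuned gain. The names, readings, and gain below are illustrative assumptions, not values from any particular platform:

```c
#include <stdio.h>

int main(void)
{
    /* Reactive math: one multiplication per control step.
       Positive turn steers right, negative steers left. */
    double k = 0.8;                          /* hand-tuned gain (assumed value) */
    double d_left = 40.0, d_right = 25.0;    /* example sonar readings, in cm */

    double turn = k * (d_left - d_right);    /* steer away from the nearer wall */
    printf("turn command: %.1f\n", turn);    /* prints 12.0 */
    return 0;
}
```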


6. There are many programming language options for the Lego Mindstorms NXT. Some of the best known and most popular are: NXT-G Retail, NXT-G Educational, RoboLab 2.9, NBC, NXC, RobotC, NI LabVIEW Toolkit, leJOS NXJ, pbLua, LEJOS OSEK, and ICON. Follow this link for a side-by-side comparison of the features of all of the aforementioned programming languages: http://www.teamhassenplug.org/NXT/NXTSoftware.html


7. Obstacle avoidance is the process by which a robot controls its motion to avoid an illegal intersection or collision with an object in its path. A robot can avoid objects easily when it is driven by remote, because the person holding the controller can simply steer it around an object or onto a more secure pathway. Without a remote, robots can perform obstacle avoidance in several ways. The simplest is pre-computation: when the obstacle course is known in advance, the program that runs the robot is written with the distances to each object already measured, and it directs the robot around them ahead of time. Other robots carry sensors, such as the ultrasonic, light, and touch sensors of the Lego Mindstorms NXT, which can be programmed to recognize that something may be in the robot's path and to react when something is there: they can read a distance, track light intensity as the robot approaches an object, or register a touch, and the robot then knows what actions to perform to carry out its avoidance.

More advanced technologies require a great deal of preprogramming. They help robots understand the dynamics of complex real-life scenarios and the characteristics of the object being avoided, such as its shape, its size, and, if it is moving, its own path. Most robots performing obstacle avoidance do not take their own shape and size into account, which they would need to know; setbacks like that can be preprogrammed away so a robot knows how it must move, the majority of the time, to avoid obstacles. Much of this work is done on platforms containing obstacles that a robot must avoid to complete its course, and many facilities with mobile robots use computers and cameras to record the environment the robot moves through. What the robot sees is not what a human would see: the vision is broken down into a more computerized form, reducing the surroundings to a plane and the objects or obstacles to linear figures rising out of the axes that represent them. A mobile robot moving through an unknown environment must be able to read the path it has to follow, the objects it has to avoid, and how to go about avoiding them, all computed from what the robot senses. This is ongoing research, and it could potentially lead to driverless vehicles in the future. A minimal sensor-driven avoidance loop is sketched after this paragraph.
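Here is such a loop as a minimal sketch in plain C. The sensor and motor functions (ultrasonic_cm(), touch_pressed(), drive()) and the 20 cm threshold are illustrative assumptions standing in for a real NXT API; they are stubbed out so the program runs on its own:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stubs standing in for real NXT sensor and motor calls. */
static int ultrasonic_cm(void) { return rand() % 100; }   /* distance ahead */
static int touch_pressed(void) { return rand() % 10 == 0; }
static void drive(const char *action) { printf("drive: %s\n", action); }

int main(void)
{
    for (int tick = 0; tick < 10; tick++) {
        if (touch_pressed()) {
            /* Contact already made: back off before turning. */
            drive("reverse");
            drive("turn left");
        } else if (ultrasonic_cm() < 20) {
            /* Obstacle detected ahead: steer around it. */
            drive("turn right");
        } else {
            drive("forward");
        }
    }
    return 0;
}
```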

8. Task planning is key for a mobile robot because it allows the robot to determine its course or objective over a period of time. A robot must have some reasoning ability to know which paths or directions are the most efficient to use. At most levels of robotics, task planning is used to determine a sequence of actions that will carry the robot through its course. One useful tool for task planning is the semantic map. A semantic map gives a robot a world-like view of what it is going to encounter so it can decide what it has to do; the information it holds is known as semantic knowledge, and it allows the robot to operate in a more autonomous and intelligent manner. Semantic maps matter for task planning in two ways: they determine what new information actually is physically, and what information does not affect or is not needed for certain tasks. When taking in new information, a robot must first determine whether it is in the right environment. Its task may be to perform one thing, but its location may be the wrong place to complete that task. It has to be able to read and learn the environment, and then remember that this is the wrong environment the next time. Once it recognizes that it has reached the correct environment, the robot must remember to follow through on the task it originally planned to accomplish. Navigation is also very important for a robot to achieve: it has to be able to avoid and navigate through hazardous areas that may cause it to collide with other objects or even cause itself to malfunction. Navigation matters because a robot must have enough self-knowledge to interpret not only its own features and capabilities, but also whether the environment is capable of being navigated through at all. The path-planning sketch below shows the kind of computation a task planner performs.
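As a sketch of that kind of computation, here is a tiny grid-based path planner in plain C using breadth-first search. The map, start, and goal are made up for illustration; a real planner would work on a map built from sensor data:

```c
#include <stdio.h>
#include <string.h>

/* A tiny grid map: '.' is free space, '#' is an obstacle. */
#define ROWS 5
#define COLS 7

static const char map[ROWS][COLS + 1] = {
    ".......",
    ".###...",
    "...#.#.",
    ".#.#.#.",
    ".#.....",
};

int main(void)
{
    int sr = 0, sc = 0;            /* start cell (assumed) */
    int gr = 4, gc = 6;            /* goal cell (assumed) */
    int dist[ROWS][COLS];
    memset(dist, -1, sizeof dist); /* -1 marks "not yet reached" */
    dist[sr][sc] = 0;

    /* Breadth-first search: a simple queue of cells to expand. */
    int qr[ROWS * COLS], qc[ROWS * COLS], head = 0, tail = 0;
    qr[tail] = sr; qc[tail] = sc; tail++;
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};

    while (head < tail) {
        int r = qr[head], c = qc[head]; head++;
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= ROWS || nc < 0 || nc >= COLS) continue;
            if (map[nr][nc] == '#' || dist[nr][nc] != -1) continue;
            dist[nr][nc] = dist[r][c] + 1;   /* one step farther from start */
            qr[tail] = nr; qc[tail] = nc; tail++;
        }
    }

    if (dist[gr][gc] >= 0)
        printf("shortest path: %d steps\n", dist[gr][gc]);
    else
        printf("goal unreachable\n");
    return 0;
}
```

Breadth-first search expands cells outward from the start one step at a time, so the first time it reaches the goal it has found a shortest path around the obstacles.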

9. Robot vision is a vital part of robotics because it allows a mobile robot to envision a path to travel. This is how a robot gets its own sense of what a human eye can see, in its own fashion and under its own conditions. Robot vision allows the systems and machinery of autonomous robots to be more productive and efficient for human research. Robots are fitted with cameras and must interpret what they see or read through them, and there are many different approaches to helping them understand the images we see every day. A robot may see many things with its cameras yet never know what anything actually is. In one line of research, a robot is connected to the web, and when it senses an object it searches the web for images of similar structures to get a better understanding of what the object is; once it has that, it can determine that object every time it sees it through visual models. Robot vision also works through image formation. Human eyes see things in ways suited to recognizing them, but the robot's eye, the camera, sees things in a different structure: the camera shows a mapped version of the 3D world, and the robot has to process it. It must be able to tell what is what through the process of edge finding, separating edges that belong to an object in the robot's way from edges that mark a path with turns and curves. Not only is edge finding important, but so are light and color recognition. In our society today we know what many objects and signs, such as a stop sign, mean based on their shape and color; if a practical autonomous vehicle is ever to be built, then something as simple as recognizing what the color red usually means is necessary. Robot vision also has a mathematical standpoint that comes long before knowing what images are, and the mathematics works through the program. Whatever a robot sees is calculated in a kind of 3D space so that the shapes and sizes of objects are known. If a mobile robot needs to pass an object, it must determine from what it saw the amount of space it needs between itself and the object, the distance it has to move, and when it is safe to move in front of it. All of that is determined by the robot's vision and how it processes what it sees through the cameras. A sketch of the edge-finding step appears below.
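As a sketch of edge finding, here is a tiny example in plain C: a made-up 6x6 grayscale image containing a bright square, scanned for places where the intensity changes sharply. The image values and the threshold of 50 are illustrative assumptions:

```c
#include <stdio.h>
#include <stdlib.h>

/* A tiny 6x6 grayscale "image": a bright square on a dark background.
   Real camera frames are just much larger versions of this array. */
#define N 6
static const int img[N][N] = {
    {10, 10,  10,  10, 10, 10},
    {10, 10,  10,  10, 10, 10},
    {10, 10, 200, 200, 10, 10},
    {10, 10, 200, 200, 10, 10},
    {10, 10,  10,  10, 10, 10},
    {10, 10,  10,  10, 10, 10},
};

int main(void)
{
    /* Edge finding by intensity gradient: a pixel is an edge when it
       differs sharply from its right or lower neighbor. The threshold
       of 50 is an illustrative assumption. */
    for (int r = 0; r < N - 1; r++) {
        for (int c = 0; c < N - 1; c++) {
            int gx = abs(img[r][c + 1] - img[r][c]);  /* horizontal change */
            int gy = abs(img[r + 1][c] - img[r][c]);  /* vertical change */
            putchar(gx + gy > 50 ? 'E' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```

Running it prints an 'E' wherever the intensity gradient is large, tracing out the boundary of the bright square; everything flat prints as '.'.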

10. Knowledge-based vision systems behave more like image processing plus object recognition software. The image processing works on what the robot's vision takes in, whether curves or edges, dividing everything it sees into parameters to be interpreted. The object recognition is the ability to realize what an object is based on a hierarchy of details learned from the same or similar objects seen previously; the system decides what an object is from the features the object contains that are most recognized. Knowledge-based vision systems are important because they must detect changing environments, so the robot knows to be cautious, slow down, and begin interpreting new objects through object matching. Because the cameras see a different version of the world, the system has to distinguish between real-world objects and their virtual-world counterparts, comparing them to see whether they are the same or somewhat different. The robot must also know how to manipulate the space it has, based on the objects in its environment, so that it can navigate productively. A sketch of simple feature-based object matching follows.
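Here is a minimal sketch of that matching step in plain C: each known object is reduced to a small feature vector, and a newly seen object is recognized as whichever known object's features it is nearest to. The feature names, values, and the new reading are all illustrative assumptions:

```c
#include <stdio.h>

/* Each known object is summarized by a small feature vector.
   These features and values are illustrative assumptions. */
#define NFEAT 3   /* e.g., width, height, average brightness */
#define NKNOWN 3

struct object { const char *name; double feat[NFEAT]; };

static const struct object known[NKNOWN] = {
    {"ball", { 5.0,  5.0, 180.0}},
    {"box",  {20.0, 10.0,  90.0}},
    {"wall", {80.0, 40.0,  60.0}},
};

/* Squared Euclidean distance between two feature vectors. */
static double dist2(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < NFEAT; i++)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

int main(void)
{
    /* Features measured from a new image (made-up reading). */
    double seen[NFEAT] = {18.0, 11.0, 95.0};

    /* Object matching: pick the known object whose features are nearest. */
    int best = 0;
    for (int i = 1; i < NKNOWN; i++)
        if (dist2(seen, known[i].feat) < dist2(seen, known[best].feat))
            best = i;

    printf("recognized as: %s\n", known[best].name);
    return 0;
}
```

With the made-up reading above, the closest feature vector is the box's, so that is what the program reports.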

11. Artificial intelligence plays a great role in robotics because of the concept of machine learning, the ability of something without a brain to act and react as if it had one. Scientifically this is realized with neural networks, the robotics version of human neurology. Neural networks make the programming of robots more mathematical: there are equations that derive errors, which are fed back in as hidden values. The mathematics of machine learning comes from backpropagation. There is an input, what the robot sees; the robot interprets it; and the interpretation is put into an equation with a solution that is trying to be reached. If the solution isn't reached, the error is propagated backward and the values are regenerated so that the solution may be reached the next time. That is how the learning process works; it is much like learning from mistakes. A robot's artificial intelligence begins an action, and if it fails it attempts another option, having learned that the first option it tried does not work. Many robots and machines today run on artificial intelligence. One example is the ASIMO robot by Honda, which can walk freely, detect a person approaching, and recognize its environment in order to react properly. Another example is the U.S. military's Predator drone, a pilotless aircraft that actually flies and collects information. Attempts are now being made at driverless vehicles that learn to read and interpret the streets of every new environment they reach; the many challenges include traffic signs, pedestrians crossing the street illegally, and avoiding accidents. The sketch below shows backpropagation on the smallest network that needs it.
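Here is backpropagation sketched in plain C: two inputs, two hidden neurons, one output, trained on XOR. The starting weights, learning rate, and epoch count are illustrative assumptions, and a network this small can occasionally stall in a local minimum, in which case different starting weights are needed:

```c
#include <stdio.h>
#include <math.h>

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double T[4]    = {0, 1, 1, 0};           /* XOR targets */

    /* Small fixed starting weights (assumed; asymmetric so learning can
       proceed, and different ones may be needed if training stalls). */
    double wh[2][2] = {{0.5, -0.4}, {-0.3, 0.6}};  /* input -> hidden */
    double bh[2] = {0.1, -0.1};
    double wo[2] = {0.4, -0.5};                    /* hidden -> output */
    double bo = 0.2;
    const double lr = 0.5;                         /* learning rate */

    for (int epoch = 0; epoch < 20000; epoch++) {
        for (int s = 0; s < 4; s++) {
            /* Forward pass: input -> hidden -> output. */
            double h[2];
            for (int j = 0; j < 2; j++)
                h[j] = sigmoid(wh[j][0]*X[s][0] + wh[j][1]*X[s][1] + bh[j]);
            double y = sigmoid(wo[0]*h[0] + wo[1]*h[1] + bo);

            /* Backward pass: error at the output, propagated to hidden. */
            double dy = (y - T[s]) * y * (1 - y);
            double dh[2];
            for (int j = 0; j < 2; j++)
                dh[j] = dy * wo[j] * h[j] * (1 - h[j]);

            /* Gradient-descent weight updates. */
            for (int j = 0; j < 2; j++) {
                wo[j]    -= lr * dy * h[j];
                wh[j][0] -= lr * dh[j] * X[s][0];
                wh[j][1] -= lr * dh[j] * X[s][1];
                bh[j]    -= lr * dh[j];
            }
            bo -= lr * dy;
        }
    }

    /* After training, the network should approximate XOR. */
    for (int s = 0; s < 4; s++) {
        double h0 = sigmoid(wh[0][0]*X[s][0] + wh[0][1]*X[s][1] + bh[0]);
        double h1 = sigmoid(wh[1][0]*X[s][0] + wh[1][1]*X[s][1] + bh[1]);
        double y = sigmoid(wo[0]*h0 + wo[1]*h1 + bo);
        printf("%.0f XOR %.0f = %.2f\n", X[s][0], X[s][1], y);
    }
    return 0;
}
```

Each training step is exactly the loop described above: a forward pass produces an output, the error against the target is propagated backward through the hidden layer, and every weight is nudged so the solution may be reached the next time.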