I recently attended a talk by the Institute of Physics on advancements in robotics and what the future might realistically look like; i.e. not what a Hollywood director thinks it might look like. One thing that really struck me was quite how much robotics is already used around us, all the time. So why not start with some examples of artificial intelligence and robotics in action today?
The image above is something that does not look particularly strange to you or me, but imagine you were from 1900 and saw this image. Machines building machines that can travel at hundreds of miles an hour… you would have thought it nothing short of sorcery, a little over 100 years ago. This is already a fairly advanced display of robotics, if you give it some thought.
Humans are moving on to bolder quests than repetitive tasks on the factory floor. Google has been investing heavily in autonomous cars, which are already on the roads amongst regular traffic, with the odd accident here and there but nevertheless fairly successfully. On golf courses, drones are now being used to let customers order meals and drinks and have them delivered without the need for a human to deliver or collect. Tasks like this do not currently have drastic implications for the way we live our lives, but the potential social impact is huge: for example, an autonomous vehicle that can take elderly people who no longer feel safe driving to wherever they need to go.
Humanoid robots are what grab the headlines more often than not, because we seem to be able to, in a strange way, empathise with them. The robots in the video below are made by a firm called Boston Dynamics and are a great example of how, when programmed correctly, robots can perform complex tasks.
What you can’t see in the video sequence above is that the robots are running pre-programmed scripts and do not have a huge amount of reactivity. A robot can be programmed, for example, to track a box; however, if you remove the box from the equation entirely, or put a glass screen between the robot and the box, then it does not know how to react to this change in variables. This is the interesting challenge that faces robotics.
One of the important elements of a robot is reactivity, and it is becoming increasingly sophisticated. If we have to pre-programme a robot to move to specific locations, what happens if a location unexpectedly changes? This is one of the huge challenges that face robots when it comes to, for example, exploring other planets. There is the option of remote control, but signals to locations that far away can take a very long time to travel, so even with a human’s fast reactions it is not possible to respond to a situation as it happens. The idea behind more advanced robotics is therefore to have a whole multitude of reactive sensors that take in information about the environment – for example thermistors that sense temperature, light-dependent resistors and friction sensors. With all of this data a robot can tell certain things about its environment and react accordingly. So with enough sensors, that is the end of it, right?
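To make the sense-and-react idea concrete, here is a minimal sketch in Python. The sensor names, readings and thresholds are all invented for illustration – real hardware would be polled through a driver, not a stub.

```python
def read_sensors():
    """Stub standing in for polling real hardware; returns fixed readings.
    (All values here are hypothetical.)"""
    return {"temperature_c": 41.0, "light_level": 0.2, "friction": 0.8}

def react(readings):
    """Map raw sensor readings to simple actions, one rule per sensor."""
    actions = []
    if readings["temperature_c"] > 40.0:   # too hot: back away
        actions.append("retreat_from_heat")
    if readings["light_level"] < 0.3:      # too dark: light the way
        actions.append("switch_on_lamp")
    if readings["friction"] < 0.5:         # slippery surface: be careful
        actions.append("slow_down")
    return actions

print(react(read_sensors()))  # ['retreat_from_heat', 'switch_on_lamp']
```

Each sensor feeds a simple rule, and the robot’s behaviour emerges from whichever rules fire – no human in the loop, which matters when the round-trip to a remote operator takes minutes.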
Well, the answer actually isn’t that simple. When you really think about it, there is a huge amount of planning that goes into every action you take. Most of it is not conscious, which means you don’t make a big deal of it, but I can promise you it is there. The robot needs to imitate this planning – which is a lot trickier. For example, when I cross the road: if I see a car, I stop and let it pass. As I cross, if someone is in my way, I move around them. If a car is coming at me fast, I speed up to get out of the way. I will choose to cross at a designated crossing if there is one – but that is not a rule, because sometimes there isn’t one. Sometimes there is a kerb in the middle of the road to step over; sometimes the road is at a uniform height. These are just a small set of examples that begin to illustrate the complexity involved in a very simple action. A robot needs to know how to react to all of these things, which means you cannot just write a script. You need to build the plan in real time, reacting to data as it happens.
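The road-crossing example can be caricatured in a few lines of code: rather than a fixed script, the next action is re-chosen every moment from whatever is currently observed, with the most urgent rules checked first. The observation names here are invented purely for illustration.

```python
def next_action(obs):
    """Pick one action from the current observations, most urgent first."""
    if obs.get("fast_car_approaching"):
        return "speed_up"          # danger overrides everything else
    if obs.get("car_nearby"):
        return "wait"              # stop and let it pass
    if obs.get("pedestrian_in_path"):
        return "step_around"
    if obs.get("kerb_ahead"):
        return "step_over"
    return "walk"                  # default when nothing demands attention

print(next_action({"car_nearby": True}))            # wait
print(next_action({"fast_car_approaching": True}))  # speed_up
print(next_action({}))                              # walk
```

Because the decision is recomputed against live observations each time, removing the car (or adding a kerb) changes the behaviour without rewriting any script – which is exactly what the pre-programmed robots in the video cannot do.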
So what is the future for robotics? Part of the answer is to decrease the cost and size of input sensors so that more and more of them can be placed inside robots. This means we can write computing logic that tells the robot how to react. This is still, in theory, only as good as the programming, but let us not forget that programming can and does yield amazing things – a scientific calculator is a great example of simple programming by a human that far exceeds human performance. The big race and challenge for robotics is to build this planning engine: to build something that knows how to react to its inputs in order to achieve a goal.
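“Building the plan in real time” can be sketched as plan-then-replan: compute a route to the goal, and when a sensor reports a new obstacle on that route, throw the plan away and compute a fresh one. The toy grid world below is invented for illustration; real planners are far more elaborate, but the loop is the same.

```python
from collections import deque

def plan(start, goal, blocked, size=5):
    """Breadth-first search on a small grid; returns a list of cells
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size \
                    and (nx, ny) not in blocked and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(path + [(nx, ny)])
    return None

blocked = set()
path = plan((0, 0), (4, 0), blocked)
# Mid-journey, a sensor reports an obstacle on the planned route:
blocked.add(path[2])
path = plan((0, 0), (4, 0), blocked)  # re-plan around it
print(path)
```

The goal stays fixed while the plan itself is disposable – the engine’s job is to keep producing a valid plan from the latest sensor data, which is the essence of the challenge described above.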
Here is an example of progress in this area, which shows the great reactivity that can be achieved:
Given the progress humans are achieving, I would not be surprised if we are near the age of the robots.