What robots can and can't do

Let's take as an example the phrase "A robot can't wash the dishes", though the argument works for pretty much any task, from washing the dishes to telling friend from foe to loving.

It's easy to make a robot appear to correctly perform a single instance of washing the dishes: you set everything up, program in the exact steps, and press play. Of course, if there's any variation in the behaviour of the plate, or in the initial conditions, you'll have a catastrophic failure, and the robot isn't terribly useful. This kind of machine, which starts, follows a pre-planned set of actions, and then terminates, with no logic and no reaction to the world, is sometimes called a "ballistic" robot, to highlight the similarity with a launched cannonball, which simply follows a single path once fired. This kind of technology can be seen in the ancient Greek clepsydra. My favourite example is Heron's autonomous theatre.
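The ballistic idea can be sketched in a few lines. This is a toy illustration, not any real robot API: the step names and the `run_ballistic` function are invented here to show that the program never consults a sensor, so any mismatch with reality goes unnoticed.

```python
# A "ballistic" dish-washing robot: a fixed, pre-planned sequence of
# steps replayed with no sensing and no reaction to the world.
# All names here are illustrative, not from any real robot system.

BALLISTIC_PROGRAM = [
    "move arm to plate position",
    "grip plate",
    "move plate under tap",
    "scrub for 10 seconds",
    "place plate on rack",
]

def run_ballistic(program):
    """Execute each step blindly. Like a cannonball, the trajectory is
    fixed at launch: if the plate isn't exactly where step 1 assumes,
    every later step still runs, and fails."""
    log = []
    for step in program:
        log.append(f"executing: {step}")  # no sensors, no checks, no branches
    return log

for line in run_ballistic(BALLISTIC_PROGRAM):
    print(line)
```

Note there is no `if` anywhere: the machine cannot even represent the question "is the plate still there?", which is exactly what makes it ballistic.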

It's a bit trickier, but still a mundane engineering task, to make a robot wash the dishes in a tightly controlled environment. One option is to shape the environment so the problem is basically solved: think of a dishwasher, where the plates are arranged such that all the robot needs to do is pump hot water through the rotors for a time. You could also imagine a robot like a jukebox, where the plates are stacked and shaped so regularly that a simple arm could pick them up and act like the ballistic robot from the previous example, repeated as many times as necessary. These are often not thought of as robots: they are more readily identified by their parts, which enclose a bit of the environment, and are simply thought of as machines, even though they are sensing and reacting to that environment. Think of the Watt governor on steam engines, or the float in a toilet cistern, which detects the water level and hence controls the filling tap.
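The cistern float can be sketched as a minimal feedback loop. The numbers and function below are made up for illustration; the point is that the machine senses and reacts, but only to one enclosed slice of the environment.

```python
# The cistern float as a feedback loop: sense one variable (water
# level) and react to it. All values are invented for illustration.

TARGET_LEVEL = 10.0  # litres at which the float shuts the tap
FILL_RATE = 0.5      # litres added per tick while the tap is open

def refill(level):
    """Keep the tap open while the float reads below target, then shut
    it. This is genuine sensing-and-reacting, but only within the slice
    of the world the float encloses; a cat in the cistern is invisible
    to it."""
    ticks = 0
    while level < TARGET_LEVEL:
        level += FILL_RATE  # tap open: water flows in
        ticks += 1
    return level, ticks     # tap shut: float has risen to target

print(refill(7.0))  # → (10.0, 6)
```

The Watt governor is the same shape of loop with speed in place of water level: a single sensed quantity driving a single actuator, with everything else in the world out of scope.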

This is the level of most, arguably all, of the robots in your list. To see what they can really do, you must ignore the claims of the design or marketing teams and look instead at the controlled environment they require, because this sets the limits on the variety of the task they can handle. A marketing team might say "our dishwasher can wash dishes", but only when you see that all dishes must be under a certain size, upside-down, resistant to heat, and with the lid off do you see the limit of the claim. To get a better handle on what a machine can't do, imagine something unexpected creeping into its environment, such as a household pet, or someone's stash of life savings. A good way of gauging a machine's limits is to imagine when a wise old artisan of the same task would know to stop. When building cars on a production line, a wise old car builder would know something was fishy if he heard someone scream, if the factory lights weren't on, or if it was Christmas day. If a child or bird or ornament was nearby, they'd work slower or stop altogether; if they'd worked many hours straight, or felt a twinge in their wrist, they'd give themselves a rest. All of these are potential factors for disaster, especially when working with heavy machinery. A natural response is to say "we can make our robot detect nearby children, birds, and ornaments; we can make it check the calendar and the lights, and the status of all its component parts". But I can always, easily, come up with more examples, and you end up either with a robot that never takes action, because it can never be certain that everything is safe, or with the fully controlled environment from the previous section.
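The "never takes action" horn of that dilemma has a simple arithmetic behind it, which a sketch makes concrete. The check list, pass probability, and `may_act` function are all invented here; the real content is just that the chance of every check passing is the product of the individual chances.

```python
# The "add one more safety check" spiral: each newly imagined hazard
# becomes another precondition, and the robot acts only if every check
# passes. The checks and probabilities are invented for illustration.
import random

safety_checks = [
    "no child nearby",
    "no bird nearby",
    "no ornament nearby",
    "factory lights on",
    "not a public holiday",
    "all joints report healthy",
]

def may_act(checks, p_pass=0.99, rng=random.Random(0)):
    """Act only if every check passes. Even if each check individually
    passes 99% of the time, the chance of acting shrinks geometrically
    with the length of the list: 0.99 ** len(checks)."""
    return all(rng.random() < p_pass for _ in checks)

# With 6 checks the robot acts about 94% of the time; with 600
# imagined hazards, 0.99 ** 600 is under 0.3%: it essentially freezes.
print(0.99 ** 6, 0.99 ** 600)
```

Since the critic can always supply check number 601, the list has no natural end, and the product only ever goes down.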

It's very tricky when the environment is even slightly uncontrolled. An example in the theme of washing dishes is a dish-washing robot in a kitchen: imagine a torso with two arms and a camera at a sink. The technology required for a robot to identify and grasp dirty dishes and perform the necessary gentle and firm actions is either not available or too expensive. Notice that the examples from the two previous sections, the single instance and the tightly controlled environment, came from the ancient world, yet it's tricky to think of an example even from the modern day which works autonomously in a slightly unpredictable environment. Examples do exist: self-driving cars are a good one, supermarket self-checkouts are another, and arguably any system which uses speech recognition qualifies. But already we notice that these systems either work perfectly most of the time, or work adequately all of the time; nothing works perfectly all of the time unless it controls its environment, or everyone interacting with it is well informed of all its quirks. Some headway is being made in this direction, but there is no reason to believe that with more complexity and time we will make constant progress and "solve" the problem of responsive behaviour in not-fully-controlled environments.

It's impossible when the environment is totally uncontrolled, such as a robot walking around in public. Imagine a self-employed dish-washing robot, which would have to find its own jobs and do the work to the criteria of each particular job. This requires not just the technology to wash dishes, but to find jobs, pass interviews, sustain itself outside of work, follow the training, and learn the quirks of the job that aren't mentioned in training. Now we're firmly in the realm of science fiction, and in a conceptual space which I believe is absolutely unobtainable by the traditional, computational, approach. You might argue, "but we can make robots do anything, so eventually we will be able to make a robot, for example, find jobs". I would half agree, but remind you that you'd need a controlled environment: it's easy to imagine this robot finding jobs if there's a specific online database for it, especially if every job is listed in a formal, regular way, but if you need to do that for every aspect of this robot's life you're back to building a tightly controlled environment. You could argue that Google Search is an attempt at this kind of system, even though it can only be interacted with via a short piece of text; you might say that the whole world of the written word is still pretty unpredictable, and the result is a machine that really feels like a machine as soon as you try anything outside of common actions. "What is six times nine" is handled pretty well, but "which film did I watch last night" or "isn't it nice today" return results which are not only wrong, but out of this world in how wrong they are when you consider how a person might respond.

It's even more than impossible in actively hostile environments, such as a robot active in a war zone, or a public-control robot in an area that doesn't take kindly to being controlled. To push an example to breaking point, imagine a dish-washing robot in the house of someone who doesn't want their dishes washed. It wouldn't take long for the person to work out how the robot identifies dishes and defeat it: paint a dish on the wall, or throw one into the road, or convince the robot that all the dishes are the ornamental sort which don't need washing, or cover the dishes in a cloth which makes them unidentifiable, or buy plastic dishes that can't be found by the robot, or start eating out of the fruit bowl which the robot doesn't recognise as a dish, or cover the kitchen floor in mirrored tin foil and the walls in sound-insulating egg boxes so the robot fails to navigate by sight or sound. The list goes on and on.

As something of a conclusion by way of a new example, I assure you that if a war robot is programmed not to blow up children, then the enemy will start to put children in the field, or invent camouflage which makes their troops resemble children (at least resemble children to the robot).

For these reasons, I wouldn't want to be on the programming team responsible for making the war robot tell real children from fake, killable, children. The danger here is that someone who doesn't agree with my analysis above will receive a report from the management of the programming team, detailing all the safety and discrimination tests passed by the war robot, and authorise its use in the war zone, unaware that even a wide range of tests can only represent a small number of reasonably controlled environments, not the whole complexity of all real-world environments, and certainly not all possible facets of a hostile one.