Stories are appearing more frequently about robots and advances in artificial intelligence (AI). As a computer programmer I find this to be a fascinating subject. Perhaps that is also the reason I find the subject so terrifying. I know how easy it is to write software that interacts with machines. Daily I write computer code to open and close valves and to read flow, pressure and gas instruments in order to automate breathing tests for babies.
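The kind of device code I mean is not exotic. A minimal sketch of such a control loop, with hypothetical read/actuate functions standing in for real instrument drivers, might look like this:

```python
# Minimal sketch of an instrument-control loop. The sensor and valve
# functions are hypothetical placeholders for real device drivers.

def read_pressure():
    # Placeholder: a real driver would query the pressure transducer.
    return 4.2  # kPa

def set_valve(open_valve):
    # Placeholder: a real driver would energize or release a solenoid.
    return "open" if open_valve else "closed"

def control_step(threshold=5.0):
    """One pass of a simple closed loop: open the valve while pressure
    is below the threshold, close it otherwise."""
    pressure = read_pressure()
    return set_valve(pressure < threshold)
```

The logic that decides when to act is a single comparison; the rest is plumbing to the hardware.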
I know how easy it is to write a program to:
- Identify target
- Align weapon with target
- Activate the weapon’s trigger
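Stripped of the hard parts, the skeleton of such a program is only a few lines. In this sketch every function is a hypothetical stub; the difficulty, and the danger, live entirely inside them:

```python
# Skeleton of the three steps above. Every function is a hypothetical
# stub; the hard problems are hidden inside identify_target.

def identify_target(sensor_frame):
    # Stub: real systems fuse many sensors and refined algorithms.
    return None  # no target found in this sketch

def align_weapon(target):
    pass  # stub

def activate_trigger():
    pass  # stub

def control_loop(sensor_frame):
    target = identify_target(sensor_frame)
    if target is not None:
        align_weapon(target)
        activate_trigger()
    return target
```

The loop itself is trivial; all of the difficulty, and all of the moral weight, sit inside identify_target.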
I also know of the problems related to “Identify target”. With well-refined algorithms and enough sensors, it is fairly easy to be sure the machine is looking at a human. Eventually, identifying which particular human will be just as reliable, and I’m sure it already is in systems being developed today.
The real question becomes how the decision is made to kill a particular person. This is the question that has not been satisfactorily answered regarding our use of drone warfare. And how close are we to using, or are we already using, drones for domestic “targets”? Imagine that being added to our problems with policing!
But the greatest danger comes when robots are programmed to act independently in situations such as combat or a police encounter, making their own decisions about when to kill a target.
The consequences of software that empowers that kind of autonomous decision making are difficult to contemplate; they are so potentially horrible. Combine that with automated manufacturing processes like 3-D printing, and it is easy to imagine weaponized robots manufacturing clones of themselves.
Perhaps not quite as horrifying, but sooner to be a practical problem for all of us, is how driverless cars make decisions about who to save and who to sacrifice when an accident looks imminent. Recent statements about protecting the car’s occupants don’t sound that attractive to those of us who aren’t likely to be in the cars.