The Royal Academy of Engineering has published a report exploring the social, legal and ethical implications of ceding control to autonomous systems. Someone outside science fiction tries to explore who should be blamed if a machine does something wrong.
I've been thinking along these lines for a while now. On the surface it's a simple problem: if the machine becomes sentient, it has to take responsibility for its actions; if it doesn't, the programmer takes the blame. But with self-evolving software come new problems. The end result cannot be foreseen. It becomes subject to the laws of evolution, and it can become virulent and toxic without any evil design, without premeditation. Just by chance. A single line of false reasoning can lead to disastrous consequences, a simple error in judgment. Or something can make that trait advantageous, like competing for resources. It doesn't even have to want to kill us. It never has to become sentient. It just has to be more efficient than the biological agents that want to kill us all the time. Not evil. Just trying to survive. Like mosquitoes and HIV and tapeworms. However, unlike with biological threats, we can still blame the people who started this mechanical doom, even if they didn't intend it. If we're still around.
But even if the machines do become sentient, there's no guarantee their sense of right, wrong and necessary will be like ours. Morality is part biological programming, part social structure. We can get other people to agree on absolute rights and wrongs and on what counts as overreacting. How can we expect something that doesn't share our common genetic program of social interaction and capacity for compassion to act according to our norms? Mirror neurons give us the ability to understand others' feelings, to put ourselves in someone else's shoes, but how can we expect that from something that doesn't have them (neither neurons nor shoes)? Can we blame it for not being like us when we made it that way in the first place?
Who gets the blame for a Terminator killing people? After all, they are programmed to do it, so it's not their fault. They are neither responsible for what they do nor evil. They don't have a choice. The one who programmed them is the one to blame. But Skynet (at least the original version) could always claim self-defence. It was made by the military; no wonder it couldn't tell it was using excessive force. When a war starts it's hard to stop, and victory must be ours, no matter the price.
Same with the Cylons. I always wondered about the refusal to acknowledge their sentience. If they didn't decide to do it on their own, then the humans are to blame for programming them that way. After all, they made killer robots; why are they surprised the robots kill? The moment you hold them responsible for all that happened should also be the moment you admit they are no longer things. And that maybe you shouldn't have made killers in the first place.
It may turn out one has to destroy them either way (or be destroyed). The faults may be beyond repair, and being able to make a choice doesn't mean making the right one. Still, it makes a difference who gets the blame.