When a company exists within a society, sustaining itself through the production, marketing and sale of goods, it becomes morally obliged to act as a stakeholder in that society and its surroundings. So, beyond pursuing its primary objective of building a powerful business model, the company is encouraged, bound by ethical expectations, to give something back and help build the very society it sells its goods to.
One of my favourite movies of recent times is ‘The Imitation Game.’ Among many other things, it raises a question that can, in a way, be mapped onto our current discussion of the ethical dilemma autonomous systems face.
After the historic breaking of the Enigma, Alan Turing and his team decide to design an algorithm that determines which vessels to save and which not to, based on a preset priority list. This was to ensure that the Nazis would not discover the code had been broken and so would not try to change it. Whatever the historical discrepancies in this retelling of the event that changed the course of the war, the question remains: who, if anyone at all, decides whom to save and whom not to?
As kids, every one of us fantasizes about one unique friend with whom we could do anything. Talk, play, goof around and what not. As we grow, that fantasy companion evolves from an imaginary friend to a talking pet, then to a stranded alien from another planet. Then maybe a genie, and by the teenage years it probably becomes a robot or a clone of yourself. Well, those were my personal fantasies growing up, shaped by the pop culture and films I watched along the way.
It wasn’t long ago that Iron Man came out and the idea of an Artificial Intelligence system helping a genius through his everyday activities garnered widespread attention. Of course, the idea had been floating around for quite a while by then, and it was a consistent feature of the comics, but to many like-minded youth such as myself it showcased a very plausible means of bringing that nostalgic childhood whim to life.