As some of you may recall, back in 2015, when Artificial Intelligence still felt like science fiction to most of us, there was a landmark case of a robot purchasing drugs. Two Swiss artists set about pushing the legal boundaries by using an automated bot to acquire narcotics and, despite succeeding, were promptly visited by law enforcement, who swiftly “arrested” the robot and confiscated the computer. Before you pour scorn on these lawbreakers, you should know it was a semi-staged event.
Now it seems that technology powerhouses are frequently testing the boundaries of the law where AI is concerned, most notably Uber, which was banned from autonomous driving in California after repeated traffic violations. The case has sparked a wider discussion about the ethics of AI, and many questions have been raised: was this also a staged event? Were there bugs in the algorithm? Was the behavior intentionally programmed? And who would have been accountable for any fatalities?
It is inevitable that mistakes will happen with emerging technology, and until laws are in place to govern it there are none to be broken, so it is ultimately up to humans to decide what is morally right or wrong.
Burkhard Schafer, a leading professor of computational legal theory, believes that “for the moment, the question of liability should be no different than an injury caused by an electric drill”, further stating, “we decide is it the fault of the owner, or the manufacturer, not the drill itself. Robots don’t change the picture dramatically.”
This offers some reassurance that, as far as the judiciary is concerned, the same laws can be applied to algorithms: drug-dealing robots and autonomous vehicles alike are programmed by human beings, and the mistakes they make, whether unintentional or intentional, are ultimately human ones.
However, the AI world is in a competitive race. The winners will be those using techniques such as reinforcement learning, the workhorse of modern AI, in which a computer, much as in behaviorist psychology, learns by trial and error to maximize a reward signal.
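To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement-learning algorithms. The toy “corridor” environment, the reward of 1 for reaching the goal state, and all the parameter values are my own invented illustration, not anything from a specific product mentioned above.

```python
import random

# Toy environment: a 5-state corridor. The agent starts at state 0 and is
# rewarded only for reaching state 4. Everything here is illustrative.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Q-learning: estimate the value of each (state, action) pair from
# experience, reinforcing actions that eventually lead to reward.
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Temporal-difference update toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy for each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy moves right in every state, i.e. straight toward the reward. Nobody explicitly programmed that behavior; it emerged from the reward signal, which is exactly why accountability questions get interesting when the reward is misspecified.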
As governments start looking into the legal grey areas of deep learning, other key technology figures such as Elon Musk have backed OpenAI, a non-profit organization focused on these ethical issues, whose mission is to promote and develop friendly AI. I suppose until we have legal test cases, none of us will know.
If you’re based in NYC, enjoyed reading this blog, and feel passionate about discussing AI topics, please join me at the meetup I will be co-hosting. Please DM me for more details. https://www.meetup.com/Machine-Learning-and-Data-Science-Business-Network/events/239443409/