Carlos Casabona discussed the topic at the opening of the International Congress of Criminal Sciences
As technology advances at a rapid pace, societies have yet to determine who will be held liable for cybercrimes involving artificial intelligence (AI) and robotics. Carlos Romeo Casabona, professor of Criminal Law at the University of the Basque Country (Spain), Spanish attorney and Medical Doctor, is an expert in the area and was invited to open the International Congress of Criminal Sciences on Oct 17, sponsored by the Law School’s Graduate Program in Criminal Sciences. Holder of Honorary Doctorates from six Ibero-American universities and professor of the course on Law and Human Genome, Casabona spoke about Artificial Intelligence – Robotics and Criminal Responsibility.
In his view, with the advent of AI, jurists are concerned with what criminal law protects. As an example, he mentioned malicious human intervention in air traffic control software, which could lead to a deadly crash. In such a case, the tragedy resulting from the software manipulation could leave no one to blame, since identifying those responsible would be difficult. As another example, he mentioned alterations made to the electronic records of an ICU patient, which could worsen the patient’s health.
Autonomous systems: who is to blame?
Casabona claims that the major challenge today, however, comes from autonomous AI systems capable of reprogramming themselves. It is very common for these systems to stray from their previously programmed instructions. They are also failure-prone, but such failures, unlike those in the previous cases, do not stem from malicious human interference. The question to be asked is: can robots be held criminally responsible? “Since the concept of culpability, something typical of humans, cannot be applied to a robot or smart system, new issues arise in legal practice, even though there are not many experts in this area,” he claims.
In many countries, criminal responsibility for offenses caused by autonomous AI systems is attributed to corporations, though it is the individuals who represent them who ultimately answer for those offenses. Casabona, whose position is shared by other legal scholars, argues that the individuals who developed the AI software should be held liable for this type of offense.
He also defends public intervention through preventive measures to control AI systems that may pose risks. As an example, Casabona mentions countries where an annual inspection is mandatory for vehicles over 10 years old, in order to protect drivers and pedestrians. “Something similar must be developed for AI. We have to find solutions that go beyond the immediate punishment of offenders involved with autonomous smart systems. There are measures that must be taken before that happens,” he concludes.