
LAW FOR EVERYBODY


Artificial Intelligence (AI) is one of the buzzwords that has gained a lot of traction in the last few years, after the field emerged from yet another AI winter. An AI winter is characterised by a lack of funding for research and of general interest in the subject. As predicted by American researcher Roy Amara, whose adage is named after him (Amara’s law), “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”



AI and the hype currently surrounding it have led many stakeholders to question whether and how to regulate this field, along with the many different questions that arise: do we consider AI as persons (the issue of personhood)? Who can we hold responsible for damage caused by a robot (the issue of liability)? How do we make sure that decisions taken by AI are fair (the issue of transparency)? Have we reached the point in Amara’s law where we risk underestimating the effects of AI in the long run, or are we jumping the gun in our eagerness and hype surrounding another form of intelligence that we can create out of scraps of metal?


The European Union has generally been quite responsive in regulating technology when it deems it necessary, or at the very least in investigating the need for regulation. Already back in 2013, when nanotechnology was going through its own hype, the European Parliament, which does not have the power to propose new legislation but instead has to suggest it to the European Commission, pushed the Commission to enact legislation on developing technologies.

This push to regulate emerging technologies is made for several reasons: cohesion between the laws of the EU member states on the subject, clarity and certainty for researchers and investors, and protecting the safety of EU citizens. The era of killer robots is definitely not as far off as we may like to think, with terrifying videos of progress readily available online. For those of you interested, Boston Dynamics is a leading private robotics company working with the American military. But if you are worried about the military applications of AI, drones are what should be keeping you up at night.


Luckily for us, AI in robotics (or any other field) is not quite there yet in terms of being able to completely autonomously and intentionally decide to harm us (as far as the author knows). Even within AI, different categories exist to delimit the level of intelligence and the capacities of a system. What we currently have, the computer programs, algorithms and deep learning capable of beating the world champion at chess or Go, is all still weak AI: each system masters only the task it was built for. The next big thing is strong AI, capable, for example, of playing all video games and not just one, as is the case currently. This is in part why neural networks have been seen as so promising, as Artificial Neural Networks (ANNs) are hoped to allow an AI to acquire more general capabilities.



But does the European Commission, which has the power to propose new legislation, believe it is time to start regulating AI, considering the advancements some expect in the next few years? Or does the AI winter that some predict is soon to come cool down legislative enthusiasm? As with nanotechnology, the European Parliament took the lead, in February 2017, by passing a Report with recommendations to the Commission on Civil Law Rules on Robotics, drafted by Mady Delvaux, a member of the Parliament. This report recommended a series of legislative and non-legislative initiatives to the Commission, in particular asking it to propose legislation providing civil law rules on the liability of robots and artificial intelligence. Part of the recommendations included in the Report concerned the issue of personhood (recognising a robot as a person having rights and obligations in law), calling for the creation of a specific status for robots as “electronic persons”.


This ties into the solutions proposed by the Parliament for civil liability (who is responsible and pays when a robot causes damage), based on insurance schemes and compensation funds. Robots and AI have seen their biggest rise in use in the manufacturing and services sectors (where weak AI is sufficient to fulfil certain tasks), and this has brought an onslaught of changes in work organisation, working conditions and other aspects of employment (ask people at Amazon), and therefore questions regarding who is ultimately responsible for any accidents.

However, the European Parliament’s set of recommendations to the Commission (the Report) was not well received in certain academic circles. One group felt so strongly that the European Parliament was jumping the gun that an open letter was written, accusing the Parliament of being too hasty and of failing to understand AI properly. The issue of personhood has always been a source of debate and controversy, and giving legal rights to a robot would require a fundamental shift in legal thinking. Legal personhood includes the ability to take someone to court, to be taken to court (on civil or criminal grounds) and the ability to hold that “person” responsible for their actions. Giving legal personhood to an AI would thus allow us to hold the AI (robot) itself accountable. But is that what we want?


According to the European Commission, this is not what we want. On the issue of personhood, but also on the issue of liability, no legislative proposals have been made more than a year after the Report was published. The Commission has, however, addressed the Report through a Communication; a High-Level Expert Group on Artificial Intelligence (AI HLEG), consisting of 52 experts on the subject, has been created, and a Robotics and Artificial Intelligence unit has been established. The AI HLEG has released draft AI ethics guidelines as well as, recently (June 2019), its policy and investment recommendations. Both entities are tasked with either researching possible ways of legislating on the liability of AI or simply following issues related to it, and the recent policy recommendations coming from the AI HLEG seem to follow the original European Parliament recommendations!


