While Timnit Gebru claimed she was fired by Google management over a paper highlighting AI bias, the worldwide discussion about the implementation of ‘AI’ in our daily lives continues apace. As countries create national or regional research hubs and announce strategic road maps, it may not be accurate to label all of these technological applications ‘AI’, given the complexity and breadth of the topic. But one question persists amid the storm of AI implementation around the world: that of legal status, and of the possible ways out when it comes to explaining algorithms in potential incident and harm cases. These crucial issues touch on a very wide range of topics, including privacy, ethics and fairness, and legal scholars have been exploring possible answers. I would therefore like to take you on a short journey through their key matters of debate.
On 4 December 2020, under pandemic restrictions, Istanbul Bilgi University held the third Annual Istanbul Privacy Symposium: Law, Ethics and Technology, warmly moderated by Leyla Keser, Director of the IT Law Institute, and M. Bedii Kaya, Associate Dean of the Faculty of Law at Bilgi University. With a broad range of speakers, the conference’s main track, ‘AI: What the Future Holds? Opportunities and Pitfalls’, kept us captivated for three hours on Zoom.
Alexa Hasse, from the Berkman Klein Center for Internet & Society, drew our attention to the gender gap in technology jobs – specifically computer science and AI – and asked how we can involve underrepresented groups in AI workplaces and education. There are exceptions: in Turkey the proportion of women graduating in computer science rose from 29% to 33% over a twelve-year span (see Sophia Huyer, ‘Is the gender gap narrowing in science and engineering?’, p. 93 of the UNESCO Science Report – Towards 2030 (2015), and Huyer & Westholm, ‘Gender Indicators in Engineering, Science and Technology, an Information Toolkit‘ (2018)), while worldwide the share of technology jobs held by women dropped ten percentage points, from 35% to 25%, over twenty years. The speaker concluded with an assessment of digital citizenship as a way to enhance equality in the AI workforce and in education.
Emre Bayamlıoğlu, from KU Leuven/CITIP, opened his paper with the crucial (but somewhat sad) term ‘Anthropocene’, before discussing entropy in physics vis-à-vis ecological collapse and climate issues. He highlighted the limits of law when it comes to finding a solution to climate change, and offered perspectives on ephemeralization and datafication. He called our current situation the ‘data Anthropocene’, addressing data-driven modes of governance and the transition of everything into a service. He presented data as the new environment and concluded with philosophical meditations that got us thinking about the theory of science, law and politics. His points offered a different interpretation of the view that unless we give up our ‘human selfishness’, law can do little to deliver a concrete solution to problems of environmental sustainability.
Michael Veale, Lecturer in Digital Rights & Regulation at UCL, detailed several cryptographic techniques, including secure multi-party computation and homomorphic encryption. He then described how these cryptosystems are being applied in agriculture, business and research on socio-economic disparity. He spoke about the thin line between privacy as confidentiality and informational power, and finished with some proposals on data processing practices and on how code could legitimately be made available to end users.
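For readers unfamiliar with the first of these techniques, here is a minimal Python sketch of additive secret sharing, a standard building block of secure multi-party computation. It is illustrative only – the parties, salaries and prime are invented for the example, and real protocols add secure channels and integrity checks – but it shows the core idea: parties can jointly compute a sum without any one of them seeing another’s input.

```python
# A minimal sketch of additive secret sharing, a building block of secure
# multi-party computation. Illustrative only: real MPC protocols add
# authenticated shares and secure channels between parties.
import secrets

PRIME = 2**61 - 1  # shares are taken modulo a public prime


def share(value, n_parties):
    """Split `value` into n random shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]


def reconstruct(shares):
    """Recombine shares by summing them mod PRIME."""
    return sum(shares) % PRIME


# Three parties with private salaries (invented numbers) compute their total:
salaries = [52_000, 61_000, 47_000]
all_shares = [share(s, 3) for s in salaries]

# Party i receives one share of every input and adds them locally...
local_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# ...and only the combined result is ever reconstructed.
total = reconstruct(local_sums)
print(total)  # 160000 -- the sum, with no individual salary revealed
```

Each individual share is a uniformly random number, so a party learns nothing about the others’ inputs from the shares it holds; only the aggregate ever becomes visible.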
Burkhard Schafer, Director of the SCRIPT Centre at Edinburgh University, introduced us to three historical figures from the fifteenth century – Sigismund, Archduke of Austria; his first wife, Eleanor of Scotland; and Heinrich Steinhöwel – and showed how they fostered a scientific approach to translation in Europe. He highlighted Thomas Craig, the Scottish institutional writer credited with bringing continental law from France to Scotland and – luckily! – paving the way to a uniform European legal practice and mentality. Schafer argued that with the emergence of national laws, this uniformity of the continental legal market and practice ‘got stuck’, as ‘did the access to justice‘. Scrutinizing the opportunities and the pitfalls, he proposed a machine-translation (MT) engine for the legal domain, while noting that the hardest part is processing input data from minority languages that are not widely used: such input data cannot teach the machine the nuances of a language well enough for it to get things right. He acknowledged reasonable concerns about the potential discrimination a legal MT engine could produce, but noted that this risk exists in all AI-based systems.
Carlos Affonso Souza, Director at ITS Rio, focused on national AI strategies around the world, enlightening us with predictions about several nations’ plans. He opened with the AI (and robotics) action plans of leading countries such as the UK, the US and Japan, and questioned the potential for research monopolization that might stem from global competition in the field. Expanding with insights on investments worldwide (e.g., India’s ‘AI garage’, SMEs in Mexico), the presentation ended by emphasizing AI principles guidelines in which fairness, ethics and diversity are the key components of a human-centric, minimum-risk formula.
From the European Centre of Excellence on the Regulation of Robotics & AI (EURA), Andrea Bertolini took the floor to expand the discussion to liability and risk management in AI and robotics. After a short review of how AI and robots might be defined, and an assessment of their autonomy and unforeseeability, he dived into liability issues, in which some human being is held liable regardless of the parameters of the situation. He argued, reasonably, that the entire puzzle can be solved if liability can be properly ascertained: to split liability wisely and place it on the right person, we follow a functional analysis of what he called classes of applications (CoA). He described a bottom-up approach that would eventually help create such a classification of technologies. At that point he invoked the Product Liability Directive (Directive 85/374/EEC), under which producers are held liable on a no-fault basis, and he finished the talk with a case study on autonomous vehicles. Those curious to find out more can check his report, requested by the European Parliament’s Committee on Legal Affairs (the JURI Committee).
The closing keynote speaker, Paul De Hert, of the Research Group on Law, Science, Technology & Society (LSTS) at the Vrije Universiteit Brussel and the Department of Law, Technology, Markets and Society (LTMS) at Tilburg University, gave perspectives on EU regulation of AI and legal personality, and offered an analysis of the ecosystem of trust. He steered the discussion towards an efficient system for classifying AI technologies, which would help regulate the increasingly problematic series of AI-related issues and cases, and went on to point out European regulators’ risk-based approach. In his own words, to adjust the focal point of AI regulatory issues and to avoid an abstract discussion that goes nowhere, he presented proximity, flexibility and efficiency as key elements justifying the idea of awarding legal personality to AI.
The event offered participants an opportunity to think through AI-related issues, with reflections and predictions for the future, and the Q&A session at the end was lively, with a fast-flowing series of questions. It is great to see Bilgi University holding its doors open to international discussion and diverse interaction (including scholars from the Middle East) on this critically important topic, which concerns each and every one of us.
Idil Kula is a legal trainee from Turkey. Her studies cover IT & IP, technology and cyber law. She is a contributor to the ISOC Turkey Chapter and a newcomer to the ICANN community. In her spare time she finds herself on the pages of IEEE and ACM publications and is ‘amazed by science and enthusiastic about space & nature’.
Idil’s paper was originally posted on Medium.