Future Lawyer Blog

Event Review: Transatlantic Perspectives on Legislating AI

It is trite to remark that data is everywhere and that we generate streams of it wherever we go. Normally, this data is merely transient. For instance, my current location may be available to those in my immediate vicinity, but once I move on, those same people may not necessarily have access to my next location. This data may be of some value (I flatter myself) to those nearby, but it is difficult and expensive to exploit on a large scale.

The rise of the internet, in which almost all of our day-to-day interactions are now embedded, has created a medium where activities that were once transient leave permanent traces. Everywhere you go, you leave a trace: cookies, tracking pixels, server logs, your IP address, your browser's user-agent string. Like footprints, these are used in aggregate to build up a complete picture of who you are, and if you sign in to Chrome, then you’ve just imprinted your name (and more besides) into the sole of your shoe.

This is the context in which Woodrow Hartzog, Professor of Law at Boston University, gave his presentation, “Against AI Half-Measures.” From the perspective of protecting individual rights and freedoms, he offered a critique of existing legislative approaches to regulating the “Artificial Intelligence” (AI) systems that have emerged in the last two years. He strongly advocated for robust legislative intervention to ensure that AI systems are developed responsibly and for purposes that serve people and society.

The talk was hosted by the Institute of Advanced Legal Studies (IALS) at Russell Square and was set up as a seminar with Prof. Hartzog as a visiting speaker. The panel also included Dr. Anna-Maria Sichani, BRAID Fellow at the School of Advanced Study, and was chaired by Dr. Nóra Ní Loideain, the director of the Information Law & Policy Centre.

Beware half-measures!

The main thrust of Prof. Hartzog’s presentation was that the emergence of AI systems demands a fresh and more robust regulatory approach to ensure that the controllers of such systems can be held accountable to consumers.

He argued that existing AI legislation broadly follows traditional software-regulation paradigms: it imposes transparency and reporting obligations but lacks the substantive duties that would create causes of action and allow injured parties to enforce their rights. He characterized such legislation as “half-measures,” since it provides only the illusion of protection.

What followed was a lively discussion among the panel and the audience on existing data protection legislation and the importance of addressing model and dataset bias. There were some hard-hitting questions from the audience, making it clear that the seminar had struck a chord with the current moment.

Some closing thoughts…

There is a lot of food for thought on this topic, and I think the recent trend has been to stand in awe of the convenience that technology has brought to our lives. In the software industry, where I work, a duality is currently playing out in the relationship between engineers and AI. On the one hand, there are half-whispered promises of untold productivity and its associated fame and glory; on the other, the looming threat of being automated out of a job. The loss of our privacy has been the price of having the world apparently at our fingertips.

Data about ourselves is a curious kind of asset: it is simultaneously completely free (certainly to us) yet extremely valuable. In a very real sense, our entire online identities are merely accretions of data. Whatever your metaphysical views may be, that is the reductive effect of encoding everything into a numerical representation. The clear takeaway is that while we may think nothing of the information we continuously and unthinkingly create, we should consider what that data might be worth to someone else. It follows that we should reflect on who ought to be the custodian of that resource, and whether it really should be a small group of unaccountable, profit-seeking organizations.

The assumption that information-processing technology is largely benign is overdue for serious scrutiny, and as the lawyers of the future, I hope we are up to the task.

Prof. Hartzog has written extensively on law and policy issues relating to privacy. More information on him can be found on his academic profile.

Vince Chan

Thanks to Vincent Chan for this excellent review – the events on this topic from the Information Law & Policy Centre are very thought-provoking. Hopefully it will prompt some of you to attend future ones! Keep an eye on the Lawbore Events Calendar, or go direct to the IALS updates.

Before doing his GDL at City, Vince worked for over 10 years as a software engineer in the financial services industry. Lawbore is excited to have Vince on the journalist team this year and to be able to draw on his tech knowledge.
