AI Regulation Bounded by Current Political Horizons Can Only Play Catch-Up. Cathedral Thinking Can Help

“The question is not whether we are able to change but whether we are changing fast enough.” —Angela Merkel

Artificial intelligence, physical security, and political security are inextricably linked in the Digital Age, with vast implications for social fairness. Thanks largely to advances in computing power, machine learning algorithms, deep neural networks, access to large datasets, and standard software frameworks that allow experiments to be iterated and replicated at scale, AI and machine learning (ML) have progressed rapidly over the past few years and will only continue to proliferate. At the core of this growth has been expanded military and commercial investment in the capacities that feed AI- and ML-driven technologies.

However, the dual-use nature of AI- and ML-driven technologies has already facilitated their misuse: weaponizing consumer drones, hacking public services, building privacy-deficient surveillance states, and enabling racial profiling, repression, and targeted disinformation campaigns, to name a few. AI-driven cyber-attacks often lead the public to question the robustness of their political and public structures, and work to undermine trust in governments and political participation. Meanwhile, the United Nations and its member states have struggled for over ten years to cohesively define lethal autonomous weapons, let alone to define what constitutes a malicious inter-state cyber-attack on (inter)national institutions. The reasons for regulating AI are clear. So why are our national and supranational institutions struggling to keep up?

AI and ML technologies are inherently future-oriented technologies. Our political and economic regulatory frameworks are not.

The terms of most democratically elected national and international leaders and administrations run between four and five years. The first year is one of transition and of delivering short-term campaign goals, or ‘quick wins.’ The last year is often oriented towards the next campaign, with efforts directed at showing why the administration should be re-elected for another four to five years and little consideration given to what will happen once its term is over. This follows the mindset of most 21st-century societies: that of a “globalized, consumption-driven neoliberal capitalism that the global economic system requires”. The AI and ML technologies that have sped up our systems exponentially do nothing to lengthen these short political and economic horizons; we can now collect data on our constituents and learn about their concerns in real time. This saturation of information about what is on our minds now, rather than what will be on our minds in the future, does little to make salient the vague and abstract future uses of AI and ML technologies.

The way forward

Closing this gap by adopting a future-oriented approach to regulating AI need not involve changing institutional term limits. Nor is the solution to stop innovating and developing these technologies altogether; others would simply step in to fill the void. AI regulation requires a far-reaching vision, a blueprint mapping out which groups will be impacted not only now but in the future, and a “shared commitment to long-term implementation”. Cathedral thinking can help.

The concept of Cathedral Thinking dates back to medieval times, when architects, stonemasons, and artisans laid plans and began construction on the intricate, soaring structures we see today. Gaudí did not live to see the Sagrada Família completed, and neither did countless architects and artisans across time and geographies, yet we, the generations that followed, enjoy these structures now as places of worship, community, and tourism. The scale of their vision is all the more impressive given the shorter lifespans, threat of war, and lack of technology these artists grappled with in their lifetimes.

Cathedral thinking helps us consider the future not as some vague and abstract event occurring well past our term limits, but as a default to be weighed when addressing current needs. It can help us consider the path-dependent outcomes of failing to adequately regulate AI technologies today. The way forward requires regulators to make salient, in real and actionable ways, what each administration passes on to the next. The environmental, political, social, and economic outcomes that current administrations' actions will produce decades later should be made salient, to every extent possible, in policy design and in how success is measured, so that both regulators and the electorate are incentivized to look ahead. The political horizon of an AI regulator cannot be limited to the end of their term; success should be measured by the extent to which they consider regulatory impacts on the generations 10, 20, or 50 years hence. Unless AI regulators start innovating their regulatory frameworks along with our technologies, policymakers will at best play catch-up with the tech-military-industrial complex.

Kulani Abendroth-Dias is a PhD student at the Graduate Institute of International and Development Studies and a Strategic Advisor at the Organisation for Economic Co-operation and Development*. A TEDx speaker on “Why Good People Do Bad Things – And What We Can Do About It,” she previously worked with the UN Institute for Disarmament Research, the United Nations Development Programme, and the UN Peacebuilding Secretariat. She has an M.A. in Social Psychology from Princeton University (USA) and an MSc in European Integration, specializing in Economics, Security and External Relations, from the Institute for European Studies (VUB) in Brussels, Belgium. *Please note that the opinions expressed in this article do not reflect the official views of any of the organisations with which the author is affiliated.
