The European Union's draft strategy for artificial intelligence (AI) is reminiscent of the 1990s debate over globalization. Back then, many addressed a complex phenomenon with vast socio-economic consequences with fine and fair words, but little substance.
Earlier this year in Barcelona, Spain, while the European Commissioner for Digital Economy and Society, Mariya Gabriel*, spoke about "shaping AI in a way that reflects European values and principles", about trust and "an ethical approach to AI" as an enabler of competitiveness, in the nearby halls of a major telecommunications fair (MWC 2019) around 1,000 global companies were marketing their AI applications and closing deals.
Barcelona was just the proverbial tip of the iceberg. The companies gathered there represented a single industry, and only its most important players. To compose a global picture of applied AI today, large companies and SMEs in every other industry, as well as many thousands of university and independent researchers, must be reckoned with too.
The draft guidelines that Mrs. Gabriel referred to include the concepts of transparency, traceability, responsibility, diversity, accessibility, sustainability, friendliness, democracy, environmental protection, data security and non-discrimination. In her speech, the Commissioner touched upon AI's numerous "challenges for the future" and "the need for investments and the world race."
Putting aside the question of whether AI will best serve humanity if tackled as a race or as a global collaborative project: if a race it must be, then the race for AI began a few years ago. So did its "future". Europe has probably already lost it. China's plans for developing AI date back to 2015 and 2017. There is even some concern that the US could lose the AI race to China.
The EU has a "strategy for AI fostering development, addressing social issues and ensuring ethical AI." For such a windfall, however, Europe will get the recipe only in 2020, according to the timeline attached to the "Ethics Guidelines for Trustworthy AI" published in April. Based on this exhaustive document, the European Commission's High-Level Expert Group (HLEG) will launch an open consultation this summer. An implementation of its conclusions, or related regulation, will follow around or after mid-2020.
The question is: are this pace and method the right ones to foster a debate about a topic that is extremely complex, given the many aspects and implications it holds and its potential to become the most important technology of the 21st century?
The debate about AI's challenges began a few years after the field's founding conference in 1956. World-renowned theoretical physicist Stephen Hawking warned in 2014 that AI "could spell the end of the human race." In 2015, a letter on AI applied to weapons, signed by 300 researchers from Facebook, IBM, Microsoft, Google and DeepMind, among others, took the debate to the front pages. Bill Gates, too, has advised careful management of "super intelligences."
Elon Musk has been among the most vocal personalities in technology warning that “AI is a rare case where we need to be proactive about regulation instead of reactive. By the time we are reactive in AI regulation, it’s too late.”
The EU's concerns are therefore not out of step, but they come "reactively", to put it in Mr. Musk's words, and with a delay of years.
It is an AI application that unlocks our smartphones with facial recognition, identifies polluters, provides forecasts for farmers, manages telecommunications networks, identifies mortgage-worthy individuals, supports financial security, fights cybercrime, decodes hieroglyphics, saves millions in energy for a subway authority, writes sonnets attributable to Shakespeare, contributes to military security, channels your call to a customer center, processes historical and new data to offer a personalized recommendation for a product or service, and so on. In particular, AI has been used for at least five years in numerous projects as a valid tool to improve the diagnosis of serious diseases such as cancer.
The debate intensified in 2016 and 2017, with a predominantly application-oriented approach.
The first mission of the Partnership on AI, founded in 2016, whose members include Amazon, Apple, the BBC, UNICEF and Amnesty International, is to define best practices.
In China, in 2018, senior members of the People’s Congress and diplomats believed that everybody needed “to cooperate to preemptively prevent the threat of AI.”
DARPA, the US military research agency, wants scientists, engineers and technologists to think about the "issues AI poses for the military and civilian society," and to discover and transition AI technologies into operational use.
The IEEE, the world's largest technical professional organization, aims to educate designers and developers to treat ethics as a priority, thereby ensuring that autonomous intelligent systems are developed for the benefit of humanity. The European Atomium-AI4People initiative recently published a framework of criteria for the ethical use of AI.
Were the EU HLEG's guidelines, released on April 8th, in force today, their implementation would raise perplexities in numerous real-life cases, in spite of their comprehensive pilot assessment list.
Would a telco operator using AI to orchestrate the traffic flows in its networks abide by them? Would Facebook, resorting to AI to identify abuses, be using AI for the benefit of humanity or of its shareholders?
Would a large investment bank that used AI to prevent fraud fulfill the requisites of "human-centered AI" or of AI applied "in the service of humanity"?
If one pharmaceutical company out-competed others by using AI to identify the most effective drug for a disease, would this be considered "non-robust AI" creating "unintentional harm", or would the company have helped "individual human flourishing and the common good of society"?
If a nation gained a competitive advantage by using AI, ruining a competing nation in the process, would the economic victory be a result of "responsible competitiveness" or give rise to a dispute?
These are, of course, extreme simplifications, but they are also real cases. The experts of the HLEG will also need to say who the arbitrating authority will be, and how its jurisdiction will extend beyond Europe, and all of that after the horse has bolted.
In the 1990s the phenomenon of globalization swept through advanced, emerging and poor countries, upsetting societies and economies. "Governing globalization" or "rewriting its rules" were among the solutions debated, even though it was unclear whether any body or institution was capable of "governing" a phenomenon of such scale.
In some advanced economies, globalization left social scars as a result of the offshoring and outsourcing of production, and their consequences still take a toll 25 years later. In retrospect, we also know that, according to official research, between 1988 and 2008 globalization helped halve the number of people living in extreme poverty.
Back then, however, the globalization debate took center stage, and deeply involved hundreds of thousands of people around the world. Yet it barely scratched the surface of the phenomenon.
Artificial intelligence is not a phenomenon. It is a technology, but one with a mighty, far-reaching scope across industries and across the world. Nonetheless, there is a high risk that the debate around its ethics, as is happening in Europe, will produce guidelines and standards detached from a deluge of AI that, in the meantime, will have flowed under the bridge.
A technology that is in its essence predictive of the future should not be addressed retrospectively; it should be tackled hands-on, involving all three of the public sector, research and business.
* Before becoming Commissioner for the Digital Economy and Society, Mrs. Gabriel was Vice-President of the European People's Party Group in the European Parliament (2014-2017), and before that a Member of the European Parliament for the GERB party (Citizens for European Development of Bulgaria) (2009-2017). Her official biography shows no exposure to, nor experience in, science and technology. As I write, I do not know whether she will remain in her capacity or, if she should not, who the next Commissioner will be.
My critique stems from my being a true believer in the European project.