
AI Report – The ethics of artificial intelligence, at the center of the debate among businesses, universities and institutions. The point of view of Francesca Rossi, expert for the EC and AI Ethics Global Leader at IBM

Ms. Rossi, help us put into context the status of artificial intelligence (AI): on the one hand we have the Pontifical Academy for Life, IBM, Microsoft, the FAO and the Italian Government signing a document for the ethics of AI*, and the High-Level Expert Group (HLEG) of the European Commission (EC) working on guidelines for a trustworthy AI. On the other, the use of AI is exploding, both in consumer interfaces and in applications for the productive sector. An AI program deemed uninteresting by some governments discovered Covid-19, so a number of private companies knew about the coronavirus before governments did. This shows how valuable AI is. We have been talking about the ethics and privacy of AI for a few years now, so allow me to put forward a scenario: data from China, collected with no guarantees as to ethics and privacy, could make it possible, with the help of AI, to produce a vaccine. In Europe, in compliance with the draft EC guidelines and the GDPR, authorities could not authorize it. The rest of the world would. The two levels don't seem to be able to overlap.

It is not surprising that research applied to a specific real-life problem comes from the private sector. I have long worked in the academic field, where research is usually long-term and at a higher level of abstraction, even if researchers then try to connect it to real-life problems. Researchers in the private sector, such as the AI centers of IBM where I work, produce articles and results that are more connected to real life and have a greater application scope. Private research is more closely linked to the specific problems of companies or individuals, and it has a greater amount of data, more resources and more computing power.

The discipline of AI ethics has many dimensions: it involves not just AI experts but all stakeholders, those who produce it, those who use it and those who have to live with the decisions made with the support of AI. Many voices must be heard before we understand how to behave. The EC is trying to understand if and how support policies for AI should be implemented, because Europe wants to support innovation and competitiveness, but in a way that is compatible with the values of Europeans. To this end, it created a group of 52 experts (the HLEG on AI) to identify the important aspects of this technology, both for those who will apply it and for those who produce it. Among the various requirements published by this group in April 2019 in its ethics guidelines are trustworthiness with regard to people's privacy, responsible data management, and the right of those who provide data to a system to receive a service in exchange. With the GDPR, Europe is already ahead of the rest of the world here.

We know that without data, AI, and in particular machine learning (ML) algorithms, does not work properly, because it is the data that shows how problems can be solved and, more importantly, without personal data we could not provide customized solutions. In principle, therefore, providing your data is of benefit; but, as stated in the documents of the HLEG, the Vatican and other initiatives on AI ethics, it is important that data management respects the privacy of those who provide the data and that control over it remains with them.

In other regions of the world there are different ways of managing, regulating and guiding innovation, and they can vary a lot depending on whether the approach is top-down or bottom-up. Europe now has, in my opinion, precisely the role of showing how innovation and competitiveness can be combined with important values, in a way that is applicable to everyone, and how all actors involved in producing, developing and using AI can build an environment of trust around this powerful technology. We know that AI can make very positive contributions to decision-making processes and problem solving, and therefore improve our lives in all fields. We just need to point it in the right direction.

The timetable of the EU guidelines, nonetheless, appears slow-paced compared with the strong demand for AI applications and results. China defined its AI strategy in 2015 and 2017, and AI has been researched and widely used outside Europe for years, for example in biotechnologies and personalized medicine. Moreover, in Europe, with some exceptions, the organizations involved in this technology are small and fragmented. The resulting picture seems to be one of small and large organizations around the world, some with abundant resources, that are moving ahead swiftly; in other words, Europe has fallen behind.

You draw a picture which emphasizes, in fact, the need for global and multicultural coordination to develop, produce and then bring this technology into real life, because AI is a technology that can be developed with data collected in one place and used everywhere. Some countries are working in this direction: France and Canada launched a global AI panel, which many other countries are expected to join. Other initiatives aim to deal with the ethics of AI with a multidisciplinary, multi-stakeholder and multi-region approach, in the form of a multicultural and global coordination.

That said, it is true that the regions of the world have different laws and one cannot impose one's own. Instead, we can show what appropriate behavior looks like, also for companies, as is the case at IBM, where we believe that the purpose of AI is to work alongside people, supporting their decision-making ability in whatever they need to do.

At IBM we have an internal body (an AI ethics committee) which helps us assess the sensitive aspects of each project proposed to us and whether it follows the principles and guidelines on AI ethics that we have adopted at IBM. Among these is data collection that is respectful of fundamental human rights and of people's dignity and privacy. If a proposal does not stand up to our principles, we do not participate and do not accept the contract.

We work for companies, B2B, not directly with the consumer. This is an important distinction to make, because our application domains are in industries where the decisions to be made are very sensitive and can have a serious impact on people's lives: government, banks, health services, medicine. In many of these domains, ethical guidelines or data regulation precede AI, meaning that there is already an infrastructure that has thought about the ethical aspects of the solutions or decisions to be made.

We do not oppose AI regulation; rather, we align with documents such as the EC White Paper or the guidelines published recently by the White House. Both take care to differentiate between high-risk scenarios and those with less risk. First, the issue is to define what is high-risk, and then perhaps regulate not an entire industry, but only those areas that could have a significant impact on people's lives. Those carrying low risk could be handled with self-regulation, self-assessment or co-regulation.

In the middle of an ocean of AI use cases, spanning from call centers to the most sophisticated academic research, there is the vast productive part of the economy, in turn immensely diverse. Even if large organizations have the means to be careful and not risk unethical uses of AI, other industries use it with less self-restraint. An example could be financial companies or small banks in the US that use AI to support decisions about who should be granted a mortgage. There have been cases in which the data fed to these programs was biased and the results were therefore discriminatory. How will the excellent intentions of the EC or the Vatican be applied in the real world?

A company like IBM, as said, can set an example. What we observe, however, is that it is our customers who ask how we handle the risk of discriminatory or misleading results or solutions, and what properties we put into our solutions to control and mitigate such risks.

When a big organization like IBM, which has a presence worldwide and in almost all industries, provides tools and explanations for risk control, it is showing concretely the attention to ethics that is necessary. Other suppliers in the market understand that and, if nothing else to remain competitive, follow suit. This starts a chain reaction that leads all actors in those industries across the world to do the same.
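To make the idea of such risk-control tools concrete, here is a minimal, hypothetical sketch of one common check, the "four-fifths rule" for disparate impact, applied to mortgage-style approval decisions. The data and group labels are invented for illustration; this is not IBM's tooling (IBM's open-source AI Fairness 360 toolkit offers production-grade versions of such metrics).

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check on
# approval decisions. All data below is invented for illustration.

def approval_rate(decisions, groups, target):
    """Share of applicants in the `target` group whose application was approved."""
    selected = [d for d, g in zip(decisions, groups) if g == target]
    return sum(selected) / len(selected) if selected else 0.0

def disparate_impact(decisions, groups, unprivileged, privileged):
    """Ratio of approval rates; values below 0.8 are a common red flag."""
    priv = approval_rate(decisions, groups, privileged)
    unpriv = approval_rate(decisions, groups, unprivileged)
    return unpriv / priv if priv else float("inf")

# 1 = approved, 0 = denied; "A" and "B" are hypothetical demographic groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, unprivileged="B", privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.67; below 0.8 suggests possible bias
```

A real audit would go further (confidence intervals, multiple protected attributes, per-segment breakdowns), but even this simple ratio shows how a biased training set surfaces as measurably unequal outcomes.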

That said, as in the EC White Paper, now in the consultation phase, the point is to identify which applications carry a higher risk, because it would make no sense to regulate the areas at low risk. The latter could use self-assessment, standards or other mechanisms established by a broad worldwide consensus, helping us understand the correct way, or the correct properties, that this technology needs to have. The HLEG's self-assessment list with its seven requirements, currently under review, is a tool for companies to understand whether they are producing an AI with the right characteristics or not, whether it is trustworthy or not, and if not, in which aspect of the list they must intervene.

For higher-risk initiatives, still in a definition phase, it may make sense to apply regulation or co-regulation. Laws, however, are not very flexible, and this is a technology that changes very quickly. So, you have to make sure that the norms or standards can be adapted over the years. It's safe to say that there is agreement, at least between Europe and the US, that the best approach is to identify the riskiest aspects in each sector and regulate them, while leaving to the less risky ones the option of self-assessment. At IBM we support this approach. We call it precision regulation.

As you said, the abundance and availability of data over the last few years was a decisive element in getting to how AI is used today, which nonetheless raises problems of privacy and rights compliance.

The original debate on AI ethics actually dates back to 1956, and it began in Europe. Shouldn't the EC have started it earlier, perhaps even just three or four years ago? Almost two and a half years ago, while EC commissioner Mariya Gabriel, a political commissioner with no professional experience in science or technology, spoke at a conference about creating a “trustworthy AI”, all around her at the same technology fair 2,000 exhibitors or more, IBM included, were closing deals on artificial intelligence programs, from “simple” interfaces to the management of critical infrastructures. I understand how difficult it can be, but even considering the virtuous chain reaction you described, aren't we closing the stable door after the horse has bolted?

It is true that it takes time to understand that there are problems, or even just to identify them and then find shared solutions. This is so precisely because artificial intelligence experts are not the only ones that need to be involved. Other disciplines are needed as well. As for me, I participated already in 2015 in an initiative on AI ethics as a member of the committee of external consultants of the Future of Life Institute. At that time, the international community was beginning to ask itself if there were problems or concerns with AI, and how to identify them. In my opinion, a lot has been done in a short time.

Before, nobody talked about AI ethics, partly because the applications were less pervasive in our lives, partly because the AI used before the massive adoption of ML was a technology that did not rely on large amounts of data, and therefore posed fewer problems of privacy, data collection and sharing. Many issues emerged with the use of these ML techniques, which, combined with the availability of large amounts of data and increasing computational speed, gave way to a pervasive use of AI and a greater impact on our lives.

Once you understand this, the solutions you find must be shared worldwide, and this takes time. In fact, I'm amazed at how quickly artificial intelligence experts, academics, companies and even policy makers like Mariya Gabriel, who created the HLEG, teamed up to understand how to deal with these problems.

AI is a dynamic technology by nature, and considering that for many instances laws already exist (against fraud, discrimination…), standards seem a feasible first step. However, they should be flexible enough to accommodate applications and scenarios that we cannot even imagine at this stage, not even from a technological point of view, as in the case of the computational capacity you touched upon. How realistic do you find the following two paradigms: 1) a paradigm similar to that of climate change, where part of the private sector imposes on itself more stringent rules than those passed by the federal government and regulation comes last, as is the case in the US or with the virtuous chain reaction you just described; 2) a paradigm of the “end justifying the means”, that is, cases in which consumers deliberately give up their privacy in exchange for solutions, for example treatments for cancer, or, as many people do today, deciding to use the free services of Facebook or Google in exchange for their data, or as we all do when we click “accept” on all the cookies imposed by the GDPR without making a real choice, because nobody has the time to read pages and pages of fine print on every site they open?

Francesca Rossi, expert for the EC and AI Ethics Global Leader at IBM

As said, for now there are no regulations specifying how AI should behave and what properties it should have. We see, however, as is the case with IBM, that the rules an organization gives itself create ecosystems, which in turn involve customers and the customers of customers, and so on, all of whom are respectful of rights.

As for the compromise between the purpose and the use of the data, from my point of view it is up to the person providing the data to decide whether to allow it to be used in a given way, which must however be stated clearly. For instance, a person needs to know whether he or she is interacting with an AI system or with another human. In our business model, the data we receive from customers to whom we provide solutions, such as a hospital or a bank, remains the property of that customer, and we use it only to better provide the required solution, not to improve other customers' solutions. If they want to release it for research purposes or more in general, we open a discussion with everyone involved, first of all with whoever provided the data. This is the approach that we believe is right.

You did not tackle the possibility of data collected without guarantees, for example in China, that is indispensable for an important purpose such as a Covid vaccine. What do you do?

It depends on the company. We at IBM follow our fundamental rights principles. Even if I'm not in a business unit, I believe I can say that if an initiative does not respect our principles, we are not available. This does not mean that solutions to important problems, such as the virus or other medical problems, cannot be found; we believe they can be found while respecting these principles.

 

* Call for an AI Ethics is a document signed at the end of February by the Pontifical Academy for Life, IBM, Microsoft, the FAO and the Italian Government “to support an ethical approach to Artificial Intelligence and promote among organizations, governments and institutions a sense of shared responsibility, with the aim of guaranteeing a future in which digital innovation and technological progress are at the service of human ingenuity and creativity and not their gradual replacement”.