EU science funding is being spent on developing new tools for policing and security. But who decides how far we need to submit to artificial intelligence?
Patrick Breyer didn’t expect to have to take the European Commission to court. The softly spoken German MEP was startled when, in July 2019, he read about a new technology that purports to detect from facial “micro-expressions” whether somebody is lying while answering questions.
Even more startling was that the EU was funding research into this virtual mind reader through a project called iBorderCtrl, for potential use in policing Europe’s borders. In the article that Breyer read, a reporter described taking a test on the border between Serbia and Hungary. She told the truth, but the AI border guard said she had lied.
A member of the European parliament’s civil liberties committee and one of four MEPs for the Pirate party, Breyer realized that iBorderCtrl’s ethical and privacy implications were immense. He feared that if such technology – or as he now calls it, “pseudo-scientific security hocus pocus” – was available to those in charge of policing borders, then people of color, women, elderly people, children, and people with disabilities could be more likely than others to be falsely reported as liars.
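The arithmetic of screening at scale shows why such false reports matter. As a rough illustration (the figures below are assumptions, since the commission has not published the system’s real error rates), even a detector that wrongly flags only a modest fraction of truthful travellers would accuse enormous numbers of innocent people at a busy border:

```python
# Rough illustration of the base-rate problem with automated lie detection.
# All numbers are assumptions for the sake of the example, not measured
# figures for iBorderCtrl, whose error rates have not been made public.

honest_travellers = 1_000_000   # assumed truthful crossings screened per year
false_positive_rate = 0.10      # assumed: detector wrongly flags 10% of them

falsely_accused = honest_travellers * false_positive_rate
print(f"Truthful travellers flagged as liars: {falsely_accused:,.0f}")
# Output: Truthful travellers flagged as liars: 100,000
# A system that is "right 90% of the time" still misreports
# 100,000 honest people under these assumptions.
```

Figures of that order, falling unevenly across demographic groups, are what Breyer feared.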
Using EU transparency laws, he requested more information from the European Commission on the ethics and legality of the project. Its response was jarring: access denied, in the name of protecting trade secrets.
So Breyer sued. He wants the European court of justice to rule that there is an overriding public interest in releasing the documents. “The European Union is funding illegal technology that violates fundamental rights and is unethical,” Breyer claimed.
Breyer’s case, which is expected to come before the court in the new year, has far-reaching implications. Billions of euros in public funding flow annually to researching controversial security technologies, and at least €1.3bn more will be released over the next seven years.
Faces, voices, veins
Horizon 2020 is the EU’s flagship research and innovation program. From 2014 to 2020 it was worth nearly €80bn in funding grants for scientists.
Competition for Horizon money is fierce. It pays for research into such things as colorectal cancer, mosquito-borne disease, and improving irrigation for agriculture. This year Horizon financing supported the German company BioNTech, one of the first companies to announce success in Covid-19 vaccine trials.
But €1.7bn from the program over the past seven years backed the development of security products for police forces and border control agencies in the public and private sectors. Much of it involves cutting-edge technology: artificial intelligence, unmanned drones and augmented reality, as well as facial, voice, vein, and iris recognition and other forms of biometrics that could be deployed for surveillance.
The consortium behind the iBorderCtrl lie-detector technology received €4.5m from Horizon 2020’s security portfolio and spent the three years to August 2019 developing and testing it.
EU officials say such innovation is crucial for dealing with crime, terrorism, and natural disasters. The strategic goal is to bolster the bloc’s security companies to compete with the US, Israel, and China.
But there is unease about the aims, public oversight, and the perceived influence of corporate interests over the security strand of Horizon. Seven current and former ethics experts working on EU-funded security projects raised concerns in interviews with the Guardian. They questioned whether some Horizon-backed research was truly in the public interest.
A major concern among ethicists is that scrutiny and criticism appear to be sidelined in the quest to bring new technologies to market, even when the technologies raise clear privacy and civil liberties concerns. But little of this is made public. Like Breyer, the Guardian was denied access by the commission to documents on the activities, legality, and ethics of more than a dozen Horizon 2020 security projects, on the grounds that releasing them could undermine public security and for “the protection of commercial interests”.
Unethical tech
Applications for Horizon 2020 money first pass through a scientific review and, if funded, a review conducted by a team of independent ethicists hired as consultants by the European Commission. These ethicists can clear a project or demand further assessment, but their scope to substantially modify a project is limited.
“Often the problem is that the topic itself is unethical,” said Gemma Galdon Clavell, an independent tech ethicist who has evaluated many Horizon 2020 security research projects and worked as a partner on more than a dozen. “Some topics encourage partners to develop biometric tech that can work from afar, and so consent is not possible – this is what concerns me.” One project aiming to develop such technology refers to it as “unobtrusive person identification” that can be used on people as they cross borders. “If we’re talking about developing technology that people don’t know is being used,” said Galdon Clavell, “how can you make that ethical?”
Kristoffer Lidén, a researcher at the Peace Research Institute Oslo who has worked on the ethics component of multiple Horizon 2020 security projects, said the very participation of ethics experts on security projects seemed to be taken as a rubber stamp, even if they expressed grave concerns. He suggested ethics reviewers could feel pressure to approve projects without much fuss.
“[Projects] can easily be co-opted by commercial logic or by general technological optimism where people bracket ethical concerns because they see new technology as a positive development.” A 2018 study of ethics in EU-funded research projects reached the same conclusion.
For some individuals who have tried to raise concerns publicly, there appear to have been consequences. In 2015 Peter Burgess, a philosopher and political scientist who was then on three Horizon 2020 security research advisory boards, gave a candid interview to the German public television channel ARD and a Der Spiegel reporter in which he raised concerns about the industry’s influence over the research, particularly as it relates to migration. “Refugees are seen as targets and goals to be registered,” Burgess told the German reporters.
He was immediately released from all three advisory boards and has not been engaged in the program since. Two other ethics experts, both of whom spoke on the condition of anonymity, told the Guardian that they felt they had been sidelined from work on EU-funded security projects after being too critical in their assessments. The European Commission denies that such removals take place. “No request for removing ethics experts participating in the assessments/checks has been received by DG Research and Innovation,” a spokesperson responded via email.
Ethicists interviewed by the Guardian argued that ethical oversight should be used to make sure the EU is working in the public interest, rather than to legitimize the development of potentially controversial technology.
Corporate interests
Large-scale investment in security by the EU began in the early 2000s, after 9/11, the invasion of Iraq, and an increase in domestic terror attacks. EU leaders, concerned about further strikes as well as organized crime gangs and securing borders, vastly expanded cooperation with the European defense industry. In 2004, EU institutions launched a security research program by bringing together senior officials from national interior ministries and law enforcement agencies alongside multinational weapons and IT corporations such as BAE Systems, Finmeccanica (now Leonardo), Siemens, and the French defense and aerospace company Thales. This program would lay the groundwork for Horizon 2020’s security funding, which increasingly became focused on the development of biometric and other surveillance technologies.
Burgess underlined the role played by corporate interests. “Already in the preparatory phase there was a lot of industry involvement,” he said. But there was a shared assumption among participants from the public and private sectors that “gadgets make you more secure – bigger guns, taller walls, biometrics.” He added: “Ever since 9/11 and the terrorist attacks in Madrid in 2004 and London in 2005, it’s big business.”
Figures compiled by the Guardian from publicly available records suggest that Horizon 2020 has been particularly beneficial for the private sector: since 2007, private companies have received 42% of the €2.7bn distributed by the security research program – almost €1.15bn. They have also been the lead partner in almost half of the 714 funded projects. Other participants, such as research institutes and public bodies, trail far behind.
“This was the commission’s approach, for better or worse: to do what is best for Europe,” Burgess said. “It wasn’t a secret, corrupt system, it was public policy.”
While final decisions on funding are taken by national and EU officials, a body called the Protection and Security Advisory Group (PASAG) provides advice on the annual security research work programs, which set out the types of research that will be funded.
Critics raise questions about where responsibility lies for guiding the direction of EU-funded security research. The PASAG has 15 members, drawn primarily from the private sector and research institutions across Europe. Public documents from the European Commission suggest the PASAG has input into setting research priorities and into promoting links between the existing security research program and new funding for military technology.
Ten PASAG members have declared interests that relate to their work on Horizon 2020, public documents registered with the commission show. The group is chaired by Alberto de Benedictis, a former CEO of the UK division of Leonardo, an Italian defense company that has participated in 26 projects and received €11.3m from the EU security research budget since 2007. De Benedictis joined the PASAG in 2016, having retired from Leonardo the previous year. Another member runs a consultancy firm, MSB Consulting, which works with the defense and security sectors and whose customers are listed as security companies, as well as the European Commission and European Defence Agency. One other PASAG member works for Louvain University in Belgium, which has received millions from Horizon 2020 and other security programs over the years.
At present only one of the 15 members works for a civil society organization, and none has declared affiliations with human rights or ethics organizations. A European Commission spokesperson said: “The composition of the PASAG reflects the widest possible representation.” They said none of the group members’ stated interests could “compromise (or to be reasonably perceived as compromising) the expert’s capacity to act independently and in the public interest”.
De Benedictis said the PASAG only provided high-level direction, and that industry involvement in the group did not represent a conflict of interest. “The PASAG, like other expert advisory groups, was formed by the commission to ensure it could access expertise across the spectrum of stakeholders that contribute to the success of the research program,” De Benedictis said. “Industry is one of these stakeholders, as are universities, research institutes, and government departments and agencies that represent the practitioner communities.”
He emphasized that responsibility for research funding priorities lay with governments. “Member states are ultimately the decision-makers.”
Jean-Luc Gala, a former Belgian army officer and an academic at Louvain University specializing in bioweapons, also rejected any conflict of interest. Gala suggested that having industry and academic security experts advise the commission on security technology projects was a net positive. The PASAG had a collective role in which there was “no place for individual views nor opportunity to push an individual interest”, he added.
A spokesperson for Louvain University said: “Professor Gala is not sitting at this advisory group as a representative of the university, but has been invited because of his scientific expertise. The university considers that contributing to scientific advisory groups at the national and international level is an important part of the missions of our academic staff.”
Iskra Mihaylova, an MEP from Bulgaria who is working on the legislation for Horizon Europe, the successor to Horizon 2020, argued that industry involvement was unavoidable. “If you are looking for someone competent, he or she has experience in this field,” she said.
Covid creep
For the next seven years (2021–27), Horizon 2020 will be rebadged Horizon Europe, with an expected overall budget of €86bn and security funding of approximately €1.3bn.
However, a complementary budget of at least €8bn will go towards the research and development of military technologies. The explicit aim is to fund dual-use technologies with civilian and military applications, and a number of preliminary projects have explored unmanned drone “swarms” and other surveillance devices.
The fight against Covid-19 has further accelerated a push by European governments to develop surveillance technologies, including unprecedented use of drone surveillance, data tracking, facial recognition and other forms of biometrics for quarantine enforcement and contact-tracing. Poland, for example, has launched an app that asks quarantined citizens to upload selfies throughout the day to prove they are staying home. The app relies on geolocation and facial recognition technology and notifies the police when users fail to respond.
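Based only on the behaviour described above, a minimal sketch of how such an enforcement loop might work is shown below; the function names, the matching logic, and the 100-metre radius are all assumptions for illustration, not details of the Polish app’s actual implementation.

```python
from __future__ import annotations

from dataclasses import dataclass

# Hypothetical sketch of a quarantine check-in flow like the one described
# above. The Polish app's internals are not public, so every name, threshold
# and check here is an assumption for illustration only.

@dataclass
class CheckIn:
    selfie: bytes   # photo uploaded in response to a prompt
    lat: float      # reported latitude
    lon: float      # reported longitude

def within_home(lat: float, lon: float, home: tuple[float, float],
                radius_m: float = 100.0) -> bool:
    """Crude proximity test (assumed 100m radius); a real system would
    use proper geodesic distance."""
    dlat, dlon = lat - home[0], lon - home[1]
    return ((dlat ** 2 + dlon ** 2) ** 0.5) * 111_000 < radius_m

def matches_enrolled_face(selfie: bytes, enrolled: bytes) -> bool:
    """Stand-in for a facial-recognition comparison; real systems run a
    face-match model rather than a byte comparison."""
    return selfie == enrolled

def handle_check_in(check_in: CheckIn | None, home: tuple[float, float],
                    enrolled_face: bytes) -> str:
    # As reported above, failing to respond at all triggers a report.
    if check_in is None:
        return "notify_police: no response to prompt"
    if not within_home(check_in.lat, check_in.lon, home):
        return "notify_police: outside quarantine location"
    if not matches_enrolled_face(check_in.selfie, enrolled_face):
        return "notify_police: face does not match"
    return "ok"
```

The first branch is the striking one: under the behaviour the article describes, silence alone is enough to put a citizen on a police list.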
Facial recognition and AI-based policing algorithms are notorious for reinforcing racial and other biases. National Covid-tracing apps were not Horizon-funded, yet some researchers who work or have worked on EU-backed projects fear an EU-funded race to develop and test new biometric and other security technologies, especially at a time when public health fears have made many Europeans more accepting of surveillance.
It raises the question: who decides what type of government surveillance Europeans should live with?
For now, it is unclear how effective EU funding has been at bringing new security products to market, and in some cases the technologies appear to fall foul of the EU’s own laws.
The iBorderCtrl project’s website stated that some of its technologies were “not covered by the existing [EU] legal framework”. European Dynamics Luxembourg, the lead company in the iBorderCtrl consortium, did not respond to requests for comment.
Yet there is no meaningful way for the European public to stay informed, much less have a debate on whether they want their tax money contributing to Orwellian biometric and other surveillance technologies. Like Breyer, we requested (under EU transparency laws) access to dozens of documents produced by 15 Horizon 2020-funded projects seeking to develop new forms of biometric technology. These included ethics and legal reviews of each project. But after months of waiting, and filing appeals, we were told that many of the activity reports and some of the ethics reports had to remain confidential for reasons of security and the protection of commercial interests.
Breyer said he found it strange that an MEP would have to sue the EU to get information about a publicly funded project. “They won’t release criticism of this project because it won’t help them sell the technology,” Breyer suggested. “Is that a legitimate reason for the EU, for a public authority, to withhold information?”
Mihaylova, the Bulgarian MEP, agreed there needed to be more transparency in the research program. But she argued: “We cannot stop technology. We have to work on the balance between both sides of this process” to try to counteract the dangers posed by new surveillance devices.
For Breyer, there is a bigger question at stake, concerning who decides what kind of technological development is truly in the public interest. “Do we want to fund these dubious technologies?” he asked. “That’s a decision that should be taken democratically.”