Major internet companies hold ever more of our personal data, posing serious challenges to civil liberties. States have created regulators in response, but can these regulators stand up to the web giants? We discussed this and more with Isabelle Falque-Pierrotin, president of France’s Commission Nationale de l’Informatique et des Libertés (CNIL), the country’s independent data protection authority.

Benjamin Joyeux: As president of the CNIL, what are your main missions today?

Isabelle Falque-Pierrotin: The main mission of the CNIL is to be a data regulator: it’s about finding the right balance between the protection of individuals’ personal data and the freedom of economic actors to take advantage of the opportunities provided by new technologies. Initially, the CNIL’s role was to protect public records, but this gradually evolved into the role of an economic regulator of data, an aspect that didn’t exist when it was set up in 1978.

Currently, there is a clear imbalance between individuals online – internet users – and web companies that use their data. To redress this imbalance, we must first provide internet users with all relevant information. This educational aspect has become a very important part of the CNIL’s work. For example, together with some 60 other organisations, we’ve launched a digital education collective.

Another of the CNIL’s missions is to forward complaints from individuals to the companies concerned. We currently receive around 8 000 complaints a year from individuals regarding online reputation, requests for social media account closure, data access rights, and so on. In 98 per cent of cases, we act as a mediator. However, we don’t hesitate to impose penalties when necessary: they’re an irreplaceable deterrent. The CNIL’s sub-committee can impose various penalties, including fines of up to 150 000 euros and, in the case of a repeat offence, up to 300 000 euros, a penalty that can be made public. We currently impose 15 or so penalties a year on very large groups like Facebook or Cdiscount, a French online discount retailer. So the CNIL is a fairly comprehensive regulator, with a range of tools at its disposal, from educational information to the power to impose penalties.

The CNIL is also tasked with protecting public records, with the same powers as for private ones. We can issue an agency or ministry with a compliance notice: in the case of the Admission post-Bac platform, the former online higher education admission system, for example, we served one on the Education Ministry because, at the time, the individuals concerned were given no information about their right of access.

Where does the CNIL stand on these public records, whose numbers have exploded in the context of increased surveillance due to the terror threat?

At the CNIL, we cannot limit ourselves to a binary opposition between the need to keep citizens safe on the one hand and the need to protect their data on the other. If we confined ourselves to that opposition, we’d remain at an impasse. There is, of course, a growing tension between these two principles. This became clear during the debate surrounding the French Intelligence Law of July 2015 [the Intelligence Law provided for several controversial measures, such as the installation at telecommunications providers of so-called ‘black boxes’ designed to detect suspicious behaviour in connection data]. We believe that security and data protection are not conflicting goals but two underlying principles of the rule of law. If, in the name of security, we undermine the protection of personal data, we must then put in place more safeguards for individuals. What might those be? During the debate on the Intelligence Law, the CNIL argued that it should contain a sunset clause, obliging legislators to review the law after a set trial period, and that the use of the data should be strictly confined to the fight against terrorism. We also pointed out that the absence of external oversight of this data was not democratically healthy, and we asked for this power of external oversight to be granted to us. It’s an issue that I discussed at length with the prime minister at the time.

The question of intelligence file oversight was first raised by the CNIL in 2013, because these files are increasingly used to take highly operational measures against individuals. The S file [a sub-category of the file on wanted persons held by the domestic security services], for example, which is well known to the French public, is a sub-file of a larger set of records that we do oversee.

There is a right of indirect access: under the law, the person concerned cannot directly ask a file holder whether they are the subject of a file and why. The CNIL therefore acts as the intermediary between the individual and the file holder. We receive around 4 000 indirect access requests a year regarding intelligence files, tax records, and so on. We guarantee the individuals concerned that their case will be handled by a CNIL magistrate appointed to exercise this right.

How do the CNIL’s resources compare to those of its European counterparts? Are they adequate given your new missions and the increasing digitisation of the economy?

The CNIL has 200 employees and an annual budget of 17 million euros. Last year, I made another request for more resources, but it fell largely on deaf ears. To give you an idea, among our European neighbours, the CNIL’s equivalent in Germany has around 600 staff; in Great Britain it also has about 600 staff, and its remit additionally covers freedom of information oversight. In Poland, the equivalent body has roughly 150 people, and it is about the same in Ireland, which has considerably increased staffing in recent years. At the European level, we work within the framework of the General Data Protection Regulation (GDPR): this is intended to strengthen the rights and freedoms of European citizens regarding their personal data, and to regulate the behaviour of companies that use this data.[1] The regulation will help redress the asymmetrical relationship between internet users and large internet companies by legally strengthening the notions of consent and the right to data portability. A major provision of the GDPR enables any internet user to receive all of their personal data in a readable and interoperable format.[2]
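To make ‘readable and interoperable’ concrete, here is a minimal sketch, assuming a JSON export, of what such a portable data file might look like. The GDPR does not prescribe any particular format, and every field name below is a hypothetical illustration, not part of any real company’s export.

```python
import json

# Illustrative sketch only: the GDPR requires personal data in a
# "structured, commonly used and machine-readable format" but does not
# prescribe one. JSON is a common choice; all field names are hypothetical.
def export_user_data(user_record: dict) -> str:
    """Serialise a user's personal data into a portable JSON document."""
    export = {
        "format_version": "1.0",
        "subject": user_record.get("profile", {}),
        "activity": user_record.get("activity", []),
    }
    return json.dumps(export, indent=2, ensure_ascii=False)

if __name__ == "__main__":
    record = {
        "profile": {"name": "Example User", "email": "user@example.org"},
        "activity": [{"date": "2018-01-15", "action": "login"}],
    }
    print(export_user_data(record))
```

Because the output is structured data rather than, say, a scanned PDF, another service could parse and import it automatically, which is the sense of interoperability described in footnote [2].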

Over the past five years, staffing levels at the CNIL have remained virtually unchanged, yet it has been given new missions: oversight of video surveillance, authority over security flaws in corporate IT systems, ethics, and so on. Now we also have the GDPR, under which we guide economic actors towards bringing their digital practices into legal compliance. We have significant regulatory expertise, with sector-specific legal guidelines, and we must do even more to promote what we offer. The CNIL provides economic actors with flexible legal tools that give them a certain degree of legal certainty. Providing this kind of support will be one of our main roles in the coming years.

In your view, what are the greatest opportunities and threats presented by the growing use of digital data, artificial intelligence, and algorithms in everyday life?

Artificial intelligence and algorithms are incredible tools in certain areas: personalising healthcare, making hospitals work better, optimising logistics, managing road traffic, and so on. They are a set of tools that provides considerable collective benefits. At the same time, they can legitimately raise fears because, for the public, they are complex black boxes containing information of which people can feel dispossessed. An algorithm that uses medical data to determine the price of people’s insurance is extremely dangerous. But an algorithm that uses the same data simply to tell us the best medical treatment represents definite progress. A year ago, we conducted a survey on basic knowledge of algorithms. It found that the French have clearly identified the concept but say they do not really understand how algorithms actually work.
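The contrast drawn here can be illustrated with a toy sketch: the same medical record feeds two very different functions, one pricing insurance, the other suggesting care. All field names, rules, and outputs below are invented for illustration and stand in for no real system.

```python
# Toy illustration of the interview's point: identical input data,
# radically different purposes. All names and rules are hypothetical.

def price_insurance(record: dict) -> float:
    """Uses medical data to set a premium: the dangerous use."""
    base = 100.0
    surcharge = 50.0 if record["chronic_condition"] else 0.0
    return base + surcharge

def recommend_treatment(record: dict) -> str:
    """Uses the same data to suggest care: the beneficial use."""
    if record["chronic_condition"]:
        return "refer to specialist for long-term care plan"
    return "routine annual check-up"

patient = {"age": 45, "chronic_condition": True}
print(price_insurance(patient))      # same data, a pricing decision
print(recommend_treatment(patient))  # same data, a care recommendation
```

The data never changes; only the purpose does, which is why the ethical question attaches to the use of an algorithm rather than to the data itself.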

Last December, the CNIL published a report on the ethical questions posed by algorithms and artificial intelligence, the outcome of a major public debate held over more than a year with over 60 stakeholder organisations, each of which organised consultations in its own field to identify the ethical issues raised by algorithms. The report led us to make a number of recommendations, including two new principles. First, the principle of loyalty: algorithms must not betray their users or the community. Second, the principle of vigilance: because algorithms are extremely complex and constantly changing, they must be continually reviewed. We are promoting these principles at a global level by setting up an international working group.

Do you think that legislators have enough in their armoury given the pace of innovation in the sector and the financial might of large digital multinationals? Isn’t it an unfair fight that’s already lost?

We have to debunk the idea that big tech firms are too big to regulate; it simply isn’t true. First of all, the European level is the relevant one for this challenge. We already have, and will increasingly have, the ability to make our voices heard when it comes to these tech giants. Yes, there are difficulties, which can be political, but regulators’ ability to act is real. In 2014, for example, the CNIL led action against Google. The issue was Google’s privacy policy: the firm was completely ignoring its obligations under national laws regarding the combining of data. People said that we couldn’t do anything as we had no legal grounds. However, the CNIL eventually fined Google 250 000 euros, and the Spanish authority and others followed suit. We ordered the firm to post the decision on its home page for an entire weekend. From then on, regulatory authorities understood that they had some power, and the European Union began looking into the matter. The GDPR will allow us to co-decide with the 28 Member States: a single decision that applies on the same terms across the whole of Europe, and a major tool for rebalancing the power of regulatory authorities in the face of the tech giants.

By definition, the digital world and new technologies have no borders. The individual state is not, therefore, an adequate level for regulating a sector that operates on a global scale. How do you work with your international counterparts? You have been chair of the International Conference of Data Protection and Privacy Commissioners since last September. What does it involve?

Given that these issues cross borders, it is indeed vital to have sufficiently representative international mechanisms. Today, some 119 authorities are represented in the International Conference of Data Protection and Privacy Commissioners. There is also a United Nations Special Rapporteur on the right to privacy, Joseph Cannataci, whose mandate principally concerns the security of personal data. For the time being, the Conference is an informal network, but the creation of a permanent secretariat is being considered. Once a year, it holds a big meeting organised by a different host country each time. The authorities taking part don’t all have exactly the same missions in their respective countries. It’s a very diverse community, but one in which all members are independent authorities.

In 2018, artificial intelligence and algorithms should indeed be on the agenda. A draft recommendation is needed because many countries already have action plans, so the time is right to put personal data protection on the table at a global level.

When I became chair of this conference, I asked for a general reflection to draw up a strategy on its nature and membership, because some member authorities did not attach enough importance to their independence. The Conference shouldn’t be a sort of club for regulators, nor a ceremonial ‘high mass’, but rather a genuine body that unites all these authorities in tackling the immense challenges digital technology poses globally.

There is still a lack of awareness among the public, who do not yet realise that advances in digital technology are likely to completely change the world of work, our welfare state, health, defence, etc. How can we enable citizens and civil society to fully engage in this debate?

There are more and more signs that this issue worries the public. For example, many people install ad blockers on their computers to limit the targeted ads that invade their screens, protecting themselves from what they may see as an intrusion by advertisers into their private digital space. The public’s awareness may appear slow in coming, but it exists and is growing. In this context, the CNIL has a decisive role to play, one that we have been looking to strengthen for many years now. Beyond rebalancing relationships between internet users and large tech companies, regulatory authorities have a fundamental educational role vis-à-vis the public; this is what we’re trying to do with our digital education collective, particularly at an international level. It’s then up to politicians to keep pace with the shifts underway and legislate for the common good.

Since the Edward Snowden revelations, the public has suffered a crisis of trust in digital technology. Digital technology must continue to make progress in its respect for people and their rights. Take, for example, Google Glass, the ‘smart glasses’ withdrawn from sale to the public in 2015 following an outcry over their likely impact on privacy. This clearly shows that protecting individuals’ rights and innovation do not conflict but go hand in hand. We must continually seek the right balance between the rights of individuals and the rights of companies in order to create a sustainable digital world.

[1] Adopted by the European Parliament in April 2016, this regulation will come into force in the 28 EU member states in May 2018.

[2] Interoperability is the ability of a product or system to work with other current or future products or systems without any restriction in either implementation or access.