The EU’s revamp of its data protection laws is a good attempt at keeping up with the times
Today, data is a hot topic but also a difficult one. It is important because of how plentiful and ubiquitous it has become; market research firm IDC reckons that the amount of data produced by digital services will reach 180 zettabytes in 2025. At the same time it is difficult because it is an unprecedented resource which has raised concerns over privacy, security and control. But the EU’s new data protection law may help to resolve some of these issues, as it aims to provide a regulatory framework that deals with the complexities of an economy and society becoming increasingly reliant on the great flow of data.
The General Data Protection Regulation, or the GDPR, becomes enforceable in May 2018. It replaces the old regulatory framework, the Data Protection Directive of 1995. The GDPR “lays down the rules relating to the protection of…persons with regard to the processing of personal data and the rules relating to the free movement of personal data.” The Regulation will bring about tougher fines for those who fail to comply, and also empowers data subjects by giving them greater control over their data. Elizabeth Denham of the Information Commissioner’s Office (ICO), the UK’s information watchdog, says that the Regulation is “the biggest change to data protection law for a generation.”
While the Regulation contains a lot of material, there are four main pillars which should be acknowledged. The first is that companies and other data processors will have to be more careful about obtaining consent from data subjects to collect and process their personal data, as well as about demonstrating that such consent has in fact been given. The second is the rights (both old and new) conferred on the data subject. The Regulation goes to great lengths to lay these out in detail and places emphasis on respecting the rights and freedoms of the data subject. The third is the security measures that those processing data will have to implement. The fourth important principle of the Regulation is to enforce these provisions whilst maintaining the free flow of data both inside and outside of the EU. These aspects of the Regulation will likely impact those engaging in big data analytics and the traditional practices exercised by many companies handling data as part of their business activities.
But new data protection rules like the GDPR have been on the horizon in Europe for a while. Ever since the Snowden revelations in 2013, and the subsequent fall of the Safe Harbour Agreement (a regulatory framework allowing the transfer of data between the EU and America), there has been an appetite for tougher and more enforceable data laws to hold tech giants like Facebook and Google to account. Not only may this help to improve trust and clarity in the digital economy; the EU also believes that, with identical rules across the single market, businesses could save a collective €2.3 billion a year thanks to reduced legal uncertainty.
Nevertheless, there are some snags. These include the lack of clarity around where responsibility lies when a breach occurs and multiple parties are involved in the protection of the data. The practicalities of obtaining consent and effectively executing the right to erasure are questionable too. The issue of international data transfers also comes with some controversy. Even so, the Regulation is perhaps one of the most advanced sets of data protection rules anywhere in the world. It is thus something to build upon.
Then and Now
Like the Data Protection Directive, the GDPR applies to both controllers and processors of personal data. Controllers are those who set out how data will be processed, whereas processors actually carry out the processing. A controller could be an organisation or government body, with an IT or cloud-computing company typically occupying the role of the processor on behalf of the controller.
One of the most significant differences between the GDPR and the Data Protection Directive is that the two are different types of legislation; the former being a regulation and the latter being a directive. Regulations, in the context of EU law, are arguably the strictest form of legislation, being automatically binding on all Member States. This derives from Article 288 of the Treaty on the Functioning of the EU, under which the institutions of the Union “shall adopt regulations, directives, decisions, recommendations and opinions.”
Many of the rules of the Directive can be found in the new laws, although the Regulation updates some of the definitions of key terms relating to data protection rules. The definitions of the terms “personal data” and “processing” are broader. According to the Regulation, personal data is classed as “any information relating to an identified or identifiable natural person.” This could include names, interests, location data and anything else relating to an individual. The act of processing data is defined as any actions taken involving the data, from collecting and recording to storing and disclosing.
Whilst there are some familiar aspects, there are new additions to data protection rules in the Regulation. For example, the Regulation requires there to be a data protection officer; both the controller and processor should designate the appropriate person, especially where data processing activities “require regular and systematic monitoring of data subjects on a large scale.” These officers will also be an important point of contact between organisations and supervisory authorities. The Regulation also requires joint controllers to allocate between them their respective responsibilities for compliance with the laws, including implementing the necessary security measures.
One misconception apparently evident as the GDPR’s enforcement draws closer is that it is merely an update of the old laws. Imperva, a cybersecurity firm, conducted a survey in December 2016 which showed that only 43% of companies were assessing GDPR’s impact and almost a third of companies were not preparing for the new laws at all. This apparent complacency ignores the increased severity and expanded detail of the upcoming laws. Those who fail to comply could face fines of up to 4% of global revenues or €20 million (whichever is higher). This, combined with the greater emphasis placed on consent, data subject rights and security in particular, means many organisations may have to adjust much more than before.
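The “whichever is higher” mechanic of the upper tier of fines can be illustrated with a minimal sketch (the function name is hypothetical, and this ignores the lower tier of fines and any regulator discretion):

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper tier of GDPR administrative fines: the greater of
    4% of worldwide annual revenue or a flat EUR 20 million."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

# A firm with EUR 1 billion in global revenue faces up to EUR 40 million:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# A small firm with EUR 10 million in revenue still faces the EUR 20m floor:
print(max_gdpr_fine(10_000_000))     # 20000000
```

The flat €20 million floor means even small firms cannot treat the 4% figure as their worst case.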
Of the many changes, one of the most potent is the set of rules surrounding consent for processing. The Regulation defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes…by a statement or by a clear affirmative action” signifying permission to process personal data relating to the individual in question. Data processing can only take place when consent has been given, although there are other requirements which need to be met too.
There are additional conditions for consent, not found in the Data Protection Directive, which are detailed in Article 7. Among them is the requirement for the controller to demonstrate that consent has in fact been given. When consent is given in writing, it needs to be presented “in an intelligible and easily accessible form, using clear and plain language.”
In essence, the Regulation emphasises that this consent must be “unambiguous”, “freely given”, “specific”, and “informed.” The data subject must also be notified of their right to withdraw consent, which can be at any time and should be easy to do. A withdrawal does not, however, invalidate any processing that had taken place beforehand.
The specifics surrounding consent, which are quite different from the Directive, signify the law’s appreciation of the modern digital age. The emphasis placed on gaining, maintaining and demonstrating consent is important since data is non-rivalrous, meaning it can be used and copied by any number of people at once. Thus, data could be used for activities not necessarily agreed to by the data subject. The Regulation deals with this by mandating that where processing has multiple purposes, consent should be given for all of them. Amazon, for example, may ask for an email address as a means of identification, along with a password, when logging onto its website or apps. Under the Regulation, Amazon would have to seek further consent to use that email address for marketing purposes. This is evident in the Regulation’s recitals, where it states that “Silence, pre-ticked boxes or inactivity should not…constitute consent.”
Previously, broad consent from the data subject would typically be secured through vague terms and conditions for digital services. Such documents are often littered with legal jargon unrecognisable to the average user. To combat this, the Regulation requires specific consent to be obtained for each purpose for which the data is processed. The new laws also require that the specific purpose for processing be made clear to the data subject. Article 6 states that data processing can only take place, among other things, when “the processing is necessary for the performance of the agreement.” The strict rules around consent combined with this “purpose limitation” principle may help to establish better transparency and trust between companies and consumers by conveying exactly when data is being collected and what is being done with it. The Regulation, then, makes relying on consent as the sole legal basis for processing perhaps necessarily difficult for those processing data.
The eradication of reliance on passive acceptance also helps to combat the tendency for users to be lax about the data they give away. This has been the trend since the dotcom bubble burst when companies gathered data for targeted advertising to make a quick buck, according to Glen Weyl from Microsoft Research. Now that the Regulation has introduced the requirement to provide “specific, explicit and legitimate purposes” for processing data when obtaining consent, users are better informed about what will happen to their data and whether they would actually want to permit such activities instead of blindly agreeing to terms and conditions they do not understand. It reflects Steve Jobs’ perception of privacy, in which people “know what they are signing up for.”
It may also mean that companies will have to make better offers to their users and be able to showcase that their data will be used for something worthwhile. The Regulation could thus help demonstrate the true value of personal data by making it slightly more difficult to attain.
There are problems though. The question of practicality arises when it comes to these stricter procedures for obtaining consent. Especially in the context of big data analytics, where personal data is usually repurposed, firms will need to obtain informed consent for any further use of the data. The definition of ‘profiling’ and Article 22 are of particular interest here. The Regulation defines profiling as the use of personal data to “evaluate certain personal aspects relating to a natural person.” This can include data on a person’s health, work, economic situation, personal preferences or interests. Article 22 gives individuals the right not to be subject to automated data processes, essentially where decisions are “automatic” or “without human intervention.” In comparison to the Directive, the Regulation contains a more specific definition of ‘profiling’ and requires explicit consent as a new legal basis for profiling activities.
As such, the embrace of big data will be a bumpy ride for companies under the GDPR. Since consent can be withdrawn by the data subject at any time, companies will need to track when consent is given and when it is revoked. They will also have to implement the appropriate filters in their analytic models which can differentiate between data where consent has and has not been given, as well as detect changes in consent and identify when individuals request the erasure of their data. Consequently, obtaining consent from users of a digital service could be difficult in practice without those systems in place, making the implementation of big data analytics, machine learning and artificial intelligence awkward.
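The sort of consent tracking and filtering described above can be sketched in a few lines. This is a simplified illustration, not a compliance tool; the class and function names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Tracks which processing purposes a data subject has consented to."""
    subject_id: str
    purposes: set = field(default_factory=set)  # e.g. {"identification", "marketing"}
    erasure_requested: bool = False

    def grant(self, purpose: str) -> None:
        self.purposes.add(purpose)

    def withdraw(self, purpose: str) -> None:
        # Withdrawal does not invalidate past processing, but it must
        # exclude the subject from any future processing for this purpose.
        self.purposes.discard(purpose)

def usable_for(records: list, purpose: str) -> list:
    """Filter out subjects who never consented to this purpose,
    withdrew consent, or requested erasure of their data."""
    return [r for r in records if purpose in r.purposes and not r.erasure_requested]

# Example: only Alice may be included in a marketing analysis.
alice = ConsentRecord("alice"); alice.grant("marketing")
bob = ConsentRecord("bob"); bob.grant("marketing"); bob.withdraw("marketing")
print([r.subject_id for r in usable_for([alice, bob], "marketing")])  # ['alice']
```

A real analytics pipeline would need this filter applied before every processing run, since consent status can change between runs.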
Thus, it exposes the clash between regulatory frameworks and market incentives. Profiling extracts value from big data and can be used in numerous ways which benefit, and sometimes harm, individuals and society as a whole. Google’s AI-powered digital butler, “Assistant”, can only get better at performing tasks and answering questions by analysing more data from the user. The GDPR could therefore potentially frustrate the legitimate progress being made by some companies to build and offer services for consumers that want them. This may only be the case to a certain extent, however; as consumers start to move away from their passive habit of giving away their data carelessly while companies begin to appreciate the stricter rules around data collection, the stipulations under the GDPR may not seem so onerous.
There is also the argument that “informed” consent may not always be possible with regard to profiling and big data analytics. While the Regulation requires the data subject to be made aware of the purposes for which their data will be used, such information may only be brief or vague. This is because, as Anna Rossi argues in her paper on the GDPR, “profiling makes invisible patterns of data visible and it discovers new information we simply could not anticipate that the profiler would discover.” This again demonstrates the conflict between the legal demands of the GDPR and the data practices of companies when it comes to the principle of consent.
The emphasis placed on consent and purpose limitation sits among other principles and rights aimed at conferring more power and control on the data subject. The GDPR modifies some of the rights found in the Data Protection Directive and implements new ones. Of all the new rights included in the Regulation, the right to be forgotten may be one of the most notable.
Article 17 of the Regulation states that the right to be forgotten entitles the data subject to have their data erased upon request when the data “are no longer needed for legitimate purposes.” This right is intended to allow data subjects to remove information about them which may be “outdated, inaccurate or irrelevant.” It also embodies the overall theme of the GDPR, which is to confer on the data subject more and better powers enabling greater control over their data. Exceptions to this rule apply for reasons of protecting freedom of expression or where erasing data would not be in the public interest.
The rationale behind this right essentially stems from the European court case of Google v Costeja González (2014). This established that search engines were obliged to respond to requests to delete certain information about individuals in accordance with data protection laws. Specifically, the European Court of Justice, or ECJ, held that search engines such as Google were to be classed as data controllers and that the function such companies carried out did fall under the definition of ‘data processing’. As such, Google would have to remove links to webpages published by third parties from its search indexes when requested by data subjects.
The court case demonstrated the purpose of the right to be forgotten, which is to allow the data subject to erase information from their past which could be detrimental to their future. In Google v Costeja González, the plaintiff’s name could previously be found via Google in “connection with an auction under attachment proceedings of a real-estate for the recovery of social security debts.”
The case also shows why search engines like Google are particularly subject to respecting this right. Such sites are commonly used as a doorway to more information on the internet. By closing the doorway to certain information about certain individuals, that information becomes harder to access, although it is not wholly or permanently removed. For many sites, Google is a primary source of traffic due to the company’s dominance in search, so closing off that source makes access to certain information considerably more difficult.
This is the idea, but the practicalities of enforcing the right to be forgotten are questionable. To begin with, it is not always possible to completely erase information on the internet. A link can be removed from a Google search index, but it is possible that the same link could be found on an alternative search engine such as Yahoo or Bing. The fact that data can be copied numerous times and distributed speedily and widely on the internet means that data can never be guaranteed to be truly erased.
This links to a second problem, which is that the right to be forgotten could encourage what is called the ‘Streisand effect’, where the attempt to conceal certain information results in that information becoming even more available than before. This is fuelled by the natural curiosity and intrigue surrounding any kind of information being suppressed as it instead draws more attention. A few years ago, a French spy agency ordered Wikipedia to amend an article about a military radio base to remove classified information. But instead of the information being suppressed, the story became widespread, and, thus, the information from the article became more well-known and more accessible than ever intended. The same effect could easily take place with other attempts to suppress information on the internet since it is a rather ruthless platform when it comes to controlling the information swimming around on it. This, therefore, provides another reason as to why enforcing Article 17 of the GDPR may not be so realistic.
There are also likely to be conflicts between this right to erasure and freedom of expression. Jeffrey Rosen, an American academic and legal affairs commentator, reckons that the right to erasure will be “the biggest threat to freedom of expression on the internet in the coming decades.” This concern is particularly applicable when there are requests to delete pieces of information which may be of value to public knowledge. The case of Mosley v United Kingdom (2012) addressed this issue, where the plaintiff brought an action against a UK newspaper after it published an article detailing his sexual activities allegedly involving Nazi elements, including photos taken from video footage filmed by one of the participants. The court held that there was no requirement for the plaintiff to be notified before the article was published and that the role of the press and its duty to act as a “public watchdog” in a democratic society meant that applying any limitations on freedom of expression required a narrow approach.
Essentially, the court found that where the press were reporting facts capable of contributing to a debate of general public interest as opposed to making tawdry allegations about an individual’s private life, the protection of freedom of expression should be given greater prominence than the protection of privacy. It was appropriate to take this stance, according to the court, since the plaintiff in Mosley was a prominent figure in international motor racing.
A particular concern which arises with Article 17 of the Regulation is the potential chilling effect on freedom of expression online. Since companies will soon be facing much steeper fines than under the Data Protection Directive, they may be tempted to wave erasure requests through rather than assess each one on a case-by-case basis. Companies like Google will be put in a somewhat difficult position when they become the arbiters of whether certain information is “inaccurate, irrelevant or out-dated” in compliance with Article 17. Such arrangements may lead to frivolous or even fallacious complaints being accepted, at the expense of the public.
In addition, there is the possibility of creating inconsistent standards on privacy and freedom of expression among different jurisdictions. The different legalities surrounding such rights between the EU and America, for example, may result in a complicated clash, which is now particularly the case given the broad reach of the Regulation. The American case of Sidis v. F.R. Publishing Corp. (1940) recognised only very limited circumstances in which a right to be forgotten would be permitted, since such a right could threaten free speech and expression by encouraging unnecessary censorship. Thus, whereas the deletion of information about a person submitted by the person themselves is tolerated, forcing the erasure of information about oneself published by others is not.
Apart from the right to be forgotten, the GDPR also confers to the data subject the right to know the circumstances surrounding the processing of their data. To begin with, there is Article 13, which states that data controllers are required to provide the data subject with certain information regarding the use of their data, including the purpose of the processing and the legal basis for processing. Any recipients of the data in question must also be made known as well as the length of time the data will be stored. Article 14 details the same provisions and would apply where the data is not obtained from the data subject.
In addition to this, Article 15 confers to the data subject “the right to obtain…confirmation as to whether or not personal data…are being processed.” Where there are automated decision-making systems involved in the processing of data, the data subject needs to be provided with “meaningful information about the logic involved” and also “the significance and the envisaged consequences of such processing for the data subject.”
The similarity of the language used in Articles 13, 14 and 15 suggests that all three mandate that a ‘right to explanation’ should be provided by the data controller to the data subject. However, different types of ‘explanations’ can be given with regard to the processing of data. The first is a type of explanation detailing the functionality of the system for processing. This kind of explanation would address “the logic”, “significance” and “envisaged consequences” of a processing system, as well as the operations of any automated components. It may involve decision trees, for example. The second type would detail the particular reasoning or rationale behind specific decisions. This may involve machine-defined, case-specific decision rules.
The explanations required by the GDPR can also be distinguished by the time at which they may be provided. There could be explanations provided before processing decisions are made, but these can only detail the generalised functionality of the processing system and cannot possibly detail the rationale behind specific decisions, since those have not yet been made. Alternatively, there could be explanations provided after processing decisions are made, which would be able to provide system functionality explanations as well as explanations for specific decisions which have in fact been made.
There is some ambiguity though. Articles 13 and 14 are fairly unequivocal in conveying that an explanation of the system functionality before decisions are made would be required and that such notifications are to be provided at the point when data is collected for processing. On the other hand, Article 15 does not provide such clarity, even though its language also suggests that only the same kind of explanation would be needed. But it is not clear what exactly is meant by “meaningful information”, “logic involved”, “significance” and “envisaged consequences.” Thus, the kind of explanations regarding automated decision-making systems that would satisfy this section of the GDPR is not entirely clear.
Some of the imprecisions and gaps in the GDPR when it comes to some of the data subject’s rights may be better clarified by future legal disputes addressing them. In the meantime though, such legal uncertainty may make the implementation of the appropriate systemic procedures to comply with the new laws tricky.
Who Protects My Data?
The GDPR not only attempts to empower data subjects by giving them greater control over how their data is used but also establishes obligations to ensure that this personal data is sufficiently protected. The Regulation now recognises the pernicious digital environment in which data subjects, as well as data controllers and processors, operate. It also recognises how little a worryingly large number of companies have done to combat this. Research by Barkly, a security firm, found that 52% of organisations that suffered successful cyber attacks in 2016 were not making any changes to their security in 2017. This is despite the fact that a serious cyber attack could cost the global economy more than $120 billion, according to Lloyd’s of London, an insurance market.
As such, the GDPR seeks to strengthen the duty to implement better security in the face of growing cyber threats, to protect data subjects from malicious actors. Article 32 of the new laws states that both the data controller and the data processor will be required to “implement appropriate technical and organisational measures to ensure” that an appropriate level of security is in place to protect “the rights and freedoms” of the data subject. These data security measures should include the use of pseudonymisation and encryption, as well as the assurance of ongoing confidentiality, integrity, availability, and resilience of processing systems and services. There should also be the ability to restore the availability of and access to personal data. Such measures will need to be tested regularly.
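Pseudonymisation, one of the measures Article 32 names, replaces direct identifiers with values that cannot be attributed to a person without additional, separately held information. A minimal sketch of one common approach, a keyed hash (the function name is hypothetical, and a production system would involve key management well beyond this):

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash. Records remain
    linkable (the same input always yields the same pseudonym), but
    re-identification requires the secret key, which should be stored
    separately from the pseudonymised data."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"store-me-separately-and-securely"
record = {"name": pseudonymise("Jane Doe", key), "purchase": "laptop"}

# Deterministic: the same identifier maps to the same pseudonym,
# so analytics can still join records about the same person.
assert pseudonymise("Jane Doe", key) == record["name"]
```

Note that pseudonymised data is still personal data under the Regulation, since it can be re-linked to an individual; only fully anonymised data falls outside the GDPR’s scope.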
Article 32 replaces Article 17 of the Data Protection Directive and goes beyond the former laws by obligating both the data controller and processor to implement an appropriate level of security. In 1995, the Directive contained what was essentially the first major legal requirement to implement “an appropriate level of security.” But such obligations were only intended for data controllers. Under the GDPR, the obligations are now much broader. The changes to definitions in Article 4 of the Regulation, combined with the widened scope of Article 32, mean that the requirement of adequate security extends to all those who interact with personal data, not just data controllers. However, for those who process data purely for personal or household activities, the GDPR will not be applicable. Nevertheless, the Regulation’s differences in application and implementation allow the laws to be more in line with the realities of the modern digital age.
Even so, the laws contain no extensive or detailed list of all the security requirements which would need to be put in place as a minimum. Although some measures such as encryption and pseudonymisation are explicitly mentioned, it is unlikely that companies would only have to stop at those. Alexander Dittel of Charles Russell Speechlys, a UK law firm, provides a more thorough list of the kind of security measures and protocols businesses would need to implement to ensure full compliance with the GDPR before it comes into force in 2018. Some of the appropriate measures, according to Dittel, include “properly configured” firewalls, “unique passwords of sufficient complexity”, “regular software updates” and encryption on all portable devices as well as of personal data while in transit. Companies should also use “real-time protection anti-virus, anti-malware, and anti-spyware software.” Dittel reckons that sufficient training of “staff, contractors, vendors, and suppliers on a continuous basis” should also be included. Such training should entail education on “data processing obligations [and] identification of breaches and risks.” Ensuring good physical security will also be key as part of a sound overall security system, and Dittel also affirms that companies should have a “strict ban on the use of personal email for work purposes.”
Dittel does emphasise that while these security requirements are not explicitly mentioned in the Regulation, they are based on “commonly adopted security measures and trends in enforcement action by data protection regulators.” If companies are seeking to better defend themselves against online threats while complying with the GDPR and other relevant data protection laws, it would seem sensible to implement at least some of these measures.
The laws unquestionably place responsibility on the data controller, who dictates how the data will be processed, and any data processor, who actually carries out the processing of the personal data. Since it dictates the terms of processing, the data controller will have to take on the main responsibility for the protection of the personal data. In doing so, and as detailed in Article 28, the controller will, realistically, only engage with processors providing adequate guarantees with regard to the implementation of appropriate technical and organisational security measures via a written contract. But even in the absence of such a contract, Article 32 provides that the implementation of appropriate security would be a legal obligation for the processor nevertheless. Thus, cloud storage companies, which store data and make it available to specified recipients, would be regarded as processors and thus bound by the GDPR, meaning appropriate security would have to be a part of their operations. Even internet service providers, or ISPs, would be bound by the Regulation, since they transmit personal data through networks and would thus fall under the definitions detailed in Article 4.
One potential loophole in terms of who would be bound by the Regulation concerns IT security firms. If controllers or processors are not capable of implementing adequate security themselves, they would likely employ the services of another firm with the relevant expertise. Security firms may implement firewalls, anti-virus or anti-malware software, or use encryption and other methods to protect data. Yet while the security firm would be engaging in activities to protect personal data, it would not, under the definition set out in Article 4 of the Regulation, be ‘processing’ the data. Accordingly, since the firm would not be accessing or interacting directly with the data, it would not be bound by the GDPR.
The duty to secure data would therefore either have to come from a contractual arrangement between the controller or processor and the security firm or be imposed by national law. As Pieter Wolters points out in his article on the GDPR’s harmonisation of security requirements, the exact existence and scope of the security obligations would become dependent on the national law of member states. As such, “a German court might set different requirements for ‘appropriate’ security than a Spanish court.” This could potentially mean that the security obligations of security firms differ in each member state of the EU. As such, the Regulation would not achieve the harmonisation of security duties across all member states.
Wolters also explains how those developing individual pieces of security software could escape the legal demands of the GDPR too. A developer of a firewall, for example, would not be classed as a controller or processor of personal data, much like the security firm. In addition, the Product Liability Directive of the EU only provides compensation for damage to property or damage caused by death or personal injury. A breach of security of personal data is unlikely to result in any of this, and so a firewall developer could potentially escape liability for any defect in the firewall which could lead to a breach of the security of personal data.
Consequently, as Wolters argues, depending on the jurisdiction and the legal basis of the duty, IT security firms and developers of security software “may not be liable in the absence of negligence or if the breach of the security is caused by a factor beyond their control.” Furthermore, under Article 82 data subjects would only be entitled to compensation for an infringement of the GDPR and not if the security of the personal data was breached even where appropriate measures were implemented. This legal uncertainty created by the Regulation would, as Wolters says, give “national courts a lot of room for discretion.”
In early 2017 the ICO fined Royal & Sun Alliance Insurance £150,000 for its poor efforts at keeping personal data safe from malicious hands. “There are simple steps companies should take…including using encryption, making sure the device is secure and routine monitoring of equipment,” said Steve Eckersley, the ICO’s head of enforcement. “RSA did not do any of this and that’s why we’ve issued this fine.” When the GDPR comes into force in 2018, companies like RSA will face steeper fines as well as reputational damage for such blemishes. While the Regulation does not implement a broad harmonisation of security duties across all parties, it does at least set a standard for data controllers and processors which should not be ignored.
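The Regulation itself names pseudonymisation and encryption of personal data among the appropriate technical measures in Article 32. Purely as an illustration of what the simplest such measure looks like in practice (not a compliance recipe, and not drawn from any guidance cited here), keyed pseudonymisation can be sketched with nothing more than a standard HMAC digest:

```python
import hashlib
import hmac
import secrets

# Illustrative only: a secret pseudonymisation key, which in practice
# would be generated once and stored separately from the data it protects.
KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 digest."""
    return hmac.new(KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The mapping is deterministic, so records about the same person can
# still be linked without storing the raw identifier alongside them.
token = pseudonymise("alice@example.com")
assert token == pseudonymise("alice@example.com")
assert token != pseudonymise("bob@example.com")
```

Without the key the digests cannot feasibly be reversed, yet the controller can still re-derive a token to match incoming records, which is precisely why pseudonymised data remains personal data under the Regulation rather than anonymised data.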
Free Movement of Data
The internet and the data it hosts respect no boundaries or borders. Upon this important recognition, the GDPR imposes rules on data transfers outside of the EU. Article 45 states that transfers of personal data “to a third country or an international organisation may take place where the [EU] Commission has decided” that there is an “adequate level of protection.” This “adequacy” test conducted by the Commission will look at a number of relevant factors bearing on the third country’s ability to protect personal data, including its human rights record, the existence of a functioning supervisory authority and its commitment to international obligations. An adequacy decision will be reviewed at least every four years and can be revoked if it is deemed appropriate to do so.
Article 46 of the Regulation states that where the Commission has not made such a decision, then either the data controller or processor can engage in data transfers to a third party as long as “appropriate safeguards” are provided and “enforceable data subject rights and effective legal remedies for data subjects are available.” These “appropriate safeguards” may be provided for by “a legally binding and enforceable instrument between public authorities or bodies, binding corporate rules [or BCRs], standard data protection clauses adopted by the supervisory authority, standard data protection clauses adopted by the Commission and an approved code of conduct, or certification enforceable in the third country.” According to Article 47, supervisory authorities shall approve BCRs provided that they are “legally binding” and “expressly confer enforceable rights on data subjects with regard to the processing of their personal data.”
There is also Article 48, which states that a decision of a court or an administrative authority in a third country requiring the disclosure of personal data by a data controller or processor “may only be recognised or enforceable in any manner if based on an international agreement.”
Where Articles 45 to 48 do not apply, data transfers may still be permitted under Article 49. This exception only applies when consent by the data subject has been given or when such transfers are necessary for the performance of a contract. Other specific situations where such transfers are permitted include when it is necessary for “important reasons of public interest”, necessary for legal proceedings or “necessary in order to protect the vital interests of the data subject” where they are “incapable of giving consent.”
Previous events surrounding international data transfers have very much influenced the rules within the GDPR on this kind of activity. In particular, the fall of the Safe Harbour Agreement, a former data transfer mechanism negotiated between the EU Commission and the US Department of Commerce (DOC), seems to have shaped much of the legislation addressing the clash between the EU and America when it comes to data protection.
The Safe Harbour used to be the basis for finding that the US provided an “adequate level of protection” of personal data. Under this legal framework, more than 4,000 US companies committed to adhering to certain data-protection principles in order to transfer data across the Atlantic. But after the Snowden revelations in 2013, plenty of criticism was directed toward the Safe Harbour for failing to protect the personal data of EU citizens using US-based services. This eventually led to the Schrems decision a few years later, in which the ECJ invalidated the agreement on the grounds that it did not provide adequate protection from the mass and indiscriminate surveillance of American intelligence agencies.
The invalidation of the Safe Harbour led to the creation of the EU-US Privacy Shield in an attempt to remedy the failings of the previous framework. But even this new agreement is under threat and is likely to face a court challenge of its own. Some argue that it defends against US surveillance no better than the Safe Harbour did. In addition, the ECJ is becoming steadily stricter in its interpretation of fundamental rights, as was evident not only in Schrems but also in other cases. As such, the Privacy Shield is likely to be challenged and invalidated, and would thus be insufficient as a transfer mechanism under the GDPR.
Against this backdrop, Article 48 could be interpreted as a provision of more political than practical significance. Controllers and processors enjoy protection from orders from other jurisdictions to disclose personal data they hold, yet it is almost inconceivable that this would not already be the case. Tobias Bräutigam, a former senior legal counsel at Microsoft, argues in his article on the GDPR that the idea that “foreign authorities do not have jurisdiction over controllers or processors established in the EU” is obvious as a matter of international law. This suggests that Article 48 is more of a signal to the US to encourage a sufficient agreement on the exchange of personal data. The interpretation is reinforced when the provision is read together with Article 50, which invites the EU Commission and the data protection authorities to “develop international cooperation mechanisms” and to further international cooperation. Nevertheless, as Bräutigam argues, Article 48 will perhaps have limited practical importance.
Even so, the applicability of the GDPR will be broad enough to directly impact US companies and so the clashes which existed previously between the US and the EU may resume. Article 3 of the Regulation states that the new laws apply to controllers and processors interacting with data within the Union and also to organisations which may not be established in the EU but offer services to individuals in the EU. Even Brexit should have little effect, if any, on the GDPR’s enforceability in the UK.
US companies will therefore also need to take the necessary action to become compliant with the Regulation if they intend to operate in the EU. Since the Privacy Shield will not be an option, companies will have to comply with Article 45 or 46 of the GDPR. There is an element of legal uncertainty with Article 45, however, since the approvals granted by the EU Commission are dynamic. The Commission’s ability to “repeal, amend or suspend [a] decision” on adequacy was incorporated into the Regulation under the influence of the Schrems decision.
Alternatively, companies may look to rely on the provisions of Article 46. In doing so, they will have to pay particularly close attention to Articles 40 and 42. Article 46 states that companies will need either “an approved code of conduct pursuant to Article 40” or “an approved certification mechanism pursuant to Article 42” to allow transfers of data to the third party in question. Under Article 40, the Member States, the supervisory authorities, the Board and the EU Commission are to encourage the drawing up of codes of conduct intended to “contribute to the proper application of [the Regulation].” Article 42 states that these same parties “shall encourage…the establishment of data protection certification mechanisms and of data protection seals and marks, for the purposes of demonstrating compliance with [the Regulation].”
Data controllers and processors will be able to create their own contractual clauses for data transfers under the GDPR, but will require the approval of a competent supervisory authority before any data is processed. This creates a degree of flexibility for companies wanting to conduct data transfers where the Commission has not already stepped in. The new laws implement rigorous checks and balances to ensure that certain data protection principles are still adhered to by all parties, while not completely stifling the free flow of data. However, as Bräutigam explains, “new instruments have not been developed to facilitate certain data flows” and “for most data controllers, the burden to submit to the process described in the [GDPR]…will be too burdensome.” Bräutigam also suggests that the ECJ should give more clarity as to “what ‘fundamental rights protection’ means in practice”, to make it clearer whether the provisions laid out in the GDPR will cause companies to “rethink their approach to data transfers” to adequately protect data subjects.
These shortcomings may thus disappoint those who would have preferred to see even stricter rules in place. The Regulation resembles much of what was included in the Data Protection Directive already. It is likely then that the issue of data transfers will produce further legal disputes in the future meaning that the provisions within the GDPR may be subject to change.
New Officer In Town
Implementing the requirements of the GDPR will be a complicated task for many organisations. As a result, the Regulation requires the appointment of a data protection officer, or DPO, to take on a number of responsibilities relating to the compliance of these new data protection rules.
Article 37 of the Regulation states that controllers and processors should appoint a DPO when the processing is carried out by a public body (except a court), when the “core activities…consist of processing operations which…require regular and systematic monitoring of data subjects on a large scale”, or when special categories of data, or data relating to criminal convictions and offences, are processed on a large scale. DPOs shall be chosen “on the basis of professional qualities and expert knowledge of data protection law and practices.” The data controller or processor will also have to “publish the contact details of the data protection officer and communicate them to the supervisory authority.”
Along with this, Article 39 provides a list of tasks which the DPO will be required to perform. Not only will DPOs need to provide education and training to company staff on important compliance requirements, but they will also have to conduct audits to ensure compliance and serve as a point of contact between the company and the supervisory authorities. Another important responsibility is informing data subjects about how their data is being used, their rights, and what measures are in place to protect their personal data.
The Regulation does state that DPOs are to be given a degree of autonomy. In particular, Article 38 states that the DPO cannot “receive any instructions regarding” their work from the controller or processor. The DPO is also “bound by secrecy or confidentiality concerning the performance of his or her tasks.” Here there could be some friction. DPOs may order companies to cease processing in order to properly assess the procedures in place and the risks associated with a particular processing system. Such actions could disrupt business plans and frustrate timescales. In addition, the strong autonomy awarded to the DPO makes disciplinary action difficult, but not impossible. While the Regulation states that the DPO cannot be “dismissed or penalised by the controller or the processor for performing his tasks”, companies may seek other ways to hold such officers to account, such as withholding promotion.
But the mandated independence of the DPO should not distract from the important work which the position will involve. The DPO’s autonomy is meant to provide sufficient protection for the officer to conduct and complete their work, and ideally a balance needs to be struck. After all, as Steve Durbin, the managing director of the Information Security Forum, emphasises, preparations for the GDPR will require the right resources and expertise as well as having “data protection, legal and information security teams in place.” This is all to ensure that, as he says, companies “are not overwhelmed with requests closer to the enforcement deadline.”
The DPO will be heavily involved not only in this planning stage but also in ensuring that companies continue to comply with the new laws thereafter. The GDPR does come with hefty fines for those who fall short of its provisions. Consequently, the demand for the appropriate professionals will be high. As made clear in the recitals for the GDPR, companies will need DPOs with the right level of expertise which will be determined “according to the data processing operations carried out and the protection required for the personal data processed.”
It is therefore imperative that companies start recruiting as soon as possible. Supply may not meet the high demand, and those who fail to recruit DPOs with the right level of expertise and knowledge may be at a disadvantage when the GDPR becomes enforceable in 2018. Sources for The Cyber Solicitor suggest that there is already a shortage of professionals able to fill such positions. This, combined with long corporate hiring cycles, could make recruiting difficult for some companies. Those who do not appoint a DPO could, in theory, be fined, since the omission would be an infringement of the GDPR. But it is perhaps doubtful supervisory authorities would issue such fines if companies manage to be compliant despite not appointing a DPO. Even so, companies may still be keen to appoint one to ensure that all bases are adequately covered and that they comply with the Regulation’s other, more pressing stipulations.
The GDPR is a big piece of legislation, but necessarily so. It addresses a lot of the new developments in data analytics and modern business practices and provides users with a more reliable framework to protect personal data in today’s digital age. Most of the loopholes that The Cyber Solicitor identifies, such as with the right to erasure and international data transfers, should be clarified by future court cases. It is overall a good attempt at keeping up with the times.
For the companies and organisations who will be subject to the Regulation, many of the more difficult changes will be cultural rather than technical. The GDPR attempts to shake up traditional notions and habits surrounding the processing of data to ensure that safer, more trustworthy and secure practices are used. The more severe fines for non-compliance are part of this aggressive revamp.
The Regulation thus positions the EU as something of a trailblazer for data protection laws. It is possible that other jurisdictions will follow in its footsteps, since for many businesses across the world, Europe is an attractive market and too big to miss out on. Britain, for example, plans to introduce new data protection legislation mirroring the provisions of the GDPR.
The likes of Facebook and Google will be paying particularly close attention to the new laws too. Numerous US tech companies have been in varying tussles with the EU, with disputes over tax, abuse of market power and data protection. The Snowden revelations kicked off a clash between Europe and the US on the issue of privacy, and the GDPR is a much stricter set of data protection rules than companies across the pond may be used to.
The Cyber Solicitor ultimately believes that the Regulation is the way to go. Over time, amendments should be made to fix some of its flaws, but it is a good start nevertheless.
Google v Costeja González  All E.R. (EC) 717
Mosley v United Kingdom  E.M.L.R. 1
Sidis v FR Publishing Corp, 1940 U.S. LEXIS 26
Schrems v Data Protection Commissioner  IEHC 351
CELA, M. H. (2017). Meeting Upcoming GDPR Requirements While Maximizing the Full Value of Data Analytics, 1–24.
Voss, W. G. (2016). European Union Data Privacy Law Reform: General Data Protection Regulation, Privacy Shield, and the Right to Delisting, 1–14.
CELA, M. H. (2017). Viewing the GDPR Through a De-Identification Lens: A Tool For Compliance, Clarification, and Consistency, 1–22.
Wolters, P. (2017). The security of personal data under the GDPR: a harmonized duty or a shared responsibility? International Data Privacy Law, 1–14.
Rossi, A. (2016). Respected or Challenged by Technology? The General Data Protection Regulation and Commercial Profiling on the Internet, 1–48.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to explanation of automated decision-making does not exist in the General Data Protection Regulation, 1–47.
Greenleaf, G. (2016). International Data Privacy Agreements after the GDPR and Schrems (pp. 1–8).
Akintunde, S. E. (2017). An Analysis of the General Data Protection Regulation (EU) 2016/679, 1–40.
Bird, A. (2016). Transferring Data From the EU: Privacy Shield and Data Transfers Under the GDPR, 1–21.
Bräutigam, T. (2017). The Land of Confusion: International Data Transfers between Schrems and the GDPR, 1–38.
GDPR series: the role of the DPO – overcoming a GDPR hurdle D.P.I. 2017, 10(3), 7-10
Right to be forgotten: a critique of the post-Costeja Gonzalez paradigm C.T.L.R. 2015, 21(6), 175-185