[Redacted]: This Article Categorised [Harmful] by the Government

In April 2019, the UK Government’s DCMS released its ‘Online Harms’ White Paper, which proposed to establish in law a new duty of care owed by platforms to their users, overseen by an independent regulator. Our earlier research outlines how we got to this point, sets out what the White Paper proposes, and criticises its key aspects. Our objections and criticisms remain applicable to the UK Government’s Online Safety Bill, which Parliament is now scrutinising. The House of Lords Report sparked some optimism that this scrutiny could address critical concerns, around free speech in particular. The Draft Online Safety Bill Joint Committee Report, however, suggests otherwise. This paper returns to key arguments as to why risk-based regulation and a duty of care are not appropriate for policing content and expression online. We focus on the human rights implications of the Bill, in particular the provider duties to ‘handle’ legal but harmful content. Here, we re-emphasise the vague conceptualisation and nature of this harm, as well as the inadequate duties attached to it. We also argue that the independence of OFCOM cannot be guaranteed.

Reviving a European Idea: Author’s Right of Withdrawal and the Right to Be Forgotten under the EU’s General Data Protection Regulation (GDPR)

The right of withdrawal allows authors to unilaterally withdraw from a copyright contract and retract their copyrighted work in order to disassociate themselves from it for moral reasons. Although accepted in some European jurisdictions, the right of withdrawal is mainly theoretical: its strict requirements have produced only scarce case law. It has therefore been perceived as a concept without practical use. However, this right is underpinned by a significant and still valid European idea, reflected in the EU’s General Data Protection Regulation and outlined in the data subject’s right to be forgotten. While the right of withdrawal and the right to be forgotten have different characteristics and goals, the two rights share the same reasoning, emphasising that the same European spirit is still alive and very much needed.

Legal Algorithms and Solutionism: Reflections on Two Recidivism Scores

Algorithms have entered courts, for example via scores assessing recidivism. At first sight, recent applications appear to be clear cases of solutionism, i.e. attempts to fix social problems with technological solutions. Deploying thematic analysis on assessments of two of the most prominent and widespread recidivism scores, COMPAS and the PSA, casts doubt on this notion. Crucial problems – as different as “fairness” (COMPAS) and “proper application” (PSA) – are not tackled in a technological manner but rather by instituting conversations. This shows that even techno-rationalists never see the technological solution in isolation but actively search for flanking social methods, thereby accounting for problems that cannot be eased technologically. Furthermore, we witness social scientists being called upon as active participants in such engineering.
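For readers unfamiliar with the COMPAS debate, the sketch below shows, on invented synthetic data, the kind of purely technical “fairness” check (comparing false positive rates across groups) that a solutionist framing would expect to settle the matter; the abstract’s point is precisely that such checks alone did not. All field names and values are assumptions made for illustration.

```python
# Synthetic, invented data: each row records a group label, whether the score
# flagged the person as high risk, and whether they actually reoffended.
synthetic = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    non_reoffenders = [r for r in rows if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    return sum(r["high_risk"] for r in non_reoffenders) / len(non_reoffenders)

# Compare the error rate across groups; a disparity is one (contested)
# technical notion of unfairness in recidivism scoring.
for group in ("A", "B"):
    rows = [r for r in synthetic if r["group"] == group]
    print(group, round(false_positive_rate(rows), 2))
```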

Biomedical Data Identifiability in Canada and the European Union: From Risk Qualification to Risk Quantification?

Data identifiability standards in Canada and the European Union rely on the same concepts to distinguish personal data from non-personal data. However, courts have interpreted the substantive content of these standards divergently. The resulting interpretive ambiguities can make it difficult to determine whether data has been successfully anonymised in one jurisdiction, and whether it would also be considered anonymised in another. These difficulties arise because the law assesses re-identification risk through qualitative tests of ‘serious risk’ or ‘reasonable likelihood’, as subjectively appreciated by adjudicators. We propose the use of maximum re-identification risk thresholds and quantitative methodologies to assess data identifiability and data anonymisation against measurable standards. We further propose that separate legislation be adopted to address data-related practices that do not involve demonstrably identifiable data, such as algorithmic profiling; this would ensure that regulators do not purposively expand the jurisprudential conception of identifiable data to capture such practices.
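To illustrate what a quantitative, threshold-based assessment might look like, the following minimal sketch computes a prosecutor-model re-identification risk from equivalence class sizes. The chosen quasi-identifiers, the 5% threshold and the metric itself are illustrative assumptions, not the methodology the paper proposes.

```python
from collections import Counter

# Hypothetical quasi-identifiers: which attributes count as quasi-identifying
# is a contextual, partly legal judgement, not a purely technical one.
QUASI_IDENTIFIERS = ("year_of_birth", "postal_prefix", "sex")

def reidentification_risk(records, threshold=0.05):
    """Estimate prosecutor-model re-identification risk for a dataset.

    Each record's risk is taken as 1/k, where k is the size of its
    equivalence class (the records sharing its quasi-identifier values).
    The 5% threshold is purely illustrative, not a legally mandated figure.
    """
    classes = Counter(
        tuple(record[q] for q in QUASI_IDENTIFIERS) for record in records
    )
    max_risk = max(1.0 / k for k in classes.values())
    # The average of 1/k over all records simplifies to (#classes / #records).
    avg_risk = len(classes) / len(records)
    return {"max_risk": max_risk, "avg_risk": avg_risk,
            "meets_threshold": max_risk <= threshold}

sample = [
    {"year_of_birth": 1980, "postal_prefix": "H2X", "sex": "F"},
    {"year_of_birth": 1980, "postal_prefix": "H2X", "sex": "F"},
    {"year_of_birth": 1975, "postal_prefix": "M5V", "sex": "M"},
]
print(reidentification_risk(sample))  # the singleton record yields max_risk = 1.0
```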

Processing Data to Protect Data: Resolving the Breach Detection Paradox

Most privacy laws contain two obligations: that the processing of personal data must be minimised, and that security breaches must be detected and mitigated as quickly as possible. These two requirements appear to conflict, since detecting breaches requires additional processing of logfiles and other personal data to determine what went wrong. Fortunately, Europe’s General Data Protection Regulation (GDPR) – considered the strictest such law – recognises this paradox and suggests how both requirements can be satisfied. This paper assesses security breach detection in the light of the principles of purpose limitation and necessity, finding that properly conducted breach detection should satisfy both principles. Indeed, the same safeguards that data protection law requires are, in practice, essential if breach detection is to achieve its purpose. The increasing use of automated breach detection is then examined, finding opportunities to further strengthen these safeguards, as well as safeguards that might be required by the GDPR provisions on profiling and automated decision-making. Finally, we consider how processing for breach detection relates to the context of providing and using online services, concluding that, far from being paradoxical, such processing should be expected and welcomed by regulators and by all those whose data may be stored in networked computers.
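As a rough illustration of how those safeguards can be built into breach detection in practice, the sketch below processes only the log events needed for its stated purpose and pseudonymises the identifiers it handles before aggregating them. The key, the threshold and the field names are assumptions made for the example, not requirements drawn from the paper or the GDPR.

```python
import hashlib
import hmac
from collections import Counter

# Illustrative key; in practice it would be managed and rotated so that
# pseudonyms cannot be reversed outside the breach-detection process.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymise(ip_address: str) -> str:
    """Replace an IP address with a keyed hash (pseudonymisation, not anonymisation)."""
    return hmac.new(PSEUDONYM_KEY, ip_address.encode(), hashlib.sha256).hexdigest()[:12]

def flag_bruteforce(events, threshold=20):
    """Flag pseudonymised sources with many failed logins.

    Only the events needed for the stated purpose (failed logins) are
    examined, and only aggregate counts are kept -- a rough illustration of
    purpose limitation and minimisation applied to breach detection.
    The threshold of 20 failures is an arbitrary example value.
    """
    failures = Counter(
        pseudonymise(event["src_ip"])
        for event in events
        if event["outcome"] == "failure"
    )
    return {source: count for source, count in failures.items() if count >= threshold}

events = [{"src_ip": "203.0.113.7", "outcome": "failure"}] * 25
print(flag_bruteforce(events))
```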

Between a rock and a hard place: owners of smart speakers and joint control

The paper analyses to what extent the owners of smart speakers, such as Amazon Echo and Google Home, can be considered joint controllers, and what the implications of the household exemption under the GDPR are, with regard to the personal data of guests or other individuals temporarily present in their homes. Based on the relevant interpretations of the elements constituting control and joint control, as given by the Art. 29 Working Party and by the European Court of Justice (in particular in the landmark cases Wirtschaftsakademie, Jehovah’s Witnesses, Ryneš and Fashion ID), the paper shows how the definition of joint control could potentially be stretched to the point of including the owners of smart speakers. The purpose of the paper is, however, to show that the preferred interpretation should be the one exempting owners of smart speakers from becoming liable under the GDPR (with certain exceptions), in the light of the asymmetry of positions between individuals and companies such as Google or Amazon, and of the rationales and purposes of the GDPR. In doing so, the paper unveils a difficult balancing exercise between the rights of one individual (the data subject) and those of another individual (the owner of a smart speaker used for private and household purposes only).

The Ghost in the Machine – Emotionally Intelligent Conversational Agents and the Failure to Regulate ‘Deception by Design’

Google’s Duplex illustrates the great strides made in AI towards giving synthetic agents the capability for intuitive and seemingly natural human-machine interaction, fostering a growing acceptance of AI systems as social actors. Following BJ Fogg’s captology framework, we analyse the persuasive and potentially manipulative power of emotionally intelligent conversational agents (EICAs). By definition, human-sounding conversational agents are ‘designed to deceive’. They do so on the basis of vast amounts of information about the individual they are interacting with. We argue that although the current data protection and privacy framework in the EU offers some protection against manipulative conversational agents, the real upcoming issues are not yet acknowledged in regulation.

The Concept of ‘Information’: An Invisible Problem in the GDPR

Information is a central concept in data protection law. Yet there is no clear definition of the concept in law – neither in legal texts nor in jurisprudence. Nor has there been extensive scholarly consideration of the concept. This lack of attention obscures a concept that is complex, multifaceted and functionally problematic in the GDPR. This paper takes an in-depth look at the concept of information in the GDPR and offers three theses: (i) the concept of information plays two different roles in the GDPR – as an applicability criterion and as an object of regulation; (ii) the substantive boundaries of the concepts populating these two roles differ; and (iii) these differences are significant for the efficacy of the GDPR as an instrument of law.

Offering ‘Home’ Protection to Private Digital Storage Spaces

The law classically provides strong protection to whatever is inside a home. That protection is lost now that our photo albums, notes and other documents have become digital and are increasingly stored in the cloud. Even if their owner never intended these documents to be shared, their copies in the cloud may be accessed by law enforcement under conditions possibly less stringent than those that apply to home searches. In this paper, we study this problem from a theoretical perspective, asking whether it is possible to establish home-equivalent legal protection for those private digital storage spaces (smartphones, private cloud storage accounts) that most closely resemble the home as a storage environment for private things. In particular, we study whether it is possible, through technological design, to clearly separate digital storage spaces used privately from storage spaces used to share data with others. We sketch a theoretical architecture for such a ‘digital home’: the space that most closely resembles the physical home as the most personal storage environment for private files. The architecture guarantees that the data are stored only for private use and can never be shared with others unless the device used for storage is itself shared. We subsequently argue that the law should offer ‘home’ protection to data stored using this system, as an intermediate stepping-stone towards more comprehensive legal protection of cloud-stored data. Such protection is needed because, nowadays, it is not the home or the smartphone but the smartphone/cloud ecosystem that holds ‘the privacies of life’.
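The paper’s architecture is only summarised in this abstract, so the sketch below is not that design; it merely illustrates one familiar technical ingredient such a ‘digital home’ could build on: client-side encryption with a key that never leaves the device, so that the cloud copy is unreadable, and hence unshareable, without the device itself. The class and file names are invented for the example.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # third-party 'cryptography' package

class PrivateStore:
    """Device-bound store: the encryption key is created and kept on the device only."""

    def __init__(self, key_path: Path):
        if key_path.exists():
            key = key_path.read_bytes()
        else:
            key = Fernet.generate_key()
            key_path.write_bytes(key)  # the key stays local; it is never uploaded
        self._cipher = Fernet(key)

    def to_cloud(self, plaintext: bytes) -> bytes:
        """Return the ciphertext that may be synced to a cloud provider."""
        return self._cipher.encrypt(plaintext)

    def from_cloud(self, ciphertext: bytes) -> bytes:
        """Decrypt a cloud copy; only possible on the device holding the key."""
        return self._cipher.decrypt(ciphertext)

store = PrivateStore(Path("device.key"))
blob = store.to_cloud(b"private diary entry")
print(store.from_cloud(blob))
```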

Algorithmic Colonization of Africa

We live in a world where technological corporations hold unprecedented power and influence. Technological solutions to social, political, and economic challenges are rampant. In the Global South, technology developed with Western perspectives, values, and interests is imported with little regulation or critical scrutiny. This work examines how Western tech monopolies, with their desire to dominate, control and influence social, political, and cultural discourse, share common characteristics with traditional colonialism. However, while traditional colonialism was driven by political and governmental forces, algorithmic colonialism is driven by corporate agendas. Where the former relied on brute-force domination, colonialism in the age of AI takes the form of ‘state-of-the-art algorithms’ and ‘AI-driven solutions’ to social problems. Not only is Western-developed AI unfit for African problems; the West’s algorithmic invasion also impoverishes the development of local products while leaving the continent dependent on Western software and infrastructure. Drawing examples from various parts of the continent, this paper illustrates how the AI invasion of Africa echoes colonial-era exploitation. It concludes by outlining a vision of AI rooted in local community needs and interests.