By Lucas Miotto Lopes and Jiahong Chen

The future of real-time appeal

Knowing when to say or do something is often just as important as knowing what to say or do. The right advice at the wrong time is not really the right advice. While timing one’s words and actions can be a sign of wisdom, it can also be an indication of a much less flattering character trait. The TV series Modern Family provides a good illustration of the latter. In one episode, Claire Dunphy’s relatives, each with a piece of bad news, compete for her attention in the narrow window after her spa day, when she is “at her most relaxed and understanding”.[1] Their behaviour looks strategic, sneaky, manipulative. Claire shares this impression: after unmasking their intentions, she describes feeling manipulated and used, and reproaches her family members for treating her this way.

This sort of timed influence has its online, and more automated, counterpart. With an exploding amount of data collected about our everyday activities both online and offline, automated systems are increasingly capable of predicting the optimal moment to interact with and influence us. We call timed influences based on such technological advancements “real-time profiling”. We use the word “profiling” because such systems typically analyse individuals in real time, building an electronic profile that best represents the features of the targeted individual at that particular moment.

It might sound unrealistic, but targeting individual users based on what is going on with and around them at a specific point in time is no longer a fiction. Some gambling apps, for example, have been found capable of targeting users when they appear to be attending a sports event.[2] But real-time profiling goes beyond advertising and includes a wider range of human-computer interactions, from the amount you pay for a service[3] to when you are asked to rate an app.[4] The growing number of data points and the prevalence of sophisticated algorithmic systems will only make these practices easier to implement.

Recently, real-time profiling has caught the attention of some digital ethicists. Like Claire Dunphy, they too think that there is something deeply problematic about this kind of timed influence. Pinning down exactly why it is wrong and what makes it wrong, however, can be a challenge.

It feels manipulative, but is it?

Real-time profiling may look intuitively problematic, and one plausible reason is that it can be classified as an instance of (wrongful) manipulation. Consider a fictional, but far from far-fetched, example of real-time profiling:

(MoodX) A social media company, MoodX, develops an algorithm that predicts its users’ current mood with high accuracy. With the help of the algorithm, MoodX advertises products tailored to users’ current mood. Sales of advertised products skyrocket as a result.
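To make the thought experiment concrete, here is a deliberately toy Python sketch of how such a pipeline might be wired together. Everything in it is invented for illustration — the function names (predict_mood, choose_ad), the signal fields, the thresholds, and the ad catalogue describe no real system, and a genuine mood classifier would of course be a trained model rather than two hand-written rules:

```python
# Purely illustrative sketch of the hypothetical MoodX pipeline.
# All names, signals, and thresholds are made up for this example.

AD_CATALOGUE = {
    "stressed": "spa-retreat voucher",
    "cheerful": "concert tickets",
    "bored": "mobile game",
}

def predict_mood(signals: dict) -> str:
    """Toy stand-in for a real-time mood classifier.

    A real system would use far richer behavioural data and a trained
    model; here we simply threshold two invented engagement signals.
    """
    if signals.get("late_night_scrolling_minutes", 0) > 60:
        return "stressed"
    if signals.get("likes_per_minute", 0) > 2:
        return "cheerful"
    return "bored"

def choose_ad(signals: dict) -> str:
    """Serve the ad tailored to the user's predicted current mood."""
    return AD_CATALOGUE[predict_mood(signals)]

print(choose_ad({"late_night_scrolling_minutes": 90}))  # spa-retreat voucher
```

The point of the sketch is how little machinery the influence requires: once a mood estimate exists, timing the advertisement to it is a single lookup.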

There are a few distinct aspects of MoodX that might make it sound manipulative. First, MoodX clearly attempts to influence its users’ behaviour: the whole point of figuring out users’ real-time mood is to make it more likely that they will buy the advertised products. Second, MoodX’s influence stems from its own unilateral plan to, say, maximise profit. Unlike more legitimate ways of engaging and influencing individuals, such as informing them, MoodX’s act seems primarily motivated by its intention to profit from its users. Finally, MoodX also seems to take advantage of users’ affected deliberation. When targets are in a particular mood or physical, mental, or relational state, their capacity for deliberation may differ from other times, making them potentially more susceptible to certain ideas and influences.

Although the definition of manipulation is still very much a debated topic, most approaches include slight variations of the three elements outlined above. This is probably why examples of real-time profiling come across as manipulative to many. Simply stating that cases of real-time profiling can be seen as examples of manipulation, however, doesn’t tell us much; at least not much about what makes them morally wrong or suspicious. For that we need a more elaborate and specific story.

Here we want to say a few words about what we think must be part of this story. Unlike some extant accounts of manipulation, we do not identify the wrongs of real-time profiling (and similar manipulative phenomena) with violations of autonomy, deception, harm, or even with a decrease in the target’s deliberative capacity. Rather, we attribute the wrongness of real-time profiling (and similar manipulative practices) to two elements: (i) what we call “psychological hijacking”, and (ii) the fact that it works as a gateway to other wrongs.

By “psychological hijacking”, we mean a unilateral plan to make some of the target’s psychological or quasi-psychological states subservient to the hijacker’s intention that the plan succeeds. The notion of subservience plays a key part in understanding the wrongness of real-time profiling, because it brings to light the hierarchical and purely instrumental relationship that arises from the profiler’s attempt to influence the target. After all, by interfering with the target, real-time profilers place themselves in a position where the target’s own goals, intentions, or desires work for the benefit of the profiler’s unilateral agenda. Even when the target gets some benefit out of the influence, we still see a hierarchical and instrumental relationship that at least requires some sort of justification.

The other element which makes real-time profiling morally problematic is that it serves as a gateway to further wrongs. First, the conditions that enabled an instance of real-time profiling (for example, the collection of personal information) may reveal vulnerabilities of the target which can then be exploited by the profiler or someone else. Second, and even worse, real-time profiling can not only expose pre-existing vulnerabilities but also create, or facilitate the creation of, new vulnerabilities, thus exposing targets to new risks. That is because the very same act, mechanism, and information used to, for example, get someone to buy a product online on the basis of their mood can be used to get someone to adopt false beliefs, to foster hatred towards certain members of the community, or even to lead individuals to engage in self-harming behaviour.[5]

It is thus the combination of psychological hijacking and gateway wrongs that renders typical cases of real-time profiling morally tainted and in need of justification. Now, once we understand what makes real-time profiling wrong, we may rightly ask what we can do about it. Can we, for example, pre-empt these practices by applying existing laws or regulations if regulatory interventions are deemed necessary?

Can we do anything about it (legally)?

An obvious place to look would be consumer protection law. Its relevance is clear: it regulates commercial practices that distort consumers’ economic behaviour, which seems to describe at least many commercial applications of real-time profiling techniques. Yet consumer protection law also tends to centre on the concept of the “average consumer”, and it is not clear how that concept would play out legally in the context of highly personalised targeting practices, which are characteristic of real-time profiling.[6]

We can also consider preventing real-time profiling by stopping the collection of personal data that enables it. Data protection law prohibits “unfair” uses of personal data, and the notions of psychological hijacking and gateway wrongs may be helpful in articulating exactly why real-time profiling should be deemed unfair. Significant legal uncertainties will remain, however, due to the notoriously vague concept of fairness in data protection law.

To sum up, while consumer protection and data protection law may apply in some cases of real-time profiling, further legislation or guidance may be needed to address these practices specifically. In this regard, the ongoing efforts in regulating data-driven technologies, such as the EU’s proposed Digital Services Act (DSA)[7] and Artificial Intelligence Act (AIA),[8] may mark a valuable opportunity for regulatory intervention. The current proposals, however, may still fall short of capturing real-time profiling squarely. Notably, while the draft DSA imposes transparency requirements (Article 24), simply making information available doesn’t change the manipulative nature of these practices. The AIA, on the other hand, would ban manipulative AI systems (Article 5), but a large part of real-time profiling practices may fall outside its definition of manipulation, which requires “physical or psychological harm” caused by exploiting vulnerabilities relating to “age, physical or mental disability”. It remains to be seen how these two proposals will evolve in later stages of the legislative process, but what’s clear is that further research is needed to pin down the most effective regulatory approach.

This blogpost is based on a work-in-progress by the authors, who can be contacted for the latest version of the draft paper.

Authors:

Lucas Miotto Lopes, Maastricht University. Email: lucas.miottolopes@maastrichtuniversity.nl Twitter: @miottoluc

Jiahong Chen, University of Sheffield. Email: jiahong.chen@sheffield.ac.uk Twitter: @jiahong_chen


[1] IMDb, ‘”Modern Family” Mother! (TV Episode 2018)’ (2018) <https://www.imdb.com/title/tt7822370/> accessed 27 June 2021.

[2] Olivia Rudgard, ‘Gambling firms could use GPS to tempt “vulnerable” customers’ The Telegraph (25 June 2018) <https://www.telegraph.co.uk/news/2018/06/25/gambling-firms-could-use-gps-tempt-vulnerable-customers/> accessed 4 January 2021.

[3] Amit Chowdhry, ‘Uber: Users Are More Likely To Pay Surge Pricing If Their Phone Battery Is Low’ Forbes (25 May 2016) <https://www.forbes.com/sites/amitchowdhry/2016/05/25/uber-low-battery/?sh=3e0df88a74b3> accessed 4 January 2021.

[4] Patrick McGee, ‘Apple: how app developers manipulate your mood to boost ranking’ Financial Times (7 September) <https://www.ft.com/content/217290b2-6ae5-47f5-b1ac-89c6ccebab41> accessed 27 June 2021.

[5] Shaun B. Spencer, ‘The Problem of Online Manipulation’ (2020)(3) University of Illinois Law Review 959.

[6] Johann Laux, Sandra Wachter and Brent Mittelstadt, ‘Neutralizing online behavioural advertising: Algorithmic targeting with market power as an unfair commercial practice’ (2021) 58(3) Common Market Law Review 719.

[7] Commission, ‘Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC’ (2020) COM/2020/825 final.

[8] Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ (2021) COM(2021) 206 final.


Timed influence: The future of Modern (Family) life and the law
