Shawn H E Harmon*

 

Cite as: S H E Harmon, “Phase I Report of Implanted Smart Technologies Project”, (2011) 8:2 SCRIPTed 212

 


DOI: 10.2966/scrip.080211.212

 

© Shawn H E Harmon 2011.
This work is licensed under a Creative Commons Licence.

 

1. Introduction

Recent technical and technological developments in nanotechnologies, biotechnologies, information and communication technologies (ICTs), and artificial intelligence (AI) have prompted the need for collaborative work, not only among natural and physical scientists, but also with arts and humanities researchers. The Implanted Smart Technologies Project (IST Project) is a multi-pronged project intended to explore and advance academic thinking around the techno- and socio-legal aspects of these converging technologies, specifically implanted smart technologies, and their implications for concepts of normality. It is undertaken under the auspices of SCRIPT, the AHRC Centre for Intellectual Property & Technology Law,1 and ISSTI, the Institute for the Study of Science, Technology and Innovation,2 both at the University of Edinburgh. The Project Team comprises Shawn H.E. Harmon, Research Fellow; Wiebke Abel, PhD Candidate; Prof. Graeme Laurie, Director of Research, School of Law, University of Edinburgh; and Prof. Robin Williams, Director of ISSTI.

2. Project Objectives

Core objectives of the IST Project are to survey the technical state and trajectory of implantable smart technologies, and to interrogate their ethical, legal and social implications. We wish to explore concepts of ‘normalcy’, how they are captured and entrenched in law, and how new technologies such as these are forcing a re-evaluation of that concept and of related concepts such as ‘enhancement’. In doing so, we will investigate the role of the law (both proactive and reactive) in this unstable and convergent field. The meta-question at the heart of the IST Project is:

What is the future and efficacy of the law in the realm of convergent high technologies generally and implanted smart technologies more specifically?

Key themes include:

  • How might conjoined technologies (in this case, implanted smart technologies in their existing and envisioned forms) challenge human identity and human rights, and what do those concepts mean in different fora?

  • What are the most important relevant international and European biolaw instruments, which concepts and principles are brought into play, and how might they respond to anticipated concerns (ie: are they sufficient to ‘protect’ human rights and human identity, if the latter is deemed imperative)?

Generally, we are interested in theoretical questions about identity and normalcy, practical questions about the efficacy of the law in dynamic science settings, and methodological questions about how best to think about technologies and futures so as to create optimal regulation.

3. Report on Phase I

The first phase of the IST Project was a roundtable research retreat entitled “Implanted Smart Technologies: What Counts as ‘Normal’ in the 21st Century?”, which was held at the Maximilian Hotel in Prague, Czech Republic, on 9-10 June 2011. This retreat, hosted by the Edinburgh-based Project Team, was intended to serve as a platform where scholars from different disciplines might come together to exchange their ideas and identify shared interests and aims. This section of the report summarises the presentations given and discussions undertaken at the retreat, identifying core issues and themes.

3.1 Summary of the Presentations & Discussions

The first session of the retreat, “Science Trajectories & Convergences: Technologies Deeply Embedded?”, was intended to survey the state of the relevant sciences with an emphasis on implanted smart technologies (ISTs). The speakers were Brigitte Krenn, from the Austrian Research Institute for AI, Austria, who discussed “Who is Who”, and Simon Biggs, from the Edinburgh College of Art, UK, who discussed “Becoming Borg”. Their presentations and the discussions can be summarised as follows:

  • Krenn is a computational linguist whose field of research involves the development of artificial systems. She noted that modern expectations and needs, including those of an aging society, are placing new and greater demands on technology (and AI systems specifically). There is a move toward ‘companion technologies’, which are expected to perceive, act meaningfully and purposefully, learn and be flexible, and are therefore very different from industrial robots. She highlighted the importance of emotional modelling (ie: detecting, understanding, and generating affective cues in textual and dialogical exchanges, aimed at better understanding multimodal human/agent interactions) to these artificial systems; human reactions to them, and human engagement with and understanding of their responses, will be important to how acceptable these technologies might be as social actors. She also noted a rising awareness that this line of research requires collaboration with social scientists and lawyers to address the legal and ethical implications of these smart devices and interfaces. While her emphasis was on external rather than implanted technologies/systems, she noted ongoing work on brain interfaces which will permit the control of systems by neural signals.

  • Biggs is an artist who has explored identity, recognition of the other, and interaction with computer systems through the process of making art. He articulated how the self can be viewed as bound up with the brain, but noted that the self can expand (ie: a driver ‘feels’ the boundaries of her car and its proximity to objects). Thus, the expansion of self through implants, including AI or smart implants, is a ‘natural’ progression, or at least one with which we are equipped to cope. For centuries, language allowed us to go places and see things we otherwise would not; then Google made that easier; and now we are learning to interact with computer-generated figures and avatars, which will further that capability (or further enrich the experience). As such, Biggs wondered if we have perhaps always been cyborgs.

The second session, “Humanness & Normalcy: Ideas Deeply Embedded?”, was intended to explore notions of normality and how they might be transgressed or transformed by new and converging technologies, including ISTs. Speakers were Roger Strand, University of Bergen, Norway, who gave a talk entitled “Human Normality and Identity: Some Philosophical Remarks”, Marianne Boenink, University of Twente, Netherlands, who gave a talk entitled “Time Will Tell: From a Population-Based to an Individual Conception of Normalcy”, and Sarah Chan, University of Manchester, UK, who gave a talk entitled “Inside/Outside: Implantable Technologies, Cyber-Enhancement and Refiguring the Post-Human Body”. Their presentations and the discussions can be summarised as follows:

  • Strand, a philosopher, observed that some technologies which at first seem intuitively repugnant (eg: ultrasounds in Norway in the 1970s, due to sex selection concerns) are eventually regularised and come to be viewed as quite unproblematic. Some technologies are such that existing rights cannot be realised (eg: consent in the biobank setting currently, and potentially privacy from monitoring after a xenotransplantation, if and when xenotransplants become effective). In such situations, are we ‘constructing a tragedy’: a technologically-driven social situation in which human rights are necessarily undermined? For example, ISTs such as trackable implants in patients who may wander – which have an element of persistent monitoring and loss of privacy – have implications for ‘the human condition’ (ie: how human life is experienced and constructed, whether human rights are realised, what categories of people may be recognised). A brief discussion about the stability of the human condition followed, and questions arose as to whether stability in the human condition should even be cherished or pursued.

  • Boenink picked up the idea of ‘stability’. Drawing on Nordmann (2007) and Nordmann and Rip (2009), Boenink reiterated that overly speculative ethics (current ethical assessments of future technologies) risks squandering ethical resources, and that there is greater value in considering more mundane, plausible, and less controversial examples, such as in vivo monitoring devices with the ‘smart’ element outside the body (as exemplified by the Body Area Network and the nano-devices described by Fortina (2005)). She noted that it is important to recall that technologies face multiple challenges, including technical (miniaturisation, reliability, propriety of chosen biomarkers), social (acceptability of role and responsibility distributions), and cultural-moral (meaning of health, meaning and significance of autonomy, tolerances for control). A strong sense of these matters, which may require further empirical evidence from the humanities, will feed into technology trajectories and evolving concepts of ‘normalcy’, which may transform into the proposition ‘normal for me’ rather than ‘normal for patients’ or disease groups. The basis for intervention thus becomes unclear, with the result that we must question what kind of evidence is needed to justify intervention.

  • After highlighting a number of implantable technologies, including RFIDs and deep brain stimulators, and cybernetically controlled technologies such as next-generation prosthetic limbs/hands and brain-machine interfaces (which might enhance memory, language fluency and recognition), Chan questioned the ‘big deal’ made about implantability. She focused on bodily integrity and inside/outside arguments, querying whether concerns associated with internal interventions might be grounded in risk and permanence. Revisiting the idea of ‘normalcy’, Chan noted that laments about post-humanism appear to be grounded in a static view of ‘humanness’, but that we are already post-human, having made that shift when we moved from oral to written societies. Any idea of ‘normalcy’ that relies on ‘unique human experiences’ must be fluid, for concepts of human nature are malleable and changing. The discussion that followed questioned the propriety of the (necessary) role of ‘transgression’ (of humanity or normalcy) in reactions to the post-humans to which ever more complex and powerful ISTs might give rise. As reiterated in each of the presentations, we are already entangled with machines.

The third session, “Law and Regulation: Challenges for Justice?”, focussed on some of the challenges that ISTs pose for the law and governance. Speakers were Susan Brenner, University of Dayton School of Law, USA, who gave a talk entitled “Enhancing Normal: Speculations on the Dark Side of ISTs”, and Laura Klaming, University of Tilburg, Netherlands, who gave a talk entitled “Ethical and Legal Implications of Implantable Neurotechnologies”. Their presentations and the discussions can be summarised as follows:

  • Brenner began by challenging the claim that humans version 3.0 will be crime-free (ie: that IST-enhanced humans will eliminate crime, not just because it will be easy to track people, but because our enhanced intelligence will remove criminal motivation). Some crimes are rational and profit/property-driven, and others are passion crimes; intelligence will not eliminate these. Indeed, cognitive-based ISTs might enable crime. ISTs may become the targets of crime, IST-enhanced people may become the targets of crime, or they may target non-IST-enhanced people (ie: there will be new grounds for stigmatisation and discrimination). New harms may emerge: how should the law approach the hacking of a wifi-programmable pacemaker which results in death? New laws will likely not be needed, but sentencing practices might change dramatically in some situations (eg: IST-enhanced person as victimiser). It is important to recall that the law, especially the criminal law, is about controlling behaviour; there is no level playing field, people need protection, and that will not change.

  • Klaming described one particular implantable technology: implanted electrodes which provide deep brain stimulation (DBS) treatment for conditions such as Parkinson’s Disease. Her talk demonstrated that side-effects, which are observed in a small percentage of patients, are varied and numerous, including infection, haemorrhaging, and cognitive and psychological symptoms such as loss of memory and word fluency, anxiety, depression, and personality/identity changes, some experienced immediately and others delayed, but many of which can be remedied by adjusting the stimulation parameters. Indeed, proponents are discovering new ‘treatments’ through the side-effects reported in clinical usage (eg: DBS was found to trigger repressed memories and is now theorised as a treatment for Alzheimer’s Disease). Given this, the use of ISTs in clinical settings throws up clear safety, autonomy, and privacy issues. With respect to autonomy, a case was reported in which a patient receiving DBS became hypomanic, experiencing personality changes and improper behaviour (sexual, financial, etc.), and questions arose as to whether the person was mentally competent while under treatment. In that case, treatment had to be discontinued so as to return the patient to competence, at which point the person, who suffered from severe motor impairments when untreated, was asked if he wished treatment to be resumed, on the understanding that, if it were, he would have to be institutionalised due to the return of his incompetence. He chose treatment. The case highlighted that current health law and regulation do not take the impact of new technologies such as DBS into account. It was discussed that legal concepts like automatism are implicated and may have to be revisited in situations of liability, and that the templates we choose to make sense of novel developments condition our further understanding, and therefore our further development of innovations and our legal and social responses to them.

The final presentation was by Arie Rip, University of Twente, Netherlands, whose talk was entitled “How to Integrate Future and Speculative Possibilities with Ongoing Dynamics of Development”. Based on his involvement in nanotech development, Rip explored the dual dynamics of promising technologies and the desire to reduce the risks of failed innovation. He pointed to the VeriChip, an implantable RFID which made people ‘readable’ in many ways; all sorts of social uses were envisioned, but these uses did not take off, and the company has shifted its focus to diagnostics. His presentation can be summarised as follows:

  • Rip explained the role of ‘enactors’ (ie: people trying to get the new technology to function, including Ministries of Science, Industry, or Economic Affairs, whose intent is to get the technology to work, generate and deploy IP, and enrol early adopters before worrying about embedding the technology in society). Because of these enactors, and the innovation and research funding systems that we rely on, there is a ‘promise-requirement cycle’ whereby enactors articulate technological possibilities, signal opportunities, and promise possible worlds which, if accepted, result in the provision of development resources and additional requirements. The early stages of this cycle prompt speculation and concerns about future worlds, which trigger further promises and concerns; agenda-building thus occurs in a very diffuse way. Some concepts and slogans are generated which themselves generate ideas and achieve a life of their own (eg: the hydrogen economy, the info superhighway, etc.).

  • When considering future technologies, and performing (speculative) ethical evaluations of them, important practical questions arise, such as: What possibilities should one focus on? What fictions are embedded in the promises being made? It is important to be aware of the things that must happen to transform a technology from a ‘possible’ to a ‘probable’ technology, the latter being the only one on which we should expend limited ethical resources. For example, the Human++ vision (ie: the vision of a body area network for health) requires, as a prerequisite, advances in energy consumption and batteries, or it will fail to lift off. It is useful to create scenarios of possible futures with input from the actors who are formulating innovation strategies; those possible futures should then help inform the same actors. Depending on who takes the lead on anticipating and encouraging uptake, different scenarios unfold. A current problem is that there is often a gap between enactors and users/regulators, a gap that needs to be bridged in early ethical analyses (or foresighting exercises) and then bridged again in the governance of these technologies. Different actors play a range of games during the development and embedding stages, including waiting games, and these games are played in the shadow of authority (ie: against a background of law and regulation).

  • Questions remain: What is the role of regulation and the law in this setting? Are we helpless with respect to development dynamics? How should we intervene in the evolution of technologies? Soft law, or ‘tentative governance’, may be important, but if it comes from outside the main actors, it will not help very much. Regulators are important, but they are often bound by requirements of expertise, capability, and impartiality (so they cannot lead the innovation process). A great deal of ‘de facto’ governance occurs through a range of actors. Regulators, just like ethics analysts, must determine which part of the ‘hype cycle’ they will focus on when conducting their work (the early hyperbolic promises or the plateau of productivity).

Graeme Laurie, University of Edinburgh, led a more interactive session on Regulatory Foresighting in which he posed the following questions as a means to shape the discussion:

  1. Is regulatory foresight futile?

  2. Can we learn anything from the technology foresighting field?

  3. What can we learn from old ‘new technologies’?

  4. What elements are necessary for a robust method for regulatory foresighting?

The following points of guidance emerged from the discussion:

  • It is not clear what the regulation of science and technology is about (ie: what it is intended to do). Is it about managing practices and artefacts, controlling risks, or ‘making politics’ (eg: the EU claims to be the ‘Innovation Union’ and fashions regulation as such)? The law tends toward risk-based regulation. However, risk-based regulation is not the only way we can respond; facilitation is just as important as restraint. Further, rules, which are the most common legal tool, are static and therefore may not be durable, and so may not be the optimal method of governance.

  • Similarly, it is not clear what foresighting around science and technology is about. Again, is it about managing practices and artefacts, controlling risks, or ‘making politics’? Both technological and ethical foresight exercises are all too frequently blind to complexity, unintended outcomes, and co-production, and therefore often fail to deliver suitable governance outcomes. Anticipatory governance and speculative ethics are not necessarily helpful.

  • The idea of tentative governance was raised (ie: the use of soft law, responsible innovation, and the delegation of key decisions to non-regulatory actors while still retaining some oversight). This reiterated previous comments to the effect that reflexive structures (regulatory architecture) may be more suitable than rules. Within this reflexive setting (or tentative governance model), ‘exploratory ethics’ and ‘regulatory foresighting’ could be useful as a means of building futures.

  • Exploratory ethics (which might be techno-moral scenario based) should link up with, and be part of, a wider and inclusive discourse which sensitises actors to issues. Inclusiveness is important because different actors have different roles: scientists and engineers determine (and push) technical potentialities and generate the final vision of a technology (by actually producing it); sociologists are sensitive to the relational consequences of technologies; and ethicists and lawyers are concerned with exemplary cases and the development of enduring principles. All need a seat at the table.

  • With respect to regulatory foresighting, policy options, including existing and posited regulatory and legislative instruments, could be ‘tested’ in different fact scenarios. Carefully crafted scenarios could be used to evaluate how robust rules, instruments, institutions, etc., may be, and therefore help to shape the direction in which we wish to move regulation. In such cases, reference to clearly articulated values, and an appreciation of the broader economic context, will be important. Additionally, it will be important to recognise, throughout, the limits of the law and its position within the broader governance framework.

Ultimately, it was agreed that there is value in preparing ourselves for surprises and change, and that foresighting, including regulatory foresighting, can help do this while simultaneously building capacities (regardless of the particular issues and content discussed). It was concluded that, in dynamic technology settings, regulatory foresighting might be most appropriate (or most likely to have efficacy) as part of a ‘hedge-and-flex, steer-as-we-go’ approach to governance, whereby stakeholders visit and revisit issues and make incremental adjustments to instruments and institutions.

3.2 Observations & Ideas

Some key observations and ideas that were reached over the course of the retreat were as follows:

  • New technologies will have multiple uses, some planned, some not, and that is true of ISTs. Additionally, technological platforms are developing faster than the knowledge which will allow us to deploy them meaningfully. As such, the commonly noted regulatory lag is not the only ‘lag issue’: knowledge about the genuine value and capability of technologies also follows the development of the technologies themselves, so their very use, including their use in medicine, becomes experimental.

  • Though the social embedding of new technologies, including ISTs, is a gradual and diffuse process, there is value in trying to shape/influence that process. In doing so, we must have reference to values, and we must consider questions such as, ‘Who am I?’, ‘Who/what is other?’, and ‘What is our tolerance for change?’, and we must recognise that the ‘self’ can accommodate a lot of change.

  • The people conducting ethical and regulatory assessments of technologies will not be the same people who eventually take up the technologies because, once technologies are actually realised and rolled out, our sensitivities and parameters will have changed from what they are now (ie: individually, we are not now what we will be years down the road). Thus, it is important to understand that, in many ways, exploratory ethical analyses and regulatory foresighting are often more about acclimatising ourselves to new situations, preparing us for change, and normalising emergent ideas than they are about actually shaping trajectories.

  • Despite the many changes that ISTs might prompt within our bodies, in our bodily image, to our identity, and to the manner and scope of personal interactions, the law as it exists is, in many ways, well equipped to cope with those changes without significant alteration (ie: new laws and new crimes will not be necessary, but different considerations in assessing situations that give rise to legal disputes may be necessary).

  • As ISTs advance, data security, including liability for breaches and technical errors, will be an increasingly important issue. Additionally, free will is a foundation of the law, and we must be mindful of how ISTs might influence or erode it.

4. Conclusions and Reflections

While none of the session panels confined themselves to the primary remit, the Project Team was pleased with the research retreat; we view it as an excellent kick-off for the IST Project and an auspicious beginning for an active, multidisciplinary network. Having said that, and as observed by some participants, the Project Team notes the following:

  • There was no preliminary discussion about the meaning of ‘implanted’ or ‘smart’, which might have gone some way in focusing some of the presentations and discussions more squarely on ISTs.

  • There was very little reflection on, or discussion about, the nature of ‘law’ (and regulation), and what this nature implies for converging technologies and ISTs.

  • There was no true drawing together of the presentations with a view to considering how they reflect(ed) on the questions posed at the outset (as identified above and in the retreat invitation).

While the first two matters have been considered within a publication being developed by the Project Team, it was decided that such definitional issues would not form an explicit part of the retreat’s programme; we felt it best to leave this first event open and exploratory, allowing participants to move beyond these important definitional questions if they felt it appropriate. This of course made drawing together the diverse presentations more difficult (and indeed very little time was given in the programme to do so); it was made doubly difficult by the fact that most participants did not engage explicitly with the original guiding themes and questions. Given the breadth and richness of the material presented, however, we do not consider this a shortcoming.

Ultimately, the following common themes or propositions appeared to run through a good many of the presentations and discussions:

  1. Technologies take longer to develop than we typically anticipate, so the rhetoric of the ‘new’ can be unhelpful, even dangerous, to the innovation undertaking.

  2. Things become less transgressive than we originally anticipate (‘radicalness’ is lost over time, such that it has dissipated by the time technologies become usefully available), so we should be cautious about imposing hard judgments on new or anticipated developments.

  3. All assessments are partial and incomplete, so the analytical processes and the conclusions reached ought to display humility, and we are better off paying attention to the methods for making decisions and the processes for shaping futures rather than to the actual substance of those futures.

It was generally agreed that these propositions or ‘truths’ must be remembered with respect to any efforts aimed at regulatory foresighting.

Given the discussions at the retreat, a number of governance questions appear to be open (ie: ripe for further and more directed consideration), and well within the capabilities of the network to advance. With respect to the actual technologies (ie: the ISTs) being co-produced:

  1. Who is co-producing these technologies?

  2. What are the specific and anticipated technical effects or capabilities of these ISTs?

  3. How will these technical effects be realised?

  4. To what extent should governance frameworks shape the process of early development (ie: how should governance frameworks approach novelty, uncertainty, and proliferation/containment)?

  5. What elements of that framework might do so (eg: law, codes of conduct, etc.)?

With respect to the new functionalities that these technologies (ISTs) are creating for individuals:

  1. In what ways are these technologies challenging claims about normalcy/identity? (How are definitions produced, broadened, and broken down? What is people’s bodily experience and how might devices change that? How are conditions and views changed if devices have intention inscribed into them, free will being a cornerstone of responsibility?)

  2. What is the desirability of the new functionalities being sought?

  3. Against what measure might judgments about new functionalities be made?

  4. What should governance frameworks actually do about shaping those functionalities (ie: deploying values)?

  5. What is the scope of law vis-à-vis new technologies and functionalities?

Some of these questions can be considered within regulatory foresighting exercises, while others demand an empirical approach. In each case, they would benefit from multidisciplinary contemplation within the network, though their consideration must be tied to agreed technology/IST-specific case studies if the exercise is to have any value.


* Research Fellow, INNOGEN, ESRC Centre for Social and Economic Research on Innovation in Genomics, University of Edinburgh, and SCRIPT, AHRC Centre for Research on Intellectual Property and Technology Law, University of Edinburgh.

1 For more on SCRIPT, see “Arts and Humanities Research Council: SCRIPT” available at http://www2.law.ed.ac.uk/ahrc/ (accessed 01 August 2011).

2 For more on ISSTI, see “The Institute for the Study of Science Technology and Innovation: ISSTI” available at http://www.issti.ed.ac.uk/ (accessed 01 August 2011).
