Interview with Langdon Winner: Autonomous Technology – Then and Still Now

[Published in: Autonomie und Unheimlichkeit: Jahrbuch Technikphilosophie 2020]

Edited by Dr. Alexander Friedrich, Prof. Dr. Petra Gehring, Prof. Dr. Christoph Hubig, Dr. Andreas Kaminski and Prof. Dr. Alfred Nordmann, 6th volume, 2020.

[The interview was conducted by way of an e-mail exchange with Alfred Nordmann between May 5 and July 10, 2019.]

NORDMANN: In 1977 you published your first book Autonomous Technology.1

[1 Langdon Winner: Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Boston 1977.]

Forty years later, autonomous technologies have become a favorite subject for philosophers,

engineers, and cultural critics. Vehicles, including drones, serve as primary exemplars,

but the self-learning algorithms of AI follow closely behind. There is a kind of morbid

fascination with cars that »decide« the philosophically popular trolley problem.

Others worry about attributions of responsibility when accidents happen and

mistakes are made or when self-learning systems develop very peculiar training effects. In

your book, »autonomous technology« appears as a matter of concern in that it refers

to »all conceptions and observations to the effect that technology is somehow out of

control by human agency«.2 You draw on Ellul to formulate what is at the same time

a philosophical challenge and a profound anxiety: »There can be no human

autonomy in the face of technical autonomy«.3

In contrast, today’s discourse appears to take technical autonomy pretty much for granted and seeks only to

manage its impact and implications. At the same time, it limits the question of technical autonomy to a

few cutting-edge technologies and does not include, for example, Charles Perrow’s

»normal accidents« or the alienation of labor in a factory setting. What do you make

of this – would you diagnose a radical disconnect between your questions back then

and today's discussions?

* * * * * * * * * * * *

LANGDON: Today’s conversations about ›autonomous technologies‹ explore themes and issues

that are both similar to and yet quite different from those in Autonomous Technology.

My primary concern back then was to find ways to pose questions about problematic

features of technology in their various modes and manifestations as they affect

modern politics. Hence, I examined a range of topics that seemed significant:

technocracy as governance by experts; technological determinism as a way of shaping

social outcomes through the sheer force of technical change; and technological

politics as a collection of technology-related conditions that tend to transform and overwhelm conventional political structures and practices.

[2 Ibid., p. 15.]

[3 Ibid., p. 16.]

The background for these inquiries was the simple fact that (at the time) neither

the varieties of political science nor political theory had much to say about what was clearly a powerful presence in society and politics – a rapidly growing, many-sided, highly influential, dynamic technosphere. Most perspectives on technology

were fully framed by standard notions of ›progress,‹ expecting an inevitable flow of

improvements in living conditions. Why not just go with the flow? Thinkers who did

have interesting, contrary things to say about the matter were outsiders, among them the philosophers of technology and social critics of the mid-century – Lewis Mumford,

Jacques Ellul, Herbert Marcuse, Martin Heidegger, Rachel Carson, as well as a

collection of writers on popular culture who commented upon the personality-numbing saturation of life by consumerism and mass media – Vance Packard and Betty Friedan, for example. Also influential upon my thinking were works of science fiction

writing and film that often focused upon the loss of human autonomy to threatening

forces brought by science and technology. The underlying question in these stories

was usually: What if ...?

Writing about features of ›technology-out-of-control‹ involved a set of problems

that I hoped would arrive at a particular destination, one that emerges in the book’s

last chapter: a positive, forward-looking, practical, democratic understanding of the

moral and political possibilities that technologies – new and old – present for

choices in public life. I hoped to suggest choices in the public realm and perhaps

even new possibilities for citizenship – participation in technological design and deliberate choice about the configuration of limitations upon technological systems.

Such possibilities were not unthinkable at the time. The ›technology assessment‹

movement was very much involved with such prospects and even became a prominent

concern of the US Congress in the late 1960s and early 1970s.

In contrast, most of today’s discussions about emerging ›autonomous technologies‹

– self-driving cars, military drones, workplace automation, and the various

projects of so-called artificial intelligence – are predicated upon a much different project.

The attitude is to support and carefully monitor the fascinating technoscience

developments in the making. As the processes of Research & Development unfold

and reach culmination, a scholar may find opportunity to comment upon interesting properties in the workings of the various devices, systems, algorithms, and other novelties, identifying their ethnographic, philosophical, and ethical features.

But the basic understanding is one fully characteristic of twentieth-century techno-think: Innovate first. Ponder the implications later.

In that light there is a consistent disposition to encourage potentially world-changing

developments to unfold and to offer erudite, retrospective (but likely irrelevant)

commentaries as the fascinating prospects emerge. The possibility that an ethically


or politically autonomous human presence might intervene in time to make a

difference is seldom if ever on the agenda. The idea that one might announce a firm »no«

to any attractive, innovative pathway is simply out of the question. In fact, it seems a

matter of pride with today’s techno-cognoscenti that the heretofore privileged

position of human beings may finally be overshadowed, even overthrown by the sheer

dynamism of various avenues in technoscience. This is quite different from the

prospective, active, critical modes of study, reflection, judgment, and political action some of us envisioned decades ago and might be explored even now.

* * * * * * * * * * * *

NORDMANN: You refer to a fascination with the idea that human beings might lose their

privileged position to technology. This fascination plays a major role also in your book

when you discuss Jacques Ellul, Kurt Vonnegut, E.M. Forster, Karl Marx and others

who worry about human alienation as technology takes on a life of its own, when it

assumes features and functions of life. You discuss this under the heading of technological animism – which relates a premodern mindset to our most advanced civilization.

One might say that this in itself exposes the so-called animism as illusory (as

you show for Marx who, in the final analysis, always knows who is in charge). One

might also argue that the modern subject is profoundly unsettled and that – with all

our technology and rational control – we haven’t quite arrived in the modern world

and haven’t quite managed to assume our role as autonomous subjects. Which is it?

* * * * * * * * * * * *

LANGDON: Pre-modern conceptions of animism expressed the belief that souls were widely

distributed in the world, not only among humans but in other living creatures and perhaps even inanimate things. Even today there are believers in ›Panspiritism,‹ the view that the entire world is infused with spirit, filled with consciousness in various

manifestations.

As an occasional feature in representations of technology, animism appears as a

way of describing the experience of material things that seem to have taken on lifelike

qualities or to have appropriated spaces and functions that would normally be

attributed to human beings. Notions of that kind, of course, are standard themes in

science fiction writing and movies – the unsettling presence of artificial things that

seem to have taken on ›a life of their own.‹ From the rebellious robotic female in

Fritz Lang’s Metropolis to the runaway computer in The Forbin Project to the

beautiful, conniving, artificially intelligent woman in Ex Machina, images of technological

animism have long been a mainstay in popular culture. The possibility that

impressive technical devices can exhibit (or seem to exhibit) extraordinarily lifelike

characteristics is an enduring presence in modern thought. At present such possibilities

have become central, practical topics for research and development within the

algorithms of computer science as well as a wide range of projects in digital techno‐

logy and robotics. Works of that sort shed new light upon what is actually an ancient

theme.


It is true that surprises and troubles attributed to technologies that seem to have

become ›autonomous‹ can often be traced back to the persons and groups that are ›in

charge.‹ Marx describes the kinds of mechanical apparatus that fully claimed the

bodies and minds of factory workers in his day. As he explains such calamities in his

theory of Capital, it’s clear that the owners of the means of production bear full

responsibility for what happens. That’s a perfectly good explanation, as far as it goes.

While Jacques Ellul gives full credit to Marx for the depth and rigor of this insight,

he argues that Marx had not gone deeply enough into the varieties of subjectivity

and social formation involved in what Ellul terms ›la technique.‹ Crucially at stake

here, he argued, is the fascination with projects aimed at achieving demonstrable

improvement – more efficient, more productive, rigorously measurable outcomes in

whatever endeavor is at hand. At one point he refers to F.W. Taylor’s quest for the

›one best way‹ as a good, brief summary of mentalities and initiatives involved.

Thus, the kinds of subjects enmeshed in the arrangements of capitalist production

are also subjects deeply engaged with wide ranging projects in ›la technique.‹ The

›autonomy‹ of technique takes shape as people willingly set aside crucial commitments

that previously inspired their thinking, activity and institutional arrangements.

They embrace technical improvement as their central goal, life’s ultimate mission in

whatever domain of practice they pursue – industrial production, agriculture,

government administration, higher education, sports, sexual fulfillment, you name it.

In sum, Marx situates technology within an unfolding history of class struggle.

Ellul views much the same terrain as a story about the onset of a vast, insidious

cultural infection. In either version, what emerges is a highly unsettled way of life, one

that casts a shadow upon the prospects for what one might call the ›autonomous

subjects‹ of modernity.

How to escape the predicaments that Marx and Ellul describe in their different

ways? For me that is not merely an abstract, philosophical question. As a teacher of

budding scientific and technical professionals, I’m again and again struck by how

little sense of personal autonomy is part of today’s education, our modern ›Paideia.‹

Students hope to master the fundamentals of, say, one of the branches of engineering,

get a ›good job,‹ come up with some lucrative ›innovation‹ and live happily

ever after. Very often they simply lack any sense that they might reflect upon, talk

about, and seek to realize an independent, personal understanding of life’s possibilities.

Thus, the autonomy of technology often comes to the fore when ascertaining

people’s sense of basic priorities. But the intellectual and moral autonomy of today’s

students, employees and citizens? Not so much.

* * * * * * * * * * * *

NORDMANN: You reject, I take it, that technological animism and a re-enchantment of the

world issues from technological developments as such, but you attribute it rather to

a kind of feeble-mindedness or failure on the side of us technological critics. Accordingly, you go further than our Jahrbuch Technikphilosophie, which is dedicated this year to the topic »Autonomy and the Uncanny«. Ours is an attempt to move the

discussion of drones and autonomous vehicles beyond ethical quandaries and legal

attributions. Your book doesn’t stop there, however. While there is a chapter dedicated

to technological complexity, it is wedged between a critique of technocracy and a

call for epistemological luddism. Indeed, at the end of your book you thematize a

threat to human autonomy that arises from the simple fact that we have to live with

all our past choices in our humanly-built world: »even if one seriously wanted to

construct a different kind of technology appropriate to a different kind of life, one

would be at a loss to know how to proceed. There is no living body of knowledge, no

method of inquiry applicable to our present situation that tells us how to move any

differently from the way we already do«.4 Akin perhaps to Paul Feyerabend’s ›counter-induction‹ you recommend epistemological luddism as a heuristic. We can assume

a free relation to technology only by questioning the unquestionable and imagining

also the destruction of our taken-for-granted technological infrastructures –

which, however, puts us at risk of being excluded from the club of so-called ›reasonable people‹. In the age of participatory design, responsible development, ethics on

the laboratory floor, and the co-creation of science and society, is the call for

epistemological luddism obsolete or more important than ever?

* * * * * * * * * * * *

LANGDON: The overall setting for my impish suggestion of ›epistemological luddism‹ is

located within ambitious calls for a substantial, even sweeping restructuring of modern

technology-centered societies as an answer to critical evaluation of the political and

environmental ills that philosophical reflection and historical examination reveal. At

the time there were a good number of proposals for the reform and reinvention of

existing technological societies from the bottom up, including those of liberal social

critics, neo-Marxist thinkers and countercultural visionaries. I mention the proposals

of Paul Goodman, Herbert Marcuse, Murray Bookchin, and others who had offered

steps toward seemingly promising programs of thoroughgoing reconstruction. Even

the American arch-technocrat Glenn T. Seaborg had recently offered the reassuring

advice, »Technology is not a juggernaut; being a human construction it can be torn

down, augmented and modified at will.« All of this made perfectly good sense in the

realm of the imagination, but I wondered how realistic such visions were in the most

obvious, everyday sense.

Rather than sketch a utopia of my own, I laid out three or four general »useful

proposals.« I won’t summarize those ideas here. But self-critical of my own tendency

toward excess, I went on to observe that »these proposals have overtones of

utopianism and unreality, which make them less than compelling.« After some further

rumination I suggest that »One must take seriously the fact that there are already technologies occupying the available physical and social space and employing the available resources.«

[4 Ibid., p. 328.]

My suggestion, therefore, is to try taking some tiny, modest steps – the epistemological luddism experiment. »The idea is that in certain instances it may be useful to

dismantle or unplug a technological system in order to create the space and opportunity

for learning.« As the device or system is removed, even if only briefly, what

jumps forth as significant? What does such learning suggest as regards any large-scale or small changes in technology-related patterns of living?

I do not say it explicitly in the book, but the basic thought here was, »OK, big

shot. You’re proposing to map a thorough reconstruction of the technological society

in quest of a more favorable set of social, political and environmental patterns.

That’s excellent! But let’s start with a more modest test of concept. For a short period

of time – a week, a month or so – you and I will disconnect from a clearly crucial part of the overall techno-system and adapt our perspectives and activities to

this condition and see what problems and possibilities come into view.«

I go on to sketch some of the situations in which the experiment might be done in

a deliberate, controlled way or even ones in which such opportunities arise by accident.

My basic understanding is that ›we‹ – you and I and everyone in societies similar

to our own – are completely – even hopelessly – dependent upon a whole host of

technological devices and systems, such that doing without even one of them for a

short while is an extremely taxing prospect. What does that recognition suggest

about the grand visions of technology criticism and, if I may, the intricate insights

and suggestions commonly offered by philosophers of technology?

In fact, over the years I have asked my students to do the epistemological luddism

experiment in various university classes. I ask them to identify a technology upon

which they depend in their everyday comings and goings and to disconnect from it

for just one week. They should notice what happens, and write down their experiences

so we can discuss their findings. I also ask them please not to do anything that

would affect their overall safety and wellbeing.

Items of disconnection that students have chosen include: mechanical transportation,

clock time, prepared food, artificial lighting, synthetic fabrics, and other material

features of everyday student life. The results have been fairly uniform. Most students

fail to complete the experiment altogether and come to recognize their utter

dependence upon the devices they’ve chosen. This becomes a teachable moment in

our conversations. Since many of my undergraduates are engineers, one can ask

them about the conditions of intelligibility, control, adaptability, and even addiction that the devices they themselves are making will present to eventual end users.

In a larger perspective, I often take note of instances of technological breakdown

and the lessons that might be derived from them. This is fairly tricky business because the social patterns that emerge from somewhat similar cases are far from

uniform. The electrical blackout in New York City in 1965 was widely reported to have

evoked cooperative, generous responses from the populace, as people apparently felt

the need to offer aid and comfort to each other in a time of crisis. In contrast, the

1977 New York power outage resulted in widespread looting, violence and other

varieties of criminal behavior.

My sense is that an incessant series of epistemological luddism experiences will

likely characterize coming decades of climate emergency. As Earth’s biosphere and

modernity’s major technological systems enter periods of high stress and breakdown, which varieties of understanding, which philosophies, will offer guidance and

solace? Are we – you and I and world societies overall – any better equipped to

learn from such episodes today than in earlier times?

So far the signs are not especially promising. Although there is much excited

prattle about the wonders of ›disruption‹ – »Move fast and break things,« as they say

in Silicon Valley – the prevailing worldview is still deeply rooted in beliefs about a

stable, slowly unfolding, ultimately benevolent continuity. In my view, that’s an

existential condition to which humanity is no longer entitled.