Saturday, October 21, 2017

Book summary: Radical Technologies

Radical Technologies: The Design of Everyday Life, by Adam Greenfield.

I first met Adam Greenfield when he accepted an invitation to deliver a
guest talk at a computer systems conference I co-organized in 2009.  His
talk on what would later become known as "smart cities" was ahead of its
time and (in my mind) firmly placed him as a modern urbanist, well
within the tradition of Jane Jacobs but with a deep technology
sensibility, as his later book "Against the Smart City" revealed.  In
his latest book he emerges as a true humanist, again with a deep
understanding of the role of technology.  The questions he poses to the
reader here go well beyond urbanism, to an existential examination of
the friction between what we think we are here for and the precipitous
acceleration towards a 100% technology-mediated lifestyle.

The basic message of the book is that mediation by extremely complex
technology stacks has (at least) four pernicious effects.  It erases the
"wetware" versions of quotidian activities such as hailing a cab or
clustering around a TV, which, though mundane, build social capital.  It
further divides haves from have-nots.  It litters the socio-technical
landscape with technological ingredients (in the form of code libraries,
e.g.) whose functions may be benign or even banal when they first
appear, but can rapidly and almost invisibly be put to use to subvert
our individual or societal goals, and indeed to move those goalposts.

And it eliminates the assumption of an underlying shared reality, in a
dark, Gibsonian-dystopia sort of way. You and I see different features
on Google Maps, receive different pricing and suggestions from Amazon,
and are shown different news headlines; although we may occupy the same
space at the same time, we each simultaneously inhabit a different
"somewhere else".  Yet we generally don't know whose values or reasons
underlie the differences between the choices presented to you and those
presented to me.

Socioeconomically, this means (for example) that Google Home defaults to
using OpenTable for making restaurant reservations, which diverts money
from the restaurant to the service yet appears frictionless to the
consumer, and that Google Maps presents Uber as a frictionless
transportation option alongside driving or transit, to the exclusion of
other choices.  In each case, attention, culture, and dollars are subtly
steered in specific directions, for ends usually opaque to the very
users these services claim to serve.

Politically, one could not hand an authoritarian government a better
tool to divide and control its subjects.

In short, we have invited companies, standards bodies, and potentially
malicious hackers to intervene in the "innermost precincts of our
lives", perilous precisely because those activities are so banal we're
not prone to worrying about who is observing or intermediating them.
Indeed the "smart cities" and "Internet of things" credo seems to be
that there is "one and only one universal and transcendently correct
solution to each identified individual or collective human need; that
this solution can be arrived at algorithmically, via the operations of a
technical system furnished with the proper inputs; and that this
solution is something which can be encoded in public policy, again
without distortion."  Yet data is hardly without biases, starting with
the decision of what data to collect and how to taxonomize it, and even
in the best-intentioned cases, can be misused after the fact, as
occurred when occupying German forces "weaponized" Dutch identity-card
data to hunt down those of "undesirable" ethnicities and races (and as
the Trump administration aims to do with DACA registrations).

Rapidly-adopted and soon-to-be-ubiquitous technologies seem to fall into
two categories: those that are ostensibly well-intentioned but whose use
in practice falls ludicrously short of their original aims, and those
that are banal but potentially dangerous if "weaponized" by immoral
actors (with which history is replete).  And so digital fabrication,
once conceived as a way to end scarcity, becomes a narrow channel for
people to obtain things the market cannot provide, because they are
either bespoke or illegal.  Cryptocurrencies, or more specifically
"smart contracts" and their derivative, Decentralized Autonomous
Organizations (essentially virtual corporations run entirely by
algorithm), obscure rather than clarify their networks of ownership and
power, and exist in a vacuum oblivious to human foibles.  Robots are
being developed apace in Japan not to assist humans, but to replace them
in such human-centric roles as care assistants for the aged.  Machine
learning algorithms that could help predict where and by whom crimes
might be committed are instead being deployed in China to encumber
citizens with a "karma points" system that will determine access to
virtually all social goods and services--eerily similar to the
fictional one in "Nosedive", Season 3 Episode 1 of "Black Mirror".  In
all, Greenfield asks, did the creators of these technologies really
think through the risks associated with developing and deploying them?
And if so, did they really conclude that a future embodying those risks
was one worth pursuing?

The lament of the book is that it doesn't have to be this way.
"Sensitive technical deployments" of technology are more than possible,
such as an app that uses facial recognition and Internet search to
gently remind those of us with bad memories of a colleague's name at a
social function, smoothing out social friction rather than creating
social isolation.  Yet the patterns of smartphone use (to name just the
most obvious technological manifestation of Greenfield's concerns) are
just the opposite: receiving the notification of a message or a call
tends to cause an immediate social disruption, and the concept of shared
public life suffers as a result.  (It is in these lines of argument that
Greenfield's intellectual heritage as an urbanist comes through most
clearly.)  And too often when technologists attempt to deploy technology
to serve rather than supplant social interaction, it has the effect of
using technology to "paper over" social inequities and friction rather
than attempting to eliminate them.

Greenfield wraps up with a warning and a call to action.  The warning is
that we should evaluate a technology not on the basis of what it was
intended to do, however noble, but only on the basis of what it is
observed to do in practice, and how rapidly it is rechanneled to
entrench existing power structures to the detriment of you and me.  (Or
in the words of cyberneticist Stafford Beer, "[the] purpose of a system
is what it does.")  The call to action presents four visions of
possible technology-mediated futures, the extremes of which are not too
dissimilar from those sketched in Marshall Brain's unrelated novella
"Manna", and exhorts the reader: "...people with left
politics of any stripe absolutely cannot allow their eyes to glaze over
when the topic of conversation turns to technology, or in any way cede
this terrain to its existing inhabitants, for to do so is to surrender
the commanding heights of the contemporary situation."

Although once in a while the author's voice crosses over into the
overtly polemical, the book as a whole is an informed tour de force that
should be required reading not only for anyone working at the
technological frontier, but for anyone who wants to understand the
opportunities we are potentially leaving on the table by allowing these
technologies to infiltrate social life untrammeled.

And for an excellent right-brain companion to the book, watch the British TV
series "Black Mirror".
