Friday, December 15, 2017

Holiday reading 2017

Every time I teach my software engineering class, I try to offer the students some more general life perspective in addition to just straight-ahead software skills. One way I do this is to recommend books that have been particularly life-changing or behavior-changing, or that I've found to be profoundly important, for school, work, life, or all three. For those possibly interested in reading any of them, here is the December 2017 list, with Amazon links to each. As I mentioned, my reviews and recommendations are strongly opinionated, but if you expose yourself to lots of different and contrasting opinions, you will maximize the depth of your own learning and understanding. Read and enjoy. After all, as Mark Twain said, “The man who does not read good books has no advantage over the man who cannot read them.”
Why We Sleep: Unlocking the Power of Sleep and Dreams, by UCB Prof. Matthew Walker, one of the world's leading sleep scientists. Getting the right amount and right type of sleep can improve your learning and retention, your happiness, your health and longevity, and your overall performance. Dr. Walker explains the science behind these startling findings, and the do's and don'ts of getting good sleep. I found this to be a life-changing, behavior-changing book.
Representative extended quote: “[U]nable to maintain focus and attention, deficient learning, behaviorally difficult, with mental health instability…these symptoms are nearly identical to those caused by a lack of sleep. … [T]here are people sitting in prison cells [for] selling amphetamines to minors on the street, [yet] pharmaceutical companies broadcast prime-time commercials highlighting ADHD and promoting the sale of amphetamine-based drugs (Adderall, Ritalin)…We estimate that more than 50 percent of all children with an ADHD diagnosis actually have a sleep disorder.”

Excellence Without a Soul: Does Liberal Education Have a Future? by Harry Lewis, Ph.D., longtime Harvard Dean of Students. Lewis reflects on what college is supposed to be for students, and how professors, administrators, and students themselves are thwarting these very goals and impoverishing the student experience. His framework accommodates such wide-ranging topics as cheating, sexual harassment on campus, grade inflation, and more. If you ever expect to work with college students in any capacity, you can't afford not to read this. [My review on Amazon]
Representative extended quote: “A [student with a disability] who turned in a plagiarized paper … [argued that his] typist must have typed up his notes rather than his actual paper, and he turned in what the typist gave him without checking it. His family assured us that they would take the College to court … if he were found guilty of plagiarism. It was not his fault that the paper […] was the work of others. … His family … may have taught him how to make the system work for him, but did they teach him anything about character? [T]he College is expected to coddle students when they should be learning about life by trial and error.”

The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail,  by Clayton Christensen, Harvard Business School. If you insist on using the word "disrupt", you should understand what it means. Disruption is profound, unexpected, and has happened in a wide range of major industries outside technology, long before bloggers and pundits misappropriated the term. Reading this won't prevent disruptive tsunamis but it may help you see them coming.

Radical Technologies: The Design of Everyday Life by Adam Greenfield. Have you thought deeply about the technology you are creating and the social and economic structures in which it's embedded? No? After reading this, you won't be able to not think about it. If you work at the cutting edge of CS and you don't read this, you leave yourself morally liable. [My review on Amazon]
Representative extended quote: “Watch what happens when a pedestrian first becomes conscious of receiving a call or a text message … what does our immersion in the interface do to our sense of being in public, that state of being copresent with and available to others that teaches us how to live together? … [T]here is a very real risk that those who are able to do so will prefer retreat behind a wall of mediation to the difficult work of being fully present in public… The internet of things in all of its manifestations so often seems like an attempt to paper over the voids between us, or slap a quick technical patch on all the places where capital has left us unable to care for one another.”

The Lessons of History by Will and Ariel Durant. This team of historians wrote the definitive ~20-volume "Story of Civilization" in the 60s, then stepped back and wrote this condensed 100-page "design patterns of history" book. This is like getting a preview of everything that has happened in the 20th century or will likely happen in the 21st, based on what has happened in the past.  As Santayana said, “Those who do not know history are condemned to repeat it.”  Don’t be that guy.
Representative quote: “If our economy of freedom fails to distribute wealth as ably as it has created it, the road to dictatorship will be open to any man who can persuasively promise security for all; and a martial government, under whatever charming phrases, will engulf the democratic world.”

Program or Be Programmed by Douglas Rushkoff. Bing! You got a text. Bing! Someone liked a Facebook post you made. Bing! Someone you're following just tweeted. Bing! Bing! Bing! Ever wonder about the effects of being in "constant standby" and interrupt-driven on your work habits, your ability to concentrate over extended time intervals and absorb new material, your social capital? Neither Rushkoff nor I am arguing for a technology-free lifestyle, but your actions and choices should be informed. This book can help.
Representative extended quote: “[W]e sacrifice the thoughtfulness and deliberateness our digital media once offered for the false goal of immediacy—as if we really can exist in a state of perpetual standby. We mistake the rapid-fire stimulus of our networks for immediacy …This in turn encourages us to value the recent over the relevant. We can watch a live feed of oil from an oil well leaking into the ocean, or a cell phone video of an activist getting murdered…But with little more to do about it than blog from the safety of our bedrooms, such imagery tends to disconnect and desensitize us rather than engage us. Meanwhile, what is happening outside our window is devalued.”

Wednesday, November 29, 2017

Vanishing GenX

[Note: this post was originally called Vanishing Americana but it makes more sense to call it Vanishing GenX, hence the weird permalink.] I guess one gets nostalgic as one approaches middle age. Or maybe, as a professed progressive liberal, I'm nostalgic for the Bush Jr years—what we now call "the good old days". For whatever reason, I thought I'd transcribe a list of things that most fellow GenX'ers will recognize as fixtures from their formative years but that most millennials (the group I mostly teach as undergrads at Berkeley) may not even recognize the names of. Email me if you want something added to the list.

In no particular order, and inspired by the book Going, Going, Gone: Vanishing Americana (which is itself inspired by the original Vanishing Americana: Pictures of the American Past), here is my list of newly-vanishing Americana…


  1. Microfiche at the library. Remember how one had to look up old newspaper articles before the web?
  2. Projectors in classrooms; the AV club. There was always the one kid who could set up the 16mm projector to watch “educational” films. The kid was often part of a mysterious coven called the AV club. Now it’s all on YouTube. By the way, the film projector is itself a small marvel of engineering.
  3. X-acto knives in publication layout. If you were a school newspaper editor, yearbook photographer, etc., paste-up sheets and X-acto knives were tools of the trade. Now it’s done using desktop publishing.
  4. Having to plan ahead when meeting up with friends, because there were no cell phones. It used to be that getting a group of friends together required significant advance planning, especially if you were going to meet in a crowded place, like New York City or Disneyland, since once you left the house there was no way to reach your friends.
  5. Going to the video store. Video rental stores weren't just about renting the video—the act of going to the store became a cultural fixture itself. (The intriguing book From Betamax to Blockbuster chronicles this social history and its effect on the moviegoing consumer, and the documentary Rewind This! chronicles its effects on the entertainment industry, both of which are more profound than most of us realize.)
  6. Phone books. It’s hard to explain this concept given that voice calls are barely a thing anymore and that phone numbers are exchanged via SMS. It’s particularly hard to explain the Yellow Pages.
  7. Dialing 411. Ditto. (I wonder how many of my students have heard but not understood the expression “Here’s the 4-1-1…”)
  8. Physical special effects and "trick photography". Movies like 2001: A Space Odyssey and Star Wars featured groundbreaking special effects long before digital video was possible; they had to do it the hard way. 
  9. Fotomat and other 1-hour photo booths. Time was when a 1-hour turnaround before you could actually see a photo was considered miraculous. And of course you’d pay the incremental extra fee to get two copies of each photo—how else could you share them?
  10. Metered long distance and outside-exchange calls. Imagine always being in voice roaming mode. If you had an SO who lived outside your telephone exchange (itself a complicated concept to explain in an age of mobile phones)—or if you were a geek like me who liked to spend time on BBSs—it mattered whether the number you were dialing was inside or outside your exchange, because outside-exchange calls were metered and could get quite expensive. Now that area codes are roughly irrelevant and unlimited long distance calls are features of most plans (assuming you still use the phone system and not Skype or WhatsApp for voice calls…), metering is basically a thing of the past—except when you’re roaming.
  11. Typing your term papers, using carbon paper and Wite-Out. (Thanks to Steve Hand.) I was an early adopter of word processors, but up until 1981 I was still typing. In fact I wish I'd saved the typewriter; I'm kind of nostalgic for it now.
  12. Getting off the couch to change the TV channel, even though only about 6 channels of VHF were available in the days before cable. Hard to know where to even begin to explain this one to millennials. (Thanks to Allison Jaynes.)
My colleague Robert Jones from Intel suggests adding [I edited his list slightly]:

  1. Actual card catalogs at the library
  2. Encyclopedias
  3. Wall telephones…with a real bell…and a long cord that had to be untangled regularly…and a rotary dialer
  4. Dial-up modems. The only place you hear the sound now (well, it’s close) is a fax machine
  5. Fax spam.
  6. Mix tapes. Besides cassettes heading for obsolescence, it took just as long to make a mix tape as to listen to it, so when you received one, you knew the other person had invested a lot of time in making it for you.
  7. Film. My 15-year-old twins stared blankly at me recently when I described it.
  8. VCR programming.
  9. Dot matrix printers and the sound they make.
  10.  8-track tapes. They were already on the way out when we were kids.


Things that are on the way out but haven’t made the list yet (younger readers might at least recognize these as “things their parents are familiar with”):
  1. Navigation using maps 
  2. TV Guide
  3. Passing (paper) notes in class
  4. Fast-food restaurant birthday parties
  5. Clocks you have to set periodically
Send me a note if you have other suggestions…


Saturday, October 21, 2017

Book summary: Radical Technologies

Radical Technologies: The Design of Everyday Life, by Adam Greenfield.

I first met Adam Greenfield when he accepted an invitation to deliver a
guest talk at a computer systems conference I co-organized in 2009.  His
talk on what would later become known as "smart cities" was ahead of its
time and (in my mind) firmly placed him as a modern urbanist, well
within the tradition of Jane Jacobs but with a deep technology
sensibility, as his later book "Against the Smart City" revealed.  In
his latest book he emerges as a true humanist, again with a deep
understanding of the role of technology.  The questions he poses to the
reader here go well beyond urbanism, to an existential examination of
the friction between what we think we are here for and the precipitous
acceleration towards a 100% technology-mediated lifestyle.

The basic message of the book is that mediation by extremely complex
technology stacks has (at least) four pernicious effects.  It erases the
"wetware" versions of quotidian activities such as hailing a cab or
clustering around a TV, which, though mundane, build social capital.  It
further divides haves from have-nots.  It litters the socio-technical
landscape with technological ingredients (in the form of code libraries,
e.g.) whose functions may be benign or even banal when they first
appear, but can rapidly and almost invisibly be put to use to subvert
our individual or societal goals, and indeed to move those goalposts.

And it eliminates the assumption of an underlying shared reality, in a
dark, Gibsonian-dystopia sort of way. You and I see different features
on Google Maps, receive different pricing and suggestions from Amazon,
are shown different news headlines, and although we may be occupying the
same space at the same time, we're each simultaneously in two different
"somewhere elses".  Yet we generally don't know whose values or reasons
underlie the differences between the choices presented to you and those
presented to me.

Socioeconomically, this means (for example) that Google Home defaults to
using OpenTable for making restaurant reservations, which diverts money
from the restaurant to the service but appears frictionless to the
consumer; Google Maps presents Uber as a frictionless transportation
option alongside driving or transit, to the exclusion of other choices;
and so on. The cumulative effect is that attention, culture, and
dollars are subtly steered in specific directions, for ends usually
opaque to the very users these services claim to serve.

Politically, one could not hand an authoritarian government a better
tool to divide and control its subjects.

In short, we have invited companies, standards bodies, and potentially
malicious hackers to intervene in the "innermost precincts of our
lives", perilous precisely because those activities are so banal we're
not prone to worrying about who is observing or intermediating them.
Indeed the "smart cities" and "Internet of things" credo seems to be
that there is "one and only one universal and transcendently correct
solution to each identified individual or collective human need; that
this solution can be arrived at algorithmically, via the operations of a
technical system furnished with the proper inputs; and that this
solution is something which can be encoded in public policy, again
without distortion."  Yet data is hardly without biases, starting with
the decision of what data to collect and how to taxonomize it, and even
in the best-intentioned cases, can be misused after the fact, as
occurred when occupying German forces "weaponized" Dutch identity-card
data to hunt down those of "undesirable" ethnicities and races (and as
the Trump administration aims to do with DACA registrations).

Rapidly-adopted and soon-to-be-ubiquitous technologies seem to fall into
two categories: those that are ostensibly well-intentioned but whose use
in practice falls ludicrously short of their original aims, and those
that are banal but potentially dangerous if "weaponized" by immoral
actors (with which history is replete).  And so digital fabrication,
once conceived as a way to end scarcity, becomes a narrow channel for
people to obtain things the market cannot provide, because they are
either bespoke or illegal.  Cryptocurrencies, or more specifically
"smart contracts" and their derivatives Distributed Autonomous
Organizations (essentially virtual corporations run entirely by
algorithm), obscure rather than clarify their networks of ownership and
power and exist in a vacuum oblivious to human foibles.  Robots are
being developed apace in Japan not to assist humans, but to replace them
in such human-centric roles as care assistants for the aged.  Machine
learning algorithms that could help predict where and by whom crimes
might be committed are instead being deployed in China to encumber
citizens with a "karma points" system that will determine access to
virtually all social goods and services--eerily similar to the
fictitious one in "Nosedive", Season 3 Episode 1 of "Black Mirror".  In
all, Greenfield asks, did the creators of these technologies really
think through the risks associated with developing and deploying them?
And if so, did they really conclude that a future embodying those risks
was one worth pursuing?

The lament of the book is that it doesn't have to be this way.
"Sensitive technical deployments" of technology are more than possible,
such as an app that uses facial recognition and Internet search to
gently remind those of us with bad memories of a colleague's name at a
social function, smoothing out social friction rather than creating
social isolation.  Yet the patterns of smartphone use (to name just the
most obvious technological manifestation of Greenfield's concerns) are
just the opposite: receiving the notification of a message or a call
tends to cause an immediate social disruption, and the concept of shared
public life suffers as a result.  (It is in these lines of argument that
Greenfield's intellectual heritage as an urbanist comes through most
clearly.)  And too often when technologists attempt to deploy technology
to serve rather than supplant social interaction, it has the effect of
using technology to "paper over" social inequities and friction rather
than attempting to eliminate them.

Greenfield wraps up with a warning and a call to action.  The warning is
that we should evaluate a technology not on the basis of what it was
intended to do, however noble, but only on the basis of what it is
observed to do in practice, and how rapidly it is rechanneled to
entrench existing power structures to the detriment of you and me.  (Or
in the words of cyberneticist Stafford Beer, "[the] purpose of a system
is what it does.")  The call to action takes the form of four visions
of possible technology-mediated futures, the extremes of which are not
too dissimilar from those sketched in the unrelated novella "Manna",
and a charge to the reader: "...people with left
politics of any stripe absolutely cannot allow their eyes to glaze over
when the topic of conversation turns to technology, or in any way cede
this terrain to its existing inhabitants, for to do so is to surrender
the commanding heights of the contemporary situation."

Although once in a while the author's voice crosses over into the
overtly polemical, the book as a whole is an informed tour de force that
should be required reading not only for anyone working at the
technological frontier, but for anyone who wants to understand the
opportunities we are potentially leaving on the table by allowing the
social infiltration of those technologies to develop untrammeled.

And for an excellent right-brain companion to the book, watch the British TV
series "Black Mirror".

Friday, August 25, 2017

How to unfuck Mac OS X Calendar

I sync my OS X Calendar and my iPhone calendar to Google Calendar—that is, Google Calendar is the truth and the backup storage. Theoretically, via CalDAV any changes made to any of the three should eventually propagate to the others. This mostly works, but for some reason recently it hasn't worked reliably with Calendar—it's as if some update events from Google Calendar don't get downloaded properly, although changes I make in Calendar do seem to propagate reliably to Google.

Here is a script I wrote that I run periodically to fix this, based on a fix I found somewhere for when Calendar doesn't sync/import Google Calendar changes correctly even after forcing a refresh. I've saved this script as ~/bin/unfuck-ical (since iCal is the old name of the Calendar app). I'm considering just running it as a cron job, since this fuckage occurs pretty frequently now. Hope this helps someone else.
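The gist of the fix is: quit Calendar, blow away its local cache files under ~/Library/Calendars, and relaunch so the app rebuilds the cache from the CalDAV source. A minimal sketch of that approach in Python (a reconstruction of the idea rather than the exact script; it assumes the cache lives in the "Calendar Cache*" files under ~/Library/Calendars):

    #!/usr/bin/env python3
    # Sketch of an unfuck-ical-style fix: force Calendar to rebuild its local
    # cache so it re-downloads events from Google via CalDAV.
    # Assumes the cache is the "Calendar Cache*" files in ~/Library/Calendars.
    import pathlib
    import subprocess
    import time

    cal_dir = pathlib.Path.home() / "Library" / "Calendars"

    # Quit Calendar gracefully before touching its files.
    subprocess.run(["osascript", "-e", 'tell application "Calendar" to quit'])
    time.sleep(5)

    # Remove the local cache files; account settings and the server-side
    # copies of your events are untouched.
    for cache in cal_dir.glob("Calendar Cache*"):
        print("removing", cache)
        cache.unlink()

    # Relaunch Calendar; it repopulates its cache from the server on startup.
    subprocess.run(["open", "-a", "Calendar"])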


Tuesday, June 20, 2017

Day trips from the Bay Area

Visiting us in SF? If you have a car, here are some day and 2-day trips not far afield.

Wine country: Napa is closest (~1 hr), but Sonoma/Mendocino are more chill (1.5-2 hours). It’s getting hard to find mom-and-pop wineries there—everything is very commercialized and most places charge for tastings. That said, Mumm has interesting champagne cellars you can tour and a lovely outdoor tasting deck. Avoid weekends at all costs; it’s mobbed. We have maps of the region showing various wineries.

Santa Cruz Mountain wineries: The Santa Cruz/Monterey growing region produces lovely chardonnays and pinots (vs. the zins, merlots and cabs for which Napa/Sonoma/Mendocino are more known). And there are many more mom-and-pop operations that are more intimate and casual. Allow about 1.5 hours drive along the fast route (I-280 to CA-17 or CA-84) or 2 hours along the scenic route (CA-1 or CA-35 along the ridge). We have maps of the region showing various wineries.

Santa Cruz: Hippie/surfer town, amusement pier, and great fish restaurants. Can be a lunch stop on the way to Monterey if you drive that route. Right next door to achingly cute Capitola.

Monterey/Carmel: Its Cannery Row/Fisherman’s Wharf are no doubt touristy, but still lovely for oceanside dining when weather permits. The Aquarium is world-class and includes indoor and outdoor exhibits emphasizing local fauna but with a good selection of exotics too. Expensive to stay here. A 2-hour drive by the fast route, 2.5 to 3 hours by the scenic (coastal) route. Carmel is another ~30 minutes beyond that, and equally cute, though there is less to do; there are plenty of good dinner options, though.

Big Sur: A challenge to get to right now due to the bridge and road washout; extremely remote by any standard. San Simeon/Hearst Castle is nearby but I personally find it less compelling than the crazy scenery. Allow at least 2.5 hours driving each way, plus sightseeing time.

Sausalito: No car required! Take the ferry there from the Ferry Building or Pier 39 (ask us for maps/timetables), or if adventurous, rent bicycles next to the Ferry Building, bike across the Golden Gate Bridge, and take the ferry back with your bike. A nice lunch/brunch stop, and if you have time, visit the fascinating SF Bay Model, constructed by the Army Corps of Engineers to evaluate whether landfilling the entire Bay would be a good idea (no). ~20 minute ferry ride from downtown SF.

Golden Gate Bridge: The car parking plazas can be a pain, but the true experience of the bridge is obtained by walking or biking across it, which means you either park at one end or bike there (or you can Uber, or take the Golden Gate Transit bus; ask us for directions).

Yosemite: A bit ambitious for a day trip; 4.5 hours each way to the valley. Ask us for recommendations of where to stay outside the park (it can be tricky to get campground space inside the park in high season; there are a couple of lodges, including the beautiful and expensive California-Arts-and-Crafts Ahwahnee, on which Disney’s Grand Californian hotel is based, but they also fill up fast).

Friday, April 21, 2017

"Damaged" by BASIC

At a recent conference I attended where a main theme of many papers was intelligent tutors to help novice programmers, an audience Q&A after one of the talks turned to the affordances available to novice programmers in computing’s early days, including the BASIC programming language, which was developed expressly to introduce nontechnical students to computing.

It didn’t take long for one of the discussants to mention that during Edsger Dijkstra’s storied career, he ranted against many things in computer science that he thought were “considered harmful,” including the BASIC language. Specifically, Dijkstra wrote: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.” (COBOL and FORTRAN were the target of similarly withering remarks.)

Now, with all due respect, to some of us them’s fightin’ words. A number of us in the room were of the generation that cut their teeth on BASIC, and hey, we turned out OK. So as the session moderator, I abused the moderator’s privilege and told the audience that anyone who felt indignant at Dijkstra’s smackdown of BASIC might find some comfort in reading my essay In Praise of BASIC: The Cultural Impact of the World’s Most Maligned Programming Language. Until I figure out whether any journal or other publication might want this screed, you can read it at bit.ly/damagedbybasic. Enjoy.

Sunday, March 19, 2017

Learn programming by gamifying? How about by reading?

On impulse I spent a couple of dollars on Amazon Marketplace to buy the out-of-print book Micro Adventure No. 1: Space Attack.  It's a "second person thinker" adventure novella: like old-school interactive fiction (i.e. text adventures),  it's written in the second person, as in "Although you'd like to rest for a few minutes, Captain Garrety insists that you get to your feet…"

In this short story aimed at pre-teens—the first in a series of at least 10, dating from the early 1980s—you must defend a space station from alien attack. But the interesting bit is that eight BASIC programs are embedded into the text of the story, as the page scan below shows.

The initial program just has to be typed in and run in order to reveal the "secret message" that will describe your mission to you. But as the book progresses, the programs require you to debug, analyze, or otherwise modify them as part of the story line. Some programs have bugs you must fix; in other cases you're asked to write a short program that automates a simple task, such as showing mappings between text characters and their ASCII codes (this is pre-Unicode, remember), in order to help "decode" intercepted enemy messages.
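For the curious, the character-to-ASCII exercise is trivial by modern standards; a rough present-day equivalent, in Python rather than the book's BASIC and with a made-up message, looks like this:

    # Rough modern equivalent of one of the book's "decoder" exercises:
    # print each character of an intercepted message next to its ASCII code.
    message = "MEET AT AIRLOCK 7"   # hypothetical message, not from the book
    for ch in message:
        print(ch, "->", ord(ch))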


Of course, failing to do the puzzles can't block your progress in the story, because nothing stops you from just turning the page to keep reading. But this strikes me as an interesting way to get kids to learn how simple programs work. (I don't know how effective it was.) There is a "reference manual" at the end of the book explaining how the programs work, giving hints on solving the puzzles, and, of course, indicating which modifications must be made to allow the programs to run on different microcomputers. (Whereas code in a modern scripting language like Python will behave the same on all platforms, BASIC "dialects" differed enough across different computers that almost any non-toy program required changes to work with other computers' BASIC interpreters.)

An entire generation of programmers was first introduced to computing via the BASIC language. I've been looking for an example of an old geometry or physics textbook containing "Try it in BASIC" examples (we didn't use any of those at my middle school), but this seems a lot more fun. While I'm pretty convinced today's kids don't read books anymore, perhaps this approach could be adapted into an interactive format in which you actually play an adventure game but have to solve programming-related puzzles to make progress in the game.

Monday, March 13, 2017

Book summary: America in the Seventies

Beth Bailey and David R. Farber. America in the Seventies (Culture America series). University Press of Kansas, 2004.

The premise of this book, as with similar books of observations of the American 70s by other writers, could be summed up as "the 70s is when the 60s were implemented." While the seeds of civil rights, gender equality, labor solidarity, etc. may have been sown in the 60s, the actual policies that put these ideas into practice happened during the 70s. At the same time, the US confronted a series of setbacks: Vietnam was not only a military embarrassment with enormous human costs, but a war that polarized the nation on moral grounds, with none of the moral clarity or national purpose of WW II; expanded government programs and higher-paid labor to meet the social demands of the 60s, combined with the replacement of American heavy industry with imported goods and the movement of labor-intensive production overseas, resulted in "stagflation" (inflation combined with economic stagnation); the Arab oil shocks painfully emphasized America's utter dependence on the whim of a small group of nations whose culture in some ways could not be further from our own. Richard Nixon's Watergate scandal convinced the public that the government was not only incapable of resolving these economic woes, but also lacked integrity and was not invested in the well-being of the middle class. Social structures were challenged by movements involving gender roles, racial identity, and sexual identity, destabilizing social norms that were perceived to have anchored the country for decades and leaving many people casting about for their personal identity and purpose as well as confidence in their country. This toxic combination led to nationwide anomie and alienation, as expressed in gritty (and now-iconic) 70s movies like Taxi Driver, Looking for Mr. Goodbar, Midnight Cowboy, and Saturday Night Fever.

One very significant result of this existential crisis was the emergence of the New Right with the Reagan election of 1980. By latching on to the common denominators of dissatisfaction with government incompetence and corruption and the alienation bred by changing social roles, the New Right assembled a constituency of anti-tax activists, critics of "big government", and the religious right. Reagan and his successors used this mandate to gut the government altogether, following an existing conservative agenda that just needed dusting off after losing its social luster during the 60s.

The book is a collection of well-chosen independent essays, each treating one of these social or economic upheavals in detail. As an academic myself, I approached it with some trepidation since academic writing can be ponderous and needlessly self-indulgent, but these are vigorously written and eminently readable by a non-expert like me. I commend the editors on their choices, though I would have enjoyed some connective material to introduce each essay or place it in the context of the larger themes, as is common in "edited by" collections. Notwithstanding, this is a highly readable and informative account of how the "me decade" of the 70s, in trying to implement the social reforms of the 60s, ironically enabled the rise of the New Right and "greed is good" in the 80s.

Book summary: The Next America

Paul Taylor. The Next America: Boomers, Millennials, and the Looming Generational Showdown. PublicAffairs, 2016.

To paraphrase a famous scientist, the nice thing about data is that it doesn't matter whether you believe it or not. This book contains a tremendous amount of (summarized) data about the current and future demographics of the United States, gathered from both public sources (eg statistics published by the Bureau of the Census, the IRS, and other Federal agencies) and from one of the world's best-known nonpartisan survey-based research foundations (Pew).

I'd summarize the biggest takeaways as follows:

Generational attitude shift. The combination of immigration, intermarriage, and changing social mores among younger generations (the author identifies today's primary generational groups from oldest to youngest as Silent, Boomers, GenX, and Millennials) means that the social attitudes of current and future voters lean overwhelmingly towards what most people would associate with "progressive values" or with the Democratic Party. In particular, as the Republican Party has tacked farther and farther to the right, the segment of the electorate receptive to its messages is shrinking and in fact dying. On the other hand, these younger-but-growing segments of the electorate have a much poorer voter-turnout record than their older and more conservative counterparts. This combination of elements has profound consequences for future elections.

Socioeconomic consequences of an aging population. The biggest coming "showdown" (to which the subtitle alludes) is the aging of the world's population. Japan, China, and some European nations will get there ahead of the US, in part because although birth rates are falling everywhere throughout the developed world, in the US that effect is partially offset by immigration, especially economically (since most immigrants arrive ready to work rather than newborn). But all these countries are rapidly approaching a point where fewer and fewer working people are supporting more and more seniors. (In Japan the ratio will approach 1:1 by about 2040 if current trends continue.) There is an unfortunate positive feedback loop in countries like the US where most legislation is made democratically: the older generations constitute a large and growing voter bloc to whom politicians must cater, and that bloc has been using its influence to appropriate a growing share of government wealth redistribution. In the US, Social Security and Medicare are basically on the ropes. At some level most of us know this, but the statistics and trends presented to quantify the situation are stark.

In other words: not only will the older and younger generations find themselves at odds economically on how to redistribute wealth, but their positions will be even farther apart because their social contexts are so different. As the author states in the introduction, "either transformation by itself would be the dominant demographic story of its era."

The book does a nice job of including enough charts and graphs inline when necessary to illustrate or back up a point, while relegating vastly more charts and tables to an appendix you can browse at leisure or for more detail.

There is also a fascinating and well written appendix describing in high level terms the survey methodologies used by Pew and other professional research organizations, for those who think surveys are just a matter of asking some questions and tabulating answers. The appendix covers random sampling; a lay-person explanation of sampling error and reweighting; various biases including recency, confirmation, and self-selection; running meta-surveys to test the effect of different phrasings or presentations of the same questions; and much more. Indeed, this appendix is useful reading for anyone involved in doing rigorous surveys, whether they are interested in the rest of the book's content or not.


Whether it cheers you up, depresses you, or just causes you to raise an eyebrow may depend on where you fall on the political spectrum, but wherever that is, this is essential and well-reported information.

Book summary: The Wealth of Humans

Ryan Avent. The Wealth of Humans: Work, Power, and Status in the Twenty-first Century. St. Martin’s Press, 2016.

What follows is my summary of the book's main argument. There are a number of useful reviews on Amazon, including some written by very informed people who disagree with key points of the author's argument. The main objection seems to be that the author overstates the extent to which income inequality is an inevitable by-product of technological change (section 1 of my summary below), and understates the extent to which it is affected by politics/institutional decisions, e.g. infrastructure spending programs that can locally increase labor demand, or social conventions that boost wages.

Executive summary

In most economically free societies, the two mechanisms of wealth-sharing are work (employers shift wealth to employees by paying them) and redistribution (taxes pay for goods and services that may not be redistributed in proportion to how much you paid), and the society has a definition of who is "in" (eligible to participate in both mechanisms). This book asks: What happens to these mechanisms when increasing automation is squeezing the first, and those controlling the wealth are opposed to expanding the reach of the second?

Its overall responses are: (1) while it's true that policy everywhere has tipped to favor wealth concentration, the essential problem is structural; (2) As a result of this fundamental structural problem, most efforts to "create jobs" will run into problems that ultimately doom them; (3) therefore, for better or worse, some form of non-labor-based redistribution will become necessary (eg universal basic income).

1. Productivity-enhancing technology thwarts a balanced labor market 

Henry Ford's innovation was to de-skill individual roles to vastly decrease the cost and increase the per-employee productivity of making cars; precisely because the de-skilled jobs were tedious, he raised wages and coddled his employees to attract labor and reduce turnover, something he could afford to do because of their high productivity. But this scenario comes with 3 problems.

First, the high productivity makes it affordable to pay higher wages, but workers in low-productivity industries such as education and healthcare that suffer from Baumol's "cost disease" (it costs about the same to educate 1 student or care for 1 patient as it ever has) are in the same labor market, so their wages must rise *despite* stagnant productivity, thereby increasing the cost to the consumer of purchasing those goods or services. That is, wealthy companies can afford to pay employees more because of the employees' much higher productivity, so that most income inequality is due to wage gaps *between* firms/sectors rather than within them.

Second, productivity-enhancing de-skilling paves the way for complete automation of those jobs, so the benefit to low-skill workers is short-lived.

Third, since higher productivity leads to a labor glut even before automation takes over, it pushes wages down. This is bad because while the effective price of some goods also falls due to that productivity (cars, cell phones), the effective price of others doesn't, either because supply is scarce (housing) or because they suffer from Baumol's cost disease of stagnant productivity (education, healthcare).

This is an example of how "job creation" systems can end up working against themselves. Future employment opportunities will likely satisfy at most 2 of the following 3 conditions ("employment trilemma"): (1) high productivity and wages, (2) resistant to automation, (3) potential to absorb large amounts of labor. To see the dynamic, consider the solar-panel industry. Increased productivity in manufacturing solar panels has caused them to drop in cost, creating a large market for solar panel installers, a job resistant to automation (meets criteria 2 and 3). But that same increased productivity means most of the cost of acquiring solar is the installation labor, limiting wage growth for installers (fails criterion 1). As another example, consider healthcare. As technology increases the productivity of (or automates) other aspects of care delivery, healthcare jobs will concentrate in non-automatable services requiring few skills besides bedside manner and the willingness to do basic and often unpleasant caregiver tasks. As a third, consider artisanally-produced goods, whose low productivity is part of their appeal (meets 1 and 2). But the market for them is limited to the small subset of people who can afford to buy them (fails 3).

Can education help? Higher educational attainment is still key to high wages, but not to high wage *growth*. The level of education required for that has been climbing higher and higher, putting it beyond the economic (and possibly intellectual) reach of most people, yet those are precisely the credentials needed to participate in the most lucrative parts of the economy. The displaced workers "trickle down" the skill-level chain and depress wages at each level below them in the wage hierarchy. So improving education, while a good idea, won't help people in poor countries as much as simply moving them into a rich country to work in that economy.

2. Hence, social capital is increasingly key to successful companies…

Since WW2, developed-nation economies have increasingly "dematerialized" to the point where most of the value in the goods being produced is in knowledge-worker contributions, rather than physical manufacturing or the labor therein. (iPhones and cars are built overseas, yet most of their value is in design and software, which aren't outsourced.) Increasingly, the "wealth" of a company is not in its capitalization or even the material output of its employees, but in its "culture" -- its way of absorbing, refactoring, and acting on information in a value-added way that is difficult to replicate and produces a product customers want to buy.

(This is also why cities are resurgent -- they permit a dense social/living fabric that promotes evolution of social capital, and the larger/denser the city, the more productive it becomes because of this effect, supporting high levels of specialization and social networks that facilitate labor mobility. The demand results in high housing costs, but NIMBYs oppose building more housing because even though the benefits would be spread over the whole city, the costs would be concentrated in their neighborhood.)

By definition, culture is a group phenomenon, not a set of rules handed down by a boss. Social capital cannot be exported like material goods; all you can do is try to create (or impose) conditions under which it can develop by allowing the free flow of ideas and labor (ie, the people in whose heads social capital lives), as the EU is trying to do within Europe. This is troubling for developing economies whose societies lack social capital.

Hence, China, having spent a fortune to create physical infrastructure to improve worker productivity, has reached diminishing returns: further productivity improvements must now come from "deepening" the workers' social capital, which has been wrecked by decades of cultural mismanagement by a totalitarian regime. Similarly, India's outsourcing boom and China's hyper-rapid industrialization occurred because technology allowed them to temporarily bypass the difficult step of building social capital, by "biting off" chunks of activity taking place in richer economies: India hosting outsourced call centers, or China jumping into a global supply chain established by rich economies and uniquely facilitated by the digital economy, in both cases offering labor at lower cost. But this era is ending: other countries can play the same trick (eg Indonesia as the new China, depressing Chinese wages), automation is coming, and the relative advantage to outsourcing decreases as products become more information-centric. (Though note that while "reshoring" is happening, it's not creating more jobs: Tesla would rather pay a few highly skilled engineers to oversee an automated assembly plant than pay lots of low-skilled factory workers to build something manually and less reliably elsewhere.)

It used to be thought that poor countries were poor because they lacked financial capital, but it's now clear that a country can build factories without thereby acquiring social capital (India, China). Indeed, highly-educated workers in poor countries become more productive when they move to rich countries, suggesting it's the country's social capital that is lacking.

3.   …yet the benefits of social capital don’t accrue to those who create and embody it 

Yet as important as social capital is, when a worker leaves a company, his knowledge of that company's "culture" is generally not useful at a new firm, so he has little leverage (though this is somewhat counterbalanced by the pressure to not have *most* workers quit, which would destroy the culture). Conversely, a chief executive is harder and costlier to replace, so has more leverage as an individual. Herein lies the problem: "social capital" is in the collective heads of individual workers, but its benefits flow disproportionately to the owners of financial capital. A corollary is that the efficiency gains achieved by fluid (ie non-unionized) labor markets haven't been redistributed to the workers whose bargaining power was sacrificed to achieve those efficiencies. Marx predicted that this dynamic was unsustainable and that society would collapse: either the workers would revolt and upend the government and the social norms it curates, hence destroying the wealth for everyone, or the wealth-owners would asymptotically reach a point where no further wealth could be generated and harvested, so they'd start fighting each other over the fixed amount of wealth, again destroying the society. Piketty notes that the 2 world wars did a lot to disrupt this downward slide because wars, taxation, inflation, and depression destroyed many of the superconcentrated fortunes made in the industrial age, but as noted above, the change was temporary.

The consequence of this structural problem is that some form of non-labor-based redistribution is likely to be the only nonviolent way forward. This path has at least two challenges. One is that the act of doing work has other benefits -- agency, dignity, reinforcement of socially-useful values -- that would be lost; although surveys show that people saddled with extra free time due to weak job markets tend to spend it sleeping or watching TV, ie, at leisure. A second challenge is that such "highly redistributive" societies tend to emerge in ethnically/nationally coherent political units, and motivate the society to draw a tight boundary around itself. E.g. Scandinavian countries have generous welfare states that make them desirable to immigrate into, but as a result the load on the welfare system generated by lots of immigration is tearing at the seams of their welfare economies. That is, we can't expect rich liberal countries to throw open their borders heedlessly when the potential pool of immigrants dwarfs those working to generate the wealth that is redistributed.

Saturday, March 11, 2017

Book summary: From Betamax to Blockbuster



Joshua M. Greenberg, From Betamax to Blockbuster: Video Stores and the Invention of Movies on Video. Cambridge, MA: MIT Press, 2016.

Summary: Although the VCR was originally positioned as a device for time-shifting TV, its dominant use quickly became the viewing of pre-recorded content. The book tells the story of that evolution, and how it affected both the medium and the content: how the mismatches between the technology of the VCR/TV and that of theaters affected movie viewing, the social and commercial constructs such as video rental stores that sprang up around the experience, and the cultural shift in the perception of what, exactly, a "movie" was and what the experience of "watching a movie" came to mean. Video rental stores, which served as the intermediary that brought these mismatched perspectives together, did such a good job that they ultimately rendered themselves obsolete.

Technological prehistory. In 1969 Sony invented the U-Matic, the first cassette-based color videotape recorder and ancestor of the Betamax, which could record up to an hour of video in the NTSC (American analog TV) format. Up to then, reel-to-reels with low-density tape had been used for "kinescoping" a TV broadcast: a show would be shot on the East Coast, a kinescope pointed at a monitor to record the playback, and then the film would be developed and rebroadcast around the country. Selling the U-matic was hard since there was no "software"; initial attempts focused on getting educational companies to convert their materials to the format for in-school use; in practice, adult video arcades probably did more to launch the industry, replacing "film loops" with cassettes.

Sony positioned the 1975 Betamax (price: $1,295) as a device for "time-shifting TV", hence underestimated consumer demand for blank cassettes. In addition, Betamax tapes could only record 1 hour of video. For the first 2 years of Betamax's existence, the only prerecorded tapes users could legally buy were public domain films or pornography. Japanese competitor JVC (Japan Victor Corporation) came up with its own incompatible format called VHS, which could record two full hours albeit with slightly lower quality than Betamax. JVC also triggered a price war by licensing the rights to manufacture VHS equipment to any manufacturer, whereas Sony was the exclusive manufacturer of Betamax equipment. One VHS manufacturer, Matsushita (Panasonic), struck a deal with RCA to manufacture a unit that would allow 4 hours of recording on VHS tape at substantially lower quality, allowing sports events to be captured in their entirety. Sony (and most experts) insisted that Betamax's recording quality was superior, but that seemed less important to consumers than longer recording time and lower-priced equipment. Sony eventually responded to these technical and business challenges with improvements to Betamax, but by then VHS had basically won the format war with consumers.

Late 1970s: early adopters lead to the birth of a consumer-facing business. Early videophiles (usually white males, 21-39) would record and archive entire TV miniseries (or better, movies) and even edit out commercials to make the experience closer to viewing a movie. They would copy and trade tapes, by mail or in person at informal gatherings; they formed nationwide networks supported by amateur magazines, phone numbers, and mailing lists used to distribute photocopies of TV Guide listings from other regions.

A pilot test of a third format called Cartrivision, which could hold 2 hours of video and was used to distribute "classic" movies, failed due to poor implementation: technical problems made the tapes disintegrate prematurely and damage the players; the tapes could not be rewound except by a dealer, to ensure that renting only allowed a single viewing, which angered users (a necessary concession to movie studios, who refused to license movies unless they could closely control the viewing experience); and the tapes were delivered by mail, taking days to arrive. Indeed, when Sony released the Betamax in 1975, chairman Akio Morita had tried to strike a deal with Paramount to distribute movies, but again failed because the studio feared losing control of the user's viewing experience. In essence, attempts to create a movie-distribution market were hobbled by tying the studio-imposed constraints of distribution to the technology used. VHS sidestepped most of that and became the dominant format, so it was effectively poised to become the vehicle for distributing movies to consumers; but the studios were still resistant, seeing it as a threat that would cannibalize their existing business model of distributing movies to theaters.

Nonetheless, some entrepreneurs saw a market for media in the home, and started making inroads:
  • Noel Gimbel, owner of the Chicago electronics store Sound Unlimited, thought he could stimulate VCR sales by selling public-domain movies on tapes. Later, he would convince Paramount that that studio's ill-fated exclusive with Fotomat for distributing movies was failing, as video store owners were simply distributing bootlegged copies.
  • Don Rosenberg, who worked for a record distributor, had the idea of going door-to-door convincing music retailers to expand into video, which was tricky because the distribution model for video was based on the model for the appliances with which blank tapes were sold—retailers paid for stock and sold it. In contrast, music was like books—dealers got paid only when customers bought something, and had 90 days to return unsold goods. 
  • Entrepreneur Andre Blay is credited with kickstarting the media-in-the-home industry by making successful deals with 20th Century Fox to establish a rental membership plan for movies. His company Magnetic Video did video duplication and distribution for studios, and he had seen that studios licensed 20-minute "digests" of movies for distribution on 8mm tape; why couldn't they make even more by licensing full-length movies? Fox ultimately acquired Magnetic Video as Fox Home Entertainment, and other studios followed suit and set up their own Home Entertainment divisions. This forced the hand of distributors and retailers in the music industry, and the home entertainment retail industry became a hybrid of the previous music model and the new video rental model.
  • Because of the questionable moral standing of pornographic video, the societal stigma of going into a porno theater, and in some cases its ties to organized crime, pornographers were more willing to embrace risky distribution strategies. Porno was instrumental in launching the home media industry. (Porno theaters showing bootlegged tapes were paid a visit by organized crime.)
Slowly the material nature of the cassette began to give way to the abstract nature of "buying entertainment", as video stores started stocking shelves with empty boxes or box covers while keeping the tapes stored elsewhere (usually for security reasons), and the VCR itself, originally intended as the focus of consumer attention for time-shifting TV, became an incidental artifact used to play back movies. Early video stores were often staffed by movie buffs with no retail experience who just enjoyed being around movies and offering personalized advice to customers, with customers in turn offering advice to each other while browsing the shelves; "going to the video store" became a social ritual as much as watching the movie itself. Local stores hence became social spaces "like bars without alcohol" (consumption junctions, in the language of media theory).

The maturation of the rental industry: franchisization and disintermediation. By the early 1980s, the nature of the rental industry changed as video rental took off. Early video-rental stores took advantage of the "first sale" rule that applies to books, wherein the original purchaser can do whatever they want with their copy of a video, including renting it out an unlimited number of times with no royalty payments to the studio; in retaliation studios began licensing "rental-only" copies at much higher cost, and uneasy truces were eventually reached as a result of retailers and distributors forming advocacy organizations that could negotiate licensing and royalty terms with the studios. Still, with rapidly growing consumer demand for renting movies, self-styled entrepreneurs with no retail experience wanted to open video stores; some successful video chain owners even had a side business providing consulting or "turnkey setup" for would-be owners of new video-rental stores, most of which were no longer staffed by movie buffs as in the early days. The transformation was complete when entrepreneur Wayne Huizenga saw the first Blockbuster Video store in Florida: clean and bright, family-friendly (no adult-video room in back), a prominently displayed children's programming section, and the accoutrements of the movies (popcorn, candy, etc.)—something a few independents had started to do, but which became a formula with Blockbuster. The chain reached such efficiency that it could load an 18-wheeler with everything necessary (furniture, tapes, electronic equipment) to turn an empty storefront into an operating retail location within 24 hours.

What is a movie? The spread of VCRs challenged the Platonic ideal of "the movie". Previously the movie as artifact had been wedded to both the technology of the theater (albeit widely varying) and its cultural setting. TV had a different commercial milieu (embedded advertising; FCC constraints and scheduling constraints that led to often heavy "editing for TV"), a different cultural one (sitting in the dark with strangers vs. sitting in the living room with family/friends; pausing to go to the bathroom), and a different technological one (1.33 aspect ratio vs. 2.35 widescreen; mono or stereo vs. surround audio). The introduction of "letterbox" VHS tapes was bumpy because for some consumers watching movies on TV was framed as watching TV, which was supposed to fill the screen, whereas for others it was framed as watching movies, in which case letterboxing made for a more "movielike" experience. (Ironically, the 1.33 aspect ratio of TVs was chosen to imitate the early movie industry; 2.35 was adopted later when the movie industry perceived itself as under threat from TV and in need of differentiation.) Similarly with colorization: some actors, notably Cary Grant, evaluated it in terms of how well it matched the physical sets on which filming had occurred, whereas some directors and many critics blasted it because it distorted their only experience of the movie, which had been watching it in B&W.

Finally, the lack of social stigma around "being unable to program my VCR" (unlike, say, admitting you were unable to operate a phone) suggested that the act of programming it (i.e. time-shifting TV programs) was no longer central to the VCR's technological frame.

Conclusion. Video stores were the "mediators" between two cultures in many different ways. Studios weren't used to distributing movies on tape, or comfortable with a rental market; but that's what consumers wanted. The commercial models around distribution and retail didn't match consumers' expectations. TV technology didn't match theater technology as a way to view a movie. Video stores were there to mediate all of these transitions: they tracked consumers' evolving perception of what "watching a movie" meant, at once embracing "theater accoutrements" like candy and popcorn and confounding tradition by changing the social interactions around movie-watching, and in doing so they brought consumers and producers together. Ironically, they were so successful at this that they have since been disintermediated:
  • Technologically, VCRs gave way to DVDs. Although DVDs provide higher picture quality, they did not initially enable the amateur market (direct-to-video indie films, home movies, etc.) in the way the VCR did, and that market was critical to the cultural rise of video stores. (Today indie filmmakers can shoot digitally and distribute via YouTube, but that wasn't true when DVDs arrived in the late 1990s, and was barely true in 2006 when DVD movie sales first outsold VHS movie sales.) In addition, DVDs "demystify" movies by bundling making-of featurettes, interviews, etc. with the feature itself, something completely alien to the theater experience, suggesting that the transformation of consumers' perception of "watching a movie" is complete.
  • Independent video stores gave way to chains (Blockbuster, Hollywood Video), which themselves went out of business as direct-from-distributor services like Netflix arose.
The overall lesson may be: without intermediation, new cultural phenomena such as the video-movie revolution could not happen; but once they are underway, the intermediaries themselves become redundant. (I wonder if a similar argument could be made for retail computer sales: independent stores gave way to national chains like Computerland, then to computers being sold in office-supply stores like Office Depot as the computer became mainstream, and finally retail was largely eliminated in favor of direct-from-distributor online ordering.)

Tuesday, March 7, 2017

The CRT is dead, long live the CRT

I am a child of the 80s (and a little bit the 70s), and as a youngster I spent many, many quarters in arcade video games. (Tempest was among my favorites, and one I was actually good at.) It might be hard for today’s young adults to imagine the appeal of paying per game to play a game that lasted only a few minutes, had to be played standing up (usually), and was located in a pizzeria, bar, movie theater, or video arcade. But the first highly successful home gaming console (the Atari 2600, which sold over 40 million units during its 14-year lifetime) didn’t arrive until 1977, and while arcade games started rapidly improving after the release of Taito’s Space Invaders (1978), home games’ graphics and sound lagged far behind arcade hardware well into the late 1980s, even though Atari and others aggressively licensed the rights to produce home versions of popular arcade games. A typical arcade cabinet game might retail for $4,000, vs. around $200 for a home console. (Not to mention that going to the arcade was a social event. You know, that's the kind of event where you get together with real people to have real pizzas and real interactions, rather than "interacting" with them online.)

Today arcade cabinet games have an ardent following among retrocomputists (e.g. me), collectors, and nostalgists. But perhaps not for long: outside of this niche market, there’s virtually no demand for manufacturing CRT displays anymore, and they are surprisingly labor-intensive to manufacture, as this 5-minute video shows. In particular, few 29-inch “arcade grade” CRTs remain in the world, and the capacity to make or repair them is basically gone.

Without arguing whether newer display technologies (plasma, LCD, LED) are better or worse than analog CRTs, it is certainly true that authors of older games had to work around (or, more creatively, work with) the color-mixing and display constraints of analog CRTs, which are quite different from those of true discrete-pixel displays. This was especially true when creating games for home consoles meant to connect to TV sets: these had the additional constraint that the video signal fed to the TV had to follow the somewhat quirky NTSC standard for analog color video. (It has been popular to malign NTSC, repurposing the abbreviation as “Never Twice the Same Color” or simply “Not The Same Color.” This is unfair: NTSC color video had to be backwards-compatible, meaning that the luminance portion of the color signal still had to yield a watchable grayscale picture on existing black-and-white sets. It’s far from obvious how to add color to such a system.) Famously, the Apple II video circuitry exploits idiosyncrasies of NTSC to produce high-resolution (at the time) graphics at low (at the time) cost, at the expense of being very tricky to program. The fascinating book Racing the Beam recounts how both the console designers and the game designers for the Atari 2600 leveraged the physical and electrical properties of NTSC color to create appealing games on exceedingly low-cost (for the time) hardware, even creating a custom chip to deal with some of NTSC’s quirks (the TIA, or Television Interface Adapter, code-named “Stella”). And indeed, while Atari 2600 emulators are still popular and original 2600 hardware can be connected to modern LCD and plasma screens, the color effect is subjectively different from viewing it on old-school analog sets. In contrast (get it? <groan/>), although arcade video games also used large (29”) CRT displays, they weren’t bound by the signal limitations of NTSC, so they could produce graphics far superior to what home gamers could view even on comparably sized TV sets.
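
To make the backwards-compatibility point concrete, here is a minimal sketch (my own illustration, not from the post or from Racing the Beam) of the standard NTSC YIQ color transform. The luminance component Y is what a black-and-white set displays; the I and Q chrominance components ride on a roughly 3.58 MHz color subcarrier that older sets simply ignore. The coefficients are the standard NTSC ones; the function name and example values are mine.

# Illustrative sketch of the NTSC YIQ color transform (standard coefficients).
# A black-and-white receiver displays only Y; the I and Q chrominance terms
# are modulated onto a ~3.58 MHz color subcarrier that older sets ignore.

def rgb_to_yiq(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert gamma-corrected RGB values in [0, 1] to NTSC YIQ."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: the grayscale picture
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chrominance (orange-blue axis)
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chrominance (purple-green axis)
    return y, i, q

if __name__ == "__main__":
    # A saturated orange still maps to a sensible mid-gray on a B&W set.
    y, i, q = rgb_to_yiq(1.0, 0.5, 0.0)
    print(f"Y={y:.3f}  I={i:.3f}  Q={q:.3f}")

The clever part of NTSC was interleaving the I and Q components into spectral gaps of the luminance signal so that existing sets kept working, which is exactly why adding color to the system was so non-obvious.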

June 12, 2009, was the last day for all US broadcast television stations to switch from analog (NTSC-encoded) broadcasting to digital broadcasting. Various other countries that had been using NTSC started phasing it out as well, with the last of them, Mexico, ceasing NTSC broadcasting by the end of 2016. NTSC is now effectively a dead standard, and the hardware that was so ubiquitously associated with it—CRTs—is on a path to meet the same fate. Before that happens, get yourself to a “classic games” arcade and take a step back to when the best gaming graphics and sound were found in pizzerias, bars, and candy stores. 

Book summary: Radical Cities

McGuirk writes about an emerging urbanist philosophy manifesting itself in cities across Latin America that comes from a different philo...