Saturday, December 29, 2012
Concentrating Class: Learning in the Age of Digital Distractions
Scott Rogers, Susan Matt, and I recently had an Educause article published about our class; it's titled Concentrating Class: Learning in the Age of Digital Distractions. The article is a distillation of a much longer piece, which I will post in a few months when we complete our report to the NEH.
Sunday, December 9, 2012
The Perils of Walden Zones
In Judd Apatow's new movie This Is 40 there is a funny scene where the parents decide to turn off their wifi and then try to convince their kids to disconnect. The kids' reaction (not surprisingly) is not all the parents would have hoped for. Hear the outraged reaction in the following 40-second audio clip:
If your browser doesn't render the above audio tag, listen to it here.
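For anyone curious about how that kind of fallback can be wired up, here's a minimal sketch (the clip's file name is hypothetical): it uses the standard canPlayType check to decide between rendering an audio player and falling back to a plain link.

```javascript
// Minimal sketch of an audio-with-fallback pattern (hypothetical file name).
var clipUrl = 'this-is-40-wifi-clip.mp3';
var audio = document.createElement('audio');

// canPlayType returns "", "maybe", or "probably"; older browsers lack it entirely.
if (audio.canPlayType && audio.canPlayType('audio/mpeg') !== '') {
  audio.src = clipUrl;
  audio.controls = true;
  document.body.appendChild(audio);
} else {
  // Fall back to a plain link to the clip.
  var link = document.createElement('a');
  link.href = clipUrl;
  link.textContent = 'Listen to the clip here';
  document.body.appendChild(link);
}
```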
Wednesday, November 28, 2012
Tim Berners-Lee, The Web, And The Pursuit of the Public Good
Historians often lament the fact that we don't really know enough about our past to make sense of who we are today. That critique can be directed at all sorts of narratives, like the American founding, the Civil War, the New Deal, or any other important story in America's past that defines us. But given how much our lives are defined by the Web, maybe there are additional benefits in reading about its origins. Such was my reason for reading Tim Berners-Lee's Weaving The Web.
The book goes over ground that is familiar to many of us. Like that moment in the fall of 1993 when I first heard about the Mosaic browser. Or the time in 1994 when I downloaded Netscape and began surfing the Web. It's often only in retrospect that we recognize which moments hold import, and for me anyway this was somewhat true then. To be sure, I marveled at Netscape and the way it had contracted the world. I distinctly remember being awed that I could instantly read a Web page that had been served up by a computer halfway around the world. But I don't think I recognized the moment's true weight or how ubiquitous the internet would become in my life. I'm sure I had no idea that I'd spend a significant portion of my working day interacting with the Web. At the time it was something I dialed into through a modem and used during one or two discrete moments of the day. In other words, I liked it. But it wasn't yet an ambient presence that I followed (or, maybe better put, that followed me) everywhere and at every hour.
And that is what is interesting about Berners-Lee's history. While he too hadn't recognized the entire import of his creation in the early 90s, he was a lot more prescient about its consequences than most of us. And while he wasn't entirely in control of how his creation would be adopted (what inventor is?), his history is important because the texture of so many of our lives has been defined by events and visions that he was closely associated with.
One way to historicize our present online life is to simply mark our current surfing selves as the present and skip back to the moment Berners-Lee launched the first Web page at CERN in 1991. That first Web page is a historical monument which deserves a place in our collective memories as much as, say, the joining of the transcontinental railroad or the first telephone communication by Alexander Graham Bell. But what's left out, and what Berners-Lee's book helps to illuminate, is that our present online life doesn't rest on that achievement alone; it is also contingent, the product of a myriad of other successes, failures, and ongoing battles. What is often lost when we merely think of Berners-Lee as the inventor of the Web and the first person to post a Web page is that the Web wasn't just a technological invention but an evolving set of communication practices and agreements that Berners-Lee was instrumental in forming. Berners-Lee, after all, wasn't the first to provide a clickable GUI that people could use to get information off the internet. Those achievements were preceded by companies like Prodigy and AOL. What Berners-Lee really did was persuade a threshold number of users to adopt a set of communication protocols that no one company (as yet anyway) has been able to monopolize and make solely its own. Today we can jump on the Web with a large number of browsers owned by a variety of different companies in large part because of Berners-Lee's work and his belief that this was the right thing to do.
There were times in the early days of the Web when it looked like a particular company's browser might become so ubiquitous and successful that its functionality would drive and define Web protocols. And Berners-Lee, had he decided to form his own browser company, or join an existing one, might have crystallized such an outcome. But Berners-Lee (at least as he recounts his story) didn't have the same inclinations as a Marc Andreessen or a Bill Gates. His primary interest was in making the Web into a thriving ecosystem rather than in the profit and success of an individual company. This is why, instead of creating a company, he decided to form and direct the World Wide Web Consortium (W3C), which would maintain and expand the Web standards he had introduced with his first Web site. As he puts it:
…[m]y primary mission was to make sure that the Web I had created continued to evolve. There were still many things that could have gone wrong. It could have faded away, been replaced by a different system, have fragmented, or changed its nature so that it ceased to exist as a universal medium…My motivation was to make sure that the Web became what I'd originally intended it to be – a universal medium for sharing information. Starting a company would not have done much to further this goal, and it would have risked the prompting of competition, which could have turned the Web into a bunch of proprietary products. (page 87)
Given that the book is by Berners-Lee, it's possible that passages like the above are just self-congratulatory autobiography. But there's a reason why Berners-Lee was knighted: in heading the W3C he has been genuinely, insistently, and abidingly interested in creating a universal medium for communication that transcends the domain of any single company. We are where we are now not just because of his original inventiveness but because of his interest in developing a larger public good.
Saturday, August 25, 2012
To MOOC or not to MOOC?
As a way of keeping tabs on the development of MOOCs, I signed up for Chuck Severance's Internet History, Technology and Security MOOC on the Coursera site. While one can't officially enroll in the course this late in the game, it is still possible to visit the class. Chuck has collected excellent oral histories for the course, including interviews with Robert Cailliau (co-inventor of the Web), Joseph Hardin (who played an important role in the development of Mosaic), and Brendan Eich (the inventor of JavaScript). Interspersed among the interviews are videos of Chuck pretending to confide the real narrative that lies behind his interlocutors' stories. (The "confiding" aspect is a little ironic since the class is open to the public.) The material is superb, and much of it is Creative Commons licensed, so I'm considering incorporating a bit of it into my own course on Web development that I'm teaching this fall.
Of course, with the ouster and reinstatement of president Teresa Sullivan at UVA at the hands of a board who didn't think she was jumping quickly enough into MOOCs, we're all wondering whether MOOCs are the next disruptive innovation, one that is going to turn the academy on its head. Are we about to get left behind if we don't sally forth into this brave new world? One provisional answer can be found in a Times op-ed piece titled "The Trouble With Online Education" by Mark Edmundson, who teaches at UVA. In the closing paragraph of that piece Edmundson writes:
“You can get knowledge from an Internet course if you’re highly motivated to learn. But in real courses the students and teachers come together and create an immediate and vital community of learning. A real course creates intellectual joy, at least in some. I don’t think an Internet course ever will. Internet learning promises to make intellectual life more sterile and abstract than it already is — and also, for teachers and for students alike, far more lonely.”
An eloquent soliloquy, but does Edmundson describe the student experience in a MOOC accurately? Here's my provisional answer based on my own MOOC experience:
First, I'm in agreement with Edmundson that a MOOC is lonely. This is because there's very little two-way interaction between the instructor and the students (how could there be in a class where the instructor-student ratio started at 1 to 42,935?).
Second, the peer learning that is supposed to replace the lack of student-instructor interaction mitigates this loneliness to some degree, but not very much. By way of illustration, in the P.S. to this post I include our first writing assignment, my response, and the peer feedback I received. Since the feedback is anonymous, it still feels a little impersonal; no tonic for overcoming loneliness there.
Third, pace Edmundson, and in spite of the loneliness, there's still some "intellectual joy" to be found in a MOOC. The videos (check them out) are really interesting and personalize the historical development of the Web in a very rich way. There's true erudition and edification happening even if it's not based on a lot of student-to-student or student-to-instructor interaction. Moreover, the peer feedback I've received on my essay isn't much less substantive than many comments I got back on essays I wrote as an undergraduate. And it compares favorably (at least in number of words) to the amount of commentary I give back to the students I grade in my own online courses. The comments might be anonymous, and they might not be as substantive as they could have been, but I still experienced at least a modicum of intellectual joy in reading them.
There are no grand conclusions to draw from all of this except to say that instead of pronouncing from the sidelines about online education's relative worth, it's helpful to actually participate in a course and use it to shed light on how serious a threat MOOCs pose to traditional forms of pedagogy in higher education. In a Tech Therapy podcast last month George Siemens (who was one of the first academics to host a MOOC) put it this way:
When you hit a time of uncertainty when you don't have an answer to a question you begin to experiment. You try different approaches to get ahold of the phenomena you are trying to grapple with. Well today the university system itself is becoming the subject of that research. Greater numbers of researchers are starting to recognize that maybe the university system isn't the optimal model. So I would say open online courses are just one attempt at trying to research what might a university look like in the future.
In other words, we need to investigate these options. But even Siemens would agree that we don't have to adopt them wholesale. Such explorations can help steer a middle ground between educational boards (like UVA's), which might be attentive to markets but are hardly expert teachers, and professors, who know more about teaching than boards do but are embracing change a little less quickly than many boards would like.
Faculty should take heart in the symbolic victory represented by the reinstatement of Teresa Sullivan and the fact that the views of Professor Edmundson are being given a voice on the national stage. Faculty, after all, deserve to set the direction of their university as much as any board does. But that victory isn't a pretext for ignoring the way that technological innovation and market forces are challenging traditional pedagogical arrangements. To share influence responsibly means that we need to investigate these new developments first hand – by participating in their development we have a better chance of making them serve the ends of education. In charting a path forward, our best counsel isn't so different from that which was pronounced by Alexander Pope during a former revolution: "Be not the first by whom the new are tried, Nor yet the last to lay the old aside." Between a board like UVA's and the conservatism of professors like Edmundson is a large group of people who embrace change but are interested in doing so at Pope's pace. Discovering the virtues and liabilities of MOOCs through actual hands-on practicums can help clarify what that sensible rate of change actually is.
PS:
The Assignment
In many ways, the Internet is the result of experts exploring how people, information, and technology connect.
Describe one example of these areas (people, information, and technology) intersecting, and how that connection ultimately helped form the Internet. Your example should be taken from the time periods we covered in the first two weeks of the course (Week 1: 1930-1990).
Write 200-400 words (about 2-4 paragraphs) and keep your answer focused. Don't make your answer overly long. In your answer connect back to concepts covered in the lecture. You can also make use of sources outside the course material. If you use material from outside the course to support your essay, please include a URL or other reference to the material that you use.
My Submission:
I appreciated Chuck's short history of store-forwarding, which seemed (based on the presentation anyway) to eventually be replaced by packet-switching. Both of those developments seem relevant to the assessment question in that they represent examples of people (academics mostly), institutions (universities and the national government), and technology (forwarding computers and routers) connecting and forming larger and denser networks in ways that seem to anticipate the Internet as we know it today.
In the store-forwarding narrative I really keyed in on Chuck's point that universities had a financial incentive to increase their connections and that the local connections in some ways were the most fiscally rewarding to cultivate: even if academics in Ann Arbor wanted only to connect and communicate with colleagues in Cleveland, their university had a financial incentive to connect with intermediary institutions (like the University of Toledo) because doing so reduced the cost of their leased line. I hesitate to say that this development and its concomitant economic imperative formally represent an example of "experts *exploring* how people, information, and technology connect." But the fact that it's a story about a growing electronic network, and one that was undoubtedly supported by experts who were trying to reduce connection costs for their universities (if not formally exploring these relationships), qualifies it as an example in my book.
The packet-switching narrative, and Chuck's talk about Arpanet, is in many ways an example that is more germane to the assessment question (which specifically asks us to focus on the enterprise of "exploration") since it was a formal research project about networking and connectivity, and research, by definition, is about "exploration." That example speaks for itself; it powerfully elucidates how government-sponsored research, and the appropriation of monies to expand our understanding of how best to form human connections via electronic means, were key drivers in the development of the modern Internet and all of the positive legacies that brings to us today. (Let that be a lesson to all of you Grover Norquist fans out there!) But if government was a key player (especially in the Arpanet story), the store-forwarding example suggests that markets, and the sheer desire to reduce the cost of one's leased line, also played a role in incentivizing the exploration and refinement of electronic connection.
Peer Feedback:
student1 → Great job, written with an interesting perspective. The style is a bit conversational, but otherwise it's a good paper.
student2 → Well-written and enjoyable to read. A question that I have for Dr. Chuck is whether he finds it acceptable to be writing responses as informally as you have done. That is, your response is in the first person and presents a subjective position rather than sticking to a third-person perspective with positions that are entirely supported with historical examples.
student3 → Well written piece , my only suggestion would be include a specific example from outside the covered material . Have a look at LISTSERV as an example where people information and technology was used to provide a solution to the problem of shared interest communication.
student4 → Loved this one the best of the five I was sent. I think that someone who knows who Grover Norquist was would appreciate reading this! I don't :-( ....But I will Google him and start learning. Thanks for a great read. You should submit it to the forum. I'd vote for it.
student5 → This is quite an interesting take on the classes so far and very well written. It is an interesting point where you say "That markets, and the sheer desire to reduce the cost of one’s leased line, also played a role" I had often though of the markets as companies like AT&T that had been against the idea of the internet but you make a good point that there was non-government pressure as well. Certainly made me think, well done.
student6 → Nice work
Tuesday, June 19, 2012
Funny jokes JavaScript developers tell.
Gary Bernhardt, at CodeMash 2012, makes some observations about odd behaviors in JavaScript in the following video, titled WAT. (Jump into it around minute 1:30.) Hilarity ensues!
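For the curious, here's a short sketch of the kind of coercion quirks the talk plays with; the results below assume an ordinary JavaScript engine (a browser console or Node.js).

```javascript
// A few JavaScript coercion oddities of the kind the WAT talk highlights.

console.log([] + []);   // ""                -- both arrays coerce to empty strings
console.log([] + {});   // "[object Object]" -- the object coerces to its string form

// Typed at the start of a line in a console, {} parses as an empty block,
// so the expression below is really +[] (unary plus), which evaluates to 0:
// {} + []              // 0

console.log("wat" - 1); // NaN -- subtraction forces numeric coercion
console.log(Array(16).join("wat" - 1) + " Batman!");
// fifteen "NaN"s in a row, followed by " Batman!"
```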
Tuesday, June 5, 2012
When technology "wants", what does it want?
When getting students to grapple with the concept of technological determinism, and the larger issue of whether we shape technology or whether it shapes us, I find they often conflate technological determinism with tech dystopianism. The conflation is easy to make since in Hollywood that's the usual association: in movies like 2001: A Space Odyssey or Frankenstein, technology is often a malevolent and out-of-control presence that's leading to bad ends. But while that's the conventional depiction, it's possible to have a different combination. For example, in Kevin Kelly's view, technology has an inherent logic (in his words it has its own 'wants') and those wants are leading, however gradually, toward progressive ends. These two examples represent only two permutations on the tech determinism-instrumentalism and tech utopianism-dystopianism spectrums. There are many other possible permutations, which I try to map out in the following graph (click on the graph to expand it):
I'll admit that the graph has some serious limitations -- at times I'm locating authors with more specificity on these scales than is actually warranted. But the larger point is to get students to think about these frameworks and to at least ask where technology pundits fall on these scales and where, in turn, they as students fall. In my experience most students are instrumentalists: the young in general tend to confer a lot of faith on free will. Curiously, I also couldn't think of too many dystopian instrumentalists. Morozov might not even fall in that quadrant, but I'll place him there as a way of contrasting him with Kelly, whom he critiques in "e-Salvation." If there are authors whom I've mislocated, let me know. Likewise, if you know of technological declensionists (i.e. dystopians) who locate the engine of history in something other than technology, let me know; I'd like to put a few more thinkers in the lower left-hand quadrant.
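Since the graph itself doesn't travel well in text, here's a rough sketch of the same two-axis framework in code. The coordinates are purely illustrative guesses that echo the placements discussed above; they aren't measurements of anything.

```javascript
// Two axes: x runs from instrumentalism (-1) to technological determinism (+1);
// y runs from dystopianism (-1) to utopianism (+1). Values are illustrative only.
var thinkers = [
  { name: 'Kevin Kelly',    x:  0.8, y:  0.7 }, // technology has 'wants' leading to progressive ends
  { name: 'Evgeny Morozov', x: -0.5, y: -0.6 }  // provisionally placed as a dystopian instrumentalist
];

function quadrant(t) {
  var horizontal = t.x >= 0 ? 'determinist' : 'instrumentalist';
  var vertical   = t.y >= 0 ? 'utopian' : 'dystopian';
  return vertical + ' ' + horizontal;
}

thinkers.forEach(function (t) {
  console.log(t.name + ': ' + quadrant(t));
});
```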
Sunday, April 29, 2012
Is Facebook Making Us Homesick?
In recent weeks a cast of essayists, psychologists and sociologists have been debating whether Facebook and other media technologies are making us lonely. This isn't the first time scholars have fretted over this question. But the debate was rekindled by an article in The Atlantic by Stephen Marche titled "Is Facebook Making Us Lonely?" This was quickly followed by a spate of other articles which (in order of appearance) included Eric Klinenberg's "Facebook Isn't Making Us Lonely," Sherry Turkle's "The Flight From Conversation," Claude Fischer's "The Loneliness Scare: Isolation Isn't a Growing Problem" and Zeynep Tufekci's "Social Media's Small, Positive Role in Human Relationships."

I'm especially persuaded by the arguments that Fischer forwards, which in many ways are anticipated by his earlier work America Calling: A Social History of the Telephone. Despite highbrow fears that phone use encouraged "idle chit chat" and mere "gossip," Fischer concludes that the telephone, at least until 1940, was a "technology of sociability" that expanded the "volume of social activity" in ways that its users generally welcomed (America Calling, p. 254). In Still Connected (2011), Fischer comes to similarly un-alarming conclusions about modern digital technologies:
“using the internet has little effect on average user’s level of face to face contact….Although access to the Internet may have vastly expanded American’s acquaintances – the Facebook “friends” sort of circle – it would not have been a revolution in their personal relationships, just a nudge.” [p.96]
Eric Klinenberg (who was actually Fischer's student not so long ago) concurs in his recently published book Going Solo. While Americans are living alone in greater numbers than ever before, the Internet "affords rich new ways to stay connected." With the Internet, and a variety of other modern social infrastructures, "Living alone and being alone are hardly the same…" (p. 19). Klinenberg even titles chapter five of his book "Together Alone." This obviously recalls, and implicitly challenges, Turkle's book with the reverse title (Alone Together). Despite fears to the contrary, Fischer and Klinenberg argue that social networking technologies are enhancing rather than detracting from our ability to connect and form bonds with others.
While I'm swayed by the empirical firepower that Fischer and Klinenberg bring to the debate, in the last two weeks I feel like it's been used in ways that discredit Turkle and Marche overmuch. For example, in a recent head-to-head debate between Marche and Klinenberg on CNBC, Marche developed a hangdog look that, combined with his self-effacing claims to be merely an essayist rather than a sociologist, made his point of view come off much the worse for wear. Similarly, in a post on the very engaging Cyborgology blog (and some of the twitter streams of its acolytes), Turkle is dismissed as an elitist whose highbrow, nostalgic, and digital dualist sensibilities are preventing her from seeing how mobile devices actually augment reality. (Our banal prattle about lolcats actually does serve an important social purpose!) Finally, the populist critique reached its most entertaining and literal low point in a Downfall parody that made a raging Hitler, besieged in his Berlin bunker, the mouthpiece of Turkle's worries:
So where then is the middle ground? Is there some sort of synthesis or reconciliation that can be reached between these positions? I'm biased of course, but I find it in the work of my wife Susan Matt, whose Homesickness: An American History dwells at length on the way that homesickness (an emotion which shares much in common with loneliness) is shaped and reshaped by transportation and communication technology. In brief, the history begins in the colonial period (our antecedents were intrepid, but many still longed for what they had left behind). It concludes with a look at college kids and recent immigrants who, being a little less technologically bereft than our forebears, use Facebook, mobile phones, and Skype to assuage their longing for home. Of course, the experience of homesickness has changed in the last 250 years (in the past people actually used to die of it). However, one thing that has remained constant is that our technologies sometimes compound homesickness, sometimes mitigate it, but have never annihilated it:
It is possible that these new technologies actually heighten feelings of displacement. María Elena Rivera, a psychologist in Tepic, Mexico, believes technology may magnify homesickness. Her sister, Carmen, had been living in San Diego for 25 years. With the rise of inexpensive long-distance calling, Carmen was able to phone home with greater frequency. Every Sunday she called Mexico and talked with her family, who routinely gathered for a large meal. Carmen always asked what the family was eating, who was there. Technology increased her contact with her family but also brought a regular reminder that she was not there with them.

The immediacy that phone calls and the Internet provide means that those away from home can know exactly what they are missing and when it is happening. They give the illusion that one can be in two places at once but also highlight the impossibility of that proposition. (The New Globalist is Homesick)
To me this is the history that leads to the middle ground. Loneliness (like homesickness) is a perennial condition of American society, and one we may be particularly prone to given our cult of the intrepid pioneer who is willing to cut home ties. More importantly, these technologies sometimes mitigate these discontents. But in other instances they sharpen them. So we can take solace in the general, largely benign, trends that Fischer and his fellow sociologists have highlighted: the development of steamships, the telephone, air travel, and the Internet have by and large drawn us closer together. But that doesn't mean that in individual instances these technologies always have that effect. Put another way, our individual discontents are not somehow suddenly erased or rendered meaningless by aggregate social trends. Which is why Turkle and Marche's worries still have traction for the rest of us.
That traction is especially manifest in the closing paragraph of Turkle's essay, where she describes her walks on a Cape Cod beach and her impression that in the past people experienced those walks more profoundly than we do now with our noses in our cell phones. Some critics have taken her to task for those passages as nostalgic yearnings for a past that never existed, or as a celebration of a privileged experience that a good portion of humanity can't afford. Those criticisms are spot on as far as they go. But I'll wager that Turkle's sentiments aren't just the sentiments of an elite or of someone stuck in the memories of a false past. Take for example this short viral video, titled Disconnect To Connect, put out by a Thai mobile phone company:
Like the phone company, we might want to acknowledge that if using communication devices is generally good, that doesn't mean it's always good. Given the research, most of the time we will be using our social networking technologies in healthy ways. But let's continue to watch out for those circumstances in which they work against our interests – whether we're walking on a Cape Cod beach or a Thai one.
--------
Here are two other really good blog posts on the subject:
Is Technological Determinism Making Us Stupid?
Facebook and Loneliness: The Better Question
Wednesday, April 11, 2012
Teaching Technological Determinism
In our course "Are Machines Making You Stupid?" we just finished reading The Shallows -- Nicholas Carr's excellent rumination on the way digital technologies may be rewiring human intelligence (and not always for the better). I'm struck, having now read the book twice, by how much it refers to the movie 2001: A Space Odyssey -- and the way that movie, and Carr's references to it, help delineate the differences between instrumentalism and technological determinism.

These isms are core frameworks for understanding how humans and technology relate, and Carr summarizes the concepts nicely on p. 46 of his text:
For centuries, historians and philosophers have traced, and debated, technology's role in shaping civilization. Some have made the case for what the sociologist Thorstein Veblen dubbed 'technological determinism': they've argued that technological progress, which they see as an autonomous force outside man's control, has been the primary factor influencing the course of human history.....At the other end of the spectrum are the instrumentalists -- the people who....downplay the power of technology, believing tools to be neutral artifacts, entirely subservient to the conscious wishes of their users. Our instruments are the means we use to achieve our ends; they have no ends of their own.
As it turns out, these isms are also well represented in 2001: A Space Odyssey. In the opening scenes a hominid is playing with a bone and gradually realizes that the bone can be used as a tool, which (s)he uses in a later scene to attack another hominid. After the attack the bone spins high into the air, gradually dissolving into a spaceship. This scene is one of the more familiar and classic transitions in Hollywood filmmaking, but what's nice about it in the context of technology studies is that it illustrates what instrumentalism is. The bone is a tool or weapon that is inanimate. While it empowers the hominid and makes him/her more violent, the tool has no agency of its own. It isn't, in other words, an autonomous technology that operates independently of the hominid who wields it. Here's some imagery to help you recall the scene:
In contrast, later in the movie, technology becomes more autonomous. HAL -- the computer -- attempts to take over the spaceship, and Dave (one of the astronauts) is compelled to remove HAL's memory banks, as depicted in the following scenes:

Here, of course, technology is no longer depicted instrumentally. If in the beginning of the film the hominid shapes his tools, by the middle, the tools are reshaping the hominids, and are doing so in ways that aren't always in keeping with the hominids' best interests. Here's how Carr summarizes the scene in the last paragraph of his book:
What makes it so poignant, and so weird, is the computer's emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut -- "I can feel it. I can feel it. I'm afraid" -- and its final reversion to what can only be called a state of innocence. HAL's outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they're following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That's the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
For a variety of reasons, students are reluctant to admit that the relationship between our tools and ourselves can be anything but instrumental. It's a constant challenge getting them to consider autonomous technology as anything but fantasy. 2001: A Space Odyssey is a nice venue for exploring the possibility of a more complex relationship between humans and machines. And Carr takes it one step further by showing that these same challenging relationships exist between ourselves and our more earthbound digital devices.
Thursday, March 22, 2012
The New Globalist Is Homesick
My better half had an op-ed piece in the New York Times today titled "The New Globalist Is Homesick." I'm very happy for her! (And doubly so since there's a technology theme in the article.) Here's an excerpt:
Technology...seduces us into thinking that migration is painless. Ads from Skype suggest that “free video calling makes it easy to be together, even when you’re not.” The comforting illusion of connection offered by technology makes moving seem less consequential, since one is always just a mouse click or a phone call away.
If they could truly vanquish homesickness and make us citizens of the world, Skype, Facebook, cellphones and e-mail would have cured a pain that has been around since “The Odyssey.”
....The immediacy that phone calls and the Internet provide means that those away from home can know exactly what they are missing and when it is happening. They give the illusion that one can be in two places at once but also highlight the impossibility of that proposition.
Friday, March 16, 2012
Codifying the Humanities, Humanizing Code
In a recent post titled "Don't Circle The Wagons," Bethany Nowviskie observes that while the humanities tend to have a more theoretical orientation, coders tend to engage in a lot more praxis. While one could nitpick Nowviskie about how much this observation really accords with reality (coders can spend a good deal of time honing tools before actually using them to produce anything useful), it does point to a semantic issue that lies at the core of the Digital Humanities. DH, as Kathleen Fitzpatrick has defined it, uses digital tools for humanities work but at the same time uses the frameworks of the humanities to make sense of digital technologies. Louis Menand, in The Marketplace of Ideas, speaks of this duality too, although in his view it's not particular to the humanities but is instead a tension that exists more generally in universities that promote both the liberal arts and more utilitarian disciplines:
Liberal education is enormously useful in its anti-utilitarianism. Almost any liberal arts field can be made non-liberal by turning it in the direction of some practical skill with which it is already associated. English departments can become writing programs, even publishing programs; pure mathematics can become applied mathematics, even engineering; sociology shades into social work; biology shades into medicine; political science and social theory lead to law and political administration; and so on. But conversely, and more importantly, any practical field can be made liberal simply by teaching it historically or theoretically. Many economics departments refuse to offer courses in accounting, despite student demand for them. It is felt that accounting is not a liberal art. Maybe not, but one must always remember the immortal dictum: Garbage is garbage, but the history of garbage is scholarship. Accounting is a trade, but the history of accounting is a subject of disinterested inquiry—a liberal art. And the accountant who knows something about the history of accounting will be a better accountant. That knowledge pays off in the marketplace. Similarly, future lawyers benefit from learning about the philosophical aspects of the law, just as literature majors learn more about poetry by writing poems.
In embracing the university as a place that produces but also interprets, maybe one thing we need to do, as Digital Humanists take up the call to learn how to code, is to learn it in a way that embraces the dualities that Fitzpatrick and Menand describe. Of course one can't learn to code simply through studying its history. But maybe, when we teach and learn code, we should spend more time dwelling on its origins. As I begin to think about how I'm going to teach an introductory course on Web programming next fall, I wonder if there's room for the following video by Chuck Severance on the history and origins of JavaScript:
My hope is that there's at least a little bit of room for history in these courses. If there is, we'll be in a better place to bring interpretive approaches to bear on technical subjects while also bringing technical know-how to more interpretive disciplines.
Friday, March 9, 2012
Unpacking Code, Composition, and Privilege: What Role can Dilbert and the Digital Humanities Play?
I ordinarily wait a little longer between blog posts and try to write with a bit more polish, but I wanted to jot down two questions that emerged after writing last week's post on Code Versus Composition. Hopefully I'll get to these concerns over the next couple of months:
In the Digital Humanities blogosphere and in books like Unlocking the Clubhouse: Women in Computing there is much to learn about the way that coding culture may create and sustain groups of privilege. In turn, the methodologies of class, race and gender studies can help to lend insight into this culture. Despite the fact that we like to think we live in a post-class, post-gender and post-racial society, those categories aren't going away yet. And until they do, there's room for sequels to books like Unlocking the Clubhouse. Humanists, and digital humanists in particular, may be in a good position to use these methods of analysis since they've honed them in other disciplinary endeavors.
The question I have, however, is whether class, race and gender are the only lenses through which privilege and the distribution of power can be tracked. While they are powerful tools, do their methods generate attention blindness that obscures other forms of privilege? To this point, there are very insightful technological theorists who haven't placed the triad of class, race and gender at the core of their analysis. For example, the "second idea" in Neil Postman's short address "Five Things We Need to Know About Technological Change" provides a really useful way of uncovering how privilege (and deprivation) are realized during technological change:
" the advantages and disadvantages of new technologies are never distributed evenly among the population. This means that every new technology benefits some and harms others.....Who specifically benefits from the development of a new technology? Which groups, what type of person, what kind of industry will be favored? And, of course, which groups of people will thereby be harmed?"
While Postman's approach certainly prompts us to think about race, class and gender groups, it isn't constrained by them. Other groups can also be considered. For example, in our current NEH research my colleagues and I are examining how digital technology is shaping and reshaping cognition. While it's certainly worthwhile to ask whether these changes privilege particular genders, classes or races, an equally salient question is whether they favor a type of person who is better able to multi-task. In creating more and more digital distractions, are coders generating the social conditions in which multi-taskers will prevail? And in my open source software advocacy work, one should ask whether a particular form of collaborative coding privileges groups with a particular political and economic ideology. The same question applies to the study of growing global networks: are those networks privileging people who harbor sympathies to neo-liberalism and antipathies to more communitarian ideologies?
On a more humorous level, the Dilbert cartoons also illuminate. But their lens, more often than not, revolves around the tensions between technicians and managers:
Finally, while coding and what coders produce are certainly subject to the critique of class, race and gender studies, and more broadly to the critique of Neil Postman, we shouldn't forget that coding, in creating privilege and division, can also often be a bridging activity that brings together and harmonizes cultures that conventionally are portrayed as at odds. On our campus the College of Arts and Humanities and the College of Applied Sciences and Technology don't mix that much. It's a division that is reminiscent of the one C.P. Snow popularized 50 years ago. But coding doesn't have to be this way, nor is it always this way now. It can bring different cultures together. Speaking metaphorically, it's not always Code versus Composition but sometimes very much Code and Composition. That, I think, is at least one hope of the Digital Humanities. That hope shouldn't be forgotten even as we engage in class, race and gender critiques, and it raises a concomitant question: in recent years how much has this hope been satisfied, and what more work needs to be done in order to have it fulfilled?
Saturday, March 3, 2012
Code Versus Composition
Given how ubiquitous code is becoming in life (to wit: as I write this, code is processing my typing and is also providing the medium through which you read this), it seems plausible to think of code as a possible new basic literacy that gives definition to the ideal of an educated person. Since I'm about to begin teaching code in the fall, I welcome this interest: it adds to the marketability of my teaching as well as that of my colleagues. And it's nice to see coding portrayed for what it is: an activity that, in addition to being intrinsically fun, also leads to exciting, remunerative careers. But is it really a basic literacy?
I'm of two minds about it.
On the one hand, it is plausible to think of it as a literacy which everyone should have:

For one thing, code, like the printed word, is everywhere. In a culture that doesn't read or write, composition doesn't have much use. It isn't a basic literacy. But once reading and writing become entrenched in everyday activities, composition does become a basic literacy. Given how widely code has spread, it would seem like the same logic would apply here too. Code is everywhere, so everyone needs to understand code.
For another thing, like the activity of writing, the activity of coding trains our minds to think in ways that give order to a world that probably could use a little more ordering (pace Max Weber's fears of the world as an over-rationalized iron cage). Composition illuminates. Coding also illuminates. Ergo, code and composition are (or at least have become) basic literacies.
Finally, code increasingly has become the way we interface with tools. Why is this important? More so than any other species, we are our tools. As Winston Churchill once said, "We shape our buildings, and then they shape us." Similarly, we shape our tools and then they shape us. But to keep that reshaping a two-way street, and to make sure we don't just devolve into whatever machines want us to become, we have to shape our tools. And if you want to be directly part of the shaping, these days you have to know how to code.
On the other hand, in spite of the above rationales, I'm not quite ready to accept coding as a literacy that is as basic as composition:

For one thing, while code is everywhere, it's embedded and hidden in our machines. It doesn't pop up unmediated on a street sign, or on a Hallmark card, or in an email, or in a newspaper editorial. Even programmers don't ordinarily use code to navigate through a new town, to write a valentine, or to refine a political position.
For another thing, code is primarily used to communicate with machines. You don't use it (without ancillary devices) to connect and bond and lead an initiative with other people. The CNN piece reports that Mayor Bloomberg has taken up the challenge to code. Who knows, maybe he actually went through with it. But I doubt his coding skills have brought much more civic order to New York. Code (to follow an Aristotelian paradigm) is a language which gives order to our material lives. But (at least until the programmers take over our spiritual and political lives) it isn't the language we use to sermonize or legislate about political matters.
For a final thing, it may be true that our tools shape our humanity and that, in turn, our code shapes our tools. But that doesn't mean we can't shape the programmers who code our tools. In effect, we're not fated to have our destiny controlled by machines just because we don't personally code. We can control our destiny and shape our tools by hiring a programmer.
Ok. So where does that leave us? If you are Mayor Bloomberg, or Audrey Watters (a technology commentator who has dived into Codecademy), or Miriam Posner, or the legion of other people who've taken up Rushkoff or Codecademy's call to code:

Take heart! It's fun! And yes, coders are changing the world and our definition of what it means to be human. But that task isn't the province of coders alone. Nor, despite their best efforts, is it ever likely to be.
Tuesday, February 21, 2012
KCPW Radio Interview
Susan Matt (my spouse) and I were interviewed on KCPW today about our course "Are Machines Making Us Stupid?" Here is a link to the podcast on the KCPW site or listen to it here as well:
Segment 2: Living the Tech Life
Today's conventional wisdom may be that a well-rounded life must include Facebook, iPhones and constant connectivity. But do technology and omnipresent media really enrich our relationships, boost our moods and enhance our intellectual capacity? Professors Susan Matt and Luke Fernandez join us to explore the question: Are machines making us stupid?
Guests:
Dr. Susan Matt, Professor and Chair of the History Department, Weber State University
Dr. Luke Fernandez, Manager of Program and Technology Development, Weber State University
Saturday, February 11, 2012
William Powers and The Technological Humanities
Last week William Powers visited Weber State University and spoke about his book Hamlet's BlackBerry. Recent articles in The Atlantic and in the New Yorker have cast him as a bit of a grouch about technology. Such portraits don't do justice to his message. While Powers says it can be beneficial to disconnect (via Walden Zones or via Digital Sabbaths), he's also quite upbeat about the ways that technology has drawn us closer together. The point of taking an occasional recess from our technologies and from our social connections is that it can complement our more social selves. By moving between these different experiences we can lead richer and more meaningful lives than if we simply sought one of these experiences while excluding the other.
He also isn't trying to dictate to anyone. Each of us needs to find our own balance between inner directed activities and outer directed ones. The way to find that balance is to examine our personal patterns of technology adoption and to identify the combination that develops this equilibrium in our selves. Diversity is good. If you don’t feel that the “world is too much with us” William Powers (unlike William Wordsworth) isn’t going to hold it against you.
Of course, in defending Powers, I'm not also trying to say that everyone needs to like his book. In fact, a portion of the students in the course I'm co-teaching this semester (titled "Are Machines Making You Stupid?") took issue with Powers' claims about digital maximalism. (See footnote below.) That's fine. The larger point is that Powers' visit sparked interesting conversations in our local community that complement ones taking place regionally, nationally and globally. Below are two short viral videos whose popularity suggests how salient these issues are in the zeitgeist (Powers showed them during his talk):
I. Disconnect to Connect
II. Girl Falls Into Fountain (sorry, this one I can't embed)
Finally, if these issues seem present globally it's also worth noting that they are present historically. As our class is discovering, anxieties about technology are not new. We've been wondering for centuries whether our inventions are making us smarter or dumber, shallower or deeper. But just because we've been worrying about these questions since the time of Socrates doesn't mean we can stop worrying about them now. In order to adopt technologies wisely each generation needs to think these questions through anew. That's the curse (and blessing) of the "technological humanities."
-----------------------------
Footnote:
For our first writing assignment we had students respond to the following question:
In Hamlet's Blackberry, William Powers asserts that "we've effectively been living by a philosophy . . . that (1) connecting via screens is good, and (2) the more you connect, the better. I call it Digital Maximalism, because the goal is maximum screen time. Few of us have decided this is a wise approach to life, but let's face it, this is how we've been living."
For your first writing assignment, we would like you to respond to this assertion. Do you agree with Powers's claims here? If so, why? If not, why do you disagree? You might also consider the following questions: is it truly a philosophy (or is it something else)? Do we truly value maximum screen time? Is it truly how we've been living?
A significant portion of the class questioned whether digital maximalism is as pervasive as Powers claims. They did so by referring to examples from their own lives or their families' lives in which they had been able to spend time away from screens. They also were reluctant to blame technology for any pathology or addiction that might emerge in its presence. To do so, in their view, would constitute an abdication of personal responsibility.
While those criticisms are fine as far as they go, I hope, as the course progresses, to encourage the students to dwell a little more on this issue. In my view, taking personal responsibility and finding blame in technology are not necessarily mutually exclusive or contradictory positions. In fact, oftentimes they complement each other. By uncovering ways in which technology encourages certain behaviors while discouraging others, we're in a better position to make informed and responsible choices about how to use our tools.
Getting students to speak with nuance about the ways that we shape our tools, and in turn, how tools shape us is a perennial challenge in courses like this. Students tend to think about these things in binary categories: either we're completely free beings who must take complete responsibility for the way we use our tools or we are "tools of our tools" who therefore can't have any responsibilities. Few consider whether there may be a spectrum of states in between these poles.
Beyond the conundrum of technological determinism I also hope that we get to explore digital maximalism in terms of Neil Postman's third idea:
The third idea, then, is that every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.
He also isn't trying to dictate to anyone. Each of us needs to find our own balance between inner-directed activities and outer-directed ones. The way to find that balance is to examine our personal patterns of technology adoption and to identify the combination that fosters this equilibrium in ourselves. Diversity is good. If you don't feel that the "world is too much with us," William Powers (unlike William Wordsworth) isn't going to hold it against you.
Of course, in defending Powers, I'm not trying to say that everyone needs to like his book. In fact, a portion of the students in the course I'm co-teaching this semester (titled "Are Machines Making You Stupid?") took issue with Powers's claims about digital maximalism. (See footnote below.) That's fine. The larger point is that Powers's visit sparked interesting conversations in our local community that complement ones taking place regionally, nationally and globally. Below are two short viral videos whose popularity suggests how salient these issues are in the zeitgeist (Powers showed them during his talk):
I. Disconnect to Connect
II. Girl Fall Into Fountain (sorry, I can't embed this one)
Finally, if these issues seem present globally, it's also worth noting that they are present historically. As our class is discovering, anxieties about technology are not new. We've been wondering for centuries whether our inventions are making us smarter or dumber, shallower or deeper. But just because we've been worrying about these questions since the time of Socrates doesn't mean we can stop worrying about them now. In order to adopt technologies wisely, each generation needs to think these questions through anew. That's the curse (and blessing) of the "technological humanities."
-----------------------------
Footnote:
For our first writing assignment, we had students respond to the following prompt:
In Hamlet's Blackberry, William Powers asserts that "we've effectively been living by a philosophy . . . that (1) connecting via screens is good, and (2) the more you connect, the better. I call it Digital Maximalism, because the goal is maximum screen time. Few of us have decided this is a wise approach to life, but let's face it, this is how we've been living."
For your first writing assignment, we would like you to respond to this assertion. Do you agree with Powers's claims here? If so, why? If not, why do you disagree? You might also consider the following questions: is it truly a philosophy (or is it something else)? Do we truly value maximum screen time? Is it truly how we've been living?
A significant portion of the class questioned whether digital maximalism was as pervasive as Powers claims. They did so by referring to examples in their own lives or their families' lives in which they had been able to spend time away from screens. They were also reluctant to blame technology for any pathology or addiction that might emerge in the presence of technology. To do so, in their view, would constitute an abdication of personal responsibility.
While those criticisms are fine as far as they go, I hope, as the course progresses, to encourage the students to dwell a little more on this issue. In my view, taking personal responsibility and finding fault in technology are not necessarily mutually exclusive or contradictory positions. In fact, they often complement each other. By uncovering the ways in which technology encourages certain behaviors while discouraging others, we're in a better position to make informed and responsible choices about how to use our tools.
Getting students to speak with nuance about the ways that we shape our tools and, in turn, how our tools shape us is a perennial challenge in courses like this. Students tend to think about these things in binary categories: either we're completely free beings who must take complete responsibility for the way we use our tools, or we are "tools of our tools" who therefore can't have any responsibility. Few consider whether there may be a spectrum of states between these poles.
Beyond the conundrum of technological determinism, I also hope that we get to explore digital maximalism in terms of Neil Postman's third idea:
The third idea, then, is that every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.
If digital maximalism isn't the "idea" or "philosophy" embedded in recent digital developments, then what is?
Monday, January 2, 2012
Google's Doodles and the Waning of Serendipity
I just finished reading The Filter Bubble by Eli Pariser, the current president of moveon.org. In keeping with the interests of that organization, Pariser's book is an attempt (at least tacitly) to expand the communitarian and civic capacities of the Web. But he makes his way there by arguing that the Web is confining rather than expanding our cognitive horizons. Instead of introducing us to a broader and more varied set of people, the Web is increasingly taking us to points of view that are congruent with, rather than divergent from, our own.
With personalized search and personalized social networking, the 'net introduces us to places and people we already like and are already interested in. As searching and matching algorithms improve, we're increasingly exposed to material that is already relevant to our lives. This, of course, is good up to a point: we like relevance. The downside is that we're challenged less and less to consider or visit perspectives that differ from our own.
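To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of relevance-only ranking Pariser worries about. The articles, tags, and scoring rule are invented for the example; real personalization systems are vastly more sophisticated.

    from collections import Counter

    # Hypothetical articles and tags, invented for this illustration.
    ARTICLES = {
        "nytimes-healthcare": {"politics", "left"},
        "foxnews-taxes":      {"politics", "right"},
        "wired-ai":           {"technology"},
        "nytimes-climate":    {"politics", "left", "science"},
    }

    def recommend(history, k=1):
        """Rank unread articles by how much their tags overlap with what the
        reader has already clicked -- relevance and nothing but relevance."""
        profile = Counter(tag for article in history for tag in ARTICLES[article])
        def score(article):
            return sum(profile[tag] for tag in ARTICLES[article])
        unread = [a for a in ARTICLES if a not in history]
        return sorted(unread, key=score, reverse=True)[:k]

    # A reader who starts with one left-leaning piece is steered toward another:
    print(recommend(["nytimes-healthcare"]))  # ['nytimes-climate']

The point of the toy is simply that a ranker optimizing for overlap with past clicks will, by construction, keep surfacing more of what the reader has already seen.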
These trends have been in the works for many years now; Cass Sunstein famously identified them as far back as 2002 in his book Republic.com. But, as Pariser argues, what makes them more worrisome in 2012 is that they've become more insidious. In the past we narrowed our horizons through conscious acts: we went to nytimes.com instead of foxnews.com (or vice versa) by choice, more or less deliberately. But as the Web has become personalized, these choices are increasingly made for us behind the scenes, in ways we're only vaguely aware of. When I visit Amazon.com and shop for The Audacity of Hope, Amazon also suggests I buy Bill Clinton's memoir, but not, say, Bill O'Reilly's Pinheads and Patriots. And when I visit Facebook, my friends, more often than not, seem to share similar points of view. Pariser doesn't reference Marx, but the filter is the modern generator of false consciousness. In the past we did our own Web filtering. Now our filters are selected behind the scenes. In the brave new world of the personalized Web, our false consciousness is created for us.
In his closing chapter, Pariser offers up a number of things that individuals, corporations, and governments can do to allay the more insidious effects of filtering. He suggests that as individuals we occasionally erase our tracks so that sites have a more difficult time personalizing their content. (To paraphrase Pariser: "If we don't erase our [Web] history, we are condemned to repeat it.") For corporations, he suggests that their personalization algorithms be made more transparent and that a little serendipity be introduced into searches so we're occasionally exposed to something beyond our current interests and desires. And for governments, he suggests a stronger role in overseeing and regulating personalization.
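Pariser doesn't spell out an implementation, but the serendipity suggestion is easy to imagine in code. The sketch below is a hypothetical illustration, not anything a real search engine is known to do: it simply reserves one slot in each result list for an item the user's profile would normally screen out.

    import random

    def with_serendipity(personalized, out_of_profile, n_results=5, n_random=1):
        """Fill most result slots from the personalized ranking, but reserve
        n_random slots for items the user's profile would normally filter out."""
        keep = personalized[: n_results - n_random]
        surprises = random.sample(out_of_profile, k=min(n_random, len(out_of_profile)))
        return keep + surprises

    # Hypothetical result lists, invented for this illustration.
    results = with_serendipity(
        personalized=["lady-gaga-tour", "lady-gaga-album", "pop-charts",
                      "celebrity-news", "gossip-roundup"],
        out_of_profile=["darfur-report", "local-school-budget", "arctic-research"],
    )
    print(results)  # four familiar items plus one the filter would otherwise hide

The hard question, of course, is who decides what goes into that out-of-profile pool, which is exactly the objection raised below.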
There are problems with Pariser's suggested solutions, and Evgeny Morozov, in his own review of Pariser, brings a very important one to light. In the name of expanding our civic, communitarian, and serendipitous encounters, it would be nice if Google occasionally popped up a link to "What is happening in Darfur?" when we type "Lady Gaga" into the search box. But who exactly is supposed to decide what these serendipitous experiences are to be? We may want to allay some of the cognitive deficiencies that the current 'net breeds, but the danger in doing so is that we replace one bias with another. Looking a little further into this, I visited the thousands of doodles (i.e., the custom banners) that Google has generated in the past couple of years. Not surprisingly, I didn't see much there that's over-the-top civic or political. But maybe that sin of omission is better than the alternative: I prefer "don't be evil" (their current motto) to "do good but risk partisanship and bias in the attempt."
Pariser may not provide convincing fixes, but his description of the problem makes the book a worthy read. One would think that as the information stream accelerates we'd become increasingly subject to distractions and to new ways of seeing the world. In fact, Clay Shirky touches on this point in "It's Not Information Overload. It's Filter Failure": the filters the mass-media industry imposed on late-twentieth-century media consumers have been corroded by the advent of the Web. But the trends Shirky describes may be reversing. Our cognitive horizons may be contracting rather than expanding in the age of personalization. And our attention blindness may be increasing rather than decreasing as the filter bubble grows. In bringing those concerns to light, Pariser has done good work.