How Did Howard Rheingold Get So "Net Smart"?: An Interview (Part Two)

There has been a tendency to adopt totalizing views about emerging technologies, so that Twitter either "destroys our attention span" or it "paves the way for revolutions around the world." Yet, as you note early on, “Twitter is a recent example of a social media which can either be a waste of time or a multiplier of effort for the person who uses it, depending on how knowledgeable the person is in the three related literacies of attentional discipline, collaborative know-how and net savvy.” This approach reframes the question away from technological determinism and onto issues of use and knowledge, which reflect an awareness of human agency (both collective and individual) in terms of what we do with media. Why do you think it has been so hard to get to this point, where new media is understood not in utopian or dystopian terms, but in terms of choices we are making about the role these tools play in our lives?

I'm certainly not the first to point out that totalizing belief systems, whether they are religious or political, make it easier emotionally for people to deal with a complex world. Knowing that there are certain answers makes a large part of the world's population feel right about living in the world. It's not just easier in some way to believe a radical oversimplification about a new technology; it's also far easier to persuade people to believe things for which there is little or no evidence.
I think you can tell by this point that I see socio-technological issues as confluences and hybrids of many technical, psychological, and social developments. Time and again, the way a new communication technology changes society is influenced by the way people use it, and the circumstances of their use. Chinese and Korean inventors created moveable type before Gutenberg, but the social circumstances were very different: China had greater centralized political power at a time when Europe was divided among dozens of warring states. Elizabeth Eisenstein pointed out how the Protestant theology of individual Bible-reading intersected with the technology of the printing press and the emerging entrepreneurial capitalism of the printing trade -- all circumstances that were unique to a time and a place and to strong beliefs.
SMS was invented by network engineers and transformed into a global medium by teenage girls who discovered they could communicate without their parents hearing. ARPAnet was for sharing computer resources across distance, but ARPAnet engineers started using it for social communication. Why should the mobile, social Web be any different?
So I argue that, for historical reasons, human agency is likely to be important in determining the way digital media and networks will end up. However, I also came to see that believing in technological determinism -- "Is the Web Driving Us Mad?" was a Newsweek cover story in the summer of 2012 -- can become a self-fulfilling prophecy. As a Darwinist, I believe I come from a long line of ancestors who must have thought "there has to be a way out of this apparently impossible predicament." Thinking about solving serious threats to one's existence or humanity isn't guaranteed to solve those problems, but thinking the problems are insoluble because they are determined by external forces is almost certainly going to lead to failure and perhaps extinction.
I am not arguing that all the effects of widespread use of social media are salubrious. People will be no less cruel, venal, and ignorant online than they are offline. Screens are definitely attention-magnets, and (one of the reasons I wrote Net Smart) it's easy to fall into a click-trance and waste hours online that would have been better spent elsewhere.
The issue is mindfulness, as I see it, and the good news is that a little self-awareness of the way we are deploying our attention via large screens and small is a lot more helpful than no self-awareness. The evidence, as I marshal it in my book, is that paying attention to our attention in light of our intention can change our mental habits. (Note that I'm avoiding the obsolete cliche about "rewiring the brain," and I've called for a moratorium on the phrase "squirts of dopamine" in describing the way social media affects our nervous systems.)
Another reason for the persistent popularity of lurid techno-determinism in the media is that responsibility in a non-determinist world extends to you and me. If we do have the power to influence the way emerging media will reshape our lives, then it's up to us to do something about it. So simple, black-and-white views of social media are not just emotionally easier to adopt; they also don't require believers to consider their own responsibility in determining the future.
I used Twitter, as you quoted, as a real example of the difference that know-how makes. The most common criticism of Twitter -- that it looks like a torrent of trivia and noise -- could be applied to the Internet as a whole. Knowing how to discover who knows something worth knowing, or who communicates in an entertaining way, is essential for a Twitter user who wants to devote their attention to something worthwhile. Knowing how to attract the attention of other Twitter users, how to connect with Twitter communities, and how to make Twitter lists makes all the difference between noisy trivia and worthwhile flows of information and entertainment, even channels for sociality. Twitter is a medium in which the users have invented powerful social conventions such as retweets and hashtags. What's interesting is what people do with that medium, such as cultivating Personal Learning Networks.

Some will be surprised to see you write about “Twitter Literacy” given that many perceive Twitter to be a subliterate or semiliterate form of communication. How are you defining this term? Where do our ideas about what constitutes effective or thoughtful use of Twitter come from?

I guess this is where it shows that I am not really a licensed academic, but that rare and odd species, an independent scholar. I really didn't start out to do it this way. I was a freelance writer and I tried to write accurately and to be careful to source my material and attribute when necessary.
Then some of my freelance writing was taken up by scholars in what has grown into cyberculture studies, and I found myself taken to task for utopian enthusiasms, deterministic language, and unsupported generalizations. So I learned to think more critically: to examine whether my choice of words robs humans of agency (some things are determined by forces outside individual control, some things are not, and we make unconscious decisions about this issue when we attribute determining agency to technology), and to recognize unsupported generalizations (and to look for empirical research that could support or change my hypotheses). All of which is to say that I understand that there are schools of literacy studies that define literacy differently.
And I am aware that the word "literate" is most often associated with the ability to read and write. When I talk about social media literacies I mean (to repeat myself) both the skill of encoding and decoding (from reading and writing to capturing, editing, and uploading video) and the social environment in which that skill is embedded, the community of literates, whether they are typing about books in online forums, making videos for each other, collectively growing a conversation thread around a blog post, or refactoring wiki pages together. Each skill involves the knowledge of how to use it effectively to get things done with others.
So, with literacy out of the way ;-) I can recall what motivated me to write "Twitter Literacy." It was one of the elements that led up to writing Net Smart, but in the moment it was written as a blog post -- one of those "for heaven's sake, don't the critics know the first thing about how to use the medium they are criticizing" posts that one writes very quickly. I got tired of people saying "I don't care what celebrities had for lunch, and I certainly don't care about what somebody I never heard of had for lunch." (I argue elsewhere, in my discussion of social capital in Net Smart, that apparently trivial small talk can lubricate networks of trust among people online, making it more likely that they will cooperate with one another.)
I discovered that if I was selective about who I "followed" on Twitter -- who I chose to pay attention to -- I could learn things, even have a laugh, occasionally make a new friend. That meant actively examining the people I do follow and evaluating whether, after attending to them for some time, I still believe they offer knowledge and/or entertainment in return for my attention. I had to try people, then decide to stop following those whose output didn't pay off for me. I learned to look at who the people I learned to respect were following. I learned to harvest people to follow by examining Twitter lists of knowledgeable people. Then I learned to feed the network of people who follow me: sharing something not entirely trivial that reveals something about who I am and what I do, sharing links and knowledge I've gained that others who share my interests might benefit from, answering questions posed by those I follow, and replying to those of my followers who address me.
Again, none of this is rocket science. It's not difficult to understand, although it does take some discipline and effort. It certainly pays off for me in terms of knowledge capital, social capital, friendship, and fun. I've had almost entirely fascinating meals with former strangers in London and Bogota, Amsterdam and Baltimore, who responded to my tweet-up offering.
In Net Smart I deconstruct Twitter literacy to show how it employs elements of attention literacy (who to pay attention to), participation literacy (how to reward the attention others pay me), network literacy (how to spread my own words through networks), and crap detection (knowing when not to retweet a rumor about breaking news). Ideas about effective use of Twitter come from the same place a great deal of the lore about how to use new media comes from -- the enthusiastic users. Twitter the company did not create retweets or hashtags -- those were both invented by early Twitter users and later incorporated by Twitter into its platform. Tweetchats and personal learning networks emerged from communities of users.
As I said in the article and the book, Twitter is not a community, but it offers tools with which people can build communities.

The recent report from MacArthur’s Youth and Participatory Politics survey found that 85 percent of young people would welcome more help in learning how to distinguish between reliable and unreliable information online. You describe this in terms of “crap detection.” Why has our current educational system done such a bad job in teaching issues of credibility and discrimination in networked environments?

I don't want to be too cynical about this, but there's a very fundamental underlying conflict involved in teaching crap detection online, especially in regard to the broader habit of mind in which crap detection is embedded -- critical thinking. Teaching your children, students, customers, citizens to think for themselves and to question authority can be a pain in the ass.
It's not as easy as dispensing authoritative answers. But authority, as 500 years of literates knew it in the Gutenberg era, was based on the text. Gatekeepers -- degree-granting institutions, editors, fact-checkers, publishers, teachers, librarians -- were responsible for vetting published material and granting the imprimatur of authority. For better (I think, mostly) and for worse, search engines and the democratization of publishing have rendered that system obsolete.
My daughter and search engines came of age around the same time. She was in middle school, Google had not been invented yet, and she and her classmates were not just using library books to research compositions -- they were submitting queries to AltaVista and Infoseek. So I sat down with her in front of the computer and explained that unlike a book, which was vetted by the authority-granting system I just described, anything she finds through online search has to be vetted by her. She has to look for an author and search on the author's name. She has to think like a detective and look for clues of authenticity or bogosity in the text.
Librarians and educators certainly are interested in teaching critical thinking. But not only is it not easy to do; the fear of the Internet in schools (and sometimes regulatory or statutory limitations on its use) also prevents educators from using the most essential tool for teaching online information literacies. Having mentioned "information literacy," I would add that many forward-looking librarians today talk about a suite of literacies that include search and verification, but also knowing when and how to use information, and how to create, publish, network, and use information to solve problems. It can be argued that these have been essential learning skills for a long time, but the ubiquity of smartphones, tablets, laptops, and PCs, and the explosive growth of networked information resources, have dramatically changed the infosphere from the 500 years when printed information was more controllable and reliable.
I went to Reed, where learning how to think for yourself, how to access the millennia-long discourses of other thinkers, how to learn, and how to learn how to learn new things were central values of the liberal arts tradition. And I spent four years as editor of Whole Earth Review and the last editor of the Whole Earth Catalog, enterprises based on the old American values (Emerson! Self-Reliance) of individual responsibility and freedom of thought and action. Don't wait for some distant institution to do it. Learn how to do it yourself, and learn the tools you need to do what you want to do ("Access to tools" was the subtitle of the Whole Earth Catalogs).
So I regard critical thinking and self-reliance as healthy values as well as important life tools. However, I have to recognize that many people still believe that obedience to authority is paramount. As I said, I think this conflict is a fundamental one -- like the question of whether people are essentially sinful and need to be restrained from exercising their baser instincts, or whether people are essentially good and need to be educated in positive values.

There has been, as you note, ongoing controversy over the issue of multitasking. What did your review of the neuroscience literature teach you about this debate? You end up suggesting that the key is learning to manage our attention. What specific steps do you recommend to help people deal with issues of attention control more effectively?

Cliff Nass, whose work is most often cited as proof that "multi-tasking doesn't work," has an office down the hall from my own, and I discussed the issue with him and his co-author when their study first came out. First, within the limits of their methodology, Nass and Ophir found that when people attend to multiple media their performance on the cognitive tasks associated with each media channel degrades rather than improves. This is true for a large percentage of subjects.
I think it's important, first, to understand the methodology. The kind of research that Nass and Ophir necessarily have to do is a simplification of the way people attend to media. In a laboratory, it's about remembering strings of letters backwards or recognizing the color of a numeral flashed on a screen. What we don't know a great deal about is what happens when all those streams of media are coordinated and focused on a single subject. When I'm working on a book, I have my database of research up on one screen, the text in front of me, and a Twitter conversation about the subject of my writing going on in another window. I might take a few minutes to watch a video from the research database. Can people learn to multitask effectively if all the tasks are centered on the same inquiry?
There is not yet a lot of evidence about what the small percentage of successful media multitaskers are able to do -- is it innate or learned?
But most importantly, I think it's necessary to see focused attention; diffuse, scanning attention; multitasking; and distraction as elements of a toolbox of attentional tools that we mostly don't know how to use all that well online. I know that in my own work, losing efficiency in my overall production is sometimes offset, by orders of magnitude, by the collective intelligence effects of attending to a network while I'm writing. And sometimes it isn't about productivity at all -- it's about seeing connections, systems, big pictures.
The key is what (I've learned) is called "metacognition." Wikipedia has a pretty good page about it. Metacognition is not only about being aware to some degree of where you are directing your attention and why; it's also about knowing when you need to screen out distractions and focus your attention narrowly and when you are better off diffusing your attention or switching between a small set of tasks -- it's about knowing what circumstances call for each mind-tool and how to best apply the mind-tool in those circumstances. It's more complicated to explain than to do.
In trying to find ways to contextualize my own metacognition -- to give myself a reason for choosing one form of attention over another on a day-to-day, hour-to-hour basis -- I started writing down two or three objectives for my day's work, in very few words and large letters, on an index card, which I replace daily and keep at the periphery of my vision, right under my main computer display. Every once in a while, my gaze falls upon the paper and I have the opportunity to ask myself whether what I am doing online right now is in line with what I set out to do today -- and whether that matters, and why.
At first, thinking about where and why my attention is directed was cumbersome, but it swiftly became semi-automatic. I won't trot out the neuroscience -- there are plenty of references in my book -- but there's little controversy over the contention that people can train and retrain their brains through directed attentional practice. As Maryanne Wolf so eloquently explained in Proust and the Squid, brain retraining through directed attentional practice is what we do when we learn to read.
Howard's Story

I fell into the computer realm from the typewriter dimension in 1981, then plugged my computer into my telephone in 1983 and got sucked into the net. In earlier years, my interest in the powers of the human mind led to Higher Creativity (1984), written with Willis Harman; Talking Tech (1982) and The Cognitive Connection (1986), with Howard Levine; Excursions to the Far Side of the Mind: A Book of Memes (1988); Exploring the World of Lucid Dreaming (1990), with Stephen LaBerge; and They Have A Word For It: A Lighthearted Lexicon of Untranslatable Words and Phrases (1988).

I ventured further into the territory where minds meet technology through the subject of computers as mind-amplifiers and wrote Tools for Thought: The History and Future of Mind-Amplifiers (1984) [New edition from MIT Press, April 2000]. Next, Virtual Reality (1991) chronicled my odyssey in the world of artificial experience, from simulated battlefields in Hawaii to robotics laboratories in Tokyo, garage inventors in Great Britain, and simulation engineers in the south of France.

In 1985, I became involved in the WELL, a "computer conferencing" system. I started writing about life in my virtual community and ended up with a book about the cultural and political implications of a new communications medium, The Virtual Community (1993) [New edition, MIT Press, 2000]. I am credited with inventing the term "virtual community." I had the privilege of serving as the editor of Whole Earth Review and editor in chief of The Millennium Whole Earth Catalog (1994). Here's my introduction to the Catalog, my riff on Taming Technology, and a selection of my own articles and reviews from both publications. In 1994, I was one of the principal architects and the first Executive Editor of HotWired. I quit after launch, because I wanted something more like a jam session than a magazine. In 1996, I founded and, with the help of a crew of 15, launched Electric Minds. Electric Minds was named one of the ten best web sites of 1996 by Time magazine and was acquired by Durand Communications in 1997. Since the late 1990s, I've cat-herded a consultancy for virtual community building.

My 2002 book, Smart Mobs, was acclaimed as a prescient forecast of the always-on era. In 2005, I taught a course at Stanford University on A Literacy of Cooperation, part of a long-term investigation of cooperation and collective action that I have undertaken in partnership with the Institute for the Future. The Cooperation Commons is the site of our ongoing investigation of cooperation and collective action. The TED talk I delivered about "Way New Collaboration" has been viewed more than 265,000 times. I have taught Participatory Media/Collective Action at UC Berkeley's School of Information, taught Digital Journalism at Stanford, continue to teach Virtual Community/Social Media at Stanford University, and was a visiting professor at the Institute of Creative Technologies, De Montfort University in Leicester, UK. In 2008, I was a winner in the MacArthur Foundation's Digital Media and Learning competition and used my award to work with a developer to create a free and open source social media classroom. I have a YouTube channel that covers a range of subjects. Most recently, I've been concentrating on learning and teaching 21st Century literacies. I've blogged about this subject for SFGate, have been interviewed, and have presented talks on the subject. I was invited to deliver the 2012 Regents' Lecture at the University of California, Berkeley. I also teach online courses through Rheingold U.

You can see my painted shoes, if you'd like.

 

Howard Rheingold / hlr@well.com