panels without borders

Isn’t it peculiar that we are reading an excerpt from this metacomic but not the metahypertext piece by George Landow that our editors refer to in the intro?

Upon finishing McCloud’s brilliant sketch of comics, I found myself reflecting that it is a useful analog to the way we experience computer media—a verbal medium enhanced with pictures (and often also sound and motion). It seems that we digest digital information most successfully when there is an effective pairing between the verbal and the image, so that the affordances of each interact to surface a more profound meaning. (Like the RSA illustration of that lecture on education that Jim linked.)

McCloud’s treatment of panels was helpful in thinking about the frame in which we experience Web content. If Nelson had his way, instead of there being a frame around the Web page, would the information exist within an ever-expanding panel? The Web today seems to me to do a pretty decent job of keeping those frames pliable through hyperlinks.

mimesis-eating shark

In our last class we spent some time tracing the narrative (infra)structures of drama, computers and video games and examining how they are aligned. We also agreed that we had certain mimetic expectations for each medium; as Carrolle explained, the reason she doesn’t like Lost is that its unpredictable, non-lifelike plot doesn’t follow Aristotle’s rules about the six elements and their material and formal causality. As Lance pointed out, this is similar to the frustration we feel when our computers crash or fail to do what we expected them to do—“we’re offended by stuff that jumps the shark”, as he put it.

Our discussion was directed by the assumption that the more closely something followed Aristotle’s ordering of narrative, the more mimetic it was. Aristotle’s theory of the six elements of drama and their causes is an expression of the human preference for order and of the need to recognize pattern in shaping what we perceive life to be like.

As our conversation continued, comparisons were drawn between the narratives of early films and contemporary full-length features—namely, that the first movies ever made lacked narrative. Consider, for instance, the difference between Lumiere’s 1895 L’Arrivee d’un train en gare de la Ciotat (Arrival of a Train at La Ciotat) and The Godfather.

Jim reflected that an equally significant narrative evolution exists in the development of video games. As some of our gamers talked about the importance of narrative in video games, I thought of the difference between the NES games of yesterday and the FPS games of today.

Both comparisons chart an evolution of technology as much as they chart a growing emphasis on narrative. In each comparison, I think we would agree that the latter offers a more mimetic experience than the former. As I was thinking about this in class, I wondered—is Halo more lifelike than Duck Hunt, and The Godfather more lifelike than L’Arrivee, because of the evolution of technology, or because of the evolving importance of narrative?

Our class discussion revolved around the importance of narrative in humans’ carving meaning out of their lives (we edit our lives into being according to our ideas of what life should look like, or what is life-like). Because of this, I’m more inclined to say that the human insistence on imposing narratives on the developing creative forms of film and video game was the most central element in their becoming increasingly lifelike.

(Or is my notion of “lifelike” contrived… The irony of my point that The Godfather is more mimetic than L’Arrivee d’un train is in the purported reaction of audiences to L’Arrivee’s first showing: as the story goes, viewers observing the life-like train hurtling forward on the screen ran away from the image screaming—a response I don’t think was seen in first viewers of The Godfather. Does this qualify L’Arrivee as a more mimetic experience than The Godfather?)

are we humans, or are we editors.

The poignant truth foundational to Viola’s essay is that we exist in edited realities. This has been a theme of many of the thinkers whose work I find most interesting. For one, Viola’s notions that as “things are perceived as discrete parts or elements, they can be rearranged. Gaps become most interesting as places of shadow, open to projection. Memory can be regarded as a filter” and that “we quite literally carve out our own realities” reek of Errol Morris and his idiosyncratic films about human constructions of reality.

"Life without editing, it seems, is just not that interesting"—Viola makes a valid point (464). The question, then, I guess, is: who is the editor? It is ceaselessly interesting, regardless of your notions of human purpose and whether you assign agency to the self or to external forces (god? media?).

Interestingly, Viola's conception of the holism of digital computers and software technologies seems more aligned with Renaissance versions of cosmological order and religion than postmodern relativism. He writes, "I saw then that my piece was actually finished and in existence before it was executed on the VTRs" (465). His thought sounds very much like Michelangelo's notion of forms:

Michelangelo believed that the artist’s function was to bring preexistent forms out of the material at hand: “the greatest artist has no conception which a single block of marble does not potentially contain within its mass, but only a hand which obeys the intelletto can accomplish that” (Clements 16). Art forms, or the concetto, exist independently of the artist, and are implanted in matter by nature. The artist’s function was to draw these forms out of the material.

Michelangelo's Neoplatonism

Much of Viola’s theory concerns historical patterns of art: “the original structural aspect of art, and the idea of a ‘data space’ was preserved through the Renaissance …” (467). We can learn a valuable lesson from Viola’s tactic of examining the present and future of interactive video art with an eye to all the art that came before it; and our readings have helped us to keep in mind the history of human consciousness as we study human communication. It’s hard to say whether we have been talking more in NMFS about media, or about the users of media.

Alan Kay speaks to NMFS!

I hope our whole class will see where one of Dr. Campbell’s New Media Consortium colleagues, Alan Levine, blogged about our NMFS and our Alan Kay Dynabook reading, and Alan Kay responded; the two now have a bit of a back-and-forth going (how cool!). I’d like to highlight in Kay’s comment the idea that there was “an enormous ‘imagination gap’ between the inventors of personal computing and Internet, and those businesses and entrepreneurs who wanted to exploit the inventions”. Kay’s claim suggests where our market economy and big business have gotten in the way of the computering we know today becoming what Engelbart and Nelson envisioned. We’ve spent more of our time this semester talking about the ways our computerings do live up to the imaginative sketches in the New Media Reader than wondering why we haven’t gotten to the levels the writers designed. Maybe if we start thinking in terms of what is controlling our digital forward movement instead of “we the user”, we will contract the fervor of Nelson and learn the vision of Kay (another line from his comment to Alan Levine:

What we were hoping for is the other rarer phenomenon, and that is the process of real education making us less like traditional humans and more like civilized humans. This can happen, and has happened in isolated cases, but as with TV the commercial forces are dominating — and they want cave people to sell to!)

Bookburning

Kay and Goldberg’s word metamedium is a powerful way to conceive of the computer (394). Metamedium expresses all that the tired metaphors of desktop and paper fail to signify. I think one of the most optimistic summations of the current state of the digital age our editors have made so far is that the “desktop and virtual paper metaphors are meeting a significant challenge, and may themselves fade” (391). The more we do away with thinking that labels the computer in terms we already know (instead of allowing it to become an entirely new thing) and replace it with Ted Nelson-style and -sized thought, the closer we come to actualizing Engelbart’s vision of augmented intellect. There is much to show that we limit ourselves by thinking of computers in terms of the knowledge-organization tools we already know. For one, I’m curious about the precise evolution of the desktop metaphor. We may assume that it was first used to express the physical location of the computer (as later words like “laptop” suggest); if this is true, then using the word “desktop” to refer to the monitor’s screen and its activities is a shift that evolved out of convenience or metonymy—in the same way that we use the word “Kleenex” to refer to tissue. An actual desktop hardly resembles my computer screen:

a desktop

Although it has “documents” that are organized into “files”, I agree with Nelson that imposing archaic file-cabinet systems of organization affords little to the digital medium. Think of the way information is organized on a (well-designed) website compared with a filing system, or even with a (poorly designed) website that treats text as a linear entity.

The web interface promotes a level of information exchange that the paper medium inhibits. The death of the desktop and virtual paper metaphors will be the liberation of vivacious, uncharted human-computer interaction. Further evidence of this is Kay and Goldberg’s observation of children’s interactions with computers: “Their attention spans are measured in hours instead of minutes” (394)—as any of us who have observed a five-year-old at a computer know. In contrast to the adult, whose human faculties bear the molding of years of non-digital expression and functioning, the child’s unformed information-palette takes naturally to the computer/the digital/the metamedium. Does this not suggest that the computer allows for record-making/knowledge-playing that is more aligned with natural thought processes than the page, the book, the library have been? Instead of understanding computer media entities by comparing them to what we were familiar with (page, document, file), we must allow them to become their own, new ___.

MIND THE [user-programmer] GAP

I’m still reeling from Nelson’s visionary overhaul of the computer experience; despite his departure from Engelbart’s foundational model, the excerpts still had a spectacular bootstrapping quality about them: in reading Nelson’s users’ manifesto, I, the user (and someone in need of the user-friendly*), was able to understand concepts of the interactive computer system that Engelbart wanted his reader to grasp (but couldn’t, because of Engelbart’s invented language). I do need to thank Charles Ulrich III, one of our “Memex to Youtube” counterparts, for helping me see where I have missed some of Engelbart’s vision. While I still feel that Engelbart focused too much on a kind of computer-elite, more essential to his goal for computer systems is the “co-evolution between computers and people themselves”, as Charles termed it in his comment on my blog. Charles, I really like your vision of us all joining Engelbart’s elite: “[we] will have the tools to be augmented by, and to easily augment the computer before [us], not just use it. If everyone was in the computer-elite as Englebart had seen, all of our browsing experiences, peripherals, machines would be different based on our OWN preferences, not a company’s market research, and the best of these Ideas created by everyday people would rise to common use”, and I think we should still strive for that kind of bootstrapping. Before reading Nelson’s piece, however, I was still uncertain how this realistically reconciled the gap between the user and the programmer. Engelbart and Nelson think of the necessary relationship between programmer and user differently: whereas Engelbart hopes that the user will build the uses for himself, for Nelson, only the uses are taught to the user:

“interior computer technicalities have to be SUBSERVIENT, and the programmers cannot be allowed to dictate how it is to behave on the basis of under level structures that are convenient to them … we the users-to-be must dictate what lower-level structures are to be prepared within”

The user’s existence in these “prefabricated environments carefully tuned for easy use” is a happy reality that we know today, as is especially imaged (I think) in Apple products. It makes sense that in the 1970s Apple would provide its employees with Computer Lib/Dream Machines. Do we not think of Apple as the ones who brought the computer to the People? And as my brother pointed out to me in his kitchen this morning while he was preparing his famous technicolor velvet cake, Macs are as programmable as the user wants them to be—but software designers have made those features visible only to those who know how to, or want to, manipulate them. So, while my brother does with his Mac a multitude of things I may not be interested in (including using Open Office technologies, that present-day tribute to Engelbart), the option to bootstrap my computer into advancement lies waiting for me should I choose to learn how.

*We don’t need this term user-friendly and its connotations of helplessness; I think Nelson would call it user-mindfulness-in-the-construction-stage.

or: How we all Learned to Stop Worrying and Love the Blog

In our editors’ helpful introduction, they highlight the gap between the ARC’s intent for their technology and the work that followed it. Number four says Engelbart’s “primary goal was to allow people to work together to solve difficult problems more easily … networking and shared information spaces were essential” (232). However, the editors also say that Engelbart didn’t intend his technology to be accessible by Everyman and Anyman, but for experts: today’s “gulf between [software] users and creators” is incongruent with the concept of “bootstrapping” (232). Still, I would argue that it is because of this gulf, because technology is increasingly “user-friendly” for an increasing number of social types and strata, that we can actualize Engelbart’s goal of networking and working together online—unless, of course, he envisioned only one demographic:

taking on the complex problems of the world—but I think we will all agree that a man of Engelbart’s brilliance could not possibly be so myopic.

A central theme in the “mother of all demos” is communication. Engelbart’s term for the computer’s actions in the service exchange is “feedback” (239). He mutters “that’s good feedback” when the computer does what he wants it to do. Engelbart and the rest of the white males at the ARC had to invent a new language for the service-system software, and it is indeed part of their work’s genius. But I’d like to ask the question: might it also have been part of their disease? Language is powerful. It is at once the locus of human connection and a vehicle that can inhibit interconnectivity. “Lofty intellectual discourse” can alienate those not fluent in its idioms and put a roadblock in idea exchange. In the blogging community, I think we could find a model for equitable, interconnected communication (see Robert Wright’s thoughts on the good of social connectivity). The Blog is one significant area of communication technology where we can see how its progression as a technological tool parallels its growth among more diverse groups of users. (Many have argued that while the blog started as a frontier largely settled by the white male, bloggers today reflect a more accurate representation of all members of society.) As much as the blog owes to Engelbart and the ARC, maybe the blog answers some of what “A Research Center for Augmenting Human Intellect” misses. To solve the world’s problems will require an augmented human intellect, but it will also take all types: addressing global problems will require global fluency in computer language so that we can all talk to each other.

The Dream of a Common Terminology and Notation

As I started to say in my comment on Marcie’s blog, what I enjoyed most about class last week was hearing the varied responses shared in answer to the same question: even as each perspective pointed at the same truths, it did so in the terms/language/expressions of the speaker’s respective discipline. The theologian in our class posed a question about the changing role of the educator in an age when his students have access to more knowledge than he could ever pass on to them on his own. The rhetorician in the room answered first with every writer’s cardinal rule: it depends on what his audience expects of him. The librarian answered next: “it’s not knowing the answer but knowing where to find it”. Then the neuroscientist answered, referring to her use of statistical analysis software for her computations—while she uses it to arrive at answers, she still has to understand how it does what it does. That our learning experience in NMFS is distinctly interdisciplinary is a simple observation, but it is fascinating to be learning among such a diverse crowd of experts for the first time.
 
This spirit of interdisciplinarity is fundamental to Engelbart’s augmented human intellect. Indeed, solving the globe’s most complex problems requires the knowledge of several fields. This is not to say that NMFS 2010 is equipped to take on the issues currently threatening humanity; I fear we lack something most pivotal in Engelbart’s vision. More than just emphasizing the interdisciplinary nature of human problems and solutions alike, Engelbart is insistent on the necessity of a uniform language: a language that would allow one individual to understand the work of another “even if you find his structure left in the condition in which he has been working on it—that is, with no special provisions for helping an outsider find his way around” (108). In his introduction, Engelbart sketches “life in an integrated domain” in which “streamlined terminology and notation” is part and parcel of the augmented intellect (95). Everyone who answered Steve’s question in class used experiences relevant to their field to affirm the same truth: the widespread access to knowledge that is characteristic of our digital age in no way makes the educator obsolete. However, each of us did so using the language unique to our profession. There is a bit of a gap between the way disciplines interact today and the way I think Engelbart wants them to interact. For one, not only do we not have the streamlined terminology Engelbart described, we use very different software to accomplish our daily tasks. If you’ll allow me to make an uneducated, oversimplified assumption—the epidemiologist and the policy maker and the sociologist and the social worker who will work together to combat AIDS will not set off on their task using the same tools, nor will they share a precisely analogous vocabulary.
“Augmenting Human Intellect” at once highlights the major gains humanity has most recently made in fighting the gravest of societal ills and reminds us where we fall perilously short—at the most basic level of language.

Cyborgs today, memexes tomorrow

from Spike Jonze short "I'm Here"

Our editors’ crafty little association-drawing sidebars accomplished their task: after reading the nod to Donna Haraway on the first page of the “As We May Think” intro, the feminist’s cyborg vision was on my mind and shaped my reading of Vannevar Bush and his memex. In the same way that Bush’s article and the Atlantic Monthly editor who prefaced it speak of machines as extending man’s powers, Haraway’s cyborg vision has a theme of supplementing what is human with something machine (37, 516). Her basic definition of a cyborg is “a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction”—creatures whose humanity has to be read/understood in the context of the new cyber reality (were these words “cyber” and “reality” recently antonyms?) in which they exist (516). As I think about the legitimacy of the cyborg concept and look for examples of it, I think of the many ways technology is part and parcel of my daily moments. We are cyborgs in that as we hold a smartphone in our hands, it interacts with us and our world so that the moment is changed by our wielding of the phone—in this sense, the phone is part of our human experience and (Haraway would say) part of what makes us human.

Similarly, reading has become an increasingly cyborg activity. Even when it is in a print medium, there is hardly a time that I read without having the internet beside me, chasing down any unknown allusions or words—and this is no lofty OED search; Wikipedia is generally most useful. (While many would call that cheap, or cheating, I think an equal many would call it engaging a mediated reality: the reading/information processing is enhanced by coupling it with the knowledge of others.)

Haraway’s 1991 sketch is astute. I believe that we will continue to prove the truth that undergirds her myth: that the machine and the human are more alike than unalike, and that in embracing that unity there is liberation (for social structures, for daily mundanities, for women, for humanity). Herein lies part of Bush’s prescience: the foundational character of his memex is its ability to think like a human, i.e., by associative indexing. He says that in 1945, the ineptitude of technology/information storage is its selection by indexing rather than selection by association, as the human mind operates: with “speed of action, the intricacy of trails, the detail of mental pictures” (44). Indeed, the machines we know and use today are more human in this regard, and are more like the memex than the technology Bush describes in the first half of his article. For example, during my reading, I used my Droid’s “Google Goggles” app; with its visual search engine I took a picture of the picture of the memex on p. 44 and it returned Wikipedia’s memex page. We have also seen the advent of human-like computers with the Apple age, with devices that have more in common with their users than PCs do. The blending of machine and human into cyborg has been catalyzed by these devices that are increasingly human-like. I think Haraway would agree: as much as we are incorporating that which is machine into our human selves, the machine is also learning better how to be human.

my cyborg dad