Rich User Experience, UX and Desktopization of War
“If we only look through the interface we cannot appreciate the ways in which it shapes our experience”
— Bolter, Gromala: Windows and Mirrors
This essay is based on my lecture given at Interface Critique, Berlin University of the Arts, November 7th 2014.
Thank you for hosting me. Today I’m talking as the Geocities Institute’s Head of Research, an advocate for computer users’ rights, and an interface design teacher.
RUE
I’ve been making web pages since 1995; since 2000 I’ve been collecting old web pages; and since 2004 I’ve been writing about native web culture (digital folklore) and the significance of personal home pages for the web’s growth, for personal growth, and for the development of HCI.
So I remember very well the moment when Tim O’Reilly promoted the term Web 2.0 and announced that the time of Rich User Experience had begun. This buzzword was based on Rich Internet Applications, a term coined by Macromedia1 that literally meant their Flash product. O’Reilly’s RUE philosophy was also rather technical: the richness of user experiences would arise from the use of AJAX, Asynchronous JavaScript and XML.
The web was supposed to become more dynamic, fast and “awesome,” because many processes that users previously had to trigger consciously started to run in the background. You didn’t have to submit or click or even scroll anymore; new pages, search results and pictures would appear by themselves, fast and seamlessly. “Rich” meant “automagic” and … as if you were using desktop software.
As Tim O’Reilly stated in September 2005 in his blog post What is Web 2.0?:2 “We are entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.”3
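To make the mechanism concrete, here is a minimal sketch of the “automagic” pattern, written with today’s fetch API rather than the XMLHttpRequest object that AJAX originally named; the /search endpoint and the element IDs are hypothetical, not taken from any real service:

```ts
// Hypothetical sketch: results update in the background while the user types.
// No submit button, no page reload; the new content simply appears.
const input = document.querySelector<HTMLInputElement>("#query")!;
const results = document.querySelector<HTMLElement>("#results")!;

input.addEventListener("input", async () => {
  // The request runs asynchronously, behind the user's back.
  const response = await fetch(`/search?q=${encodeURIComponent(input.value)}`);
  results.innerHTML = await response.text(); // assumes the server returns an HTML fragment
});
```

Compared to the classic cycle of filling in a form, pressing submit and waiting for a whole new page, the user triggers nothing explicitly; this is the “rich,” desktop-like feel O’Reilly was describing.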
But Web 2.0 was not only about a new way of scripting interactions. It was also an opportunity to become part of the internet automagically. No need to learn HTML or register a domain: Web 2.0 provided pre-made channels for self-expression and communication, hosting and sharing. No need anymore to be your own information architect or interface designer looking for a way to deliver your message. In short: no need to make a web page.
The paradox for me at that time was that Rich User Experience was the name for a reality in which user experiences were getting poorer and poorer. You wouldn’t have to think about the web or web-specific activities anymore.
Also, Web 2.0 was the culmination of approximately seven years of neglecting and denying the experience of web users—where experience is Erfahrung, rather than Erlebnis.4 So layouts, graphics, scripts, tools and solutions made by naïve users were neither seen as a heritage nor as valuable elements or structures for professional web productions.
That’s why designers of today are certain that responsive design was invented in 2010, mixing up the idea with the coining of the term, even though the idea had been there since at least 1994.
And it also explains why the book Designing for Emotion5 from the very sympathetic series “books apart” gives advice on how to build a project “from human to human” without even mentioning that there is decades-old experience of humans addressing humans on the web.
“Guess what?! I got my own domain name!” announces the proud user who leaves Geocities for a better place. – “So if you came here through a link, please let that person know they need to change their link!”
“If you take the time to sign my guest book I will e-mail you in return,” writes another user in an attempt to get feedback. Well, this one might be more an example of early gamification than of emotional design, but this direct human-to-human communication, something current designers most desire to create, is very strong.
A few days ago, my team at the Geocities Research Institute found 700 answers to the question “What did peeman pee on?” Peeman is an animated GIF created by an unknown author and widely used in the “manly” neighborhoods of Geocities to manifest disgust at or disagreement with some topic or entity, like a sports team, a band or a political party; a kind of “dislike” button.
It isn’t a particularly sophisticated way to show emotions or manifest an attitude, but it is still so much more interesting and expressive than what is available now. First of all, because it is an expression of dislike, whereas today there is only the opportunity to like. Second, the statement lies outside of any scale or dualism: the dislike is not the opposite of a like. Third, it is not a button or a function; it works only in combination with another graphic or word. Such a graphic needed to be made, or found and collected, then placed in the right context on the page—all done manually.
I am mainly interested in early web amateurs because I strongly believe that the web in that state was the culmination of the Digital Revolution.6
And I don’t agree that the web of the 1990s can just be considered a short period before we got real tools, an exercise in self-publishing before real self-representation. I’d like to believe that 15 years of not making web pages will be classified as a short period in the history of the WWW.
There are a few initiatives right now supporting my observation that home page culture is having a second comeback, this time on a structural rather than just a visual level.7
neocities.org – free HTML design without using templates.
tilde.club – as the above, plus URLs as an expression of users’ belonging to a system, and web rings as autonomy in hyperlinking.
superglue.it – “Welcome to my home page” taken to the next level, by hosting your home page at your actual home.
* * *
I had the chance to talk at the launch of superglue.it at WORM in Rotterdam a month ago. Five minutes before the event, team members were deciding who should go on stage. The graphic designer was not sure if she should present. “I’ve only made icons,” she said. “Don’t call them Icons,” the team leader encouraged her, “call them User Experience!” And his laughter sank in with everybody else’s.
Experience Design and User Illusion
We laughed because if you work in new media design today, you hear and read and pronounce this word every day. Rich User Experience may have been a term that kept its proponents and critics busy for some time, but it never made it into mainstream usage; it was always overshadowed by Web 2.0.
With User Experience (UXD, UX, XD) it is totally different:
The vocabulary of HCI, Human-Computer Interaction design, which had only been growing since its inception, has been shrinking for the last two years.
Forget input and output, virtual and augmented, focus and context, front-end and back-end, forms, menus and icons. All of this is experience now. Designers and companies who were offering web/interface solutions a year ago are now committed to UX. Former university media design departments are becoming UX departments. The word interface is being replaced by experience in journalistic texts and conference fliers. WYSIWYG becomes “complete drag and drop experience,” as a web publishing company just informed me in an email advertising their new product.8
UX is not new; the term is fully fledged. It was coined by Don Norman in 1993 when he became head of Apple’s research group: “I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person’s experience with the system including industrial design graphics, the interface, the physical interaction and the manual.”9
Recalling this in 2007, he added: “Since then the term has spread widely, so that it is starting to lose its meaning.” Other prophets have been complaining for years that not everybody who calls themselves an “experience designer” actually practices it.
This is business as usual: terms appear, spread, transform, become idioms; the older generation is unhappy with the younger one, etc. I don’t bring this up to distinguish “real” from “fake” UX designers.
I’m concerned about the design paradigm that bears this name at the moment, because it is too good at serving the ideology of Invisible Computing. As I argued in Turing Complete User,10 the word “experience” is one of the three words used today to refer to the main actors of HCI:
| HCI | UX |
|---|---|
| Computer | Technology |
| Interface | Experience |
| Users | People |
The role of “experience” is to hide the programmability, or even the customizability, of the system; to minimize and channel users’ interaction with it.
“User illusion” has been a main principle of interface designers since Xerox PARC, since the first days of the profession. They were fully aware that they were creating illusions: of paper, of folders, of windows. UX creates an illusion of unmediated, natural space.11
UX covers the holes in Moore’s Law; when computers are still bigger than expected, it can help to shrink them in your head. UX fills the awkward moments when AI fails. It brings “user illusion” to a level where users have to believe that there is no computer, no algorithms, no input. This is achieved by providing direct paths to anything a user might want to achieve, by scripting the user12 and by making an effort on the audiovisual and aesthetic levels to leave the computer behind.
The “Wake-up Light” by Philips is an iconic object that is often used as an example of what experience design is. It is neither about its look nor its interaction, but about the effect it produces: a sunrise. The sunrise is a natural, glorious phenomenon, as opposed to artificial computer effects created from pixels, or, let’s say, the famous rain of glowing symbols from The Matrix. Because an experience is only an experience when it is “natural.”
There is no spoon. There is no lamp.
When Don Norman himself describes the field, he keeps it diplomatic: “[W]e can design in the affordances of experiences, but in the end it is up to the people who use our products to have the experiences.”13—Of course, but affordances are there to align the users’ behaviors with a direct path. So it is not really up to the “people,” but more up to the designer.
One of the world’s most convincing experience design proponents, Marc Hassenzahl, clearly states: “We will inevitably act through products, a story will be told, but the product itself creates and shapes it. The designer becomes an ‘author’ creating rather than representing experiences.”14
That’s very true. Experiences are shaped, created and staged. And it happens everywhere:
On Vine, when commenting on another user’s video, you are not presented with an empty input form, but are overwriting the suggestion “say something nice.”
On Tumblr, a “close this window” button becomes “Oh, fine.” I click it and hear the UX expert preaching: “Don’t let them just close the window, there is no ‘window,’ no ‘cancel’ and no ‘OK.’ ~~Users~~ People should greet the new feature, they should experience satisfaction with every update!”
As the Nielsen Norman Group puts it: “User experience design (UXD or UED) is the process of enhancing user satisfaction by improving the usability, ease of use, and pleasure provided in the interaction between the user and the product.”15
Such experiences can be orchestrated on a visual level: in web design, video backgrounds are masterfully used today to make you feel the depth, the bandwidth, the power of a service like Airbnb, to bring you there, to the real experience. On the structural level, a good example is how Facebook, three years ago, changed your tool for everyday communication into a tool for telling the story of your life with their “timeline.”
You experience being heard when Siri speaks with a human voice, and an ultimate experience when this voice stays calm, whatever happens. (The only thing that actually ever happens is Siri not understanding what you say, but she is calm!)
You experience being needed and loved when you hold PARO, the world’s best-selling lovable robot, because it has big eyes that look into your eyes, and you can pet its nice fur. Though smart algorithms, a lifelike appearance and lifelike behavior alone wouldn’t suffice to keep users from feeling like consumers of a manufactured, programmable system.
Critics of AI like Sherry Turkle warn that we must see and accept machines’ “ultimate indifference,”16 but today’s experience designers know how to script the user to avoid any gaps in the experience. There is no way to get out of this spectacle. When PARO runs out of battery, it needs to be charged via a baby’s dummy plugged into its mouth. If you possess this precious creature, you experience its liveliness even when it is just a hairy sensors sandwich.
This approach leads to some great products, on screen and IRL, but it alienates as well. Robotics doesn’t give us a chance to fall in love with the computer if it is not anthropomorphic. Experience design prevents us from thinking about and valuing computers as computers, and interfaces as interfaces. It makes us helpless. We lose the ability to narrate ourselves and, on a more pragmatic level, we are no longer able to use personal computers.
We hardly know how to save and have no idea how to delete. We can’t UNDO!
* * *
UNDO was a gift from developers to users, a luxury that a programmable system can provide. It became an everyday luxury with the first GUI developed at Xerox17 and turned into a standard for desktop operating systems to follow. Things changed only with the arrival of smartphones: neither Android nor Windows Phone nor BlackBerry provides a cross-application alternative to CTRL+Z. iPhones offer the embarrassing “shake to undo.”
What is the reasoning of these devices’ developers?
Not enough space on the nice touch surface for an undo button; the idea that users should follow some exact path along the app’s logic, which would lead somewhere anyway; the promise that the experience is so smooth that you won’t even need this function.
Should we believe it and give up? No!
There are at least three reasons to care about UNDO:
UNDO is one of the very few generic (“stupid”) commands. It follows a convention without sticking its nose into the user’s business, as the sketch after this list illustrates.
UNDO has a historical importance. It marks the beginning of the period when computers started to be used by people who didn’t program them, the arrival of the real user18 and the naïve user. The function was first mentioned in the IBM research report Behavioral Issues in the Use of Interactive Systems,19 which outlined the necessity to provide future users with UNDO: “the benefit to the user in having—even knowing—of a capability to withdraw a command could be quite important (e.g., easing the acute distress often experienced by new users, who are worried about ‘doing something wrong’).”
UNDO is the borderline between the Virtual and the Real World that everybody is keen to grasp. You can’t undo IRL. If you can’t undo, it means you are IRL, or on Android.
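To illustrate what “generic” means here, a minimal sketch of an undo stack; the names (Undoable, UndoStack) are illustrative and not taken from any real toolkit. The mechanism only records how to reverse each action; it never inspects what the action is about, which is exactly why it can stay “stupid” and work across applications.

```ts
// An action only has to know how to take itself back.
interface Undoable {
  undo(): void;
}

class UndoStack {
  private history: Undoable[] = [];

  // Perform an action and remember nothing about it except how to reverse it.
  run(action: { do(): void } & Undoable): void {
    action.do();
    this.history.push(action);
  }

  // CTRL+Z: reverse the most recent action, whatever it was.
  undo(): void {
    this.history.pop()?.undo();
  }
}

// Usage: typing into a document, then taking it back.
let text = "";
const stack = new UndoStack();
stack.run({
  do() { text += "hello"; },
  undo() { text = text.slice(0, -"hello".length); },
});
stack.undo(); // text is "" again
```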
* * *
Commands, shortcuts, clicks and double clicks … not a big deal? Not an experience?
Let me leave you with this supercut for a moment:
This is us, people formerly known as users: wildly shaking our “magic pane of glass” to erase a word or two, crying out to heaven to “undo what hurts so bad,”20 bashing hardware because we failed with software.
We are giving up our last rights and freedoms for “experiences,” for the questionable comfort of “natural interaction.” But there is no natural interaction, and there are no invisible computers, there are only hidden ones. Until the moment when, as in the episode with The Guardian, the guts of the personal computer are exposed.
In August 2013, The Guardian received an order to destroy the computer on which Snowden’s files were stored. In the mass media we saw explicit pictures of damaged computer parts and images of journalists executing drives and chips, and we heard The Guardian’s Editor-in-Chief saying: “It’s harder to smash up a computer than you think.” And it is even harder to accept it as a reality.
For government agencies, the destruction of hardware is a routine procedure. From their perspective, the case of deletion is thoroughly dealt with when the media holding the data are physically gone. They are smart enough not to trust the “empty trash” function. Of course the destruction made no sense in this case, since copies of the files in question were located elsewhere, but it is a great symbol of what is left for users to do, of the last power users have over their systems: they can only access them on the hardware level, and destroy them. Since there is less and less certainty about what you are doing with your computer on the level of software, you will tend to destroy your hard drive voluntarily every time you want to really delete something.
Classic images of the first ever computer, ENIAC, from 1945 show a system maintained by many people who rewire or rebuild it for every new task. ENIAC was operated on the level of hardware, because there was no software. Can it be that this is the future again?
In 2011, 66 years after ENIAC, ProtoDojo showcased a widely celebrated “hack” to control an iPad with a vintage NES video game controller. The way to achieve this was to build artificial fingers, controlled by the NES joypad, to touch the iPad’s surface; modifying the hardware from the outside, because everything else, especially the iPad’s software, is totally inaccessible.
Every victory of experience design, be it a new product “telling the story” or an interface meeting the “exact needs of the customer, without fuss or bother,” widens the gap between a person and a personal computer.
The morning after “experience design”: interface-less, disposable hardware; personal hard disk shredders; primitive customization via mechanical means; rewiring, reassembling, drilling holes into hard disks, in order to delete, to log out, to “view offline.”
* * *
Having said that, I’d like to add that HCI designers have huge power, and often seem unaware of it. Many of those who design interfaces never studied interface design; many of those who did never studied its history, never read Alan Kay’s words about creating the “user illusion,” never questioned this paradigm and never reflected on their own decisions in this context. And not only should interface designers be educated about their role; it should also be discussed and questioned which tasks can be delegated to them in general. Where are the borders of their responsibilities?
Combat Stress and The Desktopization of War
In 2013, Dr. Scott Fitzsimmons and MA graduate Karina Sangha published the paper Killing in High Definition. They raised the issue of combat stress among operators of armed drones (Remotely Piloted Aircraft) and suggested ways to reduce it. One of them is to Mask Traumatic Imagery.
To reduce RPA operators’ exposure to the stress-inducing traumatic imagery associated with conducting airstrikes against human targets, the USAF should integrate graphical overlays into the visual sensor displays in the operators’ virtual cockpits. These overlays would, in real-time, mask the on-screen human victims of RPA airstrikes from the operators who carry them out with sprites or other simple graphics designed to dehumanize the victims’ appearance and, therefore, prevent the operators from seeing and developing haunting visual memories of the effects of their weapons.
I had the students of my interface design class read this paper. I asked them to imagine what this masking could be. After some hesitation to even think in this direction, their first drafts alluded to the game The Sims:
Of course the authors of this paper are not ignorant or evil. A paragraph below the quoted one they state that they are aware their ideas could be read as advocacy for a “PlayStation mentality,” and they note that RPA operators don’t need artificial motivation to kill; they know what they are doing. To sum it up: there is no need for a gamification of war; it is not about killing more but about feeling fine after the job is done.
I think it was this paper, its attitude, this call to solve an immense psychiatric task on the level of the interface, that made me see HCI in a new light.
Since the advent of the Web, new media theoreticians have been excited about convergence: you have the same interface to shop, to chat, to watch a film … and to launch weapons, I could continue. That wouldn’t really be true; drone operators use other interfaces and specialized input devices. Still, as in the image above, they are equipped with the same operating systems running on the same monitors that we use at home and at the office. But this is not the issue; the convergence we can find here is even scarier: the same interface to navigate, to kill and to cure post-traumatic stress.
Remember Weizenbaum reacting furiously to Colby’s plans to implement the Eliza chatbot in actual psychiatric treatments? He wrote: “What must a psychiatrist think he is doing while treating a patient that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter.”21 Weizenbaum was not asking for better software to help cure patients; he was rejecting the core idea of using algorithms for this task. It is an ethical rather than a technical or design question, just like the masking of traumatic imagery is now.
If we think about the current state of the art in related fields, we see that, on the technological level, everything is already in place for the computer display to act as a gun sight and, at the same time, as a psychotherapeutic coach.
There are trials to cure PTSD in virtual reality, and studies that report successes. So there is belief in VR’s healing abilities.22
There are a lot of examples around in gaming and mobile apps proving that the real world can be augmented with generated worlds in real time.23
There is experience in the simplification of real—or rather too real—images, as in the case of airport body scanners.24
And last but not least, there is a roughly seven-year tradition of masking objects, information and people on Google Maps. This raises the issue of the banalization of masking as a process. For example, to hide military bases, Google’s designers use the “crystallization” filter, known and available to everyone because it is a default filter in every image-processing program. So the act of masking doesn’t appear as an act that could raise political and ethical questions, but as one click in Photoshop.25
Those preconditions, especially the last one, made me think that something more dangerous than the gamification of war can happen, namely the desktopization of war. (It has already arrived on the level of commodity computing hardware and familiar consumer operating systems.) It can happen when experience designers deliver interfaces to pilots that complete the narrative of getting things done on your personal computer; that deliver the feeling of being a user of a personal computer and not a soldier, by merging classics of direct manipulation with real-time traumatic imagery, by substituting the gun sight with a marquee selection tool, by “erasing” and “scrolling” people, by “crystallizing” corpses or replacing them with “broken image” symbols, by turning on the screen saver when the mission is complete.
We created these drafts in the hope of preventing others from thinking in this direction.
Augmented Reality shouldn’t become Virtual Reality. On a technical and conceptual level, interaction designers usually follow this rule, but when it comes to gun sights it must become an ethical issue instead.
Experience designers should not provide experiences for gun sights. There should be no user illusion and no illusion of being a user created for military operations. The desktopization of war shouldn’t happen. Let’s use clear words to describe the roles we take and the systems we bring to action:
| War | UX | HCI |
|---|---|---|
| Gun | Technology | Computer |
| Gun Sight | Experience | Interface |
| Soldiers | People | Users |
* * *
I look through a lot of old (pre-RUE) home pages every day, and I see quite a few that were made to release stress, to share with cyberspace what the authors couldn’t share with anybody else; sometimes it is noted that they were created on the direct advice of a psychotherapist. Pages made by people with all kinds of different backgrounds, veterans among them. I don’t have any statistics on whether making a home page ever helped anybody get rid of combat stress, but I can’t stop thinking of drone operators coming back home in the evening, looking for peeman.gif in collections of free graphics, and making a home page.
Of course they should find more current icons to pee on. And by all means tell their story, share their experiences and link to the pages of other soldiers.
Olia Lialina, January 2015
Jeremy Allaire: Macromedia Flash MX—A next-generation rich client, macromedia whitepaper, 2002 ↩
Tim O’Reilly: What is Web 2.0, O’Reilly, p.5, 2005 ↩
A decade later, when “the cloud” has become the symbol of power and the desktop metaphor is getting obsolete, this comparison looks almost funny. As this article seeks to demonstrate, the power of the desktop should not be underestimated. ↩
Wiktionary explains the different possible meanings of “experience” in the English language. ↩
Aarron Walter: Designing for Emotion, A Book Apart, 2011 ↩
… as opposed to Chris Anderson and Michael Wolff: The Web Is Dead. Long Live the Internet, WIRED, 2010-08-17 ↩
The first comeback was around five years ago when designers started to pay attention to elements of the early web: animated GIFs, under construction signs. See Olia Lialina, Geocities as Style and Marketing Gimmick @divshot, in: One Terabyte of Kilobyte Age, 2013-04-04 ↩
Weebly, Inc: “Introducing Weebly for iPad”, Weebly newsletter, received by author on 2014-11-16 ↩
Peter Merholz: Peter in Conversation with Don Norman About UX & Innovation, Adaptive Path, 2007 ↩
Olia Lialina, Turing Complete User, 2012 ↩
Alan Kay: “User Interface: A Personal View”, in: The Art of Human-Computer Interface Design, Brenda Laurel, S. Joy Mountford (eds.), Addison-Wesley, 1990, pp. 191-207. ↩
Janet Murray: Hamlet on the Holodeck, The Free Press, 1997. In later editions of the book and her recent writings she refers to this concept as scripting the interactor. ↩
Donald A. Norman (2014). Commentary on: Hassenzahl, Marc (2014): User Experience and Experience Design. In: Soegaard, Mads and Dam, Rikke Friis (eds.). The Encyclopedia of Human-Computer Interaction, 2nd Ed.. Aarhus, Denmark: The Interaction Design Foundation. ↩
ibid. ↩
The Nielsen Norman Group’s definition of User Experience dates back to December 1998 ↩
Sherry Turkle: Alone Together, Basic Books, 2011, p.133 ↩
Butler Lampson & Ed Taft: Alto User’s Handbook, Xerox Corporation, 1979, p.36 ↩
See Olia Lialina: Users Imagined, appendix to: Turing Complete User, 2012 ↩
Lance A. Miller & John C. Thomas: Behavioral Issues in the Use of Interactive Systems, 1976 ↩
Fredrik Kempe, David Kreuger, Hamed “K-One” Pirouzpanah: UNDO, as performed by Sanna Nielsen, 2014-05-10 ↩
Joseph Weizenbaum: From Judgement to Calculation, 1976, in: Noah Wardrip-Fruin and Nick Montfort (eds.), The New Media Reader, MIT Press, 2003, p.370 ↩
PBS’ Frontline series covered a few projects:
Interview with Albert Rizzo, leader of Virtual Reality Exposure Therapy at the USC Institute for Creative Technologies since 2005.
Report on a Sergeant going through VR-assisted PTSD therapy ↩
Since 2011, Nintendo’s handheld video game system 3DS has featured a built-in game called “Face Raiders” that mixes the live camera, user photos and 3D graphics. ↩
See:
Tom McGhie: Boffins design ‘modest’ naked airport scan, This is Money, 2010
Manchester Airport press release on body scanners, unknown date ↩