Wowsers and sainted oily fish, this project fills me with excitement:
A digitally-augmented chapbook, printed with barcode-like images that a computer’s camera can read to trigger text animations on the screen. I want one! I would love to see this in novels and textbooks, too: extra info or supporting bits of narrative available when you scan your book. I do think it’s an odd interaction to read an entire book in front of your laptop, as if you’re reading to a child, but really just reading to your (computer) self.
Although I’ve only seen the preview video, I can’t help but wince a bit at some of the text animations, which seem more driven by what the code can perform than what is right for the text. Just a hunch. Could be wrong. On their website you can also download a page of the chapbook and test it for yourself.
Lit snob? Gamer geek? Perhaps both?
Rumor has it that the developers are working on a version of Company for release at the end of time.
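For the curious: the camera-to-animation link can be imagined as a tiny decoder. The sketch below is purely illustrative (the 3x3 marker layout, ids, and animation names are all invented, and real systems detect the marker in the camera frame first); it just shows how a grid of dark/light cells might be packed into an id that selects a text animation.

```python
# Toy sketch: decode a 3x3 "barcode-like" marker (already extracted from a
# camera frame as a grid of dark/light cells) into an animation trigger.
# Marker layout, ids, and animation names here are invented for illustration.

ANIMATIONS = {
    0b101010101: "scatter-letters",
    0b111000111: "rainfall-text",
}

def decode_marker(cells):
    """Pack a 3x3 grid of booleans (dark cell = True) into a 9-bit id."""
    bits = 0
    for row in cells:
        for dark in row:
            bits = (bits << 1) | int(dark)
    return bits

def trigger(cells):
    """Return the animation name for a marker, or None if unrecognized."""
    return ANIMATIONS.get(decode_marker(cells))
```

So a checkerboard marker would fire the (hypothetical) "scatter-letters" animation, and an unknown pattern would do nothing.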
At the Consumer Electronics Show a few weeks ago, at least 100 new tablets were revealed. A minority of these were e-readers, including a “Multimedia Novel” by Pandigital that comes preloaded with Barnes and Noble’s Nookbook store. Exciting name, but not such an exciting device other than its under-$300 price tag. With very few exceptions, the texts one can read on these devices are digital versions of existing books and magazines. I’d like to share with you one of these exceptions, the iPad/iPhone/iPod Touch app What They Speak When They Speak to Me by Jason Lewis.
Screenshot of iPhone version
The digital page opens on a jumble of translucent white letters on a black background, slowly jostling one another. I shook my iPad, wondering if the letters might bunch together and form words. Nada. I dragged a finger across the screen, leaving a thin white line on which letters started to gather like pigeons on a telephone wire. So one reads by creating a surface for the phrases, line by line. It’s a simple and satisfying interaction, just a finger swipe. Reminds me of the act of turning a page, which of course also makes text “magically appear” on the next sheet of paper. But there is something more playful and exciting about watching the letters come together before your eyes; I caught myself starting to predict what the words were about to speak to me. I was also reminded of a beautiful talk by Virginia Woolf in which she asks us to “Look once more at the dictionary. There beyond a doubt lie plays more splendid than Antony and Cleopatra; poems lovelier than the Ode to a Nightingale; novels beside which Pride and Prejudice or David Copperfield are the crude bunglings of amateurs. It is only a question of finding the right words and putting them in the right order.”
But why is it important to make the reader work for the words, tease them out of a chaos of letters? Why not just write the darn poem out flat? Is it a desire to expand reading to embrace seeing and touching? Interestingly, this work is listed under the Entertainment category rather than Books on the iTunes App Store.
What They Speak is only 99 cents at the iTunes App Store. Support your local digital media writer! As tablets become more and more visible, I’m hopeful that bit by bit (har har) we’ll see more fine writing produced specifically for this new reading surface. Fellow readers, have any of you published for this platform yet? What do you think of the experience of reading on a tablet?
Amazon recently announced a new, shorter format for its e-reader called Kindle Singles (cue mental image of floppy yellow-orange cheese square). The works to be published will range from 10,000 to 30,000 words. Part of Amazon’s marketing spiel includes a “call to serious writers, thinkers, scientists, business leaders, historians, politicians and publishers” to submit their work. New authors, new revenue for Ama$on. What do you think, writers (“serious” writers, that is)? Will this be an interesting option for you?
I get a little thrill upon discovering a new place where literature can appear. From eBooks on the iPad to URLs (see Mai Ueda’s Domain Poems) to… browser plugins. Tumbarumba inserts fragments of a story into the text of regular HTML pages. Clicking the alien fragment reveals another sentence, and another, and finally the whole darn thing. Like the Shizzolator of 2003, but quieter, more subtle. Even insidious.
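The reveal-one-sentence-per-click mechanic can be modeled in a few lines. This is a toy version of the idea, not Tumbarumba’s actual code (which runs as a browser plugin); the class name and splicing rule are my own invention.

```python
# Toy model of the Tumbarumba idea (invented code, not the plugin itself):
# splice the first sentence of a story into a host paragraph, then reveal
# one more story sentence per "click".

class StorySplice:
    def __init__(self, host_paragraph, story_sentences):
        self.host = host_paragraph
        self.story = story_sentences
        self.revealed = 1  # one alien fragment appears right away

    def page(self):
        """Render the host paragraph with the revealed fragment spliced in
        after its first sentence."""
        fragment = " ".join(self.story[:self.revealed])
        head, _, tail = self.host.partition(". ")
        return f"{head}. {fragment} {tail}"

    def click(self):
        """Reveal the next sentence of the hidden story, if any remain."""
        self.revealed = min(self.revealed + 1, len(self.story))
```

Each click quietly lengthens the alien passage until the whole darn thing sits inside the ordinary page.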
Guy Ben Ner’s excellent video Stealing Beauty (2007, about 18 minutes) was featured in an architecture talk that my husband attended. When he came home I was eager to hear about what invited speaker Vito Acconci had to say. The reply? “Forget Acconci, watch this!” In the video the narrative imitates a typical American sitcom, but the artsy twist here is that it’s shot in various IKEA stores, without staff permission. Complete with punchy theme music, outfits that coordinate with the showroom sets, and dialogue that centers on the value of money and family, the video is a hilarious performance piece. Rather than commercial breaks to cut up the action, scenes take place in different showroom sets, sometimes in an entirely different store. The transition between spaces reminds one of Borges, the family chatter going on in kitchens folded into other kitchens. When a certain action (dishwashing, showering, watching a porn flick) can’t be performed in the store, sound editing fills in. IKEA showroom set as narrative constraint–brilliant! Watching the family perform their lives just like the showrooms perform perfect and desirable furniture sets produces that delicious familiar/strange feeling, and leaves you wanting to run to your local IKEA store with a video camera and a script. Watch Stealing Beauty at UBUWEB.
Family values among the IKEA price tags
I’ve been meaning to write a series of posts about video poetry, but it’s proved tricky to find the right examples. Onward with googling and link hopping. I shall prevail. In the meantime, I came across a lovely project using augmented reality (AR) to superimpose animations onto a book. Pretty:
The book remains a book, but if you hold up the secret decoder ring, another layer is revealed. I’d like to see this technique extend to showing alternate or previous versions of a text, to collaborations between writers and filmmakers, to special re-releases or homages to classic texts. Buy a book along with your choice of plugin (or no plugin). Is there a work you’d like to see reinterpreted with AR? I vote for Raymond Queneau’s Hundred Thousand Billion Poems.
File under tools for writers: Swype has come out with a gesture-based text input for touchscreen devices. Words are written by tracing consecutive letters on the keyboard without lifting your finger. Someone has even used it to break the Guinness World Record for fastest texting. When I tried it on my friend’s Android mobile, it didn’t simply predict words appropriate for business exchange (e.g. “See you at the meeting”) but was 100% accurate with more poetic text (I tried “Curling lip exchange”). You have to see it to believe it:
Reminds me of Picasso’s drawings with light:
How will what we write change when we can record words as fast as we can think them? Will we more accurately capture a flash of inspiration? Unlike putting pen to paper, typing on a keyboard (whether typewriter or laptop) involves virtually the same physical motion for writing “a” as for any other letter. Could interfaces like Swype bring us a more intimate relationship with the letters we write? Do we even care?
My one criticism is that Swype should have gone even further–why stick with the QWERTY keyboard layout? Why not have custom keyboards for individual users (recognizable by fingerprint)? Since Swype can theoretically work on any touchscreen of any size, not just the ones on tiny mobile devices, we should expect such interfaces to become more common. The speed and motion of recording ideas in words is changing, and I hope these changes will also bring some interesting (and even unintended) effects on the literary quality of what we write.
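A crude way to picture what such an interface has to do: the finger’s path yields the sequence of keys it passed over, and candidate words are those whose letters occur, in order, along that path. This is my own simplification, not Swype’s actual (and surely far more sophisticated) algorithm.

```python
# Crude sketch of gesture-keyboard word matching (not Swype's real method):
# a word matches a swiped key path if its letters appear in order along the
# path, with first and last letters anchored to the path's endpoints.

def matches_path(word, path):
    if not word or word[0] != path[0] or word[-1] != path[-1]:
        return False
    keys = iter(path)
    # "letter in keys" consumes the iterator, so this checks that the
    # word's letters form an in-order subsequence of the path.
    return all(letter in keys for letter in word)

def candidates(path, dictionary):
    """All dictionary words consistent with the swiped path."""
    return [w for w in dictionary if matches_path(w, path)]
```

Even this toy version shows why repeated letters (the two o’s in “cool”) and ranking among multiple matches are where the real engineering lives.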
We know our ABCs. The words we type every day combine and recombine letters in hundreds of thousands of patterns. We play with the sounds letters make, stringing together consonant and dissonant phrases. But rarely do we play with letters just for the heck of it, for the sheer enjoyment of uttering RRRRRRRRRR! and eeeeeeeee! Should you feel the urge to play with your letters rather than arrange them in dutiful semantic chunks, look no further than abcdefghijklmnopqrstuvwxyz, musician-poet Jörg Piringer’s new app for the iPhone and iPod Touch. According to the author, this sound poetry toy will let you “Create and control tiny sound-creatures in the shape of letters that react to gravity or each other and generate rhythms and soundscapes.” Take a peek at the demo video:
The real fun is in combining letter sounds and behaviors from different modes, from the rising pitches and visual trails of “birds” mode to the small pulsing beats of “crickets” mode. With headphones on, you can clearly hear sounds pass from one ear to the other as a letter trots from one side of the screen to the other.
Piringer focused on the level of the letter rather than the word, which frees up the work from language barriers and earns it replay value. If he had used colors and shapes rather than letters the app would have been just as enjoyable. Using plain black letters as the interactive unit, however, shifts the work from pure entertainment to something that is also literary. It’s as if seeing and hearing letters triggers tiny word receptors in our brains, firing off all sorts of associations and semantic stirrings. Another good interaction design choice that Piringer made is the ability to start right back where you left off without having to “save”–something we take for granted with a bound paper volume but that is too often missing in digital literature. Taking usability cues from software and game design is essential to shaping a good digital reading experience.
More about the work and the author at http://joerg.piringer.net/abcdefg.
1. It’s been over a year since I first saw Roderick Coover’s video The Theory of Time Here (there’s a teeny video preview available; click the teeny blinking camcorder icon), created in collaboration with writer Deb Unferth. I still can’t get it out of my head. Footage of London traffic and passersby plays harmony to computer voiceover melody in this six-and-a-half-minute jewel of a piece. Cars and people are shot in a way that makes them seem like words and phrases traveling back and forth across the screen, a kind of spoken-word Ballet Mécanique. If the repetitive, speaking-clock voice (“At the tone / everyone went / was / was already / now is not / everyone was not”) were perfectly timed with the frequent visual cuts, the effect would have been trite. Instead, the slightly-off timing gives the voice a force, as if it is controlling the minute workings of the city. Unlike most works of writing in digital media, this one can be purchased: a DVD is available from the Video Data Bank.
Stills: Roderick Coover
2. This evening I attended a tech networking event, full of freelancers and start-up fever. One of the presenters demonstrated speakertext, which pairs YouTube videos with text transcriptions. The transcription is done by the human drones of Amazon.com’s Mechanical Turk service. With speakertext, you can search through the video via text, pull out a quote and use it to link to a precise moment within the stream. Flexible, mutable, and quick to travel over the Internet, text is the ultimate digital interface. The speakertext system is begging for creative use; someone has to do a writing piece with it. Get your video on and save us from copy/paste utility.
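The quote-to-moment trick is easy to sketch. The function below is invented for illustration (it is not speakertext’s API); it assumes a transcript broken into timed segments, and builds an ordinary YouTube watch URL whose `t` parameter jumps playback to the matching segment.

```python
# Sketch of the speakertext idea (invented code, not their system): given a
# timed transcript, find the segment containing a quote and build a link
# that jumps to that moment in the video.

def quote_link(video_id, transcript, quote):
    """transcript: list of (start_seconds, text) segments, in order.
    Returns a deep link to the first segment containing the quote."""
    for start, text in transcript:
        if quote.lower() in text.lower():
            return f"https://www.youtube.com/watch?v={video_id}&t={int(start)}s"
    return None
```

Pull a phrase out of the transcript, and the text itself becomes the index into the video stream.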