The feeling of vertigo is often misrepresented as a kind of dizziness, a rattling of the brain into an uncontrolled spiral of confusion. In fact it's more like a buffering issue. The vestibular hardware in the ears can't keep up with the data streaming in from the eyes, and our biological gyroscope is thrown out of whack, making us dizzy.
Over their 20-year history, immersive video experiences have often suffered from an imbalance in sensory inputs, with the fidelity delivered to one sense outstripping what reaches another and provoking a feeling much like vertigo. Judging from the 16 works presented in the exhibit Sensory Stories: An Exhibition of New Narrative Experiences, however, the technology finally seems able to achieve something like equilibrium.
“Birdly,” a bird-flight simulator that allows a user to soar over Manhattan. Photo by Thanassi Karageorgiou/Museum of the Moving Image.
Conceived and organized by the Future of StoryTelling (FoST), Sensory Stories reveals how artists are using innovative digital techniques to change the way audiences experience storytelling. The exhibit is on view at the Museum of the Moving Image (MOMI) in New York through July 26.
The key to a successful immersive experience is to create an equilibrium among the senses activated by the experience, explains Michel Zai, CEO of Somniacs, creators of the multi-sensory, motion-centric immersive VR experience known as “Birdly.” One of the first pieces visitors encounter in the exhibit at MOMI, “Birdly” explores the experience of a bird in flight. The participant commands the installation with arms and hands, which correspond to the wings and primary feathers of the bird, a red kite. “Birdly” gives users the chance to fly across the landscape, seeing the scenery, hearing the wind blowing and experiencing the movement in three-dimensional space via a robotic moving platform simulating flight. To enhance the experience, the simulator regulates the headwind from a fan mounted in front of the user according to the speed of the bird. The virtual landscape is visualized through a head-mounted display.
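The speed-regulated headwind is the kind of sensory coupling Zai describes. As a minimal sketch of the idea only — Somniacs' actual control loop is not public, and the function name and thresholds here are invented for illustration — the fan's power can simply track the simulated airspeed between an idle point and a saturation point:

```javascript
// Hypothetical sketch: map simulated airspeed (m/s) to fan power (0..1).
// minSpeed and maxSpeed are assumed tuning values, not Somniacs' real ones.
function fanPower(airspeed, minSpeed = 2, maxSpeed = 20) {
  // Below minSpeed the fan idles; above maxSpeed it saturates at full power.
  const clamped = Math.min(Math.max(airspeed, minSpeed), maxSpeed);
  return (clamped - minSpeed) / (maxSpeed - minSpeed);
}
```

A gliding bird at moderate speed would feel a half-strength breeze, while a dive would push the fan to full power, keeping the felt wind in equilibrium with what the eyes see.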
“It is in our case much easier to feel good. The movement provides direct feedback to your body,” Zai says of “Birdly,” which was created by his colleague Max Rheiner at the Zurich University of the Arts in Switzerland, together with Thomas Tobler and Fabian Troxler. “The tiny adjustments the robotics make in response to your movement on the platform, combining information from your wings and your head position—it really gives people a special experience. That’s what our work is all about. In everything we do, we try to make the experience as perfect as we can.”
Installation view of some of Sensory Stories’ games. Photo by Thanassi Karageorgiou/Museum of the Moving Image.
That type of multi-sensory feedback is what FoST was aiming for with the various virtual reality experiences, interactive films, participatory installations and touch-responsive interfaces that incorporate full-body immersion and interaction via vision, hearing, touch and smell throughout the exhibit. Producer Yelena Rachitsky explains, “Using a majority of your physical senses gives you the experience of being part of the work, a full body understanding rather than an isolated intellectual one.”
This is the intention of the artists as well. “We want to fill the space between the passive viewing of movies and the interactiveness of video games,” says Alon Benari, vice president of creative and innovation at Interlude. Interlude is the developer of Treehouse, a web-based self-serve interactive video authoring suite.
The digital media company collaborated with the celebrated directing duo known as the Daniels on the Sensory Stories piece “Possibilia,” which was produced by Xbox Entertainment Studios and Prettybird Pictures. It is the collaborators’ second interactive film built on the ever-evolving Treehouse technology, and it runs on an interactive touchscreen at MOMI.
Inspired by the storytelling style of video games, but with a result more cinematic than their typical stop-select-start-again progression, “Possibilia” lets viewers flip back and forth between imagery as seamlessly as flipping channels on television, but with the dialogue remaining smooth and consistent throughout.
It’s a new kind of video viewing, glimpsed previously in Interlude’s Bob Dylan video (http://video.bobdylan.com) and now expanded to 16 different storylines coalescing around the dissolution of a relationship. In “Possibilia,” the user directs the progression of the fight between a boyfriend, played by Alex Karpovsky of Girls fame, and a girlfriend, portrayed by Zoe Jarman of The Mindy Project.
Screen still from the Daniels’ “Possibilia” (2014), an interactive short film written and directed by Daniel Kwan and Daniel Scheinert. Shown: Alex Karpovsky and Zoe Jarman. Image courtesy of Future of StoryTelling.
The dialogue for each scenario plays out along an identical timeline across various sequences of events, with the mood, tone and actions of the actors determined by the whim of the viewer. Want the couple to scream at each other? Click that vignette at the bottom of the screen. Want them to remain mostly inert on a sofa and talk it through? Click that option.
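The trick that makes this possible is the shared timeline: every variant of the scene advances on one master clock, so switching changes only which track is shown, never the position in the story. As a rough sketch of that idea — the class and method names here are invented for illustration and are not Interlude's actual Treehouse API — the logic might look like this:

```javascript
// Hypothetical sketch of the parallel-timeline idea behind interactive video.
// All variants share one master clock; switching swaps the visible track
// without ever moving the playhead, so dialogue stays continuous.
class ParallelTimeline {
  constructor(trackIds) {
    this.tracks = trackIds;
    this.active = trackIds[0];
    this.time = 0; // one master clock shared by every variant
  }
  tick(dt) {
    this.time += dt; // all parallel tracks advance together
  }
  switchTo(trackId) {
    if (!this.tracks.includes(trackId)) throw new Error('unknown track');
    this.active = trackId; // new variant picks up at the same story moment
  }
}
```

Because the clock never resets, clicking from the screaming vignette to the sofa vignette lands mid-sentence in the same line of dialogue, which is what makes the switch feel like flipping channels rather than restarting a scene.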
Having seen viewers interact with the film most recently at the Sensory Stories exhibit, and at several other event-based presentations since its Sundance 2014 launch (where it premiered simultaneously as an online app), Benari describes “Possibilia” as Interlude’s “dream come true.” He says, “People are interacting with it like a film and like a game. They are leaning forward, interacting, but also laughing, crying, engaging with the film.”
Five years ago, Interlude was just a video company launched by musicians with a dream of providing seamless interaction between parallel videos with audio consistent throughout. Now they’re developing a platform that is attracting quite a lot of attention in a world where jobs depend on holding the attention of online viewers. “People watch videos an average of almost 2.5 times, so you start to see a lot of value for advertisers,” Benari says. “Our viewers are not just opening the video and letting it play and then opening Facebook in another tab—they’re watching the video and engaging.”
“Way to Go” creators (from left) Caroline Robert, Philippe Lambert, Édouard Lanctôt-Benoit and Vincent Morisset. Photo by Jonathan Brisebois.
That is an enticing proposition for video artists looking to stand out in a crowd. Interlude is a striking example of how the future of storytelling depends on artists’ visions being made manifest by creative technologists. “Like in every platform, at the end of the day, the content needs to be good, and the idea for interactivity has to be good, otherwise it can be like a gimmick,” Benari says. “What’s really cool about ‘Possibilia’ is they found a way to have technology enhance the story, so the technology behind the interaction is just another wrench in the toolkit of the directors that allows them to tell different types of stories.”
But, Benari adds, some stories must be told linearly, and “we’re not trying to reinvent linear storytelling. ‘Possibilia’ is a story that cannot be told linearly—you need to see all these parallel worlds at once. If it was a straightforward film, it wouldn’t work.”
Linear or nonlinear, artists who work in video look to engage audiences by the most contemporary means possible, even while keeping a firm grasp on the ancient elements of storytelling. Many of the pieces in the Sensory Stories exhibit demonstrate an exciting new angle on original artistic expression: the work is no longer just the finished video, since much of the operating software, and many of the drivers, are custom-built specifically to realize that one particular idea.
“To me, programming opens up so many possibilities for creation. It’s no wonder that artists want to use programming, or programmers want to create artistic stuff,” observes Édouard Lanctôt-Benoit, one of the collaborators on the “Way to Go” interactive film with Vincent Morisset, Philippe Lambert, Caroline Robert and the studio AATOAA. “The fact that the computers are getting stronger and technologies are getting better is opening up possibilities on the creator side. A lot of things are accessible now that weren’t before. It’s very inspiring because it’s a blank canvas.”
Lanctôt-Benoit designed and programmed the live effects for “Way to Go,” a role that might once have been considered more tech than art, but that’s no longer the case. Everyone on the project was involved in developing the technology, capturing the 360-degree video, and controlling and soundtracking it. The piece, which premiered at Sundance 2015, is a short (or indefinitely long) video journey through the woods from the perspective of a cute, hand-drawn animated character. Users control the speed at which their hand-drawn friend wanders, runs or flies through the woods, and they also control the vantage point. If users happen to turn the character around, they will see one of the film’s creators capturing the video for that particular viewpoint with a six-camera GoPro rig.
The film originated as a web project and is offered online as a fully interactive web application, but at some point in the process the idea arose for a virtual reality version built for Oculus Rift. That version tours to museum exhibits and will eventually be incorporated into the browser-based application.
Screen still from “Way to Go.”
“The technology for 360 video is really booming now—everyone has their own process and way to do things,” Lanctôt-Benoit notes. “But when we started, it was still very vague, and we built our own process.”
A big part of the prototype phase was figuring out how to shoot in the 360-degree context. Working in the woods, with trees very close to the camera on either side, it was impossible to do the usual two-camera immersive capture, so AATOAA developed its own software to integrate imagery from the six-camera GoPro rig.
“For this project, not just for the filming but the whole project, a big part of it was building tools,” Lanctôt-Benoit says. (He happened to build all those tools, but he’s sweetly modest about it.)
All of the software is custom, including the video game engine, the generative music coding and 3D animation features. Often working from far-flung locales, Lanctôt-Benoit built tools for the film’s director, sound designer and graphic designer/animator so they could work directly in the web application. “Too often programming becomes the bottleneck,” he quips. “So I gave them the ability to make modifications themselves.”
The musical score is generated based on how fast the user is moving, Lanctôt-Benoit explains, “but it’s sort of subtle and you don’t think about it because we’re playing notes dynamically, not just replaying music, so we can change the pace and add some randomness.”
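That speed-to-music mapping can be sketched very simply. AATOAA's actual engine is custom and unpublished, so the scale, the timing formula, and the function below are purely illustrative assumptions: faster movement shortens the gap between notes, and each pitch is drawn from a fixed scale with a little randomness.

```javascript
// Hypothetical sketch of speed-driven generative music (not AATOAA's code).
// Notes are chosen dynamically rather than replayed, so pace and pitch
// can respond to how fast the user is moving.
const SCALE = [60, 62, 64, 67, 69]; // MIDI note numbers, a pentatonic scale

function nextNote(speed, rng = Math.random) {
  // Faster movement -> shorter interval between notes, clamped at 0.2 s.
  const interval = Math.max(0.2, 1.0 - 0.08 * speed);
  // A touch of randomness keeps the phrase from sounding looped.
  const pitch = SCALE[Math.floor(rng() * SCALE.length)];
  return { pitch, interval };
}
```

Because each note is generated at play time rather than read from a recording, the score can stretch, compress, or vary indefinitely without the viewer ever hearing a seam.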
He adds that the sound mix on “Way to Go” is probably among the most complex built for the web to date: hundreds of music and ambient-sound MP3 files load as a viewer watches the project. “For each MP3 there’s an envelope and reverb, so it plays differently from different dimensions.”
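An envelope in this sense is just a gain curve shaping each sound over time before any reverb is applied. As a minimal sketch of the concept, with made-up attack, hold, and release values rather than anything from the actual “Way to Go” mix:

```javascript
// Hypothetical sketch of a per-sound gain envelope (illustrative values only):
// a linear fade-in, a plateau at full level, then a linear fade-out.
function envelopeGain(t, attack = 0.5, hold = 2.0, release = 1.0) {
  if (t < 0) return 0;
  if (t < attack) return t / attack;             // fade in
  if (t < attack + hold) return 1;               // full level
  const out = 1 - (t - attack - hold) / release; // fade out
  return Math.max(0, out);
}
```

Evaluating a curve like this for hundreds of simultaneously loaded files, each with its own reverb, is what makes a browser-based mix of this scale unusual.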
Aside from the charming nature of an immersive walk through the woods, the piece is engaging on a root level, rewarding the user for each movement and choice. “If you look up, you’re flying; if you run and click, there’s always something happening; or if you stop and click, you can see details, macro shots of the environment,” Lanctôt-Benoit explains. “Everyone has a different experience. That’s the fun part of the museum exhibitions. We see this wide range of people, some really young, some really old, and they all have a different way of interacting with the piece, but it’s always fun.”