Digital Acrobatics

Unique combinations of CG characters with pieces of live-action stunt performances were used to build many of the action shots in Spider-Man 3.

At Sony Pictures Imageworks in Culver City, Calif., as work on Spider-Man 3 winds down, many of the key players behind the film's visual effects extravaganza (just less than 1,000 shots) are finally starting to contemplate life without the spider. Some, such as Visual Effects Supervisor Scott Stokdyk, have been involved in the franchise in one way or another for more than seven years—since planning for the first movie got underway.

In looking back at the evolution of the franchise's effects, Stokdyk and his colleagues are naturally proud of how far-reaching the influence of their work has been. As he tidied up a few loose ends on the project, Stokdyk took time to sketch out for millimeter some of the visual effects trends advanced by the latest chapter in the Spider-Man saga.

“Combining pieces of live-actor performances with CG elements in heavy digital sequences and the degree to which we worked closely with stunt people were pretty important,” Stokdyk says. “The stunt people we worked with have moved quite a bit in the last couple of years into computer-controlled wire systems—high-speed, computer-controlled winches that fly performers around. We learned how to move a camera relative to stunt people and to build extensive previsualization models, so that the information from the previz could be matched to the computerized wirework. I had never seen that before—it borders on motion-controlling people.

“That's an interesting trend I expect to see carried on—visual effects work that relies heavily on stunts to get bits and pieces of shots we can put together later. In many ways, doing an action shot completely computer-generated nowadays is easier than incorporating live-action elements. It's not new to have all synthetic shots. But what people in the visual effects community have to be intelligent about from now on is how to incorporate live-action elements into a synthetic world to give you something you can't get all-CG or all in-camera. That is something big I'm personally taking away from this movie.”

The New Goblin character and his glider were largely all-CG creations, combined with bluescreen elements of actor James Franco for action sequences.

Facial Replacement

In particular, Stokdyk is referring to the degree to which director Sam Raimi wanted Spider-Man unmasked during web-swinging episodes—sometimes forced to battle his nefarious foes in street clothes, in fact. This required an extremely sophisticated facial replacement approach for adding those real bits and pieces to a CG character engaged in heavy-duty action.

“We had to stretch our methodologies to combine stunt work with face replacement work—mixing live-action photography and CG in all sorts of different ways in a single shot,” Stokdyk says. “For instance, in one shot, we had a stunt guy spinning around on a special stunt rig over a bluescreen [standing in for Spider-Man's alter-ego, Peter Parker], and he gets kicked by a CG [Goblin character] on his [CG] glider, and we then have to transition from that kick to a CG Peter Parker, and then, from that CG Peter Parker, we have to go back to the stunt Peter Parker, slamming into a set piece. So, in the same shot, we went stunt to CG to stunt and CG background to set background to CG to set. Those are the kinds of unique combinations that we were faced with.”

Imageworks' previz team was central to this process. They would convert an animatic of such scenes into Autodesk Maya-based CG shots, and then turn over the computer models and related geometry to the visual effects team, which, under the guidance of Motion Control Supervisors John Schmidt and Nic Nicholson, would load that data into the computer system for the motion-controlled rig. This permitted the production to shoot stunt people executing various sequences against a bluescreen, and then use the same camera data and movement to film lead actors such as Tobey Maguire performing facial close-ups for the same scene.
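
To make the handoff concrete, here is a minimal sketch of the idea: sample a previz camera path once per film frame and write it out as per-frame positions that a motion-control or winch controller could ingest. The fabricated camera path, the CSV format, and all function names are illustrative assumptions, not Imageworks' actual pipeline.

```python
# A minimal sketch of the previz-to-motion-control handoff described above.
# The file format, path, and rig conventions are hypothetical; the real
# system would use its own data formats and coordinate spaces.

import csv
import math
from typing import List, Tuple

FPS = 24  # film frame rate

def sample_camera_path(duration_s: float) -> List[Tuple[float, float, float]]:
    """Stand-in for a previz camera export: one XYZ position per frame.
    Here we fake a swooping arc; in production this would come from Maya."""
    n_frames = int(duration_s * FPS)
    path = []
    for f in range(n_frames):
        t = f / FPS
        x = 10.0 * math.cos(0.5 * t)   # lateral sweep
        y = 3.0 + 2.0 * math.sin(t)    # bob up and down
        z = -4.0 * t                   # travel down the "alley"
        path.append((x, y, z))
    return path

def write_moco_track(path, filename: str) -> None:
    """Emit per-frame positions as a CSV a (hypothetical) rig controller reads."""
    with open(filename, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["frame", "x_m", "y_m", "z_m"])
        for frame, (x, y, z) in enumerate(path):
            writer.writerow([frame, f"{x:.4f}", f"{y:.4f}", f"{z:.4f}"])

if __name__ == "__main__":
    write_moco_track(sample_camera_path(duration_s=3.0), "camera_track.csv")
```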

Those facial performances were then mapped onto the moving CGI character, and as the sequences were built, the effects team then transitioned seamlessly between stunt shots, CG shots, and combo shots with a unified sense of camera perspective to sell the illusion.
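
A sketch of what scheduling such a hand-off might look like: tag frame ranges as stunt plate or CG, and ease a blend weight across each transition so the sources cross-dissolve rather than cut. The frame ranges, blend length, and function names below are hypothetical, offered only to illustrate the pattern.

```python
# Toy schedule for a shot that goes stunt -> CG -> stunt, with short eased
# crossfades at each hand-off. Purely illustrative; the show's actual
# transitions were built shot by shot in compositing.

def ease(t: float) -> float:
    """Smoothstep: 0..1 with zero slope at the ends, for gentle hand-offs."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

# (start_frame, end_frame, source)
segments = [(0, 40, "stunt"), (40, 90, "cg"), (90, 130, "stunt")]
BLEND = 6  # frames of overlap at each transition

def cg_weight(frame: int) -> float:
    """0.0 = pure stunt plate, 1.0 = pure CG for this frame."""
    w = 0.0
    for start, end, source in segments:
        if source != "cg":
            continue
        fade_in = ease((frame - (start - BLEND)) / (2.0 * BLEND))
        fade_out = 1.0 - ease((frame - (end - BLEND)) / (2.0 * BLEND))
        w = max(w, min(fade_in, fade_out))
    return w

for f in (30, 38, 40, 44, 65, 88, 92, 100):
    print(f, round(cg_weight(f), 2))
```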

“So, in some cases, we ended up with very long shots you could never photograph in real life, but with an animated character having the real actor's face,” says Spencer Cook, the film's animation supervisor. “We also tried to incorporate animation and bluescreen elements of the actors in various combinations. For instance, we filmed [actor] James Franco [as the Goblin] on his glider on a rig in front of a bluescreen, and then we would take that element and put it on a 2D card within Maya in order to move it around and enhance some of what was done on the stage.

“In other cases, we would go from the 2D element, the actual photographic element of the actor, directly into the full 3D digital character. We did lots of that kind of blending from an area where you see the actor's face, and it's clearly that actor, and then, through wipes and camera moves and things, you go to a digital character who is flying and doing all the things you could not do with rigging on a real stage. It's about mixing all that stuff within the same shot. We did that to a far greater extent than what I've seen before this project.”

One key tool behind the creation of the character Sandman was Imageworks' new, node-based lighting tool, Katana. Katana sped up the production's pipeline as various visual effects shots were added, subtracted, or altered.

Sand Issues

That theme of mixing and matching also applied to characters in other ways—particularly where two other important characters, the villains Sandman and Venom, were concerned. Both of them required rare combinations of character animation with CG effects to sell their illusions. Sandman, as his name implies, is a character made out of, and able to control, sand; while Venom is an alien “goo creature,” able to take recognizable shapes, but inherently not like anything remotely human. Thus, in his natural state, Venom had to move in ways that were both believable and not recognizable as human or animal-like.

In both cases, Imageworks had to take new steps forward in the art and science of effects—particles for Sandman and liquid for Venom—and then incorporate those effects into the character work.

Digital Effects Supervisor Ken Hahn, who joined the production following a stint developing believable fire for Ghost Rider, explains that the sand challenge, in particular, vexed artists because the character had to be subtly believable, yet not totally realistic, and at a far greater particle volume than the facility was accustomed to handling.

Part of the solution came in the form of a new rigid body solver called "SphereSim," created by Jonathan D. Cohen, used for certain scenes (particularly the crucial birth-of-Sandman sequence), along with custom fluid solvers for scenes where Sandman simply dissipates and floats away on the wind. But the best conceptual method for animating a naturally occurring substance still required extensive debate inside Imageworks.
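
Imageworks has not published SphereSim's internals, but the general idea of a sphere-based rigid sand solver can be sketched in a few lines: treat each grain as a small sphere, integrate gravity, resolve ground contact, and push overlapping grains apart. Everything below, from the constants to the naive O(n^2) overlap pass, is an illustrative toy, not the production solver.

```python
# A toy, per-step sketch of the sphere-based approach a rigid sand solver
# implies: grains as small spheres, gravity, ground collision, and pairwise
# separation. Real solvers use spatial hashing instead of the O(n^2) loop.

import random

RADIUS, GRAVITY, DT, DAMPING = 0.05, -9.8, 1.0 / 24.0, 0.4

grains = [{"p": [random.uniform(-1, 1), random.uniform(1, 3), 0.0],
           "v": [0.0, 0.0, 0.0]} for _ in range(200)]

def step(grains):
    for g in grains:
        g["v"][1] += GRAVITY * DT                 # gravity
        g["p"] = [p + v * DT for p, v in zip(g["p"], g["v"])]
        if g["p"][1] < RADIUS:                    # ground plane collision
            g["p"][1] = RADIUS
            g["v"][1] *= -DAMPING                 # inelastic bounce
    for i in range(len(grains)):                  # naive overlap resolution
        for j in range(i + 1, len(grains)):
            a, b = grains[i]["p"], grains[j]["p"]
            d = [bi - ai for ai, bi in zip(a, b)]
            dist = max(1e-6, sum(c * c for c in d) ** 0.5)
            overlap = 2 * RADIUS - dist
            if overlap > 0:                       # separate the pair equally
                push = [c / dist * overlap * 0.5 for c in d]
                grains[i]["p"] = [ai - pi for ai, pi in zip(a, push)]
                grains[j]["p"] = [bi + pi for bi, pi in zip(b, push)]

for _ in range(48):  # two seconds at 24 fps
    step(grains)
```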

“Our problem with any natural phenomenon-type effects—whether water, fire, or moving sand—is that it is computationally very expensive, and yet not very art-directable,” Hahn explains. “By the time I came onto the show, we had a couple people who had already put in well over a year's worth of work in developing some impressive tools—very good rigid body solvers, gas and fluid solvers—and they had volume rendering worked out nicely, and other clever things like getting RenderMan curves into stuff that looked like geometry or RenderMan points that would shade looking like multi-faceted surfaces. So, they had this nice framework and foundation for all these tools ready when I came in.
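
The "points that shade like multi-faceted surfaces" trick can be illustrated simply: give each point particle a stable random facet normal and apply a sharp specular term, so individual grains glint as the light or camera moves. The Blinn-Phong model and every name below are assumptions for illustration, not Imageworks' actual shader.

```python
# Sketch of per-grain glint shading: a fixed random facet normal per point,
# plus a high-exponent specular term so only a few grains sparkle per view.

import math, random

def normalize(v):
    m = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / m for c in v]

def random_facet_normal(seed: int):
    """Stable per-grain facet direction, hemisphere around +Y."""
    rng = random.Random(seed)
    return normalize([rng.uniform(-1, 1), rng.uniform(0.2, 1), rng.uniform(-1, 1)])

def glint(seed: int, light_dir, view_dir, roughness_exp: float = 80.0) -> float:
    """Blinn-Phong specular off a grain's facet; high exponent = sharp sparkle."""
    n = random_facet_normal(seed)
    h = normalize([l + v for l, v in zip(light_dir, view_dir)])  # half vector
    return max(0.0, sum(a * b for a, b in zip(n, h))) ** roughness_exp

light = normalize([0.3, 1.0, 0.2])
view = normalize([0.0, 0.5, 1.0])
sparkles = [glint(i, light, view) for i in range(10000)]
print(f"{sum(1 for s in sparkles if s > 0.5)} of {len(sparkles)} grains glint hard")
```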

“But we needed to be able to apply those within the context of the actual shot work. When the animators got hold of Sandman, they were animating him as though he was just a regular, articulated, skeleton-enveloped creature, and they weren't thinking so much in terms of moving sand for the character himself. We realized there was a disconnect there, and on the big birth-of-Sandman sequence, I sat down with all the animators and all the effects [technical directors], and we worked hard to get the animation guys to think more like effects people, and the effects guys to understand why animation was doing certain things. And also to make sure they understood that their data sets had to be done in a way that the color and lighting and shading guys would be able to render them out so that it didn't look like we were just getting tons of sparkly types of things.

“The birth of Sandman was particularly challenging because it was three minutes of film time, where all you were seeing was a digital image of a big pile of sand in a big bowl. And yet [Sam Raimi] put this entire story arc and emotional content into the sequence. So it was a fair revelation, because it became pretty clear from those initial meetings that animation wasn't considering all these factors, and that moving sand should not be so rigid and well articulated, and that the effects guys had to understand that animation had certain constraints because of what Sam Raimi was asking for. At that point, it became an artistic-driven thing where we had the natural phenomenon of sand obey the laws of physics, based on the tons of reference and study we did of sand, but not so much that we couldn't contradict it for creative reasons if Sam needed us to.”

Cook says the response to giving sand the kind of character movement that Raimi was seeking required a combination of realism and artistry. “[It largely involved] finding that balance between adding what is fantastic, exciting, or dramatic about how the sand moved, and maintaining what is familiar about sand,” he says. “We'd have it move on its own in trails and streams, in ways sand would not normally move on its own, and yet, we'd also have it spending much of its time falling or spilling exactly as you are used to seeing sand.”
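
One common way to express the "obey physics, but bend it for creative reasons" balance Hahn and Cook describe is a per-frame blend between the solver's velocity and an art-directed target velocity. The function below is a generic sketch of that pattern; the weighting scheme and names are illustrative, not the show's tools.

```python
# Blend a simulated velocity toward an art-directed one. A weight of 0 is
# pure simulation; 1 is pure direction.

from typing import List, Sequence

def directed_velocity(sim_v: Sequence[float],
                      art_v: Sequence[float],
                      art_weight: float) -> List[float]:
    """Linear blend between the solver's velocity and the directed one."""
    w = max(0.0, min(1.0, art_weight))
    return [(1.0 - w) * s + w * a for s, a in zip(sim_v, art_v)]

# Example: sand that should mostly fall naturally but drift toward screen left
falling = [0.0, -3.2, 0.0]   # from the physics step
drift = [-1.5, -1.0, 0.0]    # what the director asked for
print(directed_velocity(falling, drift, art_weight=0.25))
```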

The whole undertaking was so complex that, as Hahn says, “We had five people work on tool development for Sandman for roughly two years—that's about 10 man-years' worth of effort. We needed about 1,000 to 1,500 dual-processor [rendering machines] just to render all the sand from that opening shot of [the birth of Sandman]—about 2,700 frames in the actual movie. There were maybe 40 or 50 people who touched that shot in one form or another, so even with the fast tools we have today, it's still a complex problem to do a character like that.”

One of the central weapons in Imageworks' battle to handle the massive sand shots and to render many of the virtual buildings throughout New York was the company's new, node-based lighting tool, dubbed "Katana." Katana sped up the production's pipeline as various visual effects shots were added, subtracted, or frequently altered to suit Raimi's wishes.

The risk, however, was that Katana was brand-new when the production got underway. In fact, Spider-Man 3 and Surf's Up were the first two movies at Imageworks to use the tool, and they did so simultaneously.

“We decided at the beginning of Spider-Man 3 that our previous tool was no longer going to cut it—it was too slow and cumbersome, and artists took too long to turn changes around with it,” says Peter Nofz, the project's other digital effects supervisor, alongside Hahn. “We really needed something that was more intuitive and node-based, much like what happened earlier with compositing tools—when they became node-based, everything got faster. And that's certainly true here, especially since all of the compositing people already understood the logic of node-based systems—anyone who worked on Maya or Houdini, for instance. It was a big step forward for a show like this. If artist ‘A’ came up with a great lighting setup for buildings and characters, artist ‘B’ could immediately copy that and start working.”
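
The speed-up Nofz describes falls out of the data model: a lighting setup in a node-based package is just a graph, so reusing artist A's work means deep-copying a subgraph and retuning a few parameters. The toy classes below illustrate that pattern generically; they are not Katana's actual node API.

```python
# A lighting setup as a graph of nodes, where one artist's rig can be
# duplicated wholesale and tweaked. Node kinds and parameters are invented.

import copy
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    kind: str                                # e.g. "light", "merge"
    params: Dict[str, float] = field(default_factory=dict)
    inputs: List["Node"] = field(default_factory=list)

# Artist A builds a key/fill rig feeding a merge node
key = Node("light", {"intensity": 1.8, "rotate_y": 35.0})
fill = Node("light", {"intensity": 0.4, "rotate_y": -60.0})
rig_a = Node("merge", inputs=[key, fill])

# Artist B copies the whole setup and only retunes what differs
rig_b = copy.deepcopy(rig_a)
rig_b.inputs[0].params["intensity"] = 2.2    # hotter key for a night shot

print(rig_a.inputs[0].params, rig_b.inputs[0].params)
```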

This approach was particularly useful for the New York setting. “[The movie] got really close to the buildings this time, especially in the alley-chase scene with Spider-Man and the Goblin—what I call the ‘never-ending alley,’” Nofz says. “We also went higher, up near the tops of skyscrapers.”

Imageworks created the liquid goo shots for the villain Venom bit by bit, using basic principles of character animation to indicate the creature's intelligence and intentions, while maintaining the unique environmental nature of the slime.

Grappling with Goo

The other major character animation challenge the Imageworks team grappled with was the gooeyness of Venom. Cook says early tests for the creature were very bug-like, so Raimi consistently demanded changes to make the creature unlike anything viewers might relate to.

The first part of solving Venom, therefore, revolved around an extended testing phase that essentially forced the look—and the tools necessary to create that look—into existence.

“The goo was alien, and alive, not like a real-world component like sand at all,” Stokdyk says. “We did a lot of animation tests for it, and the mechanical effects guys even made us jars of thick goo to show us how it moved, but it all felt passive and unintelligent and not that scary—like those old [The] Blob movies. Finally, we got one sculpture that we all responded to that sort of looked like a chicken leg covered in black tar, but with this gooeyness to it and structure at the same time. That gave [Spencer Cook] ideas about how to animate it. We then shot a piece of film with Spider-Man sleeping with his arm out, and the goo pouncing on the arm—sort of an enveloping move more than an attacking move, and that's when we got the liquid tendril sort of movement we all liked. We used that shot in the movie and referred back to that test as we built the CG creation.”

Cook says Imageworks then built the goo shots bit by bit, using basic principles of character animation to indicate the creature's intelligence and intentions, while maintaining the unique environmental nature of the slime.

“The [Venom] animation rig purposefully did not have any kind of structure to it—it wasn't a traditional animation rig,” Cook says. “Basically, we did it shot by shot. As the thing moved around, it would extend an arm when it needed it, but when it was done, that arm would disappear. And then another would come out, in another direction, and so on. More branches would keep flowing out of that arm. We tried hard to keep it amorphous and avoid a specific shape.

“But Venom still needed personality. And, like any other character, that all boils down to body language—even if the thing doesn't have the kind of body we're all familiar with. Basically, you still need to vary the motion and speed. That can convey a lot. If the thing is moving fast and skittery, then that makes it more aggressive. If it has a straight line of movement, directly at something, that makes it more aggressive. If it moves slowly, pauses, and changes direction a lot, then it is more hesitant. So we mostly did it with speed of the motion.”
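
Cook's point, that personality for an amorphous creature reduces to motion qualities, can be captured as a mapping from a mood label to speed, path straightness, and hesitation frequency. The parameter names and values below are invented purely for illustration.

```python
# Map a mood to motion qualities that could drive a procedural tendril
# target. All numbers are illustrative, not production values.

from dataclasses import dataclass

@dataclass
class MotionStyle:
    speed: float         # units/sec toward the target
    straightness: float  # 0 = wandering, 1 = beeline
    pause_chance: float  # probability of hesitating each second

def style_for(mood: str) -> MotionStyle:
    if mood == "aggressive":
        # fast, skittery, straight at the target
        return MotionStyle(speed=6.0, straightness=0.95, pause_chance=0.02)
    if mood == "hesitant":
        # slow, meandering, frequent pauses and direction changes
        return MotionStyle(speed=1.2, straightness=0.3, pause_chance=0.4)
    return MotionStyle(speed=2.5, straightness=0.6, pause_chance=0.1)

print(style_for("aggressive"))
print(style_for("hesitant"))
```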

Classic Skill

All of these effects were important—new steps forward—but the basic philosophy behind the film's effects was traditional for the Spider-Man franchise, the one Raimi and Visual Effects Supervisor John Dykstra outlined for the first movie several years ago.

Even the original Spider-Man character rig, built by character animation setup artist Koji Morihiro for the first movie, remained in use through the third film. Cook says there was simply no reason to replace the rig, when it was easy enough to evolve it for current needs.

“The original model and rig for Spider-Man himself have held up for all these movies exceptionally well,” Cook says. “Originally, we designed a particular rig for the human character on the first movie, and then built into it things we learned on Hollow Man and other stuff, and kept updating it. But it's the same rig. Koji did all the surface physiquing—all the muscle definition and the way the surface deforms when Spider-Man moves. That was done so well, we never had to update it. What we did update was to make the rig much more interactive—to make it easier for animators to access controls. On this movie, in particular, we added visible controls for the first time. We had, of course, a shelf of buttons in Maya to press for controls, but those buttons were all mapped to a specific control. So we updated the control selection by adding boxes and curves and things you could grab directly on the model itself. That is about just making it easier to access the rig itself, so that animators can spend more time on performance and less on figuring out how to do things technically.”
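
The "visible controls" idea Cook describes is standard Maya rigging practice, and a minimal version looks something like the snippet below: create a selectable NURBS circle at a joint, zero it out, and constrain the joint to it, so animators grab geometry in the viewport instead of hunting for shelf buttons. This runs only inside Maya, the joint name is hypothetical, and it is a generic sketch rather than the actual Spider-Man rig code.

```python
# Generic Maya sketch: a grabbable viewport control driving a joint.
# Requires a running Maya session; "spiderman_l_shoulder_jnt" is hypothetical.

import maya.cmds as cmds

def add_visible_control(joint_name: str, radius: float = 2.0) -> str:
    """Create a circle control at a joint and drive the joint from it."""
    ctrl = cmds.circle(name=joint_name + "_ctrl",
                       normal=(1, 0, 0), radius=radius)[0]
    # Snap the control to the joint, then zero its local transform
    cmds.delete(cmds.parentConstraint(joint_name, ctrl))
    cmds.makeIdentity(ctrl, apply=True, translate=True, rotate=True)
    # Now the joint follows whatever the animator grabs in the viewport
    cmds.orientConstraint(ctrl, joint_name)
    return ctrl

# e.g. add_visible_control("spiderman_l_shoulder_jnt")
```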

The heralded virtual camera concept, designed to visualize the free-wheeling experience of web swinging through the city with the lead character, was honed by Imageworks on the first two Spider-Man movies, used again on Superman Returns, and remained in play for the third film. The project advanced the approach with a proprietary software tool, designed by Nicholson and Schmidt, that makes it easier to program CG camera moves capable of keeping up with Spider-Man's furious web swinging. The software calculates the trajectory of a mass as it swings or leaps between two points, along with the proper height and speed for such a movement, and it is one of several tools designed for the project to speed up such tasks.
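
At its core, the calculation described here is projectile math. As a hedged sketch: given a start point, an end point, and a desired apex height, standard ballistics yields a launch velocity and flight time. The proprietary tool's actual method is not public; this is simply the textbook version of a leap between two points.

```python
# Solve the launch velocity for a ballistic leap from start to end that
# peaks apex_above_start meters above the start height. Textbook physics,
# not the production tool's algorithm.

import math

G = 9.8  # m/s^2

def leap_velocity(start, end, apex_above_start: float):
    """Return ((vx, vy, vz), flight_time) for the leap."""
    sx, sy, sz = start
    ex, ey, ez = end
    vy = math.sqrt(2.0 * G * apex_above_start)   # vertical speed at launch
    t_up = vy / G                                # time to apex
    drop = (sy + apex_above_start) - ey          # fall from apex to landing
    if drop < 0:
        raise ValueError("apex must clear the landing point")
    t_down = math.sqrt(2.0 * drop / G)
    t_total = t_up + t_down
    return ((ex - sx) / t_total, vy, (ez - sz) / t_total), t_total

vel, t = leap_velocity((0, 10, 0), (12, 6, 3), apex_above_start=2.5)
print(f"launch velocity {vel}, airborne for {t:.2f}s")
```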

But this time around, web swinging through the jungle of New York''s skyscrapers was more sophisticated, incorporating the notion of combining live-actor elements with digital characters, while simultaneously relying more heavily on virtual environments than in the past.

“We didn't do as much web swinging through the city in exactly the same storytelling way [as the other two movies], except in the one scene where [Peter Parker] has discovered his enhanced powers [wearing the black, venomized spider suit],” Stokdyk says. “But where we expanded on the [virtual camera] idea was in the alley chase sequence. That idea of unlimited freedom in the camera—to put it anywhere in relationship to the actors and characters in a CG background—that is something we took away from the first two movies. That lets the foreground actors drive what is going on, rather than having to be hooked into motion control to exactly match the bluescreen plates. Having the freedom to have plates in the foreground, compose to that, and then adding movement to the background to create that real-world sensibility—that is something we definitely expanded on for this movie.”

Check out the May/June issue of millimeter for an inside look at Spider-Man 3's 4K digital intermediate pipeline and process at the new Technicolor Digital Intermediates (TDI) facility on the Sony lot in Culver City, Calif.
