
Framestore Interview: Behind the Seams of Marvelous Designer in ‘Doctor Strange’

When Marvel’s Doctor Strange, starring Benedict Cumberbatch, was released last year, one of the most talked-about characters wasn’t one you might expect. It was Strange’s cloak of levitation, a piece of the doctor’s wardrobe that has magical powers of its own and often helped him out of tricky situations.

To give the cloak the right kind of character, visual effects studio Framestore took the on-set cloak worn by Cumberbatch and generated a CG version. To do that as accurately as possible, they used the 3D design tool Marvelous Designer, which is aimed at building clothing and fabric as if it were being crafted for real. Framestore visual effects supervisor Alexis Wajsbrot takes AV3 through the process.

Watch Framestore’s breakdown for Doctor Strange below

AV3: Doctor Strange’s cloak is almost like a character in itself in the film – can you talk about the initial discussions you had about how it would be brought to life?

Alexis Wajsbrot: It was clear that Strange’s cloak of levitation was a key asset in the movie from the very beginning. During the early talks, Aladdin’s flying carpet was heavily referenced, as the cloak needed to both help Doctor Strange and bring a comedic element to the movie. It was always handled like a real character, and as Framestore had just successfully delivered Rocket and Groot for Guardians of the Galaxy, Marvel thought Framestore could bring the cloak to life.

AV3: For those who don’t know much about Marvelous Designer, what is it about the way the software works that makes it useful for building and simulating CG cloth in VFX?

Alexis Wajsbrot: MD works using patterns: you build cloth in the same way a real tailor would. Because we’re using the same techniques, it’s incredibly useful for matching the CG cloth to the real one as closely as possible.

The main reason we decided to switch to MD was to get believable wrinkles and better cloth simulation. Traditionally, modellers add wrinkles to the cloth as they build it, which gives the cloth an inconsistent topology density and means it can never be fully unfolded, since it may end up with too much or too little fabric depending on how the wrinkles were modelled.

With MD, the topology density is always 100% consistent and the cloth can be fully unfolded, because we model it flat from the patterns and then simulate it in MD to get the extra wrinkles.


AV3: What did MD let you achieve for the cloak of levitation?

Alexis Wajsbrot: Thanks to MD we managed to get an exact CG version of the real cloak. We got a first model of the cloak very quickly, which allowed us to unlock the other departments (rigging, animation and CFX) and get first passes of shots at a very early stage.

It’s a lot easier for modellers to tweak patterns than to tweak a posed model, so we could turn any given piece of feedback around very quickly. And because the model has perfectly consistent topology, we also got more realistic results in the CFX simulations.

AV3: How did you use MD to model and build Doctor Strange’s cloak? What patterns made it up? Can you talk about interacting with the costume department or the actual costume on this side of things?

Alexis Wajsbrot: In order to build Doctor Strange’s Cloak, the costume department provided us with the real patterns of the practical cloak. This was new for Framestore, so we worked as much as possible with the costume dept. At first they were surprised by our request, as they don’t usually share the patterns, but as soon as they understood the reasons and saw the first results, they 100% teamed up with us to get the best possible CG cloak, so we could switch seamlessly from a live action cloak to a CG one.

Our lead modeller Nicolas Leblanc started to re-create all of those patterns in Marvelous Designer (43 in total), using photos as backgrounds and the 2D, roto-like pattern tool.


We then stitched them together area by area, simulated them at a medium density and pinned them onto Strange’s model. Once the entire cloak was built, we increased the density to get some nice detail, manually adjusted the fabric’s pose, and then exported all the meshes to Maya.

Ultimately we made the final adjustments and additions in Maya, adding thickness and the seams.

AV3: Can you talk about how the Marvelous Designer workflow worked on Doctor Strange, in terms of going from that tool into other tools used at Framestore for the final shots?

Alexis Wajsbrot: We mainly used it as a modelling tool and passed it to Maya as OBJ files. We exported two versions: an unfolded flat version, which was useful for quick re-topology and texturing, and a posed version with the simulated wrinkles and detail that MD provided.
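For readers curious what that hand-off can look like, here is a minimal, hypothetical sketch (not Framestore’s actual pipeline, whose tools aren’t published) of bringing both OBJ exports into Maya with the maya.cmds module and copying UVs from the flat mesh to the posed one by topology. The file paths and namespaces are invented for illustration.

```python
# Hypothetical sketch only: this is not Framestore's actual setup, and the file
# paths and namespaces are made up. Assumes Maya's maya.cmds module with the
# OBJ translator plug-in loaded.
import maya.cmds as cmds

def import_obj(path, namespace):
    """Import an OBJ export and return the newly created top-level transforms."""
    before = set(cmds.ls(assemblies=True))
    cmds.file(path, i=True, type="OBJ", namespace=namespace)
    return list(set(cmds.ls(assemblies=True)) - before)

# Flat (unfolded) export: handy for quick re-topology and UV/texturing work.
flat = import_obj("/jobs/strange/cloak_flat.obj", "cloakFlat")

# Posed export: carries the simulated wrinkles and detail from Marvelous Designer.
posed = import_obj("/jobs/strange/cloak_posed.obj", "cloakPosed")

# Because both exports share the same pattern-derived topology, UVs laid out on
# the flat mesh can be transferred to the posed mesh by matching topology.
cmds.transferAttributes(flat[0], posed[0], transferUVs=2, sampleSpace=5)
```

The sketch leans on the point Wajsbrot makes above: the flat and posed exports share identical, consistent topology, which is what makes a simple topology-based transfer possible.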


AV3: What would you say was the toughest shot to pull off that featured the CG cloak?

Alexis Wajsbrot: The most iconic shot would definitely be the one in the corridor where Strange takes the cloak and throws it in a 360 spin above his head before it lands perfectly on his shoulders. It’s really the shot where Strange becomes the Sorcerer Supreme.

Having said that, the most complicated shot is the one where the cloak saves Strange from falling into the relic chamber, as we worked really hard to get the best possible silhouette.

Check out Marvelous Designer on the AV3 Store

How Neill Blomkamp’s experimental Oats Studios is using photogrammetry for its short films

Director Neill Blomkamp is doing something many filmmakers and visual effects artists can only dream about - he’s making a series of experimental shorts with no studio oversight. Of course, the director of District 9, Elysium and Chappie is already an experienced filmmaker, but his newly-formed Oats Studios is looking to buck the system by making a whole bunch of shorts, many using complex visual effects, and seeing what sticks.

Interestingly, Blomkamp and his visual effects supervisor Chris Harvey have relied significantly on photogrammetry, including the ever-popular Agisoft PhotoScan, for creating the digital environments and creatures in these shorts.

We asked Harvey more about how Oats Studios is approaching the photogrammetry side of their experimental film adventures.

Watch the Rakka short, featuring Sigourney Weaver.

Photogrammetry tech

Oats has already released a diverse selection of short films. Rakka tells the story of the rebellion against a lizard-alien invasion. Firebase is also somewhat sci-fi based and features creatures terrorising soldiers during the Vietnam War. Then there’s Cooking With Bill, which is a satire of infomercials. In each short - and there are many more in the works that are constantly being released - Blomkamp’s unique hand-held observational style is there, as is the reliance on photorealistic effects.

Harvey, who was the visual effects supervisor on Chappie, says these shorts need to get made quickly, and that’s one reason why they chose photogrammetry to help build and map environments and make digital doubles. It also means Oats can tailor its own capture set-up.

A still from Rakka shows how computer generated imagery was added to an existing landscape.

“At Oats Studios we are big fans of photogrammetry and as such that’s the approach we went with when ‘cyber-scanning’ any performers we needed to recreate digitally,” Harvey says. “We have an in-house capture suite that consists of 32 Nikon D3300 cameras with 18-55mm lenses. These are housed in a full white room (floors, ceilings and walls) on 8 stands (4 cameras to a stand) arranged so that 5 stands focus on the front and 3 stands on the back.”

“They are all linked together along with the strobes so that they fire and capture simultaneously,” adds Harvey. “Obviously, the more cameras you have, the higher detail capture you can achieve but we had a very limited budget to put this together. The interesting thing about the configuration we came up with is that with a very quick switch of the zoom lens from 18-55 we can switch from a full body scan to a high detailed face scan. So typically we will do two captures, one for the full body and one for the face and then we combine these scans together during processing.”

During filming of some of the shorts, which happened in South Africa, a few captures were also done at visual effects studio BlackGinger, since this is where the actors were available. In South Africa, too, Harvey supervised the photogrammetry capture of live action locations.

Watch Firebase.

“It consisted of me taking literally thousands of photos of the set or location,” he says. “I had to pay close attention to various best practices in photogrammetry photography to ensure we got good results when we later started processing. Many people think you can just shoot a bunch of pictures and away the software goes, but there is a lot in how you take the pictures that goes into getting a successful solve. In addition to those photos we would also take HDRIs for de-lighting purposes and a lot of photographs with measurements overlaid.”

An artist at Oats Studios works on one of the shorts.

Making models

To produce digi-doubles, Oats Studios artists input the captures into Agisoft PhotoScan. This product essentially generates point clouds, 3D models and corresponding textures from the captured imagery. “It just worked better in a consistent pipeline sort of way, results and steps in between seemed to make more sense and we got good fast results,” Harvey says of PhotoScan.
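To give a flavour of what a batch run through that product can look like, here is a minimal sketch in the style of the Python console scripting PhotoScan offered around its 1.x releases. The calls follow Agisoft’s published scripting API of that era, but the photo paths, quality settings and texture size are assumptions for illustration, not Oats Studios’ actual settings.

```python
# Illustrative only: assumes PhotoScan's 1.x-era Python scripting module.
# Paths, quality settings and the texture size are placeholders, not Oats' values.
import glob
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()

# Load every photo from one capture session.
chunk.addPhotos(glob.glob("/captures/digidouble_take01/*.jpg"))

# Solve camera positions, then densify and mesh the result.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)

# Lay out UVs and bake a texture from the source photos.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save("/captures/digidouble_take01/scan.psx")
```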

Environments were a slightly different story. Here, there were many, many more photographs taken and so for these, Capturing Reality’s RealityCapture was used. “We found RealityCapture just simply out-performed on the data crunching,” notes Harvey, “and since the subsequent steps after the solve had different requirements the issues we had with it for the digi-doubles versus Agisoft didn’t matter and so we used it instead.”

One of the plans with Oats Studios is to produce 3D printed models of the CG characters from the shorts. This is a CG sculpt from Firebase.

From the capture software, Oats would end up with high density meshes, as well as texture maps between 8K and 16K. Both of these sets of data would then get further refined in later steps with other software. In some cases, meshes would be brought back in to recalculate textures into a controlled UV space.

Other tools came into play too: Adobe Bridge for pre-processing the photos and, for post-processing after the photogrammetry solves, Instant Meshes to re-topologize the meshes, Wrap3 from R3DS for wrapping the standard character topology onto the scan, and then an array of tools including Autodesk’s Mudbox, Maya and 3ds Max for further detailing and modelling tasks.
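As a rough idea of how a step like the re-topology pass can be batch-driven, here is a small, hypothetical glue script; the binary name and command-line flags are assumptions based on the publicly available Instant Meshes build and may differ per install, and the paths and face count are invented.

```python
# Hypothetical glue only: the InstantMeshes binary name and its flags are
# assumptions that may differ per install; paths and face count are made up.
import subprocess

raw_scan = "/scans/creature_raw.obj"    # dense mesh from the photogrammetry solve
retopo = "/scans/creature_retopo.obj"   # cleaner, lighter mesh for later sculpt work

# Re-topologize the dense scan down to a target face count.
subprocess.run(
    ["InstantMeshes", raw_scan, "-o", retopo, "-f", "20000"],
    check=True,
)
```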

A recent Oats Studios short, God: Serengeti.

Oats is doing it differently

It’s not just short films that Oats Studios is making - it’s also sharing that process with viewers, currently via the purchasing and downloading of behind the scenes content and actual 3D and VFX assets on Steam. There’s also a proposal to produce 3D printed models from these assets.

What’s interesting is that the tools Oats is using are ones many visual effects artists will already be familiar with. Could this be the future of filmed entertainment? We certainly hope so.

Ian Failes

Screen Scene goes invisible with V-Ray on ’Black Sails’

In television right now, some of the most elaborate visual effects work required is of the ‘invisible’ variety. These are the shots that you might just not notice, but have actually gone through the hands of many skilled effects artisans.

One show capitalising on this seamless type of VFX work is the Starz series Black Sails. For season 4, Dublin-based studio Screen Scene VFX was called upon to create a raft of period city shots by augmenting live action and creating photorealistic buildings and locations. In order to reach that photorealistic look, Screen Scene took advantage of Chaos Group’s V-Ray renderer, a plugin the studio has been using for more than a decade.

We asked Screen Scene visual effects supervisor Ed Bruce and senior 3D artist John O’Connell to break down their Black Sails work and discuss how V-Ray was part of the studio’s pipeline.

Screen Scene’s breakdown of its season 4 Black Sails visual effects work.

AV3: The amount of invisible visual effects work by Screen Scene in Black Sails is incredible – what was your brief when coming onto the show in terms of the style of VFX required?

Ed Bruce (visual effects supervisor): Ultimately the brief was the same as it always is: respect the in-camera material, be as invisible and seamless as possible, don’t let the VFX work distract from the director’s storytelling, and bring something rewarding to every shot.

Season 4 of Black Sails gave us a great opportunity to deliver unnoticeable visual effects, but also the scope of many hero wide establishing shots that clearly had to be VFX, given the period, the environments and locations, and the assets. These big wides really helped locate the story and were important for the audience, whilst giving our crew at SSVFX the challenge and reward that fully CG shots bring.

AV3: Can you talk about the assets that had to be built or used for your shots – what reference did you look to in crafting and look-dev’ing them? What sets or areas had been shot for real that you needed to extend or replicate in CG?

Ed Bruce: The production’s visual effects team, headed by visual effects supervisor Erik Henry and producer Terron Pratt, collected a vast array of data from the set in sunny South Africa. Production also shared assets from previous seasons from multiple vendors, especially some very high resolution ship models.

Many of Screen Scene’s shots were based around the dockland area of 1720s Philadelphia, which we built in CG and populated with ships, masts and small vessels. On a TV post schedule, sharing assets to avoid any additional modelling work really becomes beneficial.

Archival reference for Philadelphia

Recreating 1720s Philadelphia was challenging. As with most things before the 1900s, there aren’t any photographs of the period, which made it more difficult to get good references. Philadelphia was one of the first US cities to adopt a grid-style layout, which helped reduce the number of layout decisions for the backgrounds.

Production itself had stayed faithful to Philadelphia’s main square layout, and some of the buildings featured in the show still stand. Production built the ground floor of each building on the streets the actors walk along, to the measurements of the current city. Firstly, we were tasked with topping up all of the on-set buildings with their second storeys and roofing. The art department had created SketchUp documents for their set construction plans, and while they only physically built the first level of each house, they had completed the entire building in SketchUp. This is what we used as our guide for the building top-ups.

The on-set VFX team had also taken Lidar scans of the shoot set and supplied us with mesh files to work with. We were then able to create the extensions cleanly and lay them on top of the Lidar scans. As you’d imagine, the real world is never as mathematically perfect as what we create in 3D, so we made slight adjustments to our models to line them up with the Lidar and better fit the live-action set.

The second task was creating the continuation of each street and the surrounding streets and buildings, whilst populating the set with CG people and props. There are some artworks and drawings of Philadelphia from the 1700s, which were a great starting point for the period’s style of building construction, and our modellers were then able to create generic houses around those themes. A few fun things popped up in our research too. Houses in Philadelphia were quite often timber-framed, and since it was a new city gradually spreading out from the central docks, there weren’t always concrete foundations to build upon. As a result some houses were raised above the ground on posts and had the ability to be moved. If you hated your neighbours you could pop your house up on a trailer and pull it somewhere else with a load of horses!


Screen Scene also worked on harbour and water visual effects shots.

In terms of lookdev, we find CG integration far easier than fully CG shots. You’ve got rules and limitations set for you by the shoot location, with most of the reference you’ll need right there in the plate in front of you. We made a lot of use of Itoo’s RailClone, both for the creation of the building cladding and roofing and as an overall layout tool to create city blocks driven by splines we’d traced from historical maps of Philadelphia’s streets.

For the island and fort shots we did most of our vegetation population with their Forest plugin, which is fabulous for environment layout. The Itoo guys have worked quite closely with the Chaos Group team to take advantage of their render-time instancing. For some close-up trees we used GrowFX, which gave us some gentle wind motion to match what was happening with the practical vegetation.

AV3: Can you talk about some of the lighting and rendering challenges in particular? What was it, do you think, that helped sell your CG shots and also integrate CG work into live action?

Ed Bruce: As always on TV schedules, time is our biggest challenge, especially with heavy CG renders. On a previous show we’d done large scale fully CG shots containing similar components and levels of detail. However, Black Sails had a lot more large CG extensions and fully CG shots within a single episode.

Filling out environments as complex and busy as the streets of Philadelphia leading down to its bustling docks required us to create a large variety of assets in order for the shots to become an environment that feels varied and lived in. The downside to this level of detail is that you’ve got to deal with all of the data you’ve produced within the 3D scene, and of course when it comes time to render! We imagine we’re no different from any other VFX company in that we’ll try and push our resources as far as we can, and in this case we were just about squeezing the renders into the available RAM on the render nodes.


One of Screen Scene’s wider city views.

In terms of lighting and integration, the on-set VFX team did a great job of Lidar-ing the majority of the set where we had to extend buildings upwards, and we were able to use those models to drive our matchmoving, giving us very precise alignment between the on-set surfaces and our CG extensions.

Since the shoot was in South Africa, which had incredibly crisp blue skies and direct sunlight, we could use the Lidar surfaces and the on-set silver balls to match the direction and softness of the sun very easily and have something that flowed very nicely from CG shadow into the live action shadows. Our 3D Supervisor Krzysztof Fendryk spent quite a bit of time making sure that the weathering and breakup of our building textures were a close match to what was detailed practically.

The compositors played a big part in the subtle, finer points of matching lens artefacts and overall levels, taking advantage of the Macbeth colour charts and lens distortion grids shot on set.

AV3: What were some of the advantages of using V-Ray on this show?

John O’Connell (senior 3D artist): We’ve been using V-Ray for around 12 years now, so our artists have a lot of familiarity with it. V-Ray doesn’t have many weaknesses, as it’s been used for most aspects of 3D. So, while some renderers might have particular strengths, V-Ray is able to cover nearly anything you’d ask of it very well, without too much fuss. The frame buffer and render pass options give you a lot of feedback, and we find it quite easy to isolate aspects of the render when we want to tweak one specific thing. The speed improvements in their V-Ray mesh format meant we could keep our working files quite agile and not get heavily penalised when it came to rendering shots out, and the compositors like the amount of matte, light and other utility passes we can throw over to them with little fuss.


Before and after comparison showing Screen Scene’s effects work.

It’s a great-performing renderer and it’s got pretty much everything you’d need out of the box. We are always interested to see where the development team are innovating. We’re especially interested in their porting of the automatic texture mip-mapping from the GPU renderer – with the jobs we’re doing, we always want to try and add in more “stuff” to make our scenes richer, so if the Chaos team can find ways for us to do this on the same machines then that’s very much appreciated!

Overall there are very few complaints about the software at SSVFX; it’s very well proven, and if we ever run into anything specific, the support team is fabulous. As we keep trying to push ever more ambitious shots through our facility, having a renderer that continues to develop ways to solve and complete these tasks is vitally important.

AV3: Can you talk about how V-Ray is used in general in SSVFX’s pipeline, and perhaps on other projects? Did you use any other Chaos Group plugins or products on the show?

John O’Connell: Screen Scene has always been a 3ds Max-based house. We started off in the commercials market and the native Irish market. In the past this body of work wasn’t big enough to warrant large teams or R&D departments, so it was very handy for us to take advantage of the agility of Max and its wealth of off-the-shelf plugins. We’ve always had a variety of requests from clients, covering anything they see on TV or in cinema, so we wouldn’t have the resources, budgets or speed to build software to meet each project’s needs. It’s great to have a wealth of small developers making relatively inexpensive, high-quality plugins to meet the market’s needs or fill in gaps in the base software’s capabilities.

In the early 2000s there was a bit of a renderer battle going on between Brazil, FinalRender, Arnold (we had an early beta for Max before it went to Sony for development) and of course V-Ray. We had tried the lot of them and they all had their good points, but we settled on V-Ray for our first HD job as it had a really solid implementation of all the basics – good geometry handling, high-quality anti-aliasing, fast raytracing and 3D motion blur. It was also the first renderer to do render-time displacement, which was great for a character job we were doing at the time.

All the other major Dublin companies at the time were based around Maya or Softimage, which meant their renderers were quite heavily behind. We were getting far nicer results purely because we were able to use GI from a very early point without murdering ourselves with render time, and the likes of the light cache to fill out the brightness of an interior were a godsend. V-Ray was easy to get to grips with for non-technical artists and gave great results right away. It’s been our renderer ever since.


A shot typical of Screen Scene’s work on the show – a rendered building appears in the background. The effects work was intended to be invisible.

When we set up a dedicated VFX department for long-form TV and film in 2010, it was great to have a solid EXR implementation, render passes for free and all the other image-quality aspects; it just kept on delivering. Since it’s been a faster raytracer and a better GI solution than most renderers for years, it’s been great that other VFX companies have picked it up and requested all of the other features you’d need in production.

In terms of the next wave for utilising V-Ray, we’re yet to implement it within Nuke but it’s very interesting to look at the “scene assembly” approach where everything is baked down into dumb caches and drawn together in a content management application to be fed to V-Ray.

We have been using Phoenix recently; the infinite ocean texture is terribly handy for a lot of our wide shots where we just need moving water with a matte-painted opposite shore reflected in it. We’re using it heavily on a current project for a tonne of blood and gore too – it’s a very fast simulator. We completed a show with a lot of stormy ocean setups a few years ago, which was a very challenging process, so we’re looking forward to the Phoenix dev team having some time to blend a texture-driven water tank into an infinite ocean for large rolling-wave shots.

AV3: What were one or two of the most challenging shots to pull off in your work for Black Sails?

Ed Bruce: On the live-action side there were a few difficult shots that were extremely long. We were dealing with getting very accurate registration of our building top-ups, having to graft tree branches onto trunks that had been purposely trimmed back heavily to allow easier keying for the street extensions, and the usual roto fun of a heavily populated street with motion-blurred people criss-crossing everywhere – with the most challenging of those shots being handheld.

With long shots, technical problems can be difficult to spot until you see a full render. You think you’ve got it all perfect and then you spot one little misalignment in the extension geometry or a render glitch. When this happens it’s back through another iteration and render. On the fully CG side it wasn’t bad; the main issue was the memory overhead, as the scenes we generated were pretty close to the limits of our render nodes’ RAM capacity.


A live action plate is combined with a CG render of a fort.

John O’Connell: We have a good workflow for generating the scenes, laying everything out fast, which gives us things to look at quite quickly. A big thanks must go to Paul Roberts at Itoo for helping us with a very elegant layout solution using RailClone to make generic city blocks very efficiently.

We started the scenes off in as realistic a fashion as possible, again using the proper proportions of Philadelphia and the Delaware River from maps and references. It was also great to have Erik Henry pop in for a few days, especially for setting up the camera angles of the fully CG shots. Erik knew the client’s vision and desires, which helped reduce the design process and version count for each shot.

How did we pull them off? SSVFX has always had great, talented artists working on our shows and it’s always good to see their craft materialise in the finished product. The visual success of season 4 is a collective effort from the production team through to our contribution. We’re looking forward to working again with Starz, Erik Henry and Terron Pratt and their team.

You can find out more about Screen Scene at their website: http://www.screenscene.ie/

V-Ray is available to purchase on the AV3 Store

A View on Post Production Workflows from an Expert in the Field

In the world of post production pipelines and training, Simon Walker is someone you want to sit down with and mine for information. Which is exactly what AV3 has done in this lengthy interview with the UK-based Adobe Certified Master Trainer, who also consults on video editing and grading solutions and still takes time to be ‘on the box’ to check out the latest advancements in industry software.

We find out everything from how Simon got into the business, starting with the early days of Photoshop, through to working on a tricky post solution for the UEFA EURO Championships. Plus he offers some thoughts on collaborating with Red Giant and what makes the perfect post production workflow.

AV3: Simon, what’s the best way to describe what you do?  

Simon Walker: I train broadcast and film professionals in editing, grading and post-production techniques and I have been a freelance trainer for over 20 years. I am an Adobe Certified Master Trainer, an instructor for the International Colorist Academy, and I was also a Final Cut Studio Master Trainer before that. I also train other trainers, as well as end users, in the Adobe Creative Cloud Pro Apps – Premiere Pro, After Effects, Audition, Prelude, SpeedGrade, and Adobe Media Encoder.

AV3: How did you get here? What led you into focusing on post production training?

Simon Walker: By mistake! At the beginning of the 90s, I was in desktop publishing designing graphics and page layouts and managing reprographics. This is before Photoshop had layers, when you had to make a selection and then bake that selection together with the next selection, and save the file as a new file, before moving on to do another selection. And then if you wanted to go back five versions, or rather when the art director said go back five versions, you had to make sure you worked in that way. And if you didn’t, you were stuffed!

Anyway, so I used to do that, and then the whole what we used to call ‘multimedia’ thing happened, when we started combining layout design with animation, audio and video. This was before it became called ‘new media,’ and after that, it was ‘360 media’, but we can’t call it that anymore and so it’s just become ‘media’. And the very quick story is that publishers started wanting content on disks as well as in print, so I started playing around with that.


My absolutely favorite software back then was Macromedia Director. I used to animate using it and code for it and it was absolutely brilliant. And then bit-by-bit, as the computers became more and more capable, I started plugging in audio and other types of devices. On the early Performa Macs you could only record 8-bit audio, which sounded terrible, so I had to digitize it in a different way. And then you could plug a camera into them, and then eventually, in the mid-to-late 90s, I spent thousands of pounds on a standard definition card, the Pinnacle DC1000, which was crazy. And those were the days when Windows 98 would give you the blue screen of death if you picked your cup of tea up slightly in the wrong way.

The way I became a trainer was that people at the same time started buying their own kit and thinking, ‘Oh we can do this, why are we paying Simon to do it?’  But they realised that they didn’t know how to use it. And so my job morphed slowly over a period of ten years from doing everything myself to doing a balance of some things and then showing other people how to do things. And then Final Cut happened, and then after that Final Cut stopped happening, so then we went back to Premiere and continued to use After Effects, and then suddenly it’s 2017!

AV3: Could you talk about a typical day when you’re consulting or training on a post production workflow?

Simon Walker: There isn’t a typical day! All days tend to be different, depending on the client and the requirements of what they need to produce. The techniques I use also differ for different disciplines: for example on news production, speed is essential, and the focus is on delivering an edit quickly, whereas for film production, there is more time available. For example, I’ve worked on news projects where there is 10 minutes to create an edit and I’ve worked on film productions where the editor has 10 months!

When I’m doing training, sometimes it’s one-on-one training, sometimes it’s to a group of people with hands-on at the computer, sometimes it’s a short seminar or lecture, and sometimes it’s recording on-line tutorials like my lynda.com courses.

Different people learn in different ways and have different methods in which they like to learn, so part of my job is to work out how different individuals like to learn, and then provide training in that format.

AV3: What have you found is the secret to making a post pipeline as efficient as possible? 

Simon Walker: Perhaps the secret is being flexible and adaptable regarding both the equipment you have, and also the people you work with. There are perfect workflows in theory, but you have to cross reference these with the budget you have and the time you have to finish the project. Also, different people have different styles and preferences for the way they work, so you need to consider these.

Sometimes the software makes a difference and sometimes the hardware makes a difference. For example, Premiere Pro’s ability to natively play back a wide range of codecs can lead to the assumption that you can throw 4K footage onto the timeline and play it back without skipping frames on a low-spec laptop, which is never going to happen. In cases like this, different methods need to be used: for example, Premiere Pro’s ability to dynamically adjust playback resolution, and its ability to create proxy files on ingest while still being able to grade with the original camera files and switch between the two types of files as you work. This means you can try out various grades but also have the ability to play back and preview the edit.

Consideration of the data-rate of certain codecs means that bottlenecks can be solved using an SSD or RAID, as older spinning discs are often the bottlenecks. I’ve found that when setting up a room for training, I use 1TB G-Technology ev SSD drives which have a transfer speed of almost 400 MB/s, which means I can transfer 20GB of training files in about one minute. This makes it very fast to set up 10 machines, especially when some attendees like to bring their own laptops. As they can be bringing Windows or Mac laptops, I use ExFAT formatted SSDs for this. Another interesting situation in the industry at the moment is choosing codecs for transferring graphic elements between different departments, especially since Apple is stopping development of QT for Windows.


AV3: For people working with different post production pipelines – because every job really can be very different – how do you approach that kind of work with the right kind of flexibility?

Simon Walker: As I said, theoretically you could always have the best possible workflow, but usually there’s somewhere in the pipeline that lets it down. It typically manifested a few years ago with people buying more and more expensive cameras that could shoot higher and higher resolutions and then not investing in the graphics cards or the speed of the hard disks. And so my experience with all sorts of different workflows, from small production houses right up to broadcasters with multiple floors of kit, is that it’s all relative to the timing and the budget that you have, and therefore which kit choices you make.

Here’s a practical example – I worked on a project in which it was figured out that the higher-spec computers were a thousand euros more than the next model down, but actually weren’t that much faster for what everyone was doing. And so fifty times a thousand euros was actually quite a saving, which could then be put into better graphics cards for the people who actually needed them. And so it’s interesting – it was small differences in spec between machines that made a big difference to the budget but weren’t necessarily going to slow down the system.

AV3: Do you still have time to do project work?

Simon Walker: Yes, I still get commissions for shooting and editing and doing graphics and grading. I think you have to be doing it to actually be secure in your answers during a training session. Because when you go into a training session it’s not just about what you know and how the software works, or whether you know how their setup works and what they like to do and how they like to learn. You also have to know all the small little things that they might ask you that there are workarounds for, because people are always trying to do something that they shouldn’t do with the software, which I think is fantastic. But it does mean that my job is also a facilitator in terms of helping people to learn those things through my experience so they don’t have to figure it out for themselves and reinvent the wheel and so on.

Plus, when I go into broadcasters I see people using the software in a slightly different way. Which is really interesting. And so it’s not just my experience. It’s the experience of dozens and dozens of people who are trying things out and then finding that one technique works well with another. After conversations about these techniques and watching it as well as doing it myself, I find that it’s a combination of experiences which informs the content I actually end up sharing with everybody.

AV3: It must be hard to keep up with software updates and everything, though?

Simon Walker: Well, I am absolutely fascinated by seeing what different companies are doing and reading between the lines. And also just regularly logging onto the forums and following people on Twitter who talk about different aspects of workflow. And that means that you build up a picture of things that are not only within your experience but also slightly outside it. And my homework is watching TV, going to the cinema, and watching movies!

I have a whole library of screenshots, and my favorite thing is to download trailers and then break those apart, especially with grading, to talk to people about the themes in current movies and what certain colors mean, and what the film-makers are suggesting by their use of color. It’s such an interesting process.

For example, somebody asked me a while ago what is the most important color in movies? And there is no simple answer to that. But at the time, Sicario had just come out, and I was very interested to see that in this movie, beige, just ordinary, boring beige, was actually symptomatic of moral ambiguity in the film. And some of the bad guys were dressed in beige. And that contrasted greatly against some of the deep, dark shadows that they had in certain scenes. And so there was an interesting contrast between just those two colors. I mean that’s just the surface of it – in that movie it was beige, and then in other movies it can be a completely different color, which can have a different meaning because it’s in the context of the story that’s being told.

So the answer is, all color is contextual, but if you are constantly looking at how certain colors tell a story or how certain editing techniques or even camera movements can tell a story, then that informs your actual production work. Then you start trying those things out in real life and start comparing them. But that’s probably because my job is finding out all sorts of new, interesting stuff, both on the technical side and on the creative side, and then being able to apply that for work. But if I didn’t have this job I’d probably end up doing that anyway.

AV3: Can you talk about working with Red Giant over the years – what makes them such a fun and influential company in this area do you think? 

Simon Walker: I like the fact that so many of the staff are film-makers. This is across all the disciplines, from people in the marketing department to the QA and engineering departments. Many of them shoot and edit short films as part of their creative expression. This is why the tools are so great: they know first-hand the pain points that film-makers have, and then make tools to solve them in creative ways. Here’s another example – the development of PluralEyes came about because the original developer wanted a better way of synchronising the video and audio footage he was creating at home.

AV3: You had some experience setting up a Premiere Pro CC workflow for use in the UEFA EURO Championships last year, especially solving a playback issue. Can you talk about that?

Simon Walker: We had an issue with real-time playback on graded footage using the AVC-Intra 100 codec. The editors wanted to have real-time playback because they wanted to be able to apply a treatment without having to render. So they wanted to add the treatment, play it back on the Premiere Pro timeline, change their mind, and adjust it in real-time. This is especially useful when you’re sitting next to a producer showing treatments.


So I designed a way for people to add multiple different looks to footage using Magic Bullet Colorista from Red Giant, using the HSL wheels in Colorista to isolate certain colors. There were some looks like bleach bypass and then a sepia treatment and a vignette, plus something to isolate warm colors and something to isolate cool colors, and a range of stylising effects. And what we found was that we could stack four or five of these effects onto a single clip and play it back in real time without skipping frames. So not only were we able to preview effects, it was also faster to output because Colorista was processing on the GPU using the Mercury Playback Engine.

So that became really important, and I made something like 25 presets that were available on all the edit suites, where editors were able to choose them. It helped people in some instances to hit deadlines because they were able to quickly provide stylized treatments for highlights and interstitials and small graphic pieces. That’s just one instance – most people use a variety of techniques – but that was specifically what I found useful with Colorista in a live sports environment.

You can read more about Simon Walker’s work and experience at his website: http://simonwalkerfreelance.com/

Get inspired with ZBrush artist Sven Rabe

Every now and then a great piece of content or artwork makes the rounds on the Internet and inspires others to keep creating. That’s especially the case with personal works in the Pixologic ZBrush community, and one artist whose CR-2 pilot model was recently widely shared was Munich-based Sven Rabe, a lead modeler and senior 3D artist at studio LIGA01. AV3 checked in with Sven to find out more about how he uses ZBrush and for some tips and tricks on the software.

AV3: Your CR-2 pilot has been a huge hit around the web. Can you talk a little about the origins of that, and what you learnt from making it?

Sven Rabe: Actually, the idea for the pilot had been spinning around in my head for a long time, but I never had enough free time to really get started. One night, as I came home after a very long working day, I zapped through the TV channels to wind down and somehow got caught up in the classic Top Gun movie, which was like a reminder and a booster for me to finally start working on my idea. I was inspired a lot by those heroic pilot movies, but I wanted to translate that into a more futuristic style with a classic touch to it.

zbrush_concept_sculpting
The CR-2 pilot helmet takes shape in ZBrush.

I think the biggest thing I’ve learnt from this project, besides many little technical things here and there, was actually to always keep going. No matter how little time you have for your personal projects and how long it takes in the end to finish, keep going; it will all pay off in the end.

AV3: How did you get into ZBrush?

Sven Rabe: I think it was around 2004 when I stumbled online across some work by Martin Krol, Glen Southern and Jean-Sébastien Rolhion, some of the very early ZBrush pioneers, I guess. The “Running Death” artwork by Jean-Sébastien (http://www.designpicture.com) is still one of my favorites. Around this time I worked at a small CG studio as a 3D artist, focusing on modeling. Amazed by the artwork of those guys, I wanted to know how to achieve this highly detailed quality, and so I found out about a piece of software called ZBrush. The next thing I knew, I needed to learn it!

AV3: What’s your hardware/software setup in terms of how you now use the software?

Sven Rabe: I’m currently running ZBrush 4R7 P3 under Windows 10 on an Intel i7 3.4 GHz workstation with 32 GB RAM and an NVIDIA GeForce GTX 1070 graphics card.

scifi_helmet_side_cropb_v001
A detailed close-up of the CR-2 helmet

AV3: What are some recent examples of where you’ve used ZBrush in your professional work?

Sven Rabe: I use ZBrush as much as I can within our production pipeline at LIGA01, but it always depends a bit on the project. One example is the turtle assets I created for a PRO7 HbbTV commercial. I used ZBrush extensively on these little guys.

Above: Watch the turtles commercial.

Another project was a real life interactive commercial for a Mercedes-Benz test drive experience. Besides of a lot of other tasks, I was mainly responsible for creating the hero robot arm. For many of the shapes, I used ZBrush to quickly sketch out forms for further processing.

mercedes-benz_robotarm_b
Sven’s final production model for the robot arm.

AV3: Can you share some of your main ZBrush tips and tricks?

Sven Rabe: Oh, that’s a hard one, as all of my workflows are pretty basic I guess, but I always encourage artists to customize their UI and to use their own hotkeys. It will make you so much faster and also way more efficient.

Another thing is the general amount of detail in models. In ZBrush, detailing is a lot of fun and very quick and easy to do, at least for concept modeling. But as much as complexity helps to recreate reality, it doesn’t necessarily help to sell your design, especially for hard surface work. Try to find the right balance for your models to make them look more authentic. Don’t just put details everywhere for no reason. Often less is more.

characterdesign_sketches
Some exploratory character design sketches by Sven Rabe, done in ZBrush with overpaints in Photoshop.

AV3: Is there a particular brush or tool in ZBrush that you can’t live without?

Sven Rabe: Oh yeah, definitely. Too many, I guess! But to name just a few: I love to use Clay BuildUp for most of my organic modeling, as I really like the feel of it, and if you combine it with alphas other than the standard square alpha, you can get really nice additional effects out of it. Workflow-wise, I would say DynaMesh and ZRemesher had the biggest impact on my workflows. I think they were both game changers, as they opened up totally new possibilities, especially for all the hard surface concept work.

AV3: What’s something in ZBrush you hope that Pixologic might improve or implement in the near future?

Sven Rabe: Well, as a long-term staff contractor you’re usually dealing with all the modeling assets and the data for much longer than, for example, freelance artists or concept modelers do, because you’re on the project from start to finish and you also have to create the final production models. So for me, pipeline integration is a very important topic, and I would like to see more improvements in this direction. For example, a Python connection would be great, so you could write your own tools more quickly and connect them into the whole pipeline more easily.

Angel ZBrush sculpt by Sven Rabe.

Things like Alembic file support and a real camera exchange option would be great improvements, too. I would also like to see more organization options for SubTools, for example sub-folders for grouping and so on. Last but not least, some performance improvements for the MultiMap Exporter and support for curvature map extraction. Of course, I also like new tools and stuff, that’s always nice to have, but on a daily working basis the other points would be more important to me at the moment.

AV3: There’s a great ZBrush community out there – can you talk about where you go to check out ZBrush work or find out tips and tricks?

Sven Rabe: My first port of call is always ZBC (ZBrushCentral), as it is fully packed with so many insanely talented artists. Browsing through all the threads is very inspiring and humbling at the same time. The whole community is so co-operative, it’s really amazing.

Apart from the various social networks like Facebook, LinkedIn, Pinterest etc., I also like to constantly check out YouTube and Vimeo or 3DTotal, CGSociety, Polycount and ArtStation.

You can see more of Sven’s work at his ArtStation website: https://www.artstation.com/artist/svenrabe. And see more work from LIGA01 at http://www.liga01.de.

ZBrush 4R7 and ZBrushCore are both available at a discount on the AV3 store.

What’s the likelihood of another VFX ’upset’ at the next Oscars?

When Ex Machina made the final five nominees for Best Visual Effects at the Oscars last year – and won – many people were shocked and surprised. Not because of the quality of the work. The VFX by Double Negative and MILK VFX were seamless and outstanding.

It was, instead, because the other films nominated were perhaps classic examples of ‘effects films’: Star Wars: The Force Awakens. Mad Max: Fury Road. The Revenant. The Martian. Arguably, each of these nominees contained many story-driven effects too (shouldn’t all films?), but Ex Machina easily had the most subtle, though still crucial, effects work in the body replacement for the robot Ava, played by Alicia Vikander.

Watch a breakdown of Double Negative’s work for Ex Machina.

So why did Ex Machina win, and what could that teach us about this year’s Oscar contenders and a possible ‘surprise’ winner amongst what is a rich tapestry of both effects-driven films and several with much more subtle work?

Getting to an answer to that question is tricky, partly because winning awards isn’t necessarily a scientific concern. Not that that has stopped people from trying. Visual effects artist Todd Vaziri has for the last few years attempted to call the winner of the VFX Oscar with his ‘VFX Predictinator’ (http://fxrant.blogspot.com.au/2016/01/the-vfx-predictinator-88th-academy.html). It uses several inputs Vaziri has gauged over time from past winners to score each nominee. For this past Oscars, the Predictinator scored The Revenant the highest. The lowest score? Well, that went to Ex Machina.

An example of a possible surprise winner could be Arrival. These before and after images showing VFX work by Rodeo FX reveal how the actors were filmed without wearing the protective suits, which were then seamlessly added with VFX.

So, how did Ex Machina win, then? Firstly, as noted, the effects were simply incredible. And that’s reason enough. But Ex Machina had something else, and that was an absolutely compelling story. It gave voters a chance to consider something that wasn’t effects-heavy, perhaps also tapping into a recent phenomenon in which films with a lot of CG have been generally criticised (that may also be why the marketing campaigns behind The Force Awakens and Fury Road pushed heavily on the practical effects work, and why The Revenant hardly had any discussion about its fantastic CG bear work at all).

Some of the intrigue about the visual effects Oscar race comes from the voting process itself. In general terms, visual effects practitioners ultimately decide the final five nominees, but the winner is then voted on by members of the Academy of Motion Picture Arts and Sciences (which include, of course, other branches of filmmakers and actors). It’s arguable that the ‘general’ voting public is looking for different things than seasoned VFX voters, but in either case the reasons for the voting decisions never need to be given.

Kubo and the Two Strings is one of the films that has made the final list of 10 contenders for the VFX Oscar race.

With all this in mind, which films in the race this year might have the ‘Ex Machina’ factor? That is, which films may have a more subtle approach to their visual effects, but still with effects that are crucial to the storytelling? Again, many films in contention have these kinds of effects at some level, in individual scenes or even shots. But a small number have what could fit the mould. These are:

  • Arrival
  • Deepwater Horizon
  • Kubo and the Two Strings

Arrival, for example, is unlike most alien invasion films and much more of a cerebral thriller. And it has the subtle effects to match, with beautiful vistas of the oval-shaped spaceships, CG creatures and some other clever environments and shots. But all of this VFX work is rather withheld, reminiscent of the approach taken in Ex Machina.

Deepwater Horizon’s effects do range from blockbuster-y type fire and explosions to much more subtle cues about the danger these oil drillers are in.

Deepwater Horizon is perhaps a more effects-heavy film, given the subject matter of a drilling platform exploding into flames. But the subtlety comes from a seamless mix of on-set practical fire and lighting, digital fire and digital compositing. Few will probably know which is which. Hint: it’s mostly digital.

Kubo and the Two Strings…wait, isn’t that a stop motion film? It is, but the way Laika approach their work these days is with a combination of stop motion, puppets, CG and visual effects. It’s almost made like a live-action film, which means a stop motion animated sequence is often indistinguishable from something done in CG, or from multiple elements composited together to form a final shot.

None of this discussion should take away from the glorious work on show in films released in 2016. Just look at the photorealistic jungle and animal life in The Jungle Book, the psychedelic effects seen in Doctor Strange and the way in which Rogue One: A Star Wars Story manages to throw back to an old-school effects era in its space-themed adventure.

But given the past win by Ex Machina and the prevalence of these ‘non-effects’ films this year – which actually do have a lot of effects in them – could we soon be in for another upset? We’ll know soon enough.

An Interview with VFX Legion: Using mocha Pro on the hit show How to Get Away with Murder

VFX Legion is a visual effects studio offering something very different to most – it operates remotely with a global team, which provides a new kind of flexibility to many productions. Artists at VFX Legion have worked on many projects, including recently the film Hardcore Henry and the TV shows The Catch and How To Get Away with Murder.

For roto, paint and tracking work, VFX Legion is a big user of Mocha Pro, one of AV3’s key plug-ins, and here we find out more about how the software was used in particular by visual effects artist Kyle Spiker on How To Get Away with Murder.

AV3: What kinds of visual effects are involved in VFX Legion’s work for How to Get Away with Murder?

Kyle Spiker: The show’s visual effects are what we call invisible effects. Most of the episodes involve small fixes – things you just don’t notice. But every once in a while you get something bigger. In the last season there were blood additions to make sure the blood matched from episode to episode. And there was a house fire that we had to make match from episode to episode even though it was shot at different times.

AV3: Is the invisible side of that work a fun part of working on the show?

Kyle Spiker: Yes, it’s good problem-solving. It’s a little unfortunate that you spend all this time to make something look good and no one notices! But that’s part of the challenge. I really like set extensions where you’re adding whole new parts of the world and no one knows the difference.

AV3: Can you talk about how mocha forms part of the pipeline at VFX Legion? What are the typical tasks you use it for?

Kyle Spiker: Mocha is used primarily for tracking here at Legion. I use it specifically for roto, tracking and paint, but for most of the artists it’s tracking only, doing corner-pins and getting that exported to their compositing package of choice. We have a pretty even split between After Effects and NUKE artists.


AV3: What would be an example of a shot on How to Get Away with Murder where mocha was particularly useful? 

Kyle Spiker: There’s a shot at the end of episode one of season three which involves the house fire. It has a lot of crew workers, firefighters, policemen, pedestrians, emergency vehicles. All of those had to be roto’d so we could add smoke and fire and embers and steam to the hoses, and add damage to the house. Mocha made it possible to roto all the characters at the level that we needed in a very short amount of time. I think I did all of the roto required for that shot in four hours.

AV3: Why do you think you could do that so quickly in mocha?

Kyle Spiker: We can do that so quickly in mocha because of its planar tracking – its ability to lock a mask to a plane so you only have to make small adjustments to the actual mask shape itself. Traditional roto, with no tracking assistance, involves a lot more keyframes and movement handled by the artist instead of the software. Having that planar tracking built in and so easy to use adds a lot of speed to the pipeline.

It also has a secondary benefit: every track of a head or body also gives you tracking data you can use for other purposes. It’s not just the mask – you get the track at the same time.
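
To make the idea concrete, here’s a minimal NumPy sketch of the principle Spiker describes. This isn’t mocha’s internal code or file format – just an illustration of how a per-frame planar transform (a homography) can carry a whole mask along, so the artist only adds small per-point tweaks. The transform values and mask points below are made up for the example.

```python
# Illustrative sketch of tracking-assisted roto (not mocha's internals).
# A tracked per-frame homography moves the whole mask automatically;
# the artist only nudges individual points where the shape really changes.

import numpy as np

def apply_homography(points, H):
    """Warp 2D mask points (N x 2) by a 3x3 homography."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]                 # back to 2D

# Hand-drawn mask on the first frame (e.g. around a firefighter's helmet).
mask_frame_0 = np.array([[100.0, 100.0], [160.0, 100.0],
                         [160.0, 180.0], [100.0, 180.0]])

# Pretend the planar tracker solved this frame-to-frame transform
# (in practice it is estimated from the pixels inside the track area).
H_0_to_1 = np.array([[ 1.02, 0.01, 4.0],
                     [-0.01, 1.02, 2.5],
                     [ 0.00, 0.00, 1.0]])

# The tracker carries the mask to the next frame automatically...
mask_frame_1 = apply_homography(mask_frame_0, H_0_to_1)

# ...and the artist only adds small per-point offsets where the edge drifts.
artist_tweaks = np.array([[0.0, 0.0], [1.5, -0.5], [0.0, 0.0], [-1.0, 0.5]])
mask_frame_1_final = mask_frame_1 + artist_tweaks

print(mask_frame_1_final)
```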


AV3: What’s the workflow for getting that roto or tracking data into other programs like NUKE?

Kyle Spiker: The latest version of mocha Pro has many different options for different packages, so whatever your compositing package needs, there’s the right exporter for it. With NUKE it’s very simple: you get a Roto and Paint node, all of the masks separately, and it even separates tracking data from mask data so you have further control.
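
As a rough illustration of what that exported tracking data amounts to on the NUKE side, here’s a short sketch using NUKE’s Python API that turns hypothetical per-frame corner positions into an animated CornerPin2D node. It is not mocha Pro’s actual exporter (which generates the nodes for you); the corner_tracks data and node name are invented for the example.

```python
# Illustrative only: build an animated CornerPin2D in NUKE from hypothetical
# per-frame corner positions, run from NUKE's Script Editor. mocha Pro's own
# exporters produce ready-made nodes; this just shows what the track data
# boils down to once tracking is kept separate from the mask shapes.

import nuke

# Hypothetical track data: frame -> four (x, y) corners of the tracked plane.
corner_tracks = {
    1001: [(100, 100), (500, 110), (495, 400), (105, 395)],
    1002: [(102, 101), (503, 112), (498, 402), (107, 396)],
    1003: [(105, 103), (506, 114), (501, 405), (110, 398)],
}

pin = nuke.nodes.CornerPin2D(name='Mocha_CornerPin_sketch')

for i in range(4):
    knob = pin['to{}'.format(i + 1)]   # CornerPin2D output corners: to1..to4
    knob.setAnimated(0)                # animate x channel
    knob.setAnimated(1)                # animate y channel
    for frame, corners in sorted(corner_tracks.items()):
        x, y = corners[i]
        knob.setValueAt(x, frame, 0)   # key the x channel at this frame
        knob.setValueAt(y, frame, 1)   # key the y channel at this frame
```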


AV3: Any mocha tips or tricks you can share – say, something you use in the software every day that saves you time on particular shots?

Kyle Spiker: Most people don’t use the stabilize button! I love using it. I’ll draw a mask around, say, a head or an object, and normally the track will carry the object off the screen, so you always have to move your view to follow it. But if you click ‘stabilize’, the mask stays in the same spot and you’re just adjusting the little differences on the edge of the mask. I find that alone frees up a lot of time – less clicking, fewer objects moving. That seems pretty unique to mocha, mostly because it tracks to a plane instead of a single point.

For roto in general, I also have a few tips. Keep your masks simple – the fewer the points the better. It’s better to have more masks that are simple than fewer masks that are complex: it saves you time, your rotos are better and they’re easier to do. Also, watch your shot before you start roto’ing and look for points that make sense for your keyframes. Instead of just putting a keyframe every ten frames, look for where the motion of an arm begins and ends and place your keyframes to match your footage.


AV3: What are some of the biggest improvements you’ve noticed in mocha over the past 10 years? What would you like to see in future versions?

Kyle Spiker: Mocha has definitely gotten faster, but I think the biggest thing for me was when they changed their whole hotkey setup. Being able to remap, change and add hotkeys however you want from package to package is really great. I’ve set it up so that all of my NUKE hotkeys are now my mocha hotkeys, and I can switch between the two without having to think about where my hands are supposed to go.

In future versions, I’ve always wanted someone to add a rigging system for roto – say, points on each elbow or arm, with that data then used to help drive the masks. Faster organization of layers, and naming those layers, would also be great. Changing the color of a mask is easy, but to change its name you have to click on the layer, double-click it and then re-name it. It’d be great if I could just hit ’N’ and type in the name – especially since in roto you build up so many layers so quickly. A person just standing there can be 30 layers of masks.

Find out more about VFX Legion at their website: http://vfxlegion.com

View mocha Pro in our store

The Future of VFX: 5 Major Trends Happening Right Now – By Ian Failes

The Future of VFX: 5 Major Trends Happening Right Now – By Ian Failes

Identifying trends in visual effects is fraught with danger. Technology is constantly changing, as are the VFX location hotspots. Right now, a truly worldwide visual effects industry exists, propelled in part by an explosion in comic book and sci-fi films, animated features, virtual reality and TV visual effects. Here’s a look at 5 current trends that are shaping the future of VFX – from returning to practical roots to new technology and creative innovations.

1. The push for VFX as creative partner

Filmmaking is clearly a collaborative medium, yet historically visual effects and post-production have come in, unsurprisingly, mostly at the end of the line. That’s certainly changing, not least because VFX changes can be quite substantial, meaning planning is crucial.

Research by Animal Logic into fractals, among many other things, paved the way for the look of the Internet as shown in Avengers: Age of Ultron.

Another reason VFX studios are finding themselves a larger part of the creative process is that they are now made up more and more of talented concept artists, designers and visual effects supervisors who are good at problem solving and have the tools to do it with. The director might only say, ‘Just make it look cool’, and suddenly a lot is left up to VFX.

Take Animal Logic’s work for Avengers: Age of Ultron, for example, where they had to design what the Internet looked like. The studio did that via weeks of R&D, both conceptual and technical. Or how about Iloura’s recent work for the Battle of Bastards episode in Game of Thrones, where they conceptualised the look of battling horses and soldiers in a very messy fight. The trend here isn’t so much a technical development but more a higher degree of trust and collaboration placed in the VFX team to get the job done.

2. Going practical, or making it look practical

Although pretty much anything can be done with CGI now, there has of late been a resurgence of nostalgia for practical effects. In what might be a response to criticism that visual effects are becoming ‘too CGI’, it could be argued that more and more shots are being handled as practical gags, or at least attempted that way first.

Shooting as much as possible for real was a large part of Christopher Nolan’s approach on Interstellar, even though visual effects remained a crucial part of achieving the film.

Some of the best filmmakers already approach their films with this as a key consideration – think Christopher Nolan and the use of large scale sets, practical effects and miniatures in the VFX Oscar-winning Interstellar. Or George Miller’s Mad Max: Fury Road and J.J. Abrams’ The Force Awakens.

Of course, all of these films also involved substantial digital visual effects work. The point, though, is that the starting line seemed to be what could be achieved practically on set, rather than the other way around. And even when VFX were relied upon, the digital shots were imbued with a sense of real-life photography and phenomena. Likewise, the success of digital tools that draw on the practical side of effects – Pixologic’s digital sculpting tool ZBrush is a great example – helps artists return to hand-crafted beginnings.

Looking to the future of VFX, it will be interesting to observe whether this trend persists, even as realism with CGI continues to improve.

3. VFX is VR is VFX

We are clearly in some kind of virtual and augmented reality revolution. A lot of money is being spent on and invested in this technology which is likely to find use in gaming, home entertainment, advertising and…who knows what else. Interestingly, the studios behind some of the best VR/AR work are also visual effects outfits. That’s because some of the hardest things to solve in VR are things that VFX artists have been trying to solve for years, including stitching panoramas and 360 degree video, compositing, HDR lighting, dealing with stereo and making digital assets.


Behind the Scenes: Jack Daniel’s “Storytelling: VR Distillery Experience”

The VFX studios hitting VR hard right now include Industrial Light & Magic (with its ILMxLAB), The Mill, Digital Domain, Luma Pictures, Framestore, Mirada and MPC. Plenty of VFX supervisors have made the transition to VR studios as well.

While pipelines and techniques in VR are still constantly being developed, several plug-ins supplied by AV3 are right at the forefront of VR and 360 degree video solutions, such as Imagineer Systems’ mocha Pro 5 and Mettle’s Skybox Studio 2.0 for Adobe After Effects. There are also Kolor’s 360 degree video tools.

4. The rise of digital humans, ultimately the future of VFX?

It’s often considered the Holy Grail of visual effects – to make a photorealistic and believable human performance, digitally. Hollywood has had several successes, from Digital Domain’s breakthrough Benjamin Button work to Weta Digital’s almost invisible digital Paul Walker in Furious 7. But making CG humans is hard, and there have also been some oft-discussed journeys into the ‘Uncanny Valley’ with films like The Polar Express and TRON: Legacy.

A breakdown of Weta Digital’s face replacement work for Furious 7. Along with 2D and projection methods, the studio also crafted a photorealistic 3D representation of Paul Walker after the actor passed away during filming.

Still, the mission remains. For several years, digi-doubles have existed to help with stunts and impossible shots, but in terms of close-up emotional performances, we might not quite be there yet. Luckily, several technologies are coming together (and have been for some time) to make digital humans more possible and more palatable. These include technologies such as high fidelity facial and performance capture, body, face and eye scanning and re-lighting techniques, rendering, muscle and skin simulation, and even just a greater understanding of the underlying movement behind human motion.

And there are artistic and technical breakthroughs, too, in the use of digital make-up on real actors – to smooth out blemishes, or to make them look younger, older, or like someone or something else. Lola VFX is one of the leaders here; their efforts in creating a ‘Skinny Steve’ in Captain America: The First Avenger and a younger Michael Douglas in Ant-Man are stand-outs. Artists looking to get a handle on digital retouching techniques have immediate options too, such as Digital Anarchy’s Beauty Box Photo 3.

5. Tools for the job

A final trend highlights the passion and innovation that’s clear in visual effects; coming up with cool new tools. Just like VFX studios are being asked more and more to be creative partners, they also have to solve large and complex problems. And they do that by drawing on their existing toolset of off-the-shelf software, proprietary tools, and by inventing new ones.

Take ILM’s LightCraft, for instance, which the studio developed for Warcraft as a system to automatically determine the parameters of digital light rigs from on-set imagery. It went on to use the tool on The Force Awakens, Jurassic World and Teenage Mutant Ninja Turtles: Out of the Shadows. And if one new tool wasn’t enough, ILM also built a new hair grooming tool for Warcraft called Haircraft, which saw major use in the bear attack in The Revenant.


An ILM featurette on their work for Warcraft.

There are countless other examples. Pixar developed a new crowd system called MURE specifically for The Good Dinosaur and Finding Dory, as well as a tool called AutoSpline for animating curves on Hank, the CG octopus in the latter film. Atomic Fiction, the studio behind the freeway chase VFX in Deadpool, built its own cloud-based rendering platform called Conductor. There are a lot of smart people in visual effects.

That’s 5 trends that are shaping the future of VFX and the industry as a whole, but there are clearly many others to take note of such as physically based rendering tools, real-time rendering, tools to deal with higher frame rate filmmaking and, possibly one of the biggest, the greater accessibility of tools and plug-ins for artists. What are some of the biggest trends you’re following in visual effects?

Ian Failes
Ian Failes – www.vfxblog.com