DIY Depth Capture Performance (MR)

Subscribe!

Want more case studies and podcasts like this?

Sign up for my Substack while it’s still free!

When I performed “on” stage “at” The Museum of Science last year, I gave a real-time demonstration of both motion capture and volumetric video using a depth-sensor camera. Around the same time, I released a case study on performing live theater in a gaming engine (Unity3D). Behind the scenes, I’d actually been in the middle of completing a three-year journey experimenting with this exact technology for my adaptation of Shakespeare’s Macbeth, called A Tale Told By An Idiot. This is my magnum opus in scrappy mixed reality (MR): the merging of real and virtual elements to produce new experiences, where physical and digital objects co-exist and interact in real time.

Setting the “stage.”

After launching the sci-fi series SONA with a VR escape room at Comic Con in 2018, I became fascinated with the gaming engine Unity3D as a virtual production solution for mixed reality scenery and characters (after all, for SONA, I built a spaceship in my apartment, which was a rather disruptive and unsustainable lifestyle choice for an indie filmmaker). But now I understood the basics of thinking about production in virtual and physical space simultaneously.

[GIF: SonaSpaceship.gif, the spaceship set built for SONA]

Almost immediately, I began “kit-bashing” free models and characters to see how far I could get with sweat equity. And the answer was…pretty far! I found an open-box MSI gaming laptop and an old Microsoft Xbox Kinect V2 on eBay and started teaching myself basic C# scripting and the commands to trigger animations and virtual cinematography in Unity3D. My early renders were incredibly unimpressive (below)…but having previously directed and produced two animated short films, it blew my mind that I was able to generate anything at all, especially without any prior skill or knowledge of 3D animation.

[GIF: witches.gif, an early render of the witches]
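
For anyone curious what that basic C# scripting looks like in practice, here’s a minimal sketch of the kind of component involved: press a key, fire an animation trigger on a character’s Animator. The class name, trigger name, and key binding are illustrative placeholders, not code from my actual project.

```csharp
using UnityEngine;

// Minimal sketch: press a key, fire a trigger on a character's Animator.
// "Cast" is a hypothetical trigger name defined in the Animator Controller.
public class AnimationCue : MonoBehaviour
{
    [SerializeField] private Animator animator;           // drag the character's Animator here in the Inspector
    [SerializeField] private KeyCode cueKey = KeyCode.Space;
    [SerializeField] private string triggerName = "Cast";

    void Update()
    {
        // Fire the cue once per key press, like calling a cue from the booth.
        if (Input.GetKeyDown(cueKey))
        {
            animator.SetTrigger(triggerName);
        }
    }
}
```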

“Is this a depth-sensor I see before me?”

Based on the success of SONA, I soon found myself writing…starring in…and eventually directing a modernization of Shakespeare’s Macbeth, set in the world of an indie video game company. This was the perfect concept to stress-test the Unity3D pipeline and see if I could put my actors “inside” the video game of our story. Throughout principal photography, I simultaneously recorded my actors’ live-action performances and tracked their skeletal and facial data, using a variety of different apps and software (as new tools entered the marketplace).

[GIF: macgreen.gif]

This data had to be meticulously “cleaned up” for smooth playback (the Kinect captured very jerky movements). Then the data was “parented” onto the skeletons of our 3D characters/avatars. An incredible 3D artist, Dharmik Bhatt, collaborated with me on this early process. We ended up with over 250 “animation events” for our story, each exported as a single FBX model with the correct animation and imported into the correct 3D environment inside Unity3D. Using C#, I could trigger each action or line of dialogue, one by one, on a timeline. After a lot of fuss, I could essentially make the 3D characters perform the entire “moment” inside Unity3D, just like watching a play. This allowed me to decide where to position virtual cameras and render out the final clip of animation.

[GIF: Idiot Tale Pipeline.gif, the A Tale Told By An Idiot animation pipeline]
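
To give a rough idea of that sequencing, here’s a sketch of the kind of script that steps through cues on a timeline: a coroutine plays each Animator state in order and waits before advancing. The cue names and durations are placeholders; my real setup wired the exported FBX clips into an Animator Controller.

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of sequencing "animation events" one after another.
// Each cue name must match a state in the character's Animator Controller;
// the names and durations here are placeholders.
public class SceneSequencer : MonoBehaviour
{
    [System.Serializable]
    public struct Cue
    {
        public string stateName;   // Animator state to play (e.g. an imported FBX clip)
        public float duration;     // seconds to wait before the next cue
    }

    [SerializeField] private Animator actor;
    [SerializeField] private Cue[] cues;

    void Start()
    {
        StartCoroutine(PlayScene());
    }

    private IEnumerator PlayScene()
    {
        foreach (var cue in cues)
        {
            actor.Play(cue.stateName);                     // jump straight to that animation state
            yield return new WaitForSeconds(cue.duration); // hold until the line or action finishes
        }
    }
}
```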

With non-stop tinkering on nights and weekends, I averaged about three clips per week (I even brought the laptop on location when I was hired on another movie), and I could always grab “pick-ups” of a character’s movements or dialogue if the cleanup was too complicated.

As I type this, I realize how insane this sounds, but it has given me immense respect for animation studios, creators, and game designers. And it does prove that anything is possible with sweat equity.

Fast forward 3 years!

Of course, because the project took so long to complete, the technology behind real-time 3D animation has improved drastically since I began. Today, I can take the exact same character models and Kinect camera and “puppet” the characters live, inside Unity. This is bittersweet because of the hours I lost on #IdiotTale, but for the overall industry, it’s AMAZING!

[GIF: realtimeunityknight.gif, puppeting a character live in Unity]

What’s even more impressive (to me) is that the Kinect can livestream and record a hologram-like image of me directly into Unity3D, regardless of whether I have a green screen. This “volumetric video” captures three-dimensional space based on the distance of each object from the device. I can change the settings to tell the camera to “occlude” (ignore) anything outside of a specific range (anything in front of me or behind me) so it only streams my body.
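
Under the hood, that range filter is just a threshold on the depth image: keep the pixels whose distance falls inside a near/far band and zero out everything else. Here’s the idea in plain C#, written against a generic depth buffer in millimeters rather than any specific Kinect SDK call.

```csharp
// Sketch of the "occlude everything outside a range" idea.
// depthMm is one depth frame, one ushort per pixel, in millimeters
// (0 usually means "no reading" on depth sensors, so it stays masked).
public static class DepthRangeFilter
{
    public static ushort[] KeepWithinRange(ushort[] depthMm, ushort nearMm, ushort farMm)
    {
        var masked = new ushort[depthMm.Length];
        for (int i = 0; i < depthMm.Length; i++)
        {
            ushort d = depthMm[i];
            // Keep only pixels between the near and far planes, e.g. just the performer's body.
            masked[i] = (d >= nearMm && d <= farMm) ? d : (ushort)0;
        }
        return masked;
    }
}
```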

[GIF: volumetric.gif, volumetric video streamed into Unity3D]

This allows anyone to use my exact workflow for A Tale Told By An Idiot to perform live as a hologram or a 3D character! In another tutorial, I explored WebRTC render streaming (also known as “pixel streaming”), which allowed me to share the Unity3D “viewport” directly with my audience’s web browsers or livestream it using OBS.

But I’m still using hardware that is three years old. If the software workflow has improved this much in three years, what about the device itself? A brilliant dancer, Valencia James, has recently gotten deserved attention for streaming her volumetric capture (like a hologram) directly into Mozilla Hubs using a more recent version of the Kinect. Similarly, I recently upgraded my iPhone to the 12 Pro, which includes a LiDAR camera. So I now have mixed reality capture in my pocket!

[GIF: valenciajames.gif, Valencia James’s volumetric capture]

So did I waste three years of my life?

Well, the project is called A Tale Told By An IDIOT! But I’m actually in good company… The Royal Shakespeare Company has also spent the past few years developing an interactive version of A Midsummer Night’s Dream, called Dream, which premiered last month. Puck wears a motion capture body suit that controls a virtual avatar. For their gaming engine, instead of Unity3D, they are using Unreal Engine, which has also released some remarkable virtual production tools within the last year.

So what does a 2021 approach to mixed reality look like without the budget of The Royal Shakespeare Company? At the beginning of the pandemic, I actually consulted with an Off-Broadway company about purchasing motion capture body suits, but the price point was still too high and they wouldn’t take the leap (can you imagine if they’d let me do a Dream-like real-time theater performance last April?!). Meanwhile, I’ve continued to explore “puppeting” 3D avatars with consumer-grade tech.

So if I had to make #IdiotTale today, what could I do with just my phone?

[GIF: UnrealTest.gif, testing Unreal’s Live Link]

Recently, my collaborator Alex Coulombe asked me to help him test Unreal’s Live Link app. He gave me and three other remote collaborators the credentials to his computer so we could stream from our phones directly into his Unreal project. Within minutes, I was “puppeting” an avatar on his computer. So if I made #IdiotTale today, I could have the actors animate their characters in real time during filming…and/or remotely from their homes!

If your local playhouse was willing to do Zoom Theater, there is really no excuse not to try volumetric capture or avatar performance during the continued shutdown. Audiences can move around the room to watch live performers as if they were holograms on the virtual stage. It solves all the problems of interactivity, immersion, synchronicity, and embodiment that normally frustrate online audiences. And best of all, the performer is unencumbered by the tech.

In the meantime, I really hope you’ll support my three-year journey with D.I.Y. Mixed Reality by sharing this with anyone who wants to create their own motion capture without a budget.
