Transcript

# Office Hours 2025-01-24


**Petr:** [00:00:00] I tried to figure out how to import your scene. You dropped us the files and I downloaded them, and I installed AutoShot and the AutoShot tools and the Blender plugin, and, well, something went wrong: it said there's no Cine video file, so you need to scan the folder containing the Cine video to match this take.


So I don't actually understand what to press. That's the first thing I want to understand; I can share the screen for that. Oh yeah, yeah, let's do that. And the other thing is that I watched the tutorial, and I have a few comments on what we can improve to achieve our goal of explaining to people how easy it is to use this plugin. First of all, as I saw, it's really a pain to change [00:01:00] the first frame.


It's kind of a weird thing that you have to go to one tab, and it's a kind of hidden tab, and then you go back to GeoTracker. So I would appreciate it if you could post this tutorial not next week but the week after next, so I can talk on Monday with the programmer and maybe we will fix this kind of UX issue.


Maybe we'll pop up a window and ask something like: oh, you have a sequence that doesn't start from the first frame; would you like to start it from the first frame or from the actual one? I mean, I think there should be a UX solution for that, and no extra tabs.


**Eliot:** Let me, let me just make sure I understand what we're talking about.


Oh, okay. Tell you what, let's first solve your thing with AutoShot, because I remember that take and I think I know what's going on.


**Petr:** Okay. No problem. I will try to share the screen. Actually, not the whole screen, but... [00:02:00]


**Eliot:** Because on this one, I think the markers were blown out, and we didn't have timecode-based matching yet.


Yeah, something like that. All right. So let's see. Okay: no Cine video file. Okay. The first thing you're going to want to do, let me grab my little marker, is tell the Cine source where to look. So you can click Browse next to that. And there we go. And then we'll go to wherever you unzipped that project file.


Okay, here it is. It should be somewhere here. There we go, that's the guy. And then Project, and inside Project you can double-click; you're going to click Cine Video. When we zip takes, this is just the standard place we put it. Now you can click Select Folder. Alright, that's fine.


And then normally what you would do is click Scan, and it's going to scan through the folder. Let's see. Is it going to scan the folder? Did you click [00:03:00] Scan? Let's see. Ah...


**Petr:** Okay. I clicked it. It pops up a window with "ignore cache" and "visual markers instead of timecode."


**Eliot:** Oh yes, yes.


Okay.


**Petr:** So what should I take here? Ah...


**Eliot:** Yes. So now, what we default to doing is matching takes with timecode, because if you're using one of the timecode sync devices, the timecode-based sync is really, really fast. And this is a take that was shot in March of last year, so it was shot before we had that.


So we're going to use the optical markers. So go ahead and click the... and I don't see that window popping up.


**Petr:** Yeah, I got it. So I have to pick the visual markers instead of timecode.


**Eliot:** Yep. Go ahead and ignore the cache. Yep. And then just go ahead and click Scan. The cache is just where it stores the matches it's already found; once it's made matches, it makes a list of those,


and you can tell it to ignore that.


**Petr:** So... oh no, I did it. The frame remapping, it's now in progress? Yeah. [00:04:00]


**Eliot:** So what it's doing is crunching through that take and doing an optical scan of it, looking for the markers. Yeah. It goes through and tries to optically detect those now.


If I remember correctly, those markers were really blown out in this take, so they're very blurry, and we may have to do it manually. Which is fine; that's a workable thing. But let's see if it automatically solves it. It's just going to go through that. Okay, alright, so now it found a matching Cine video.


There we go. So now it's scanning. Here it's scanning the Jet Set camera, so when you roll and cut a take... okay, there. Okay, Cine Camera Offset is zero. So it didn't find a Cine Camera Offset. Let's see why it didn't; I'm almost sure I remember exactly why, but let's check the video.


Let's take a quick look. Let's open our Cine video and see what that looks like. [00:05:00] It'll just pop up in a window, and I'm not seeing one. Yeah...


**Petr:** Yeah, I got it. I got it. I just have to share the other window. So I have to open the video. Yes.


**Eliot:** Yeah, yeah, it just pops you to the file.


**Petr:** Well, it's really a huge video file, I think.


Oh, okay. Yeah, 


**Eliot:** Yeah, that's a monster. So, you know.


**Petr:** Yeah, I can open it in Nuke, I think. Or is it okay if I open it in Nuke? I don't have a review viewer here on this computer. Oh, okay.


**Eliot:** Okay. 


**Petr:** Maybe I have one, but I'm not sure. It's a video file. I last used this PC a long time ago, and... oh, okay, I got it.


I've got the DJV viewer. So yeah, I can load this file frame by frame, I think. Okay. So I'll try to share this window with you.


**Eliot:** On that one, I'm almost certain I remember what [00:06:00] happened: there was an exposure problem on the markers, so they didn't automatically detect. So that's what we've got.


That's the file. Oh yeah, there we go. And if we go to the very beginning of the file, that's probably where we have the markers. Yeah, okay. It was super blurry, so it couldn't detect all the flashing frames and stuff like that. And I think the iPhone might have been even worse.


So the good thing about this, and you can exit that, this is fine: with our automatic matching systems, what we're basically doing is detecting the delay between when the Jet Set take was rolled and when the Cine take was rolled.


Actually, we usually roll the Cine take first, so the Cine offset is just that time delay. There are a couple of different ways we have of finding it; the automatic markers are one way. When that breaks, we have a manual method. And what I'll [00:07:00] actually do is just tell you the offset.


So I'm going to put the manual method in the chat: we go into Resolve, drop the two takes on top of each other, align them, and just measure the start offset. Not that hard. Oh, I just manually calculated it. Yeah.


**Petr:** Is it a time offset or an offset in frames? It's a time offset in seconds?


It's a time offset, so it could even be less than one frame.


**Eliot:** Yeah, it could be a fraction. So I'm just going to copy and paste what I've got over here, and it's just 1.883 seconds for this one. Okay.
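(A quick aside on the arithmetic: a time offset in seconds maps to a fractional number of frames depending on the clip's frame rate, which is why it can be less than one frame. A minimal sketch; the frame rates are just illustrative.)

```python
# Convert a Cine time offset in seconds to frames. The 1.883 s value is the
# offset from the call; the frame rates below are assumptions for illustration.
def offset_in_frames(offset_seconds: float, fps: float) -> float:
    return offset_seconds * fps

for fps in (23.976, 24.0, 25.0, 30.0):
    print(f"{fps:>6} fps -> {offset_in_frames(1.883, fps):.3f} frames")
# e.g. at 25 fps, 1.883 s is 47.075 frames: a subframe offset.
```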


**Petr:** I will try to find the dialog here in Zoom; for some reason, everything is hidden.


Oh, okay. Here's the chat. I got it. Okay. I got it. So where can I paste it?


**Eliot:** You're going to paste it into the Cine Time Offset. And that's it. Okay. Yeah. And there we go. And so... Should I press Refine? You don't need to. [00:08:00] What that is typically used for is when we do a timecode-based match. Timecode-based matches are very fast, but timecode by itself is not precise enough to get the extremely tight timing sync that we need.


It can easily be a frame off in one direction or a couple of frames in the other; timecode is not a high-precision tool, and we need a precision tool. So Refine Offset triggers an optical flow algorithm, which you're going to be familiar with, where we detect the motion in both clips.


What we're comparing is the motion in the Jet Set iPhone's 30-frames-per-second video with the Cine's 24- or 25-frames-per-second video, and deriving where those align correctly. The timecode gets us in the neighborhood, within a few frames, and then the optical flow gets us down to that subframe optimization.
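(A rough sketch of the kind of motion-correlation alignment described here, not AutoShot's actual implementation: build a per-frame motion-magnitude signal for each clip with optical flow, resample both onto a common timebase, and take the lag that maximizes their correlation. Paths and parameters are illustrative.)

```python
import cv2
import numpy as np

def motion_signal(video_path: str) -> tuple[np.ndarray, float]:
    """Mean optical-flow magnitude per frame, plus the clip's frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, prev = cap.read()
    prev = cv2.cvtColor(cv2.resize(prev, (480, 270)), cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (480, 270)), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return np.asarray(mags), fps

def find_offset_seconds(cine_path: str, phone_path: str) -> float:
    """Lag in seconds that best aligns the two motion signals."""
    a, fps_a = motion_signal(cine_path)    # e.g. 24 or 25 fps cine clip
    b, fps_b = motion_signal(phone_path)   # e.g. 30 fps iPhone clip
    rate = 120.0                           # common timebase in Hz
    t_end = min(len(a) / fps_a, len(b) / fps_b)
    t = np.arange(0.0, t_end, 1.0 / rate)
    ra = np.interp(t, np.arange(len(a)) / fps_a, a)
    rb = np.interp(t, np.arange(len(b)) / fps_b, b)
    corr = np.correlate(ra - ra.mean(), rb - rb.mean(), mode="full")
    lag = int(corr.argmax()) - (len(rb) - 1)
    return lag / rate                      # subframe-resolution time offset
```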


Yeah, I got it. I got it. Yeah. So we don't need to; we already did this with the optical method. I already did this [00:09:00] manually and calculated the Cine time offset. So now, in Blender, you can just pick... let's pick a clip in-frame and an out-frame, because that's a 6,000-frame clip. I think I have... what do I have?


I was using 1150 to 1320. So you can put, yeah, 1320... 1150. 1150 as


**Petr:** A start?


**Eliot:** Yep, 1150 as a start. There we go. And for our color, this was in Sony S-Log3 / S-Gamut, so you just go to Sony, and then I think it was, yeah, S-Log3 / S-Gamut3.Cine. And the AI roto mat, that's fine; ModNet will work for this.
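(For reference, the S-Log3 transfer curve picked here is published by Sony, so the decode to linear can be sanity-checked outside any tool. A small sketch of Sony's published formula; double-check it against the current Sony whitepaper before relying on it.)

```python
# Sony S-Log3 -> linear decode, per Sony's published S-Log3 formula.
# `t` is the normalized (0..1) encoded code value.
def slog3_to_linear(t: float) -> float:
    if t >= 171.2102946929 / 1023.0:
        return (10.0 ** ((t * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
    return (t * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

# 18% grey encodes at code value 420/1023, so this should print ~0.18:
print(slog3_to_linear(420.0 / 1023.0))
```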


What's better? So, both of those are good. PPMat is probably slightly better. Those are the two AI mat systems that are built into AutoShot, and both of those are okay. They're about three years old at this point, so they have a lot of chatter, and


they can be kind of patchy. We have a new one [00:10:00] called InspyreNet, but that's a big download, and I go through its installation in the SynthEyes tutorial. For right now, PPMat is fine; InspyreNet is also quite a bit slower, and these are faster. There we go. So then we're just going to extract the EXRs.


We're going to put it into an empty comp start in Blender. Have you picked where your Blender is?


**Petr:** Yeah, yeah, I did it. I did it. I just... yeah, I think I did it.


**Eliot:** Okay, that's good. Now you just click Save and Run. And, let's see, there we go. So what it's going to do is... oh, what's that?


Traceback. Let's see. What do we have going on here?


**Petr:** No space left on device. That's what's happening. Okay. No problem. I can remove your zip file, actually. I can remove that and leave only the unzipped folder, because, yeah, I have both now and I don't need [00:11:00] the zipped one. Yes.


That's 47 gigabytes. Goddamn. 


**Eliot:** Interesting. 


**Petr:** Yeah, no problem. It's just because of the capacity of my computer right now.


**Eliot:** Okay. 


**Petr:** Open, and... empty. Oh, it's empty. Okay. So try again.


**Eliot:** There we go. All right, so there we go. That's okay.


**Petr:** So you use FFmpeg, as I see here. Okay.


**Eliot:** Yeah, we use FFmpeg when we're pulling frames from anything that's an MP4 or an MOV. And in this case, this is a ProRes 4444 HQ log file that was originally derived from ProRes RAW.


Okay. When we're dealing with some other formats like BRAW, we actually do a direct pull from the raw source. And we're working on ARRIRAW and Canon RAW and a couple of these others, and [00:12:00] the REDs and stuff like that, to do a direct pull from those. But yeah, the Sony is just really, really fat log footage, which is great.


Mm-hmm. And so it's just going to do the frame extractions for that.
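(For illustration, the kind of FFmpeg frame pull described here can be reproduced by hand; this is not AutoShot's exact invocation, and the path, frame range, and options are assumptions.)

```python
import subprocess

# Pull frames 1150-1320 out of a ProRes .mov as 16-bit PNGs with FFmpeg.
# The source path is hypothetical; AutoShot's real command may differ.
src = "project/cine_video/cine_take.mov"
start, end = 1150, 1320

subprocess.run([
    "ffmpeg",
    "-i", src,
    "-vf", f"select='between(n,{start},{end})'",  # keep only this frame range
    "-vsync", "0",                 # one image per selected frame
    "-start_number", str(start),   # number files by source frame
    "-pix_fmt", "rgb48be",         # 16-bit PNGs, preserving the fat log data
    "frames/cine_%06d.png",
], check=True)
```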


**John:** Sorry, Eliot, just a quick question to jump in, another newbie question. So if we were shooting with REDs, for example, the current workflow would convert all that RED to, like, ProRes or something, and then go back to the R3D files for the final composite, but not actually use the R3D files in your software?


**Eliot:** Yeah, we're not reading the R3D yet. I would say, if you get into a project, let me know when you're doing that, because we're literally in development of some of those pieces now. We have to be able to connect directly to the RED stuff; there's no two ways about it.


[00:13:00] This was almost a year ago, so we didn't have a solution for that at that point. But if you're going to jump into a project with that, just talk to me and we can work out how we're going to approach it. If you were doing it today, yeah, you'd take the RED file, run it through REDCINE-X, and output a fat log file.


And at that point, if you're outputting ProRes 4444 HQ from RED, you're not really losing any data. Yeah, yeah, sure.


**John:** They're just huge. Or I guess the other option is just to record ProRes in the camera instead. I mean, we could do that, I guess, but usually when I'm shooting RED, I want to have the R3D, just because it gives you the most latitude later, for whatever.


Right. Yeah. Hands down, flat out, there are no two ways


**Eliot:** about that. And the monster files, I know, I know. We were decoding a few of those, and I went, yeah, you're going to need a bigger drive. But that's fine. All right. So, I know what it's doing: if you look down there, it's already pulled the [00:14:00] files.


What it's doing now is running, let's see, PPMat, and it's extracting all the AI mats. So we're at 155. It's cooking through the AI mat extractions.


**Petr:** Yeah. How does this basically work? Is it kind of a matte for, like, human bodies, or is it more about separating different objects,


so I can select which objects I want to


**Eliot:** matte? PPMat is about people. Now, InspyreNet is about foreground objects, which every once in a while it'll spoof. InspyreNet is a much higher quality one. It's also an eight-gigabyte download, so we have that as a separate install.


**Petr:** And basically, how much VRAM do I need for that?


**Eliot:** You know, the VRAM doesn't seem to be that big of a deal.


It just takes a little bit longer to do it.


**Petr:** Well, actually, now I saw that this process took about, [00:15:00] I think, three gigabytes, maybe four, for PPMat. I think that's kind of okay. Okay. So yeah, I opened that file. So I will re-share my window now. Oh yeah, so here's the Blender file.


The Blender application. Yeah, here it is.


**Eliot:** There we go. That looks right. Now, what it has done is automatically applied those AI mats as the background mats. So as we go forward through the timeline, you'll probably see it chatter a bit, because PPMat is a three-year-old algorithm, but it's all right.


It does a reasonable job, especially for garbage-matting kinds of things. It does a pretty good job of extraction.


**Petr:** Actually, it's really interesting: why does it not play really smoothly? What's heavy here? Like, what's taking my resources in this scene?


**Eliot:** Oh, sure.


It's the playback of the EXR files.


**Petr:** So that's just because of the EXRs, okay. Can you use proxy files for that, something like that?


**Eliot:** You know, [00:16:00] um, honestly there, that gets into areas in Blender. I just don't know. Um, the, we basically by default, we can make it work with the, um, uh, I wanted to make sure we were working with the high resolution files.


so we'd make sure we didn't hit memory problems. So I've just always tested with 4K and 6K files. The playback's a little bit slower, but we're in the 3D world now, so that's what we're kind of focused on. Yeah, yeah, I got it. I got it. Okay.


**Petr:** So let's try to do this. First of all, if I want to return this scene to this condition, can I save this as, like, the zero version and then come back?


**Eliot:** Yeah, let's just save our Blender file.


**Petr:** Because I don't want to go through the whole process again to check it. Yeah. For now, okay, I'll increment it or something like that. Okay. Like that. And then again, do something like [00:17:00] this, see. So, okay, we've got the whole scene, and then we can go to the VFX Motion Tracking workspace and load this. Actually, where should I find it?


**Eliot:** Just go up one directory and it's going to be in Cinecam EXR. No, that's not Cinecam EXR.


**Petr:** Actually, the path is... uh-huh, project, sequences, uh-huh, I got it, I got it, mm-hmm.


**Eliot:** Yeah, when you're extracting sequences, we basically organize them in a fairly logical way.


**Petr:** Mm-hmm. So I can enter pin mode and look through the whole shot. Yeah, nice, it looks really nice.


**Eliot:** Now, I just kind of randomly put pins on a fairly broad area of it, because it's already aligned, but [00:18:00] I wanted to see how you were going to do it, whether you'd do it any differently.


**Petr:** Yeah, okay, so that's kind of the main point that I think should be different in your tutorial. The main point I want to explain, because maybe we don't quite understand each other: the idea is that we take the optical flow and take the exact position of the mesh at the manual keyframe. It doesn't matter whether you pinned or not on this keyframe.


If the keyframe is manual, you have that green dotted line down there on the playback. In that case, we take this position and project the features from our optical flow onto the mesh. So pins have no influence at all on the tracking. You can add pins, delete pins.


They're just an instrument for positioning: instead of rotating and translating the objects in the scene, you can use pins for [00:19:00] positioning. You can also use pins for checking the quality and so on, but they have no influence at all on the tracking quality. So it doesn't matter where you put the pins.


That's a very important thing, and many people don't understand it, and we don't know how to explain it more clearly, because everyone thinks: oh, okay, pins, it's tracking, they're like features. But they're not features; they're just instruments for positioning.


**Eliot:** I see, I see. And so when I placed the pins, it automatically dropped a keyframe.


And that's actually the important part.


**Petr:** Now, you can add a keyframe without pins by hitting the button here, but you can also, for example, add pins; you can pin the shot, and in that case this frame will be your manual keyframe. If you want to remove the manual keyframe, you can click the remove button here, and you will remove the keyframe, but all the pins will still be there, because they are [00:20:00] just projection handles that you still have


**Eliot:** access to.


I got it. I got it. Okay, so I'll go back and redo that portion of it to just drop a keyframe. Because what I wanted to show is that, basically, I think this is going to be phenomenal. I've run this by a couple of people; I showed this to Alex, and Alex was like, this is a game changer, right?


All of a sudden you can get locked tracks in Nuke and Blender and stuff like that, and you don't have to deal with external trackers, and it's aligned. He was very excited about it. So I want to get it right; it's worth doing this. It's totally worth me going through this, chasing my tail, showing it to you, and getting it right.


But one of the things I wanted to show is that I deleted almost all the keyframes we had from the Jet Set tracking coming in, so I just had one good frame. So I'll do the same thing: I'll drop a keyframe, and then it tracks the rest of the shot.


And that way... this is one of the [00:21:00] concepts, Petr, that we're working on figuring out how to express: doing things this way. In virtual production right now, people think everything is designed to be perfect in the moment, on the day. That's really hard. It's surprisingly, shockingly difficult to have everything perfect at the moment of capture.


I mean, it's very enticing, you know: oh, we're doing in-camera VFX, this is great... until you actually try it.


**Petr:** I have some experience on the scene, I know how difficult it is to get actual shots, like, correct when you, uh, when you make a movie, and then on a post they said, okay, they will fix it on a post, and then on a post, post guys are like, what the hell, so I have about five years of compositing and tracking, so I know all that stuff.


And I also have kind of an engineering knowledge of how all these things work. Well, not as good as yours, because I'm not actually a programmer, but I kind of understand how it works. So, [00:22:00] basically, that's really painful work in post-


**Eliot:** production. Exactly. And this kind of thing, because if we get the production data to flow very quickly into your post...


Yeah, it gets easy. It gets really easy. Yeah. Okay. So this is great; thank you for telling me about that. This was critical, because I went through and watched the KeenTools tutorials, et cetera, and I didn't catch that. I thought the pins were setting the keyframe, and I didn't realize that they did it automatically.


But the pins were not the key thing; the keyframe, the little green line, that was the key bit of it. Okay, so I'll do that. And when we're doing the analysis, is it GPU-based or CPU-based?


**Petr:** Well, it depends on your computer. We can calculate on both, and our algorithm just decides


which to use; it depends on your computer. If you have a GPU, we'll calculate on the GPU; if that fails, we'll calculate on the CPU. In Nuke, we have a button: [00:23:00] you can just choose what you want to run. But here we hide this option and just decide what's best. Actually, you can also not make this analysis file: there's a checkbox to not use the cache file, and we will analyze each frame on the fly, but that will really affect the tracking speed.


So you will track at the speed of calculating the analysis file. And that's okay for tracking; it's still faster than tracking features. But as you can understand, the main approach of tracking with GeoTracker is aligning and then refining. So the refine is key.


The tracking buttons are actually almost unnecessary, because the really important thing is to make the correct keyframes and then refine everything and finally get a [00:24:00] good result. And the process of refining is tracking back and forth, so it's about twice as slow.


**Eliot:** I see. 


**Petr:** And of course, it's a creative process.


So we'll retrack sometimes, and that's why you need this cache file. It's a real game changer that our program has that cache file, and you can work with the tracking really fast. The speed of tracking here could be about 24, maybe 50, frames per second; it's not a problem. The only speed limit, when you have a cache file, is the speed of reading your EXRs.


Yeah. Got it. So that's what's limiting you; in Nuke, for example, it will work faster. And actually, between us, you can even not use the image. I'm not sure; I think if I remove it from here, I can't track forward. Yeah, it's sad that [00:25:00] I have to have the analysis file selected, but in Nuke you can even remove the clip


to get rid of the slowdown and use only the cache file for tracking. When we track, we are not looking at the clip file; we're just using our cache file, because we only need the optical flows and our metrics, and we have all of that in the cache file. So the process should look like this: you have a first frame, a last frame, and you have motion.


Like a professional, you analyze the motion; you try to find the points where the direction of movement changes the most. For example, here you have kind of linear movement, so there are, well, no real change points... but maybe you have the change point


somewhere. Oh, actually we [00:26:00] have a plane here, yes, as I understand. So can we disable this plane?


**Eliot:** Yeah, you can disable the... oh, the image. Yeah. So let me grab my annotator; open up this shim origin. It's actually underneath the camera. So, this one.


Actually, go up one to the camera, and let's drop down the camera, and there it is. So there's the image plate, and you can actually hide the image plate.


**Petr:** Yeah. Yeah. I want to not render that; I think we can render quicker with only the files. But no, it's still... can I delete this, like, remove it


**Eliot:** From the scene?


You certainly can, yeah. And then it's worth experimenting to see what things... yeah, yeah.


**Petr:** No, no, it's not the case. Okay, doesn't matter. Okay. So let's try to track something, to put some [00:27:00] keyframes here and there. Okay. I will add the keyframe just by hitting this button.


**Eliot:** Yeah. 


**Petr:** And press Track Forward. So now we've got tracking. Ah, we've got tracking of the object; that's not what we need, so I will remove the tracking, change to the camera, and then I can actually remove the camera animation also. Yep, yep, just highlight the camera, get rid of the keyframes


with the button. And, as I understand, we can add here the


**Eliot:** The timeline, yeah. Or the graph


**Petr:** editor, yeah, to look at what's happening here.


**Eliot:** And it'll probably be easier if you're in the timeline; the graph editor can be confusing. So go over to the timeline, and then you can see the different pieces that are there.


**Petr:** Yeah, I got it. I got it.


Yeah, we removed it. Well, actually, for some reason we are on the second [00:28:00] keyframe. We don't need the second keyframe; we should be here. Okay, so let's track it forward. So the green ones are automatically tracked... features? Well, not features; automatically tracked frames.


And the dotted line is a keyframe that we made manually. Well, in that case, I just hit Add Keyframe, and we're defining that this keyframe is perfectly aligned initially. Yeah, but it doesn't have to be the first frame; it could be any frame on the timeline. It doesn't matter.


It works really nicely, because you have a lot of features here; that's a pretty good shot for tracking. And actually, I think you can even use proxy video for tracking. It should [00:29:00] be faster, and at the same time, I think something like 2K would be enough. 2K JPEGs should be absolutely okay for tracking this kind of shot,


because it's... what? Oh, it looks really nice.
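(Petr's proxy idea is easy to try by hand; a hypothetical sketch of generating 2K JPEGs from the extracted EXR sequence with FFmpeg. Paths are made up, and log-encoded EXRs may need a proper color transform rather than FFmpeg's default handling.)

```python
import os
import subprocess

# Downres the EXR frames to 2K JPEG proxies for faster tracking playback.
# The input pattern is hypothetical; adjust to wherever AutoShot wrote the EXRs.
os.makedirs("proxy", exist_ok=True)
subprocess.run([
    "ffmpeg",
    "-start_number", "1150",
    "-i", "Cinecam_EXR/cine_%06d.exr",
    "-vf", "scale=2048:-2",   # 2K wide, keep aspect (even height)
    "-q:v", "3",              # plenty of quality for tracking purposes
    "proxy/cine_%06d.jpg",
], check=True)
```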


**Eliot:** Yeah, yeah, I was very happy to see this work so well, and once we have the methodology going, there are a bunch of other things that I want to try it with, because I think it's great. It's just, you know, there are a number of groups I've talked to that basically just don't want to deal with distortion, right?


And they're not doing anything particularly extreme, so you're not going to see that aspect of it. I just don't think it's going to be that big of a deal for most shots. If you're going to do something where you need it, great; then we can worry about undistorting the plates or something like that.


But yeah.


**Petr:** Well, actually, about distortion: I think there is no problem working with distortion. I mean, if you have the [00:30:00] distortion information, it's pretty easy to undistort the footage, redistort it, and so on. And of course, it dramatically influences the quality of your visual effects.


And, well, you're an engineer; it's a kind of engineering task: to create something like a checkerboard, but not a checkerboard, some technology. You have a camera, for example a cine camera, or a camera from your device, like a mobile phone and so on. And you have a screen, for


example a notebook, a laptop. The laptop screen is flat, and you can know the geometry of that pattern. Automatic undistortion and redistortion, like calculating the coefficients, would be amazing technology, I think. And it should dramatically improve the tracking quality and the quality of the compositing.


So that's kind of the next game changer, I think. [00:31:00] Someone should create that.
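(The building block for what Petr describes already exists in OpenCV; a minimal sketch of estimating distortion coefficients from views of a flat pattern. A checkerboard displayed on a laptop screen works the same way as a printed one. The capture folder and pattern size are hypothetical, and this is plain OpenCV, not anything shipping in AutoShot.)

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):  # photos of the screen from many angles
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for intrinsics and distortion coefficients (k1 k2 p1 p2 k3).
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("distortion:", dist.ravel())

# Undistorting a plate with the recovered coefficients:
frame = cv2.imread("frames/cine_001150.png")
undistorted = cv2.undistort(frame, K, dist)
```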


**Eliot:** That sounds good. All right. Well, it looks like it tracked. It looks like it tracked really well.


**Petr:** Yeah, it's tracked. It's absolutely tracked.


**Eliot:** Now, when it did that, did it automatically replace the original keyframes? The camera had its original set of keyframes.


And so, since that camera was selected, the keyframes have automatically been replaced by the GeoTracker keyframes?


**Petr:** Yeah, actually, I hope so. I'm not sure. We can check it by removing keyframes and then trying to move further. Yeah, as you see, the model goes off.


So it stays; everything works correctly. Like that. Okay, so what we've got here, we've got tracking, and it's pretty accurate tracking, as I see. We can check it by putting [00:32:00] just one keyframe... not a keyframe, I'll call it a pin. Yeah, a pin, just for checking the quality. So we can jump around and look at that pin.


Yes. Ah, okay. Okay. So it's not part of your geometry. Maybe somewhere here it should be more relevant. That's a problem, sticking out. Yeah. Well, not a pixel-perfect track for now, I think, but it's pretty okay.


**Eliot:** Yeah. 


**Petr:** It's pretty okay. I think it's because of the distortion.


**Eliot:** If you pick a spot that's on one of the flat areas, then that should be quite a bit better, because the geometry scan, when you're in areas of high transition, just doesn't have enough resolution to capture all those individual little


**Petr:** corners and 


**Eliot:** stuff. 


**Petr:** Uh-huh, I got it. Okay. [00:33:00] It looks pretty nice. Actually, when you put a pin, you make a manual keyframe.


As you see, I made three keyframes; that's not really cool. I'm not sure. Well, the best way to deal with that is putting the pins on existing keyframes, so as not to create new ones. But actually, now I don't want to use the tracking; I want to use a mask. You said we generate masks.


So let's try to combine that effect. Like... oh, actually, what's going on? When we move this, as I see, the plane is moving weirdly. Yeah, you see the two people on the screen.


**Petr:** I think, I think that's, well... the main picture is the background. So here it is, the background. And this [00:34:00] one is a card that we've got.


I'm not sure where it is. I got rid of that card. Yeah, let's get to


**Eliot:** that card. Can you flip into the layout view? Let's see where that's coming from. I don't quite know, but it's clearly coming from somewhere; maybe just click on it in the main layout, and let's see where it's showing up.


**Petr:** Oh God, that's weird, like, just in 3D. Okay. We don't have it here, but we've got it here for some reason. That's pretty weird. Okay. What?


I think so, yeah. Yeah. Okay, I'll try to figure out what's going on. [off-mic crosstalk] And the problem is when I go to... [00:35:00]


**Eliot:** Let's leave it for now. Let's see. Uh-huh. All right. Okay.


**Petr:** So I don't know why we see both the camera and the plane, but, well, okay.


It's pretty okay. Well, actually, it will bother us, I think, because we want to check the mask. So, okay, let's try to use masks here.


**Eliot:** Oh, yes, yes.


**Petr:** Okay. We have masks; we have compositing 2D masks, so you can select the sequence here... select the sequence, and where should it be?


**Eliot:** Go up one and then go to AI mat.


It's A-I, at the


**Petr:** top, this one. Okay. Okay. It's a PNG. So what do we have here? We have a PNG file, and here we have a selector of channels for the mask. So I think you only have an alpha channel. The


**Eliot:** PNG file is 8-bit grayscale, so it's [00:36:00] not an RGBA file. And I remember thinking that might be one of the things causing a problem: it may be expecting an RGBA PNG.


But it may not understand what a straight 8-bit grayscale PNG is; there's no other color.


**Petr:** Well, actually, I understand, but there should be just one channel inside.


**Eliot:** Yeah, 


**Petr:** One channel. Anyway, one channel inside. Well, it should be kind of a red channel, I think.
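(A quick way to check what's actually inside those mask PNGs, and to expand an 8-bit grayscale mask to RGBA if a tool insists on four channels; a sketch using Pillow, with a hypothetical file name.)

```python
from PIL import Image

mask = Image.open("ai_mat/mask_001150.png")
print(mask.mode, mask.size)   # "L" means single-channel 8-bit grayscale

if mask.mode == "L":
    # Duplicate the grayscale band into R, G, B, and alpha so readers that
    # expect an RGBA PNG still get the matte in every channel.
    rgba = Image.merge("RGBA", (mask, mask, mask, mask))
    rgba.save("ai_mat/mask_001150_rgba.png")
```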


**John:** Yeah, this is where I'm out of my depth.


Sorry, quick newbie question about all this. So these masks are generated by the phone app, correct?


**Eliot:** No, no, these are generated in post-production, in AutoShot. There are a few different AI matting tools; they're in the AI mat dropdown when you're processing a shot in AutoShot. There are a couple, and sometimes you have to try them out on your scenes.


The AI tools all behave differently on different devices. Yeah. So there are two that ship automatically with [00:37:00] AutoShot, and they have strange names; they're academic model names. ModNet is one, and PPMat is another. We didn't name them.


And these are okay; you're seeing one, and you can see that it's extracting the humans in the background decently. We have one that's state of the art called InspyreNet. That's an eight-gigabyte download, because these are big AI models; the weights and stuff like that are not small.


So we have that as a separate install. I put in a link to the intro of the SynthEyes tracking refinement; in the first part of that tutorial, I walk through downloading and installing InspyreNet. The others, PPMat and ModNet, are both Windows and Mac.


That's both Windows and Mac. Uh, but Inspiring Ed is right now Windows only, uh, because we can only get it installing under Windows. 


**John:** And so, I [00:38:00] guess you're not using these AI mats for, like, a final composite. You're using them for what purpose? In this case, to help matte out the moving subjects so you get a better track?


I'm trying to wrap my brain around what exactly you're using these for.


**Eliot:** That's exactly it. Because otherwise you have to hand-draw an animated matte around a moving actor. Got it. Pain in the ass. Yeah, yeah. Pain in the butt.


**John:** So basically, in this case, you're telling this plugin, GeoTracker, to use the matte to ignore that person. Yeah.


Like a garbage matte. Got it. Right. And the other question I had was: would there be some other kind of preferred tracking marker? I would imagine something really high contrast, like some sort of black-and-white checkerboard cards that you could put on the floor. I mean, would there have been anything in the production that would have made this either more accurate or easier for the software to [00:39:00] grab onto, in terms of detail on the set?


**Eliot:** So there are two separate areas. One is how Jet Set tracks. Jet Set is a natural-feature tracker, and if you want to see what it's picking up, click on the Origin tab; as you're moving around in the scene, you'll see a bunch of tiny red, green, and blue points that look almost like little fireflies. It shows you in real time what it's detecting, so you can very quickly see


what it can pick up on. And certainly on the green screen, it's not going to pick up most things except the corners of where the pieces of tape are.


**Petr:** Yeah. Yeah.


**Eliot:** And so, if you're running into issues on a green screen, we recommend... it's not like it's a new thing:


crosses of tape on there. It's still, exactly, yeah, the best go-to; just put a bunch of them on there. Mm-hmm. If you put something up, especially with green screen, you just want shades of green so you can still key it out. You don't want a separate color.


**John:** Well, yeah, certainly if they're going in front of it. But if it's off to the side, then wouldn't it be better just to have something really nice and high [00:40:00] contrast that, as long as the person isn't in front of it, you don't have to roto out?


**Eliot:** Yeah, off to the side is fine, and you don't need anything special. If you fire up your Origin screen and just look around your house, you're going to see that certain things generate enormous numbers of features. Under the hood, what it's doing is corner detection.


So it looks for areas of high-frequency gradient in two directions, not one: one direction is a line, and you can't track on a line. It looks for corners. But a quote-unquote corner can be anything from part of a picture or an image to part of a carpet. You'll very quickly see which things generate a million fireflies and which ones don't.


Right. And some things like foliage generate a lot of fireflies, but they're not very accurate; the leaves change too much. But things like carpet are great. That's one of the standard things we end up calibrating with, because everybody's got a carpet and it's got lots of features on it.
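(To see the kind of corner features being described, a small OpenCV sketch can stand in for the idea; Shi-Tomasi corner detection is an illustration of two-direction-gradient corners, not Jet Set's actual detector, and the image path is hypothetical.)

```python
import cv2

img = cv2.imread("set_frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Corners need strong gradients in two directions; plain edges are rejected,
# which matches the "you can't track on a line" point above.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
for x, y in corners.reshape(-1, 2):
    cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)  # draw "fireflies"
cv2.imwrite("set_frame_corners.jpg", img)
```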


**John:** Yeah. Yeah. I noticed that in the tutorials; there are a lot of carpets.


**Eliot:** [00:41:00] Pragmatic. So you can experiment and try things out. In this case, just the roughness of the green screen on the floor had plenty of feature detail to work from, so it worked out great. If you were on a perfect green screen that had absolutely no features, you're going to run into problems.


**John:** So is there any way... for instance, I know in one of the tutorials... I guess, where am I going with this? Let's say you build yourself a 3D model of your set, right? It's a 3D set. And then you do your best on the stage; let's say you have to have certain things in the same location, right,


as in that 3D model. Is there any way the software can help you do that? I was even thinking maybe some kind of QR code, like a large QR code that you print out and stick on the floor, and then you aim the camera at that thing and it's like: oh, that QR code references, you know, a null object, so-and-so, [00:42:00] in the Blender scene.


**Eliot:** You are exactly correct, and we built exactly that. I'll put a link to this in the chat: tracking origin markers. There we go.


**John:** In the chat. Okay. Yeah. There you go; I'll check that out.


**Eliot:** So we have two kinds of tracking origin markers.


We have both horizontal and vertical, and we have two sizes. One is just normal A4 size, to print out on your normal printer. Oh yeah, I see. Oh yeah, look at that. Uh-huh. And the other one is going to work better for stage production. The A4 size is great for testing in your house and stuff; on stage,


you need a larger marker for detection distance. Uh-huh. Those are on our downloads page. They're built to be printed out via Canva.com at an 18-by-24-inch size; eighth-inch-thick board works great. The way it works is that the origin markers on the floor lock the origin and the orientation they specify: whatever [00:43:00] orientation, bam, it's set. The vertical ones are designed for when you're on set and you can't tilt the camera down because you're rigged up.


You can't just... you know, the camera's on a dolly or whatever. So you put the vertical origin marker on a pole, just on a standard C-stand or a standard lightweight stand, put it in front of the camera, and if you're in origin mode, it'll optically detect it and highlight it with a ring around it.


You click the yellow button that says Set Origin. It's going to fire a ray straight down from wherever it detects that pole, down to the ground, and it's going to reset the origin on the ground. It won't reorient it; the vertical one only does positioning. But it's a very good way of handling stuff on set where, you know, oh, I've got to reset the origin.


A guy walks over in front of the camera. And you could do it remotely as well. Right, so someone on video village... yeah, that's very cool. Yeah, just usual production stuff: having somebody walk in front of the [00:44:00] camera, hit it, and they're out of the way, the same way as doing the slate.


It's easy. Right. Awesome. 


**John:** Thanks. 


**Eliot:** Yeah. No worries. Hey, it's very much engineered for this. All right. I'd say I think I got through all the questions, but, oh, what you just described, pinning part of your 3D scene to part of your 2D scene, is the entire design of the system.


So, when you start going through the tutorials... are you on Blender? Unreal? I mean, what's your 3D app of choice?


**John:** At the moment, Blender. I'm sort of an Unreal newbie, so I'm using Blender.


**Eliot:** Yeah. Blender is great, absolutely, and it's wildly underrated for virtual production.


That's kind of our internal weapon of choice, though we use everything. So in Blender, when you do the walkthrough... I'll put the link under Tutorials and then [00:45:00] Blender Workflows, and here we go: the AutoShot Blender round trip is the one you want. In it we go through actually pulling in a scan from Polycam, but, you know, you could build a Blender scene with Megascans or whatever.


And we drop in what are called scene locators. Those are just named empty nulls in the scene: scene loc underscore, you know, followed by a specific name to describe it. When you export that, we read those into Jet Set, and it makes a list of scene locators in Jet Set, so that when you're moving around the scene you say: okay, I'm going to set my tracking origin over here at the corner of a post, and I'm going to do, say, a set extension or something like that.


Exactly right. Yeah. And in my 3D scene, I made a scene locator that's designed to be where that post was going to be. So I set my origin in Jet Set, pick the scene locator, and it'll align. The whole system is designed around picking a particular spot in your 3D scene and [00:46:00] hooking it to a particular spot in your 2D scene, locking those together in 3D space.
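(A minimal Blender sketch of what a scene locator amounts to: a named empty at a known spot in the 3D set. The exact prefix spelling AutoShot scans for is an assumption here, so check the Lightcraft docs before using it verbatim.)

```python
import bpy

# Create a named empty to serve as a scene locator at the corner of a post.
# The "sceneloc_" prefix is an assumption based on the conversation above.
loc = bpy.data.objects.new("sceneloc_post_corner", None)
loc.empty_display_type = 'PLAIN_AXES'
loc.location = (2.0, -1.5, 0.0)  # where the post sits in the 3D set
bpy.context.scene.collection.objects.link(loc)
```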


**John:** And so I guess, in the workflow, in that scenario, if there were certain props, or, I don't know, say there's a post or something, that way you could put a reference marker on the set for the actor, so that they would know where the hell that thing is.


But you're kind of working through the app in order to do that. Am I on the right track there?


**Eliot:** Yep, you're going in the right direction. So we set the origins there. And in fact... this is one of those things I need to put up; I haven't done the tutorials yet.


We have something called object locators. If there are things in your set you need to be able to manipulate interactively in the app, like a character or something like that, you can do the same sort of thing in your 3D scene: you put an object loc underscore null in your 3D scene and parent the characters to it.


Then when they show up in Jet Set, you can actually move those particular pieces around. Or you can put an origin marker on the floor, and it can detect that and lock [00:47:00] the object to that origin. So yes, all of these things are exactly...


**John:** Sorry, let me... so are you saying that this would be basically like prop and object tracking, live, and that data then gets fed back?


So the actor is holding, I don't know, some kind of crazy thing that's a CG thing. Is it able to do that? Is that what we're talking about?


**Eliot:** It's not live. It's designed so that if you need to have a prop on set, or a creature or something, then you can put something on set and lock the creature's position to that.


But if you move it around, it doesn't track and follow it. You have to point at it again and tell it to lock to it.


**John:** Oh, I see. Yeah. So it's not really like... yeah, I know. I was thinking about, like, OptiTrack, you know, object tracking in addition to the camera tracking, where you can track some kind of virtual object or something.


**Eliot:** Well, if you're doing that, [00:48:00] then what you'd end up doing... and I'll send you a link to Alden's video; he just did that, where he has a seven-foot-tall robot wandering through the scene. What he did is he has a second Jet Set iPhone that's working in the same coordinate system.


It's tracking in the shot, and it's matched to the same timecode. So in post, he just pulls those two in, takes the tracking data from the giant-robot Jet Set, drops it into the main one, and everything aligns, because you're in the same space at the same time. It's basically like a miniature motion capture system, minus the 20


or 30 cameras set up, and the twitchiness where, if you accidentally elbow one of the cameras, everybody has to stop for the next 45 minutes while you run around with a wand and recalibrate. Oh, that's gone.


**John:** So, sorry, just to follow up on that one again. The second phone is almost like a movable tracker; it's actually tracking where that phone is in the space. Is that what [00:49:00] you're saying?


Like it's actually tracking where that phone is in the space. Is that what [00:49:00] you're saying? 


**Eliot:** Yeah. Well, since each phone running Jet Set is... all right, can you guys see the screen? Okay. So in this, you see there's a camera here, and that's running Jet Set Cine, so the camera's moving and tracking. And back here... where's my annotate, there we go, let me annotate this guy, all right, right here.


That's another Jet Set iPhone with a battery pack, and I think some timecode hooked to it. And here's our intrepid actor moving this through the scene. Let me switch out of annotate. There we go. So, as he's moving this through the scene, this is how the eye lines are correct in that


**Petr:** shot.


**Eliot:** Yeah. So, again, there's no mocap system on set. What you're seeing here is all the compute that's on set: there's no brain bar, none of that. [00:50:00] These phones are both tracking themselves in 3D space; that's how Jet Set works. And they're just two instances of Jet Set on two separate iPhones.


One of them is tracking the main camera, and one of them is tracking the seven-foot-tall robot. It only gives you a single point, which is the head. Right, right. And then he's going in and animating the body and stuff like that to match where that is. But that way all the eye lines are correct.


You can actually have a very fast-moving character going through the scene. It's basically an iPhone on a stick, so if you needed to have something run through the scene really quick, you have somebody swing the iPhone on the stick through the scene, and the actors' eyes track it.


The iPhone tracks very accurately, within a centimeter or so. And then in post, things line up.


**John:** I know, that's a really inventive thing to do. Thanks. That's cool.


**Eliot:** I'm really glad... I wanted him to show that because, once again, this is a technique where people are used to seeing a [00:51:00] team of mocap experts and two days of setup,


and 40 cameras, and, you know, the circus that we've all seen, and enormous amounts of work in post. And nope: two iPhones on set, both capturing the data internally. So you just download the data, line up timecode, extract the frames, and then in Blender you copy the motion from one take into the other Blender scene.


That's it, and then you have things aligned in space and time. So yeah, it's fun. And getting this message across to people is one of our current projects.
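(A rough Blender sketch of that last step, baking the tracked camera motion from the second take onto a target in the main scene. The object names are made up, and it assumes both imports already share the same timebase and frame range.)

```python
import bpy

src = bpy.data.objects["JetSet_Take2_Camera"]  # the "robot" phone's camera
dst = bpy.data.objects["RobotHead"]            # target empty in the main scene

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    # Bake the source's world transform onto the target, frame by frame.
    dst.matrix_world = src.matrix_world.copy()
    dst.keyframe_insert("location", frame=frame)
    dst.keyframe_insert("rotation_euler", frame=frame)
```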


**John:** Eliot, are you going to make a recording of this conversation available,


one that I could share with some friends or folks I'm working with?


**Eliot:** Absolutely. We record our office hours, and I'm recording this one, especially since we're doing lots of good technical debugging. The office hours are transcribed and posted under the [00:52:00] Office Hours section on the website.


Here's the link. Oh, cool. And so you can see all the ones going back; each one of them has a description next to it. If you click on an office hour, it'll go to... I'll just show you real quick how it works. Office hours, share. Okay, this is one where... actually, this is the original one with KeenTools.


So I click View Office Hour, and it'll open this up, and this is Descript. This is a transcribed version, so it has a running transcription. We can click on a part of it and hit Play, and you can actually read through and search the entire transcription and see just the parts you're interested in.


So it's a fantastic tool to go back with. There are ones where I went through the entire Cine calibration, like, word for word, note for note, and then we shot a take and loaded it through AutoShot. So the office hours are great: you can probably go back and find the question you're going to ask and see it done note by note in the [00:53:00] office hour.


**Petr:** That's nice. Actually, I found out what's going on with the masks. They are working correctly, except for one thing: they are not taking into account the start frame you changed. They start from that frame, so that's a problem; we have to rematch them. It's a technical issue; I'm sure we'll fix it in a couple of days, so next week you will see it. You can see it now:


as you see, here's your mask. Oh, I see, I see. It's just in that range, not in the original range, not in the one modified from the first frame. So that's the problem; we have to take both things into account. Okay, I wrote down a few things we [00:54:00] will try to change next week.


First, the user experience of selecting the first frame. Well, there are technical problems with making it flexible, making it changeable during the process of tracking; we would have to move everything around, because the core engine was written around a different concept. But we can make the UX so that when you first use a clip, the program will ask you


what frame you want to use as the first frame, the first one or the actual one from your sequence, and then realign the masks and the whole scene around the selected one. That should be the workaround. One moment, I'll write this down. Okay. [00:55:00]


**Eliot:** This is great.


**Petr:** Uh, yeah. And also I have a few points here. You can select the color space of your shot, so you don't have to go to the other tab. Just select it here. It defaults to the original one, but here it's linear; you can change it to Linear ACEScg, I think. Well, I don't, ah, okay.


I have it.


**Eliot:** I didn't even realize that was a dropdown.


**Petr:** Okay. That's the dropdown. So you can select it there; it's absolutely the same. It's basically a copy of the knob from the motion tracking tab. So that's kind of it.
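
*For reference, that dropdown maps onto the color space setting on Blender's image data block, which is also scriptable; a minimal sketch (the image name is hypothetical, and the exact color space names depend on the OCIO config in use):*

```python
# Minimal sketch: the color space dropdown changes the image's
# colorspace_settings under the hood.
import bpy

img = bpy.data.images["cine_video"]      # hypothetical image name
img.colorspace_settings.name = "ACEScg"  # names depend on the OCIO config;
                                         # the UI label may read differently
```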


**Eliot:** Okay. So do we still need to load it in the motion tracking? Cause I know,


**Petr:** no, no.


I want, I want to really avoid of loading, uh, from that tracking cause you don't not using that. So it's kind of over [00:56:00] complication of a tutorial. It's, and you say like, Oh, it's game changing. It's so easy. And then you show the kind of complicated thing of jumping out between windows. So that's why we should change that UX.


And I hope we'll do that next week, and it would be amazing to align our release and your tutorial together, to make it perfect. Okay. So, the next thing is about masks. I already covered color space; as I said, you can change it here in the main tab.


Also, in the tutorial, you have the part where you press the reanalyze button. It would be really nice if you would remove the previous cache file so the button shows red. It's kind of a pipeline: first you create a new [00:57:00] GeoTracker, then you select all the inputs. And basically the camera and geometry will already be there, because there's just one camera and one geometry in the scene.


So it just selects the video, then you select the color space, then you click analyze. After that you switch it to camera mode and go tracking. That would be the straightforward flow. Then you select the frames; you talk about analyzing the footage visually, and the person can make the keyframes.


They can just define a frame as a keyframe and align it with the pins. The pins don't influence the tracking; they're just an instrument for positioning. And actually you can show a double screen here, for example with the 3D scene. I think everything should be like that, so that we're moving the camera in 3D space while moving the [00:58:00] geometry, as you see.


**Eliot:** Yeah, yeah. What I'll probably do is do the initial pass-through with just the basics: setting a keyframe and letting it track. Yeah, you can do that. What I like doing is kind of, you know, 'here's the simple one.'


And if I need to come back and do a more complex thing, then I do a separate one that covers the more complex case. That way, each one of these is relatively short, so people can watch the one that covers their situation. And then it's


a good setup. Yes.


**Petr:** And then you can show the mask, how it works. It will be co-aligned with your range; that's on our side. But in your tutorial, you just select the file here, go to the mask project and so on, and select the sequence. And [00:59:00] then, I think, with the default settings,


it should probably work. Yeah, it works with the default settings, because we recognize the right channel, since it's the first one. Like in Nuke: if you open the sequence in Nuke, it will be red, because red is the first channel. So that's how it works. But actually, as you see, you can combine masks here.


For example, if you use something like segmentation for that, with masks for different objects, you can write those objects, like foreground and background, into different channels, and then use only the masks you really want. Also, this small button inverts the mask, so if you want to track, for example, a person, you can press this button to invert that mask.
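
*The channel-packing idea is independent of GeoTracker; a small sketch with OpenCV and NumPy (assumed available; the filename is hypothetical) showing how one mask is picked out of a channel and inverted:*

```python
# Sketch of channel-packed masks: each object's mask lives in its own
# channel, and "invert" is just 1 - mask on the channel you picked.
import cv2
import numpy as np

rgba = cv2.imread("masks_0001.png", cv2.IMREAD_UNCHANGED)  # hypothetical file
# OpenCV orders channels BGR(A), so index 2 is red, the "first" channel
# in the RGB sense that Nuke displays.
red = rgba[:, :, 2].astype(np.float32) / 255.0

fg_mask = red             # track only where this mask is white
inverted = 1.0 - fg_mask  # e.g. to track the person instead of the background
```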


What else may sometimes be needed for good tracking is drawing [01:00:00] the mask over the polygons. So I want to show you one thing. That can be really important for getting a good track if you have a huge mesh and, for example, there's a mirror or something like that in the scene which isn't detected by your mask. You can define which polygons are wrong, so you don't take those polygons into account. For example,


**Eliot:** in your tutorial, you ran it in two separate scenes, where in one you put it in 3D mode, loaded the geometry in edit mode, and then you could pick which polygons you wanted to mask. And it was really tricky. Clearly the person doing this could do it, because they were picking the polygons they wanted to mask out from the other one, but there was no visual reference. So for example, if I had, [01:01:00] uh, where's my, and I'm not sure if you can do this, but it's just something that would be very useful.


Where'd my annotator go? There it is. So in the tutorial, you know, over here they had, say, a mesh that they had located, and so they, well...


**Petr:** You actually drew somewhere?


**Eliot:** Yes, I can. Oh, can you see it?


**Petr:** No, I don't know where to look.


**Eliot:** Oh, I'm on your screen. Oh wait, where'd my annotation go? Is that working? Can you see it on my screen?


**Petr:** No, no, no. I cannot. They're disappearing, for some reason.


**Eliot:** Yeah, my annotations are randomly disappearing. I'm not sure what's going on there.


**Petr:** We can create a whiteboard, I think.


**Eliot:** A whiteboard, okay, yeah, yeah. Alright, let me make a new one. So in the tutorial, and this may be tricky because I know what you're dealing with. Oh, where'd my whiteboard go? [01:02:00] Whoa, my Zoom is acting weird. It's, like, coming up. I get the whiteboard. Okay, let's see if the whiteboard stays here. Yeah, that's it.


Okay, so in the tutorial, over here you have something, and you're in edit mode. So there you have your mesh, and they're selecting in edit mode, selecting a few polygons over here. And over here is the image.


You know, this was, like, the castle-in-the-countryside kind of thing, and there's a mesh around it. But as you selected them here, it was not showing up over here. It would be interesting if you could figure out a way to, I don't know, how are you going to do this?


**Petr:** You mean drawing, with a shader, the [01:03:00] polygons that we mask?


**Eliot:** Yeah, yeah. Something where you're drawing on the image.


**Petr:** Look, look, we have that thing here, and we have it in Nuke also. When you draw in Nuke, you basically draw over the green mesh of the GeoTracker, so you see the blue spots where you drew on the mesh.


But here we are not creating a special viewport for that; we are using the Blender things, but we have a special shader for it. I will show you the method. I just want to select your original geometry somehow, if it's possible, of course. Can I, okay, okay, I will share my screen again.


So here we have your footage, and we can reselect something. Not a cube. I want to [01:04:00] import the file here. Is it OBJ or USD? USD?


**Eliot:** Uh, just make sure I understand.


**Petr:** Oh, I want to import the mesh of your scene. It should be somewhere here. Is it an OBJ file?


**Eliot:** It's an OBJ file. Yeah, I think we may actually store it in both formats, but I think in Blender we load it as OBJ. I'm not 100 percent sure.


**Petr:** Okay, no problem. I can import OBJ.


**Eliot:** Yeah, let's try it out and we can see.


**Petr:** Actually, I can reopen the scene. Oh, that would be easier.


There we go. Nice. We have this scene. So this is the original one. We can go to the GeoTracker. We can remove this plane from the scene, just select and delete. Now it works pretty fast; you can look through the camera, [01:05:00] um, it works not really fast, and we still see the mesh.


I don't know why, but it would be really nice to get rid of that, I think.


**Eliot:** I'll show you in AutoShot; there's a checkbox that says include image plane, and you just uncheck it when you're generating.


**Petr:** Actually, I don't have an image plane now, but I still see this for some reason. Maybe it's stored somewhere here in the background.


**Eliot:** You spin the camera down, like, so there's my annotate. God, this is weird. Let's open this up, cause that's weird.


**Petr:** Well, this is the geometry position, so we don't need to touch it. This is not the geo, okay, can I remove the camera? Okay, it's kind of a weird thing living inside the camera, I think.


Yes, it's living inside the camera. So [01:06:00] for some reason, here in the camera, you store this image somehow. I don't know how it works.


**Eliot:** I'm going to look at that afterwards. Actually, that surprises me too.


**Petr:** And it looks like a shader. You see, it's stuck on my screen when I'm zooming, and it only moves when I zoom out.


**Eliot:** Wait, wait. Is it the viewport compositor? Hang on. Yes, it's the viewport compositor. Oh, I know what's going on. Okay.


**Petr:** How do I disable it?


**Eliot:** Upper right-hand corner. Okay. Yeah, that's what's going on. Good. There's a little drop-down menu, and on the compositor, just set it to disabled. Oh God. There we go.


That's it. I forgot that, with that particular way of doing it, this is an empty scene, and so by default we have a compositing tree chain in it. Okay. There we go.
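
*For the record, the toggle Eliot points at in the shading popover is also scriptable; a minimal sketch (Blender 3.5+, where the viewport compositor was introduced):*

```python
# Minimal sketch: flip the 3D viewport's compositor setting from Python,
# the same control as the shading popover in the upper-right corner.
import bpy

for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        for space in area.spaces:
            if space.type == 'VIEW_3D':
                space.shading.use_compositor = 'DISABLED'  # or 'CAMERA', 'ALWAYS'
```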


**Petr:** So that's the compositing tree chain. Oh, God. [01:07:00] Yeah.


**Eliot:** Yeah. So if you click that, by the way, 


**Petr:** That's your node?


**Eliot:** Yeah, yeah. We were very excited about Blender's viewport compositor.


And so if you click that, if you highlight the AutoShot node and hit tab,


**Petr:** Well, actually, is that a node that you wrote, or? Ah, okay. And how did you do this? I mean, I thought it was impossible to make new nodes here.


**Eliot:** It was tricky. We built a custom node group, so you can hit tab and see what's inside it.


There's a whole node network inside there. Just go ahead and hit tab. Yeah, there we go. And then zoom out. Yeah, I forgot about all this. So there's an entire compositing tree in there that links up the AI mattes, as well as a couple of these custom nodes. We were starting to do experiments with automatic background leveling.


There are advanced techniques in Nuke [01:08:00] keying where you flatten the background, the green screen background.


**Petr:** Yeah, I know, I know.


**Eliot:** Yeah, yeah. So I actually went through one of the Nuke tutorials and reconstructed that node sequence inside Blender's compositor.


This is actually kind of fun to show. Let's scroll down a little bit; go ahead and click on clean plate, go up a little bit, and just hit tab on clean plate. Yeah, there we go. Okay, so let's zoom back out. This will actually generate a clean plate.


Actually, hit shift-tab. I'll show you the one that does the actual full extraction. Shift-tab should, or actually maybe control-tab will, bring you out one level, because we're going down into layers. I got it, I got it. So if you go to screen level, just highlight screen level and hit tab.


Okay. Oh, [01:09:00] goddamn. Yeah. So this is actually a full compositing tree; it's been a while since I looked at this. If you set the initial keyer conditions with an input color, it does the screen leveling tricks. It's actually a pretty advanced compositing trick: it pulls a rough matte, expands the matte, then pulls the key and does an inversion so that it can extract the differences.
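
*A heavily simplified sketch of the screen-leveling idea in Blender compositor nodes (not AutoShot's actual tree, which is far more involved): divide the plate by a blurred copy of itself to flatten an uneven green screen.*

```python
# Heavily simplified screen-leveling sketch, not AutoShot's real tree:
# a big blur approximates a clean plate, and dividing the plate by it
# evens out the green screen before keying.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

plate = tree.nodes.new("CompositorNodeImage")   # assign the green-screen plate
blur = tree.nodes.new("CompositorNodeBlur")
blur.size_x = blur.size_y = 200                 # large blur ~ rough clean plate

divide = tree.nodes.new("CompositorNodeMixRGB")
divide.blend_type = 'DIVIDE'

out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(plate.outputs["Image"], blur.inputs["Image"])
tree.links.new(plate.outputs["Image"], divide.inputs[1])   # original plate
tree.links.new(blur.outputs["Image"], divide.inputs[2])    # blurred clean plate
tree.links.new(divide.outputs["Image"], out.inputs["Image"])
```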


**Petr:** Yeah, I know. I like the IBK here.


**Eliot:** Yeah, I know. It's recreating the entire Nuke, whatever you call it, image-based keyer inside Blender, all the nodes, down to the detail. But of course, we thought a lot of people were going to use Blender's compositing system as it got better; everybody's still in Fusion.


And I went, eh, you know, okay. But this is there, [01:10:00] and we built all the tools for it.


**Petr:** No, I get it, but do you have any nodes here that you made, or are all the nodes you use here just groups of the native Blender nodes?


**Eliot:** It is groups of native Blender nodes. Oh, okay, okay.
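
*That is also how a "custom node" like the AutoShot one can exist at all; a minimal sketch of wrapping native compositor nodes into a reusable group (illustrative only, not the actual AutoShot node):*

```python
# Minimal sketch: a "custom node" is native nodes wrapped behind one group.
import bpy

group = bpy.data.node_groups.new("AutoShotLikeGroup", "CompositorNodeTree")

# Group sockets (Blender 4.x interface API; 3.x used group.inputs/.outputs).
group.interface.new_socket("Image", in_out='INPUT', socket_type='NodeSocketColor')
group.interface.new_socket("Image", in_out='OUTPUT', socket_type='NodeSocketColor')

g_in = group.nodes.new("NodeGroupInput")
g_out = group.nodes.new("NodeGroupOutput")
blur = group.nodes.new("CompositorNodeBlur")  # any native nodes live inside

group.links.new(g_in.outputs["Image"], blur.inputs["Image"])
group.links.new(blur.outputs["Image"], g_out.inputs["Image"])

# Instance it in the scene's compositing tree, like the AutoShot node.
scene = bpy.context.scene
scene.use_nodes = True
inst = scene.node_tree.nodes.new("CompositorNodeGroup")
inst.node_tree = group
```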


Okay. And we import that. AutoShot has a reference blend file that it can use. And I forget exactly; I think we append it. Because when we run that Blender scene, you can reference your original Blender scene file, but we're actually creating a new blend file that can have the original data referenced into it, either via append or via a scene link.


The system was originally engineered for people doing several hundred shots very quickly, at which point you want to use the Blender linking system instead of the append system, because otherwise you have four-gigabyte scene files all over the place. So you can either append or link, but it's designed to [01:11:00] reference the original Blender scene file that you used to make the on-set stuff, without changing it.
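
*The append-versus-link choice maps directly onto Blender's library API; a minimal sketch (the path and scene name are hypothetical):*

```python
# Sketch of append vs. link: the same call, where link=True references the
# original .blend (small files, good for hundreds of shots) and link=False
# copies the data in (self-contained, but multi-gigabyte files add up fast).
import bpy

def bring_in_scene(path, scene_name, link):
    with bpy.data.libraries.load(path, link=link) as (data_from, data_to):
        if scene_name in data_from.scenes:
            data_to.scenes = [scene_name]
    return data_to.scenes

bring_in_scene("/shots/sh010/onset.blend", "Scene", link=True)    # link
# bring_in_scene("/shots/sh010/onset.blend", "Scene", link=False)  # append
```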


**Petr:** Mm hmm. I got it, I got it.


**Eliot:** And this is interesting. Okay, if you go back to that motion tracking, and this will actually be a good reason to remove this, it made me crazy for a while when I was going back to it. It's automatically setting the offset. I think as of Blender 4.2,


or one of the recent Blender releases, you don't need to enter the 1000-frame offset. So if you go back to that motion tracking tab, there you go. And over here, yeah, the frame offset is zero, but it's reading this correctly. So I think it's basically handling it automatically.


It's automatically looking at that. It's starting at frame 1001, even though there's no frame offset; it's reading the right file. At first I put in a frame offset of 1000 and nothing worked, and then I realized, oh, it's doing it automatically. It took me a while to catch that one.


[01:12:00] But this is, again, a good reason to have it all in GeoTracker, if you can put it there, because that way, you know, it's just really easy to confuse people otherwise.
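
*For reference, the fields being discussed live on the MovieClip data block; a minimal sketch (assumes a single loaded clip; per the discussion, recent Blender reads the sequence's real first frame from the filenames, so a manual offset can double-apply):*

```python
# Minimal sketch: inspect the clip's frame fields instead of guessing.
import bpy

clip = bpy.data.movieclips[0]   # assumes a single loaded clip
print(clip.frame_start, clip.frame_offset)

clip.frame_offset = 0  # leave 0 when the numbering is picked up automatically
```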


**Petr:** Even I was a little bit confused, and I know how it should work. So that's kind of a weird thing. Okay. So now we have this scene, and we want to mask something here, for example.


What we can do is go to 3D mode here and look around our model, and then we want to draw something on the polygons. For that, we create a vertex group for the mask; we can call it something like 'super mask'. Then we can press C [01:13:00] to circle-select polygons, to draw over polygons.


Then we assign it to our model. Once you assign it to the model, the group is not empty, and if the group is not empty, you can use it as the surface mask, that 'super mask' group, and it will appear here. So now, if you want to edit it, you can just select polygons.


Add an extra one, assign it, and it will automatically appear here. Yeah, yeah.
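
*The same steps from Python, as a minimal sketch (the group name and polygon indices are just the example used here):*

```python
# Sketch of the polygon-mask workflow: put the bad polygons (mirrors,
# moving objects) into a vertex group, then point GeoTracker's surface
# mask at that group. A non-empty group is what makes it selectable.
import bpy

obj = bpy.context.active_object
vg = obj.vertex_groups.new(name="super_mask")   # name from the demo

bad_polys = [12, 13, 14]  # illustrative polygon indices
verts = {v for p in bad_polys for v in obj.data.polygons[p].vertices}
vg.add(list(verts), 1.0, 'REPLACE')  # must be called in Object Mode
```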


**Eliot:** I saw it, and it's tricky because what you're trying to do is mask things that are in the image while looking at something where you don't have the image. I mean, you can sort of do it; I get what you're doing, and I get this could take a while, but eventually what you'd really love to be able to do [01:14:00] is paint on the image.


**Petr:** Yes, it would be really nice, but for that we would have to write another viewport for drawing. It's not implemented here, but in Nuke and in After Effects we have that option, so we can draw over our mesh and get the result immediately. But here you have to select polygons in the 3D world.


Well, actually, for example, you can go back to 3D and enter the camera view, so you can draw here just like that and then assign it, and then go back to 3D mode. Actually, to get back to our pin mode, you have to go from edit mode back to object mode, because we can track only in object mode. So that's a tricky thing.


But actually, yeah, here it is, the part that we drew recently.


**Eliot:** Okay. So you would go back to 3D and hit edit mode.


**Petr:** Yeah, I can go to the [01:15:00] 3D edit mode, press C to select some polygons, or use C with Shift to deselect things here, leave this, and then I click Escape, assign, go back to object mode, and then go inside. So now everything works; that's how to work using one screen only.


**Eliot:** Okay, okay. It'll probably be an advanced point, but that makes sense.


**Petr:** Yeah, it's a little bit advanced, but finally you can get a really good result with that kind of thing. Especially if you have mirrors or something like that in the scene; they can really break your tracking, because there are a lot of features there and they move absolutely incorrectly.


In terms of the tracking, sure, sure. Yeah.


**Eliot:** When you have things moving.


**Petr:** And also you can see the mask here, but yeah, for now it's a problem, because the mask lives in an absolutely [01:16:00] different time range. We'll fix it. Okay. So the next thing, I'll just check my notes. Compositing masks, I covered. Pins not influencing tracking,


I covered that too. Oh, texturing. You said something went wrong with texturing.


**Eliot:** Yeah, we tried it last time. We tried to project the texture and something didn't behave.


**Petr:** Why textures? Okay, let's try to, let's try to project something here. For example, um, Texturing. So, ah, you've got no UVs here.


That's what we've got. Okay, Smart UV Project. What's that? It doesn't work in the pin mode. Okay, back to 3D. It's okay. Now everything is correct. So you can add a frame, for example this [01:17:00] one, and you create a texture. Okay, does that overlapping matter? Let's try it. Done, but 'overlapping UVs detected.'


Uh huh. Interesting. Why can't we see it? Maybe because we redefined the


visibility, I think. Yes, the display visibility should be textured, or solid with texture. That's interesting. Actually, let's go to the shading and check it. Oh, no, no material here. Well, we have a material, but no texture. That's really interesting, because we should have a texture here. Ah, that's because the geometry you use is not simple. This is a really complicated setup, actually.


This is really complicated thing. And actually.



**Eliot:** I was curious, because the [01:18:00] geometry, there's not that much geometry.


**Petr:** No, no, that's not because of the geometry. I think it's because of the complication here. You use referencing of files, I think. I see no OBJ mesh here. It's something different.


**Eliot:** Let's see, under the scan. Oh, it's probably the, let me think for a second. I'm not actually sure how we, I think we just loaded the geometry in as a normal Blender object.


**Petr:** GeoTracker footage, GeoTracker, make texture. Here's the texture. So the texture exists. It's really interesting what's going on, why we can't see that texture. Here we've got the UVs. Mm-hmm. So basically that's not a correct UV, but it's okay for [01:19:00] texturing. Mm-hmm. Here's the texture. So the texture could be projected, but for some reason it's not creating the texture here.


I can add it manually, I think: Shift+A, Image Texture, select it from here, and then use it as the color. Yeah, here it is. So basically I did it, but I don't actually know why it works like that.
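
*The two manual fixes from this segment can be scripted the same way; a minimal sketch (the texture name is hypothetical): Smart UV Project for the missing UVs, then wiring the image into the material, which is what the Shift+A Image Texture step did by hand.*

```python
# Sketch of the two manual fixes: generate UVs, then hook the projected
# texture into the material's Base Color.
import bpy

obj = bpy.context.active_object

# 1) Smart UV Project runs in Edit Mode on selected faces.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# 2) Add an Image Texture node and connect it, as done by hand above.
mat = obj.active_material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images["geotracker_texture"]  # hypothetical name
links.new(tex.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
```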


**Eliot:** I'm curious. And also it seems to have missed this entire chunk of it, which is,


**Petr:** Yeah, because it's a different part of that geometry, I think.


No, well, that's really interesting. I will show all this stuff to our specialists. I think they will figure out what's going on here, and we will fix it, or explain what's going on [01:20:00] and how to handle it. Is it okay for you to post this tutorial a little bit later, not next week?


**Eliot:** Yeah, yeah, that's fine. It's worth it to get some of these things right. I really like the workflow things you guys are planning to do. And internally, I can show the team the Descript version. I do these things in Descript before we put them on YouTube for exactly this reason.


Because I can edit it really fast in Descript, and redo and fix it, so I can show people what they need to know there. But I really like the workflow changes you're planning, so I'll wait until you have them before it goes out to the outside world, and we can use the existing one internally in the meantime.


Yeah, that works great. Super. This is super exciting, because it really solves what was a gap in this. You know, yeah, SynthEyes can do it, but man, that's too heavy of a lift for a lot of small groups; they want to stay in Blender, or in Nuke or After Effects, [01:21:00] and not have to deal with that.


So I like it. I think it's great. Yeah. 


**Petr:** Yeah. Okay. So on our side it's the textures, the first frame, and the mask range. On your side, the part with the red reanalyze button: you can just remove the file on disk, on the C drive, I think, in tmp, something like that. The temporary files, you can find them.


Yeah. Okay. So if you remove it there, the normal button will be here. Then the other thing is about pins: pins don't influence the tracking. And I think that's all. Yeah.


**Eliot:** So let's move to the keyframe-based approach. I'll use the new ingest for the video and for the masks. Cool.


Note it on the keyframes. Yeah, I think that'll be, I mean, I'll end up basically rerecording most of it, but that's fine at this point. I've done it enough times that every time I do it, it gets much faster and smoother. [01:22:00] So that's fine. I don't mind redoing it if it's a good direction.


Nice. 


**Petr:** Thank you for your feedback. I think that's really helpful and we'll achieve good results. Thank you. 


**Eliot:** All right, Petr. Good to see you. I'll see you in another week or so.


**Petr:** See you. Bye. Bye.