
Remote Production and AI Tracking: The Future of Broadcast Camera Systems


By Stephanie R | April 17, 2026

Broadcast workflows are evolving as AI-driven tracking and remote production tools become more advanced and reliable. Modern systems now use visual language models instead of traditional computer vision, allowing cameras to intelligently track subjects, adapt to changing environments, and maintain focus even in complex scenarios like live sports.

These innovations are enabling a shift toward distributed production, where cameras operate as connected devices within a larger network. With features like cloud orchestration, real-time control, and automated tracking, production teams can deliver professional-quality content with greater efficiency and scalability. As adoption grows, these technologies are expected to enhance storytelling while reducing operational costs across broadcast and live event production.

Learn more about Advanced Image Robotics here

Read the full transcript below:

Hello everyone. Welcome back to the script series about how to improve everything and everyone around your live event broadcast and post workflows, and the tech stack to do it. I'm Jeff Sangpiel, the post doctor. Today we're talking about something I've watched the industry wrestle with for decades: the sheer operational weight of getting cameras on the air. The people, the trucks, the cable runs, the infrastructure, the cost. It's one of those problems that always felt like it should have a better answer. My guest today decided to build that answer, literally from spare parts in his garage, to track his daughter's soccer games. Nick Norquest is an 11-time Pacific Southwest Emmy award-winning filmmaker, a non-fiction storyteller, and a self-described workflow hacker. He co-founded Advanced Image Robotics (AIR) in 2020 with his cousin Kevin McClave out of San Diego. Their flagship product, the Air1, has racked up a 2022 NAB Technology Innovation Award, a 2023 NAB Product of the Year Award, and most recently a 2025 Pilot Innovation Challenge win for their AIR Autopilot AI tracking system. Their cameras have been deployed at the Olympics in Paris, Super Bowl 59, the US Senate, UFC, and pro sports events across basketball, soccer, hockey, tennis, and motorsports, all operated remotely, often by a single person over standard broadband. Nick, great to have you on the show. Really glad you were able to make time with us in the run-up to NAB this year. >> Yeah, well, thanks for having me on, Jeff. >> Awesome. So, let's start at the very beginning, because I love me a good origin story. You spent years as a writer, producer, and editor, and then you prototyped the first AIR camera system to film your daughter's soccer practice. So, walk me through that. What were you actually building, and at what point did you look at what you had and think, wait, this is a real product? >> Yeah, that's a great question.
So basically everything we're doing with AIR now was born out of this epiphany that I had after I built this thing to shoot my daughter's soccer matches, where I went, hey, how we're doing things in production is really dumb. Like, there's better ways to move the camera. There's lighter weight setups. It gives you a great deal more versatility. It just kind of ended up being a domino effect out of that. And the original origin is I was coaching my daughter's soccer team. She reached a point in her development where she needed to be able to see how she was playing, needed to be able to see the channels, see the play develop. And you need to do that from a high point up in the air. Well, when I first rolled out, I'm like, I've been shooting for 30 years, I could shoot a 14-year-old girl's soccer match, no problem. I rolled my gear out there, and the results were terrible cuz I was down too low, couldn't see the spacing. I'm like, "Okay, I need to get the thing up in the air." Well, I started to try to put that stuff up in the air, and that got very scary very quickly in a pop-up kind of scenario, where when you put weight up in the air on a big tripod or whatever, it gets dangerous. And so I tried some lighter weight heads and some other things, and it just wasn't fast enough to track the action in soccer. And so I was like, "Okay, I've got to break down and buy something to do this." Looked at the stuff in the market, and it was absolutely terrible. I referred to it at the time as ancient Sumerian technology, cuz it's literally levers and pulleys and the whole thing twists, and it wasn't cheap either. It was like five grand for a setup. And I'm like, okay, I'm not going to spend the money on that. I'm going to hack together something on my own. And that's kind of how AIR was originally born.
Took apart a drone gimbal camera. Created a touchscreen interface to control it, because I didn't want to haul a bunch of control systems, and I've always hated a joystick for PTZ. So, basically built this system, went out and started shooting with it, and did a couple hundred soccer matches. And there was one point in there, probably about halfway through, where I was shooting. I remember the moment now. It was a corner kick coming in from the corner, and I pushed in with the touchscreen as the shot came in. I kind of floated back and then drifted in on it as it goes into the box, and I was like, "Hey, that looks pretty good." And I'm like, that's kind of hard to do if I was bent over the sticks with my eye in the eyepiece, trying to pan, tilt, zoom all at the same time. And here I am, sitting back in a chair doing this with it instead. And I went, "Oh, how we've been moving cameras has been the same for a hundred years." Almost from the invention of the camera, it's been sticks with a human moving it. That's just really dumb today. We've got much better ways to move the camera, and really AIR is about leveraging this modern technology for camera movement into practical workflows for broadcast. >> And it sounds like you came into it as a filmmaker, not a hardware guy. As a workflow hacker, it sounds like building AIR was really more a filmmaking problem that happened to have a robotic solution than anything else that needed to be solved. >> Yeah, it's kind of funny. That's not what we set out to do.
So, basically what happened here is at a certain point I brought this system and showed it to Kevin, our CEO, at some family gathering or something. He was doing work all over Asia at the time, and he's like, "Hey, I'll find the factory where they're making that camera gimbal combo, we'll slap it in a box and put a label on it, and have a nice little lifestyle company." That was the idea. We would sell it to youth soccer teams. Well, what ended up happening was completely different than that. Literally, when we first got accepted into the incubator here in San Diego, one called EvoNexus, day one, COVID hit and all the lockdowns happened, and I was like, we are not selling any cameras to any youth soccer teams for the foreseeable future. I had always had in the back of my mind using the same thing for broadcast, because I recognized there's a bunch of scenarios where I'm using PTZs where it is incredibly cumbersome to set up. I was doing these things in Florida at the time where I would set up like 10 little mini studios, each with four cameras, and pull all the cables back through the hallway to a central control room, where I've got 10 guys operating each room independently. The talent level is all over the place for operating a PTZ. Some of the guys, I'm like, "Hey, you're on a lockoff. Put it there and then take it." Cuz frankly, unless you have decades of muscle memory, and some guys do, it is very difficult to operate a camera with a joystick. The touchscreen is dead simple. I've literally put it in the hands of a 10-year-old with five minutes of instruction, and he's shooting a soccer match almost as well as me. Maybe a little too zoomy, but aside from that, it's very easy to pick up.
So, those are kind of the origins of AIR: opportunistically applying what we're doing to broadcast stuff. I started going, "Oh, this big setup problem that I have, all of a sudden I can plug in, use regular Ethernet infrastructure. All of a sudden, I'll set up a VPN and now I can control the camera from anywhere." Literally across continents. When we did the Olympics in Paris in '24, we had a robo set up above the boxing ring, and I could control it from here in San Diego, however many thousands of kilometers away that is. No problem. I easily could have shot a boxing match from there. I've done concerts from here in San Diego where the robo was in Prince Edward Island, in the far reaches of Canada, literally across the continent. So geography is no longer relevant for your camera operation. And what that means essentially is that you can have a small footprint on site for setup, and your crew can be dispersed. So this other thing which we've been doing forever, which is rolling a whole army of people and expensive gear to the site: there are absolutely productions where that's still required. You're going to do the Super Bowl, that's still going to happen. But if you're doing a D2 game, you don't have the budget to do that. You still want a very high level of production, because people are used to seeing the tier-one broadcast. You've got to find a way to deliver that high-quality production at a much lower price point, and you just can't scale down the truck and the people to meet that. So, at the end of the day, what we're doing is giving producers the tools to still be able to produce high-quality content, but at a lower price point with fewer people. >> Excellent. Let's get a little bit into that technology for the folks in the audience who haven't seen AIR in action. You've described it as treating a camera like an IoT device.
Help me understand what the Air1 actually is, and what goes away for a production team when they deploy it. >> Yeah. So, at its heart, it's a three-axis gimbal. You can put it up anywhere, put it up high in the air, and it's going to stabilize itself. It also has that wonderful smooth movement that you get from brushless DC motors, where it feels like a human's operating it. It doesn't feel like a robot, even when our Autopilot AI is running it. Mounted onto that robot is a digital cinema camera: a large-sensor Micro Four Thirds 4K camera comes stock on it. Inside the robot is essentially a brain, what we call the Air Station, and that handles all of the connections between the camera, the gimbal, and the operator, wherever they are in the world. When I first built that original prototype to go up on the pole, I didn't want to have to use traditional methods of control. I just used regular Ethernet and a touchscreen to control it. So from its very beginning, it was designed as an IoT device. It was designed to be controlled over the network. That not only removes geography, it also removes a bunch of infrastructure you have to put in to connect everything. At the end of the day, I can control the robot from anywhere over standard Ethernet with only a broadband connection. I can stream it to anywhere. SRT goes straight out of it to any endpoint in the world. I can record internally on it up to 4K 60 and ProRes HQ. It is a highly versatile tool for doing any kind of production. We have some customers, NBC Sports Bay Area, where it goes right into a traditional broadcast truck. We have other customers that are doing remote control from distant locations around the world. So, it is much more versatile, actually, than I first expected.
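The "camera as an IoT device" idea comes down to this: control is just structured messages over ordinary IP, so any broadband link (LAN, VPN, intercontinental) can carry it. A minimal illustrative sketch in Python; the function names and JSON fields here are hypothetical, not AIR's actual protocol:

```python
import json

def encode_ptz_command(pan: float, tilt: float, zoom: float) -> bytes:
    """Serialize a pan/tilt/zoom command as JSON bytes, ready to send
    over any ordinary TCP/IP link (LAN, VPN, or broadband)."""
    msg = {"cmd": "ptz", "pan": pan, "tilt": tilt, "zoom": zoom}
    return json.dumps(msg).encode("utf-8")

def decode_ptz_command(payload: bytes) -> dict:
    """Parse the command on the camera side and validate its type."""
    msg = json.loads(payload.decode("utf-8"))
    if msg.get("cmd") != "ptz":
        raise ValueError("unexpected command type")
    return msg
```

Because the payload is plain bytes over IP, the same command works whether the operator is across the room or across a continent; only latency changes.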
When we first rolled this out, I was not expecting that we were going to get traction in tier one. Our target was more the tier-2 market. But because of the versatility of this thing, and because you can do stuff like put it up in the announce booth and have it shoot the talent and do a live shot back out over the stadium, that's not stuff that PTZs are traditionally very good at. You can do it with some of them. They've gotten a lot better in the interim, but at the end of the day, your traditional PTZ is still a glorified security camera. >> The other thing that's very interesting about this is the whole AirCloud piece. You're not just making a better camera robot. You're building the whole orchestration layer around it. What does AirCloud actually do that couldn't be done with someone's existing switching and streaming stack? >> Yeah. So, that's a great question, and this is something that I didn't actually set out to build when we first started this. But again, because the robot is an IoT device, you can now do all of these things with it that are very difficult to do with traditional gear. For instance, the Air Station brain can connect up to computer instances in the cloud, AWS EC2s, where we can essentially use the AirCloud. You can think of AirCloud like an orchestration layer, a user interface that allows somebody who may not necessarily even be super tech-savvy to set up a complete REMI shoot, because it's drag and drop. Basically, you just upload a little picture of your site plan, and you drag your robots down on there to wherever you want them positioned around the field. You can add the people that you want to have access to them. And same thing for EC2s that you may be deploying for switching, graphics, replay, all of that stuff. You can drag those on. Autopilot is another one.
The Autopilot tracking, you could just essentially drag and drop, and then when it comes time to deploy, your person out in the field can open up their iPad and say, "Oh, that's my camera." They just click on it, and it opens it up, drops it into the interface, puts all the settings into the camera: what frame rate you're recording at, what frame rate you're streaming at, your stream destination, so you don't have a bunch of manual typing in of data. Drag-and-drop orchestration is how it operates. >> Almost like a node structure like we'd have in post-production back in the day. >> Yeah. >> Very cool. You've got multiple registered trademarks now: Air1, AirCloud, Air Station. Where does Air Station fit into all this architecture? >> So, Air Station is just our term for the robot brain. That's the thing that makes what we're doing different from a traditional gimbal and different from a traditional PTZ. It's a computer embedded in the robot. There's also the controller board for the gimbal and the axes of movement, and then there's a networking stack in there. And with the networking stack, to what we were talking about a little bit earlier with AirCloud, I can take all of the data from all of the robots and push it up to what we call the master status dashboard. So on one pane of glass, I can see every robot that's deployed on my project. Or if you're AIR and you have what we call the god view, I can see every robot in the world that's deployed anywhere and change settings on those things at a click. So, when I'm setting up a shoot and I want to paint and shade the cameras to all match, I have one pane of glass where I see all of those values displayed and can change them all on the fly. It makes it really easy to catch things that may be out of sync. >> Very cool.
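The drag-and-drop deployment described above amounts to a declarative per-camera settings manifest that gets pushed to each robot when an operator claims it in the field. A toy sketch, with invented field names (`record_fps`, `stream_dest`) rather than AirCloud's real schema:

```python
# A site manifest maps each robot to the settings an operator would
# otherwise type in by hand at deploy time. Hypothetical field names.
SITE_MANIFEST = {
    "cam-north-goal": {
        "record_fps": 60,
        "stream_fps": 30,
        "stream_dest": "srt://ingest.example.com:9000",
    },
    "cam-announce-booth": {
        "record_fps": 30,
        "stream_fps": 30,
        "stream_dest": "srt://ingest.example.com:9001",
    },
}

def deploy(camera_id: str, manifest: dict) -> dict:
    """Return the full settings block a camera receives when an
    operator claims it ("oh, that's my camera")."""
    if camera_id not in manifest:
        raise KeyError(f"{camera_id} is not in the site plan")
    # Copy so that per-shoot tweaks never mutate the site plan itself.
    return dict(manifest[camera_id])
```

The point of the declarative form is exactly what Nick describes: no manual typing at the venue, and one pane of glass can diff every camera's settings against the plan to catch anything out of sync.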
Autopilot seems to be the big move for you folks at AIR right now. You debuted it back at NAB in '25, and you won the Pilot Innovation Challenge with it. Tell me what Autopilot actually does, and what the visual language model approach is doing differently from how everyone else has been doing AI tracking. >> Yeah, this is one of the things I'm really excited about. This was kind of in the back of my head when we started AIR. I wasn't really quite sure how we were going to get there or how we were going to improve on regular computer vision stuff, but it's just exceeded my wildest dreams. And really, all of this is driven by a need from the customer. So, I want to have robots deployed. I want them to autonomously track action. I want to track players. I want to track the game. I want to track the ball. In order to do that, traditionally what's happened is people have used regular computer vision tools, and that kind of works okay in some circumstances, but it doesn't really work very well for sports, because you have occlusions where things cover up. They don't do a very good job at persistence and all those things. So, as we got into developing Autopilot, we figured out pretty quickly that we needed to do something different with how we were identifying and then tracking these targets. So, we actually went in and created a visual language model. You know what an LLM is, like ChatGPT and all that: essentially, words are their tokens. With a visual language model, images and pixels are the language that you're using.
What this means at the end of the day is that when the model goes in and identifies a target, it's not like a traditional one where it goes, "Oh, that's an outline of a human." It will go in and say, "Oh, this is the thing that I want to track." I think our original model was something like two billion parameters, built around the target. It was really funny: one of our investors asked a question during one of the meetings. She said, "So exactly what criteria is it using to identify the subject?" And our devs kind of go, "We don't really know, because it's actually teaching itself." And this is the big dividing line. I know there's been a bunch of hype around AI, and a lot of it is just BS. The way that I define AI is: does it teach itself? Is it actually learning, or is it a heuristic where you're telling it exactly what to do? Obviously, you have to put parameters around it. But this is the big thing about our visual language model. Essentially, it taught itself how to identify objects. And then we have another tracking model that taught itself how to track objects. So, it's actually purpose-built models, originally built specifically for doing sports. It ends up working out really well for a ton of other stuff as well: everything from dog agility competitions to a booth camera up in some other kind of sporting event, somebody walking around on stage, whatever. It really is totally agnostic about what the subject matter is and where it is. The other really beautiful thing is it also doesn't really care what the subject is. There's a really cool video from NAB last year, because in addition to saying, show me the humans, and having it go get that human, or multiple humans, and track them all and keep them in frame.
You can also tell it what object you're looking for, a chair, a car, a dog, and it will show you those, and then you can pick the one you want to track. And if there's not a definition for what that thing is, you can just go lasso it. You go, "Okay, here's the thing I want to track." There's this great shot where this guy walks around the corner and I lasso his NAB badge. And you can see very quickly the model goes in and goes, "Oh, this lanyard is part of the badge." You can see it expand its little outline around what it's tracking. The thing that's beautiful about this is the guy turns away completely, so that the badge is gone, but it sees a little piece of the lanyard around his neck at the top. So, it continues to track it. And then he turns back around, and you can see it picks it all back up again. So, this ability to deal with a target that is changing in appearance over time is something that's completely unprecedented. Your regular computer vision models just can't deal with that. There's some tracking stuff we have with Steph Curry at Golden State, where you have players, all the Warriors wearing the same uniforms, all looking very similar, going back and forth in front of each other. Steph is jumping into a pile to get a rebound. And I think there's a segment of like nine minutes of the game where it stays on him absolutely persistently. So, this is the thing that we really need. At the end of the day, AI, not AI, it doesn't really matter. The question is: does the tool do the job that I need it to do? I put it on that player. I want to put it on Steph Curry. I want to put it on LeBron. I want to put it on Lionel Messi, and have it stay on them. >> It sounds like you're giving the AI agency to figure out how to best do what you want it to do. >> Yeah.
The thing that's really interesting about it is the way that the model's trained. We basically say, "Here's this thing we want to do. You figure out how to do it." And then when it comes back, generally you go, "Okay, it's doing that thing." But when you want to tweak little things, it becomes yet another training round. And we're doing things where we essentially have a teacher model and a student model, where you teach the teacher, and then the teacher teaches the student, and then that student becomes a teacher and teaches. So you distill those models down. That two-billion-parameter model that we had a year ago, running on big, heavy industrial compute in the cloud, we now have distilled down to run on a Mac mini. That's what we'll be showing at NAB: the new on-prem version of Autopilot running on a mini. The parameters have radically reduced from 2 billion down to, I think we have one model right now that runs at about 8 million parameters. >> So it's activity agnostic, it's object agnostic, and it's trainable in real time out of the box. That's a strong set of features. Where do you see Autopilot continuing to develop? What are the hard cases you're working on now? >> Yeah, it's really interesting. So, we've done a bunch of stuff with MASL with indoor soccer, and the Clippers' G League team. Both teams play at Frontwave Arena here in Oceanside, and that's kind of been our test bed for doing a bunch of this stuff. In addition to the production team up there using it to do their broadcasts, we have other robots in there where we test out some of these other things we've developed elsewhere. Expanding out from that and going to outdoor soccer, obviously the biggest sport in the world, and the World Cup coming this summer, it's really interesting the different techniques that are required to be able to do game follow or identify the ball on the field.
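The teacher-student distillation Nick describes (a two-billion-parameter model compressed toward eight million) is a standard technique: the small model is trained to match the big model's softened output distribution. A toy sketch of the core loss, with made-up logits; this illustrates the generic method, not AIR's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature
    softens the distribution, exposing more of the teacher's 'dark
    knowledge' about near-miss classes."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against
    the teacher's: the quantity the small model minimizes."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

The loss is smallest when the student reproduces the teacher's distribution exactly, which is why repeated teacher-to-student rounds can shrink parameter counts drastically while keeping behavior.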
So we've got some really interesting stuff we're working on to expand the things we're doing indoors out to that larger outdoor field. Having the robots work synergistically together: we call it a hive mind internally, where one robot shares its information with the other robot. That's something I'm really geeked about, because that's going to enable the capabilities to grow exponentially when the robots can essentially share a brain. Do things like, okay, I identified you as the person I'm tracking, but you disappeared from my view. I can hand that identifier off to the next robot, and it can pick you up, because it knows what those criteria are. Game follow is also one of the areas where we're expanding, because I can track an individual player right now, but to be able to follow a game requires a different model, because you're tracking the ball and a bunch of players and those things. Identifying where the ball is is really crucial to being able to do that game follow. A lot of the stuff that's already out in the market is okay. It runs 85, 90%. But for us, that's not good enough. If you're gonna rely on these robots to shoot the game, you've got to be multiple nines of reliability. I don't imagine that the robots are ever going to be 100% perfect, but at the end of the day, humans are not 100% perfect either. And there are some things, and this is one of the really interesting things that has developed in our work with the NHL, there are some things the robots can do that human operators can't. With the NHL, what we're doing is using their existing technology to track the puck; that puck position then translates to the robots tracking the game. What that means is that if the puck is on the near boards, where your camera operator normally couldn't see it, you don't really know where it is.
I mean, those guys are really good at reading body language and all of those things up over the top to kind of figure out where it's going, but they can't actually see it. The beauty of interfacing with that puck tracking system is the robots always know where it is, even if the camera can't see it. And it can also track super tight in a way that's very difficult to do as a camera operator. One of the things I get all the time from camera ops is, "These robots are going to take my job." And I keep explaining to them: the robots are not going to take your job. It's another tool for you to do your job. The thing that human operators bring is their knowledge of the game and the pictures required to tell that story. That's the skill. It's not the manual manipulation of the camera. It is knowing what the shot is to tell the story. And this is what the robots help you do as a camera operator even better, because you can turn it loose, and it's always going to be paying attention, and it's always going to be on its target. >> We're used to covering a game from one side of the arena, so we don't break that plane, the 180-degree plane. But if you've got a camera on the other side on a wide shot, and it's tracking the action, and it's telling all of the other cameras exactly where that is, that's kind of like having an extra assistant director able to tell everybody exactly where everything's going to be. It's not knowing where the puck is, necessarily, but knowing where the puck's going to be. I've heard that for years in this industry. One other point along those lines. At the end of the day, what we're doing is helping producers and directors better tell the story, and putting more cameras in more positions, in places where you may not be able to put a human operator.
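Driving a robot from an external positional feed, like the NHL puck-tracking data, is at its core a geometry problem: convert a 3D position in the arena frame into pan and tilt angles for the camera mount. A minimal sketch under assumed conventions (x and y along the ice, z up, pan measured from the x-axis; not AIR's or the NHL's actual interface):

```python
import math

def aim_at(target, camera):
    """Given target and camera positions in the same arena frame
    (x, y, z in metres, z up), return the (pan, tilt) angles in
    degrees the mount must move to so the lens points at the target.
    Works even when the target is visually occluded, because the
    input is positional data, not pixels."""
    dx = target[0] - camera[0]
    dy = target[1] - camera[1]
    dz = target[2] - camera[2]
    pan = math.degrees(math.atan2(dy, dx))                 # left/right
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # up/down
    return pan, tilt
```

This is the sense in which the robot "always knows where the puck is": as long as the feed supplies coordinates, occlusion by the boards never breaks the aim.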
That's really where we're seeing, particularly in the tier ones, the Air1s being deployed: on the reverse side, above goal, under goal, down in a corner, in places where you couldn't put a camera operator because you're blocking a sight line. Or in motorsports, you're going to put them down on a dangerous curve where you want the shot, but you can't put a person there. That's kind of where the robots are working. >> Yeah. Or we used to put people there, and then they almost got run over. I've had friends who are operators in motorsports, and some of them have some harrowing stories and video of those near misses. >> Yeah, that can get very scary. >> It can. You've described your target as the $247 billion OTT live streaming market. But you've also got early customers that are wild. You've got dog agility competitions like you mentioned, a nature documentarian. We had discussed it actually for a fine arts venue in New York City recently. >> Yeah. >> So, who is AIR right now, and is that who you expected you'd be? >> Wow, that's a really good question. Right now, AIR is a problem solver for producers and production entities that want to either do more with less or want to augment their existing production infrastructure in a way that they can't with current tools in the market. We're a pathway to automation. Because it's an IoT device at its core, and because we have the Autopilot visual language model tracking, it's a unique set of tools that enables people to do things that just weren't possible before. I absolutely see us growing beyond sports eventually, but right now we have so much stuff going on in so many different sports, and we're a fairly small team.
For now, there's more than enough for us to take on across a bunch of different sports in a bunch of different parts of the world, mostly North America. We also have really good partners in Europe. In Italy, we have Video Progetti, our distributor there. They just did a test with a big tennis broadcaster, and we've got a thing coming up with soccer, sorry, football, over there soon. So we're seeing a bunch of stuff happening. Combat sports now: UFC is one of our customers, but we're seeing a number of other combat sport type competitions that are adopting it. Universities: we just did a whole trial at Notre Dame. They tested it across, I think, six or seven different sports. So, in a nutshell, yeah, we're further than I expected us to be with the tier ones. I didn't quite expect us to be as good a fit as we've ended up being for them. I would expect us to grow even more across the tier-2 and tier-3 type productions in the coming years. And I think probably within 24 to 36 months, you'll see us doing more in some alternate spaces outside of sports broadcasting. >> Tier one leads to the tier-one venues as well. >> Yeah, you end up in Crypto.com Arena for, you know, the NBA, and then suddenly someone's like, well, we can use that for monster truck stuff, too. Why not? >> Because it's physically there. Why not use it? >> Yeah. I think we're going to see a lot more of that. I mean, we've already seen that happen with multiple different tier ones, where we're doing something with the league, and now the team wants it for something, and now the venue wants it for something. So, I think we're going to see more and more of that. One of the challenges we face there is that the production systems aren't yet aware that we're there. Right now, behind me, you see a setup up here at Frontwave where they're doing arena football.
All of the contracting for that production was already done ahead of time. They didn't know that there was already a system in place to be able to do a complete multicam production out of that venue. I think as we become more established and are in more places, we're going to see that traction pick up even more. >> Maybe venue certification: we're certified for this, here's your positions. It already knows the venue; just tell it the venue, tell it the sport, and off it goes. >> Yeah. The interesting thing that I've learned along this journey is that I was a little naive coming in. I'm thinking, better mousetrap, everybody's going to want it. And we've ended up having a lot of people want it, but I didn't really recognize how difficult it is for production workflows to change in broadcast. Right now, there is an incredible amount of technology available, not just AIR, but across the spectrum, that could absolutely revolutionize things we're doing in production. The thing that's preventing its adoption is not the technology. It's the people. The people don't understand yet how to take advantage of these tools. In some cases, they're so busy doing the production, they can't pick their head up to look for new things. That's why shows like NAB are so important, and things like the podcast that you're doing here, because it helps people become aware of these other options. But man, the technology across the board right now is amazing. We need to get the people to come along with us on these workflows, cuz it absolutely can energize your storytelling.
>> That gets into the question I was going to ask you next, about where you see the industry landing on crew reduction through automation. It sounds like as folks learn more about what these tools can do, they’ll discover better workflows for them, workflows that even you folks didn’t anticipate. >> Yeah. I mean, there’s a big concern in the industry. Obviously, if you’re a camera operator: your robot’s going to take my job, right? You’re concerned about that. What we’re seeing happen is actually the opposite, that rule of economics where when you make something less expensive, you get more of it. So we’re seeing stuff where some of the bigger leagues were doing lower-tier stuff with one camera operator. Now we’re talking about having five cameras deployed for that, because that one camera operator can then do more in a day. They don’t have to travel to the site, and you’re using them more expeditiously on the shoot. So I see it being a combination of two things. You are going to have an overall cost reduction for the productions. There’s no question about that; everybody is getting downward pressure on budgets. But you’re also going to see an exponential increase in the number of things being covered, and covered with multiple cameras. So I think, net, it’s going to be an increase in work for people and probably an increase in overall spend; it’s just going to be spread across more productions. And I think you’re going to see new people coming into the game, because this is one of the challenges. I go out to these venues, and the camera ops are my age, and some of those guys are starting to retire, and you don’t really have kids being trained up the way they were in the past. And this is one of the other things people love about our robots: the kids are all over it.
I can train a kid to use this way faster than I can train a traditional robo op, because the robo op has to unlearn the joystick thing, and the kid goes, “Oh, it’s like a game.” So one of the things that’s good about it is it doesn’t require decades of muscle memory for how to operate the camera. It’s more about, in your brain, do you understand the picture that I want to show in order to tell the story? The technical barrier between the thought and the action, the thing being delivered, is much shorter and requires less technical skill to execute. >> And it’s more about creative skill at that point. How do I tell the story? >> Exactly. And at the end of the day, that’s what we’re here to do. We’re here to tell the story. These things are just tools. I mean, I’m a tech geek. I’m as much of a geek as anyone. I love the autopilot AI. I love the touchscreen. I love all that technical stuff. But at the end of the day, the magic part for me is telling the story. You know, I’ve been shooting for 30 years, and there are still some shots I miss, because I’m not on a camera every day and I’m a little bit rusty. I don’t have that problem with the robot or with the AI, because the shot is what I’m telling it to do. >> So, you’ve gone up against some serious incumbents in the robotic camera space, and EVS is going to be putting out T-otion at NAB 26. >> It’s Telemetrics plus XD Motion. >> How do you feel about that competitive landscape, where your opportunities lie, and anything you can share about what you’re bringing to Vegas this year, more than you already have? >> Yeah, we’re seeing this across not just broadcast but all industries: robotics and automation growing exponentially. A lot of the ones you mentioned tend to be more focused on studio-based productions, which are much simpler.
Traditional computer vision functions a bit better in those kinds of environments. For sports, you really needed a better mousetrap than what we’ve had traditionally. Our technology at AIR is based on a different platform. It’s based on this IoT concept, on a gimbal with brushless DC motors. Our robot can turn 180... I’m sorry, can turn 800 degrees a second. We’ve actually slowed it down, because if I move 800 degrees a second, I can’t really see what I’m shooting at, right? There’s limited practical use for that if a human is operating. But if you have AI operating it and it very quickly needs to get to a position, that kind of speed can be useful. So at the end of the day, what we’re building is not just an individual product; it’s a solution and a platform to accomplish the tasks that you need to accomplish. What we’re going to be showing here at NAB this year is our on-prem autopilot. That’s our autopilot visual language model running on a little Mac Mini right there on the edge. The reason this is important is that our existing deployed models all run on cloud instances, these big, heavy GPU computes where you spin one up, run it for the event, and it requires a connection from the robot up into the cloud. The venue connections, some of them are great, some of them are a nightmare. It’s a complete crapshoot what you’re going to get. Some of them will be great when you set up and then you have issues. If you’re doing streaming via SRT or one of those other formats and you have error correction, it’s fine if you get little bumps. But if I’m trying to remote control a robot with AI, I cannot have that delay. A delay of more than 100 milliseconds kills the robot’s ability to track its target. So we’re bringing the compute on prem, right next to the robot, because it eliminates that timing gap.
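The sub-100-millisecond constraint Nick describes can be sketched in a few lines. This is an illustrative sketch only, not AIR’s actual protocol or API: it times a round trip to a hypothetical robot control socket and only lets the tracker steer the gimbal when the link is within budget. The endpoint, function names, and probe payload are all assumptions for the example.

```python
import socket
import time

# Latency budget from the conversation: tracking degrades past ~100 ms.
LATENCY_BUDGET_MS = 100.0

def round_trip_ms(sock: socket.socket, payload: bytes = b"ping") -> float:
    """Send a small probe to an echo-style endpoint and time the reply.

    Assumes the (hypothetical) robot endpoint echoes the payload back.
    """
    start = time.perf_counter()
    sock.sendall(payload)
    sock.recv(len(payload))
    return (time.perf_counter() - start) * 1000.0

def should_track(latency_ms: float, budget_ms: float = LATENCY_BUDGET_MS) -> bool:
    """Gate AI tracking commands on measured control-loop latency."""
    return latency_ms <= budget_ms

# With cloud inference, the measured latency includes the venue uplink
# and is unpredictable; with on-prem inference beside the robot it is
# essentially the LAN round trip and stays well under budget.
print(should_track(12.0))   # edge-box-style latency: tracking allowed
print(should_track(250.0))  # congested venue uplink: hold off
```

This is also why error-corrected transports like SRT, which trade added latency for reliability, are fine for the program feed but not for the closed-loop tracking path.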
It’ll enable us not only to be more reliable in the tracking, but then it kind of doesn’t matter what the outside connection from the venue is. And at the end of the day, it will allow us to track stuff. >> It also gives you the ability to handle content that can’t be done via cloud. Security issues and that sort of thing. A whole host of new things you weren’t looking at before. >> Yeah, that’s a really good point. This is yet another area we’re going to be expanding into over the next year: reality. Reality programming, they’re switching that stuff on prem. Contractually, those signals cannot go out, and they certainly can’t travel over the public internet, even if they’re encrypted like they are in SRT. So that all has to be handled on prem. We did actually build an on-prem server with the old model, with the big GPUs in it, and that was really funny, because we had it at a couple of shows and it was as big an attraction as what the autopilot itself was doing. People are like, “Oh, look at those GPUs.” But that’s just not very practical for long-term deployment, and we’re trying to do cost-efficient solutions. If we were one of the big companies, we’d probably say, “Here’s your $25,000 computer that you need to put on prem,” instead of, “Here’s your $600 computer.” Because, you know, I spent 30 years in the business. I know the budget pressure. We need new tools in the industry so that we can still make money doing events. Right now, it’s gotten so tough for production companies and entities to make money on productions; everything’s getting so compressed. That’s another big thing we bring to the game: guess what, you can do this a different way, and now you can make money on that $15,000 show, right?
You can do that now, and not just to cover your nut; you can do it and make money on it. >> And while we’re all about storytelling, we’re in this business to tell stories people will pay us for. >> Yes. >> So, the NAB show floor is going to be very AI-heavy. There are now two dedicated AI pavilions. How are you going to cut through the noise when everyone’s grabbing that AI sticker and slapping it on their product name? >> Yeah, that’s a great question. The AI hype is kind of out of control. It was really out of control last year, and we had people slapping the label on stuff that just wasn’t AI. At the end of the day, whether you label it AI or not doesn’t really matter. It’s a tool. We happen to have real AI. It is really artificial intelligence; it’s not computer vision rebranded with a new label. But at the end of the day, it’s: does the tool do the thing? Does it have the capabilities to do the thing you want it to do? And in this business, the proof is in the pudding, right? Everybody wants to get our robots on site, test them, and make sure they do the thing we say they’re going to do. That’s true across most technologies in this business. Most people are not just going to buy it, take it home, and then see if it works. You’re going to need to test it in your actual environment to see if it does what you need it to do. Thankfully, we have a really broad cross-section of sports and examples of that, so you can see it. You can go to our YouTube channel and see it in action, because at the end of the day, that’s all that matters. It doesn’t matter what label you put on it. It’s: does it work? Does it do the thing I need it to do?
>> Speaking of things doing what they need to do: at NAB this year, if you’re able to get untied from your own booth for a little bit, what do you think is going to be interesting? What do you want to see at the show for where production and broadcast and post is going? >> You know, one of the things I really miss now that I’m in the booth the whole show is being able to walk around and see what’s going on at other places. One of the things I’ve always loved about NAB is the smaller booths, those small, innovative companies doing something unique, where they have some tool that really speaks to me as a filmmaker. That was one of the things I didn’t realize. My background is in production, 30 years making TV shows, not building product for other folks. I didn’t realize before I started this that most of the big companies making the gear I’ve been using my entire career don’t really understand what I do. They understand it, kind of, and they’ve evolved long enough that it’s a tool, but generally it’s the user adapting to what the tool is instead of the tool adapting to what the user wants. And I think that’s the thing I’m really excited to look for at NAB. We’re entering a realm where we can do more bespoke stuff. The sports leagues are taking control of their tech stack from top to bottom. We’re seeing that happen, and I think we’re going to see more and more of these kinds of solutions, but not at the huge expense we’ve seen in the past. >> More cost-effective ways to get the same task done. >> Yeah, and customizing tasks. Everybody wants their own setup. I can’t tell you how many... well, I haven’t had a ton, but I get left-handed people who grab the iPad and go, I want the zoom slider on the other side. And I’m like, “Oh, okay.
I get that.” You know, in Europe, I think they do the zoom on the right and we do the zoom on the left. Or is it the other way around? I can’t remember. On the tripod. Yeah, on the sticks. >> So, being able to customize the interface to something that’s more appropriate to your workflow, I think that’s the other thing we’re going to see. It’s not going to be one-size-fits-all stuff. We’re going to get, maybe not super granularly customized, but at least in general terms, more bespoke solutions, or more flexible solutions. Certainly, for us, that’s one of the things we built in with AIR Cloud and with all of the different components we interface with: the ability to be flexible and work with anything. Flexibility is where we all need to be moving forward. >> Yeah. That’s the other thing about the business that’s always made me crazy: nothing talks to anything. You know, this product from this company doesn’t talk to that product from that company. I think we’re entering an era where that’s going to go away, because we can’t have these walled gardens anymore. Part of what’s driving that is there are too many other good solutions that do network together, and software like Companion and central control systems that stitch all of these things together. I think we’re going to see more and more of that. We’re taking our API and giving it to people like SKAARHOJ to embed in their systems, to do their joysticks. And Z CAM, the camera we mount on our robot, is also in SKAARHOJ and Cyanview. We’re going to see more symbiotic systems where different components from different manufacturers work together. It absolutely has to go this way, because as a purchaser of equipment, you can’t be stuck inside this box. I’m sure a lot of people are going to relate to that.
Oh, we bought into this system from this company, and now we need to take advantage of this thing, but it doesn’t play nice with that. So I’ve got to throw away my $50 million investment in that thing to move over here. That’s very painful. I don’t see that as being a viable path going forward. People we’re talking to are flat-out asking, hey, can your robot work with this thing? >> Because we have an open API and because it’s an IoT device, the answer is almost always yes. >> And yes is a great answer. So, my last question, and I need to start asking everybody I get on here this: if you could violate the temporal prime directive and go back in time and give yourself one piece of advice at the moment you decided to turn a soccer-practice camera rig into a company, what would it be? >> Oh, that’s a great question. That one’s a little tough to answer, because my knee-jerk reaction would be: raise more money. Literally, just because it’s going to take longer and be more expensive, and you’re going to need things you didn’t anticipate when you’re creating a new piece of technology. You can’t always know completely what that path is. Things are going to come up, and having the resources to react to that is absolutely critical. Having said that, having a shortage of funds for certain things we wanted to pursue made us laser-focus. It made us stay hungry and be absolutely ruthless about focusing on the most important features for customers. So the thing I would do differently... I don’t know. I often feel like you go down these paths and make a mistake, and that helps you inform a better product at the end of the day. So I don’t think there really are mistakes, unless you make one that’s fatal and kills the project entirely.
I don’t necessarily think mistakes are a bad thing. I think what you need to do is fail quickly and adapt. That’s one of the great things about being a small company: we can very quickly adapt and iterate, and there’s not a whole bunch of meetings. It’s me and the devs going, “Hey, let’s do this and that,” and we fix it in real time together. So, having said all that, I’ve now talked myself out of it. But I’d rather have the money and still stay hungry, still stay focused, and have more resources to do it. >> The money, and the cheat sheet of the answers you already have, so you don’t have to go down that road. Here’s the thing I learned; let me take that and use it now. >> Well, there’s a quote from Elon Musk about really talented engineers solving problems they shouldn’t be solving. If I look back to when we started our autopilot tracking, if we’d had a bunch of resources, we could have thrown them all at making traditional computer vision better. That’s what all the big guys are doing. We could have done that. In retrospect, I’m really glad we didn’t, because we came up with something newer and better by kind of rethinking how target tracking is done. So, at the end of the day, again, unless they’re fatal, I don’t know that there really are mistakes. It’s just how you react to those setbacks. Make them temporary. Be resilient. Doing a startup... again, my background is production, so I’m used to working 12-hour days, seven days a week, for three or four months while a project is underway. What I’m not accustomed to is doing that for six years. That’s been the biggest challenge.
Just having enough hours in the day to do the things I want to do, to move stuff forward. I’ve got to say, though, it’s really fun. It’s really fun creating new tools for people to do their jobs and tell their stories. There’s nothing funner for me than when I go down to someplace where our robots are being used and I see someone using it in a way I didn’t expect. The Savannah Bananas are one of our customers, and when they were here in San Diego, I went down. They have the camera positioned behind home plate, and this kid had been operating it with the touchscreen on a whole bunch of these Bananas games. I watch him later in the game: he pushes in on the pitcher and tracks the ball manually as the pitch is coming over the plate. If you’d asked me if somebody could do that, I would have said, “No way. Too small, too fast.” >> But this guy has figured out a way to do it. >> He’s had enough repetitions with it. And seeing that somebody can use it in a way I didn’t expect, that’s super fun for me. >> And that’s why we’re in the business, to have fun. We’re not just here to make money. If we just wanted to make money, we’d be in accounting somewhere. >> Yeah, banking. We’d be banking. >> Exactly. >> All righty. Well, to our audience, thanks for dropping by with us here today on The Script. If you found value in the conversation, please hit those like and subscribe buttons down there. It helps us get guests like Nick in front of the right people. Special thanks to Nick and the team at Advanced Image Robotics for joining us. You can meet up with Nick and see the whole line of products in the Central Hall at booth 4212. And as always, thanks to Avid Technology for the editorial platform that I utilize. If you’ve got a workflow that’s ailing you, the doctor is in.
Drop me a line at contact@thepostdoctor.com and find everything we do at thepostdoctor.com. I will also be at NAB; reach out and see if we can get some time on the calendar. We will see you next time on The Script.