Open questions I have about the industry:

  • The creator economy is alive and well in VFX, but with artisans bouncing from house to house, how can we, as an industry, better support them? Can we figure out a way to extend the discounted rates enjoyed by the studios to individual VFX/color artists working on behalf of those studios? Or do those capabilities simply need to be brought back in house? (Especially pressing given the news about Technicolor.)

  • For the first time in a long time, I found myself recommending The Phoenix Project, Gene Kim’s seminal novelization of what a DevOps transformation looks and feels like from the inside. The specific conversation was about how to measure and improve the efficiency of production engineering, a subject that is perennially kicked aside as the Giant Studio Machine lurches from project to project, with very little time to reflect (let alone optimize workflows). With studios in-housing, could we finally see the adoption of Agile practices in production technology?

The meta-conversation about AI

Throughout every session and interaction, I felt there were two different “meta conversations” happening about AI.

  • The first was a conversation about incremental change: which models and tools are “useful” (and the inevitable segue into how to effectively measure utility); novel AI use cases, including a “production AI” that’s part PA, part line producer, and has perfect knowledge of all script and shooting changes; and of course SDI-to-IP-style conversations about where and how to plug the AI cord into your existing workflow (my favorite in this genre was the MovieLabs demo, featuring an AI that filters through AI-generated images to identify only the most useful ones). 😆

  • The second was a conversation about radical change, and topics in this bucket included: musing about whether AI can ever replace human creativity; radically uninformed assertions about AI’s impending “death spiral,” owing to its place in the hype cycle, a looming lack of training data, or the problem of model collapse; and of course plenty of over-used, under-examined aphorisms: “the data you get out is only as good as the data you put in,” “new models don’t show the same performance gains as the early models,” and my favorite: “DeepSeek discovered new optimizations out of necessity.” (I’ll probably write a post dedicated to debunking these AI “truisms,” because I hear them all the time and I think they can lead to some dangerously bad conclusions.)

But while there was a lot of talk about change, there was very little talk about how to actually manage through change. And this, I think, is what clear-headed individuals and organizations need to be focused on in this moment. Topics like optionality, communication, culture and organization are key. Focusing on incremental change risks failing to see the forest for the trees, whereas ruminating on the Philosophy of AI risks falling into analysis paralysis.

If you want some practical, simple guidance on how I recommend organizations lean into this moment, check out Five Questions Everyone Always Asks Me About AI.