AI (Artificial Intelligence) is promising a revolution in filmmaking, from script-writing to special effects, to moving image generation and editing. Like many, I find myself wondering if my job as a TV and film editor will even exist in a few years. Should I consider a new career? If I embrace AI and work with it, will I just be training it to replace me? How can I best work with new technology, stay creative, and earn a living?
During my career in film and TV, Artificial Intelligence (AI) has transitioned from a niche interest among computer programmers, roboticists, neuroscientists, and philosophers to a matter of widespread public concern. As digital technology has become the foundation of modern post-industrial society, software has evolved towards AI, whose visible applications have steadily advanced and grown. AI is increasingly embedded in everyday technologies, and AI tools such as image and voice generators are now being used in the creative industries. Having studied and written about AI extensively, I am in a better position than most in the media business to understand what’s going on, but that understanding won’t pay the bills.
In the past year, we have seen the advent of generative AI tools for still images such as Firefly (Adobe), DALL-E (OpenAI), Image Designer (Microsoft), and MidJourney, among others. These tools are impressive, and by bursting onto the creative scene they seem radical and revolutionary. With these recent advances, my editor and filmmaker friends are talking about AI, and the AI folks are talking about filmmaking. Making headlines are Large Language Models (LLMs) like ChatGPT, alongside the text-to-image generators. These AI interfaces and tools are viewed as encroaching on what is said to make us uniquely human: our creativity.
OpenAI, the company behind ChatGPT and DALL-E, has now released its text-to-video application, Sora. It can create impressive-looking 3D animation sequences quickly and efficiently. This can be seen as the next step in generative AI’s march into what were once considered uniquely human skills within the creative industries. But technology companies require hype to attract customers and sell shares. As a human filmmaker, I want to explore whether AI really will render my job defunct. Beyond all the hype, I contend that AI in filmmaking is not actually creative; it is derivative, but it can provide us with more efficient tools. I’ll consider how and where AI is used in filmmaking, and how, as used within current industry structures, it will affect the quality of what we watch.
The new filmmaking AIs come in various types. However, the “video” in all these applications is 3D computer-generated animation. There are not yet AI-controlled drones replacing camera operators and directors. So moving image AI is, in fact, advanced animation, more similar to what we see in high-end computer games than in movies. In addition to OpenAI’s Sora, a whole host of other companies offer this style of text-to-video (or computer animation) of varying quality.
There is also a range of companies claiming to offer AI video editors aimed at the corporate video sector. They promise quick creation of what are essentially glorified PowerPoint presentations with AI avatar presenters, combining AI voices, motion graphics, and stock-footage-style clips. These are used for short sales, explainer, and training videos (examples include Colossyan, Synthesia, and DeepBrain AI).
AI features are also being built into current industry editing tools such as Adobe Premiere, promising a revolution in film post-production by bringing AI creation into the media production workflow. The use of AI within an established industry tool like Premiere can give us the best clues about how it will affect the production process.
Media Technology: Democratising the Tools of Production
In my work, I have often adopted the latest technology. I came of age at the start of the millennium, as film and TV were going digital. These technological advances enabled my career as a filmmaker, editor, and video artist. Technological developments gave me affordable access to production and post-production tools that were previously far out of my reach. In my desire to make films, I learned how to use and hack the latest cameras and editing software. I didn’t have to work my way up the corporate TV industry ladder; I could just get on with making movies, offering production, post-production, and editing services from my small studio. Over the last 25 years, I’ve made films for international TV broadcasts, museums, corporate big-brand advertising, and various NGO and charity clients. But I’ve also often worked in the arts sector, outside the traditional TV and film industries, creating video artworks, immersive video installations in galleries, live visuals at music events and festivals, and projection-mapped work as part of West End theatre productions. I’ve also collaborated with traditional oral storytellers and fine artists.
This wide range of projects required working with all sorts of moving images, shot on many different cameras and even phones, as well as analogue, experimental, and digital animation. The varied nature of these projects and often limited budgets meant that I was keen to think outside the box and adopt new types of video technology. This enabled me to create the sort of video effects and content that were previously the preserve of bigger companies and studios. This experimental attitude to creativity and technology served me well, with strategies and insights learned in the arts feeding into my more mainstream TV work.
I managed and helped design the first tapeless studio in London for MTV. As an early adopter, I shared my knowledge of digital post-production skills through my company, Globestream Training. We worked with editors and journalists, teaching them how to operate and best use digital post-production tools. This job took me around the world, including India, where I helped set up a 24-hour news channel. I was always excited by the latest technological developments, seeing them as democratising the tools of production and enabling creativity. But like everyone else, I have seen the digital dreams of the 90s and 00s sour through corporate greed and exploitative business models. Are these new technological filmmaking tools I helped introduce now going to turn on me, discarding me as I discarded the mini-DV tapes and Digi-Beta cassettes? Will the machines get the last laugh? Are art and film the new frontier in the tech companies’ digital dominance of cultural and economic life?
AI’s Rapid Impact on TV and Video Production
AI will have a rapid and widespread effect on TV and video production. It’s worth noting that changes in production and consumption go hand in hand. TV and film are culture industries, guided by both cultural trends and economic considerations. The consumption of media has been rapidly changing. The internet, smartphones, and streaming media have affected where, when, and how people watch films and videos. This greatly affects what gets made and how, such as the move to very short videos on TikTok, Instagram, and YouTube, often watched on smartphones. On the economic side, what gets seen is the result of the power of media companies, tech companies, social media, and their algorithms. AI will play a big part in this short-form creation and distribution side of the media equation. But as a filmmaker, I want to explore in more detail how AI is changing the creative process and the production methods within the more traditional ‘longer form’ film, TV, and video industries.
I love film editing. You are playing with time and space, reordering events, manipulating emotions, telling a story. Film editing is the role in which the technical meets the creative, making it a very appropriate area to discuss the meeting of the technical wizardry of AI and the creativity of filmmaking. Digital film editing is a space where maths meets language, and logic meets emotion. As an editor, you must understand video formats and technical media workflows, and know how to operate advanced software. An editor has to work with many types of media: moving images shot on many different cameras and sometimes phones, animation and motion graphics, and recorded sound, sound effects, and music. These many different types of images and sounds have to be combined well in a temporal order to build an engaging story and create strong emotional impact in the audience. It all has to look amazing, feel great, be engaging, and be done within tight time and budgetary constraints.
There has been a lot of hype around AI, with technology companies and journalists creating many promissory stories of the future. There is much discussion of AI’s effect on creativity, but I view it more as a continuation of particular trends I have experienced in the creative industries. How these trends affect creativity often has far more to do with structural factors within the industry itself, such as the continual squeeze on budgets and time and the desire to be risk-averse, than with the technology itself.
What is now called ‘AI’ in the marketing copy is often what used to be called a software ‘plug-in’. These plug-ins and software tools, such as Adobe After Effects, enabled colours to be changed, objects added and removed, and all manner of motion graphics effects to be added. The new AI tools do these same tasks but quicker and more efficiently. The advent of generative AI is a further step accelerating us along this road.
The Evolution of AI Tools in Filmmaking
Music libraries that provide ‘ready-made’ tracks for use in TV and film have been around for decades. They are cheaper than hiring an original composer. As editors, we can choose from different mixes and durations of many thousands of tracks, and we often ‘remix’ these tracks ourselves to suit the film. The success of these music libraries has also grown with the advent of online accessibility and search functions. It has made all the content more readily available, easy to peruse and download.
As well as library music, I also regularly use ready-made stock images and footage, CGI animations, and various graphics in filmmaking. These footage libraries started as a way to re-sell shots from existing films as licensed ‘archive’ for use in new productions. In the past, footage librarians had to be contacted, requests made, and a small selection of watermarked sample tapes or low-resolution video files sent out. Archive footage was originally a way to show images of a historical period or event. As internet speeds increased, the archive/stock footage libraries followed the audio libraries and moved online. The amount and accessibility of stock images and footage available and used keeps growing through the extensive use of these online libraries. Footage libraries such as Shutterstock and Getty Images now provide hundreds of thousands of ready-made shots of people, nature, cities, almost anything you can imagine, as well as motion graphics backgrounds and 3D computer animation. Almost every TV documentary and corporate film uses these sorts of shots, from close-ups of seeds or cells, to drone footage, to a ‘happy corporate couple in a meeting with iPads’. If used well, they can help a lot with storytelling. Expensive shots can now be purchased and included in a film at a fraction of the cost of shooting original footage.
We also have motion graphics templates for use with software such as After Effects. These enable editors to add, adjust, and combine different sorts of titles, 2D and 3D animation, and visual effects. The templates provide the ‘bones’ of an effect, which the editor can then change and adjust for each project, making the effect their own. In recent years, 3D animation has been added to this list, accessible within editing software through Motion Graphics Templates (MOGRTs). These can expedite the process of adding on-screen graphics and animations.
Like the tracks composers sell in music libraries, many of the ‘stock’ images and video clips in footage libraries are now original works in themselves, created to be stock footage. They possess particular styles and aesthetics drawn from, and imitating, aspects of other original film work, but they also have a ‘cheesy’ derivative style of their own. The images commissioned by the libraries have an identifiable look, with style and content choices based on what sells well. Creators of stock images submit their footage to the libraries; they have internalised the more popular styles, and create particular types of stock images and video they know the libraries will buy because they resemble the existing selection of clips. Generative AI accelerates this process: it can pump out clips even faster than new shots can be filmed. It replicates a certain view, a house style, so we get many similar versions. Combined with the budget and scheduling constraints on TV and film productions, this results in a lot more of this similar ‘stock’ content appearing on TV. The libraries and content marketplaces have become corporate behemoths that dominate the market. Generative AI tools for image creation and video animation are being integrated into the software tools we already use, and into the stock and motion graphics libraries. This extensive integration has not fully happened yet, but it is coming very soon.
We can view these various pieces of library music, archive/stock footage, and motion graphics templates as tools and pieces of creative content with which to build new films. With the technology comes new opportunities to reconfigure and remix. As film and video became digital, and non-linear editing technology improved and became more accessible, this ‘archive’ could be more readily experimented with, remixed in much the same way as hip-hop and techno musicians sample and remix existing beats. Whole films can now be made using stock footage, from cheesy corporate films, to YouTube videos, to more creative work such as that of the filmmaker Adam Curtis, who uses BBC archive footage to bring us new understandings of history. But due to economic pressures, increased use of AI-generated content could also lead to even more derivative TV shows and films, configured and produced to make money, not art.
Style over Substance: If You Like That, You’ll Like This . . .
When Adobe Firefly’s AI text-to-image generator is switched to ‘photo real’ mode, it uses the training data it has culled from Adobe Stock and produces images which echo the style of that stock library. When an AI text-to-image or video generator is fed a text prompt, it provides a handful of options, often similar to each other. I typically get one to four images, all in the particular style defined in the prompt. Even with further prompt refinement, the generator offers a selection with a particular ‘aesthetic’, in the same way the different stock photo and image libraries do.