AI in TV: 5 predictions for 2026 from Dan Taylor-Watt
Top tips on what’s coming down the track for the hottest tech topic in TV
AI remains the word on everyone’s lips right now.
Last year the tech really made its mark in TV, culminating in Disney’s Sora licensing agreement with OpenAI. But what can you expect this year and what opportunities do the new AI models present?
We asked Dan Taylor-Watt - who led the development of two of the UK’s most successful digital products, BBC iPlayer and BBC Sounds, and now advises media companies on AI - for his thoughts on how the technology could affect the content creation industry in 2026.
He thinks we are at a “watershed moment” where your unique skillset and industry understanding can prove fruitful, so read on to find out why and learn about:
The benefits and challenges ahead
The new AI to keep an eye on
Strategies to learn from the big deals
Five key developments you can expect to see this year
The benefits and challenges ahead
I believe AI can bring immediate benefit to all TV companies and team members with the right training, tools and time to experiment.
It can help with day-to-day productivity tasks like transcribing, summarising, translating and cleaning up audio, but it can also play a valuable role in more open-ended tasks like research, idea generation, writing, and generating and re-versioning media.
One of the biggest challenges is knowing what’s possible. Most AI tools have text input boxes inviting you to “ask anything”, but it’s not clear what tasks they’ll be good or bad at. There’s also a huge amount of hype.
Another challenge is people viewing AI as a single, monolithic and often threatening thing. In practice, AI is a label that’s been given to a wide range of different tools and we have agency over what we do and don’t want to use them for.
Having a basic understanding of how AI works and its strengths and weaknesses can help identify the areas where it can actually add value, as can seeing examples of what’s worked elsewhere. I’m hopeful we’ll start to see more sharing within the industry of where AI is and isn’t adding value.
The new AI to keep an eye on
With Nano Banana Pro and the new ChatGPT Images, AI image generation and editing have got to a fairly high level of quality, consistency and control.
I expect the same to happen with video in 2026, with improved adherence to real-world physics (AI video generation’s Achilles heel), better text rendering, longer generations and the ability to easily make fine-grained edits.
In addition, I think Google and OpenAI will continue pushing the envelope with their Veo and Sora models, although I expect companies with a more singular focus on video, for example Runway, to potentially offer more fine-grained control and better tooling. To date, Chinese AI models have been catching up with, rather than overtaking, the leading US models, but we may see that change in 2026.
It’s easy to fixate on wholly AI-generated video, but outside of advertising, animation and historical re-enactments, I anticipate the primary on-screen application of AI for established TV producers in 2026 will be a more affordable route to on-screen graphics and VFX.
AI-generated and edited audio is another area set to mature in 2026, as improved quality makes AI a viable tool for narration, music and sound effects.
I’d also call out the ability to transform content from one format to another. After the addition of “audio overviews” - AI-hosted podcasts - increased attention on Google’s NotebookLM, it’s been quietly plumbing in its image and video generation models to enable content to be morphed into a variety of new formats.
Strategies to learn from the big deals
The Disney/OpenAI agreement feels like a watershed moment: the media industry accepting that generative AI is here to stay and that attempting to exercise some control over it may be a better strategy than pure opposition.
It’s a story we’ve seen play out before with online music (Napster to Spotify) and online video (YouTube before and after its acquisition by Google).
However, I think the partnership isn’t going to be plain sailing. Problematic generations feel inevitable and the restrictions imposed on what can and can’t be generated, and Disney’s claim on any outputs involving their IP, are likely to frustrate Sora users. Featuring select Sora-generated videos on Disney+ is also likely to draw ire from some viewers opposed to AI-generated video.
Once the dust has settled and more equitable licensing deals are in place, I’m hopeful we’ll get back to focusing on storytelling and the creation of characters and worlds viewers want to spend time with, as the audience appetite for that remains undiminished, regardless of the tools used to bring it to life.
Five key developments you can expect to see this year
1.) From standalone tools to AI capabilities in the software you use every day
Thousands of AI tools have been launched in the last few years. In 2026, more AI capabilities will be integrated into the software we use every day, from operating systems like Windows and iOS, to productivity suites like Microsoft 365 and Google Workspace, to creativity platforms like Adobe Creative Cloud, to content platforms like YouTube and TikTok.
Standalone AI tools won’t disappear, but lots of AI startups will see their idea emulated at scale by established players. The challenge for TV companies will be keeping on top of these new capabilities as they get integrated, both to realise the benefits and to manage the risks.
2.) New tools better tailored to real industry workflows
Whilst AI will increasingly be peppered through the software we already use every day, I anticipate we’ll also see a new breed of tool emerge, tailored to real industry workflows. However capable Google and OpenAI’s models become, they won’t be built around the unique workflows of established industries.
This presents an opportunity for companies with an in-depth understanding of those workflows to create simple tools that take the powerful capabilities being developed by the AI giants and stitch them together in a way that is optimised for specific production tasks.
For example, MotionHub, which was developed by the founders of Chalk Productions with a grant from Innovate UK, addresses a real pain point they faced: managing and monetising rushes.
3.) More generative AI models trained only on licensed material
Most generative AI models, including all of the household-name AI assistants (ChatGPT, Copilot and Gemini), have been trained on unlicensed material. Whilst the volume of material these models have been trained on means they’re unlikely to recreate copyrighted material unless you directly ask them to, most within the creative industries are unhappy that the AI companies have used material for model training without permission or payment.
As more material gets licensed as training data, we’ll see more models trained exclusively on licensed material made available. Last year gave us Moonvalley’s video model Marey and ElevenLabs’ music model Eleven Music. Expect to see more emerge in 2026, giving production companies the option of using media generation models without a question mark about the material that was used to train them.
To what extent fairly trained models can catch up and keep pace with the frontier models from the likes of Google and OpenAI remains to be seen.
4.) AI tailored to you or your company
One area the big AI companies are now focused on is how to make their products more tailored to you and your company. It’s already possible to customise AI models to write in your or your company’s style, to respond using only information you’ve provided, rather than some random Reddit thread, and to connect to your email and other data stores.
Of course, there are trade-offs in sharing this sort of data, and the AI companies which are likely to succeed in this area are those that can provide sufficient reassurance to customers that they will keep their data safe and not use it for other purposes, for example model training or advertising.
However, to get the most value from AI, I’d suggest tailoring to you and your company is a necessary step.
5.) Hype over ‘agentic AI’ will give way to ‘tell me what it actually does’
I have an aversion to the term ‘agentic AI’, which is being liberally used to describe and hype a range of wildly different capabilities and tools. Truly agentic AI is AI that can autonomously use tools in pursuit of a goal you’ve given it. In practice, much of what is being marketed as agentic AI is predetermined ‘if this then that’ workflows, which make use of generative AI to report back in natural language. There’s nothing wrong with that; it’s just not agentic AI.
The challenge with truly agentic AI is reliability. Generative AI is inherently probabilistic, which means it’s not predictable and can easily go off the rails. Consequently, agentic AI is currently only useful in some fairly narrow domains, where missteps are easy to spot and correct. Generative AI will increasingly be paired with more deterministic, and therefore predictable, systems to achieve more agentic outcomes.
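For readers who want to see the distinction concretely, here is a minimal sketch contrasting the two patterns. Everything in it is a hypothetical stand-in - `generate`, `choose_tool` and the tools are illustrative placeholders, not any real product’s API:

```python
# Illustrative sketch only: a fixed "if this then that" workflow versus a
# truly agentic loop. All functions are hypothetical stand-ins, not a real
# AI product's API.

def generate(prompt):
    # Stand-in for a generative-AI call; here it just labels its input.
    return f"Summary of: {prompt}"

def fixed_workflow(transcript):
    # Predetermined pipeline: the steps never vary, only the natural-language
    # report does. Often marketed as "agentic", but it is not.
    summary = generate(transcript)
    return {"step_order": ["summarise", "report"], "report": summary}

def choose_tool(goal, history, tools):
    # Stand-in for the model's decision about what to do next; a real agentic
    # system lets the model pick (which is where unpredictability creeps in).
    unused = [name for name in tools if name not in {h[0] for h in history}]
    return unused[0] if unused else None

def agentic_loop(goal, tools, max_steps=5):
    # Truly agentic: the system chooses which tool to use next in pursuit of
    # the goal, and stops when it decides no further tool is needed.
    history = []
    for _ in range(max_steps):
        tool_name = choose_tool(goal, history, tools)
        if tool_name is None:
            break
        history.append((tool_name, tools[tool_name](goal)))
    return history
```

The fixed workflow always runs the same steps in the same order; the agentic loop hands the choice of next step to the model, which is exactly why reliability becomes the central concern.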
However, we are a long way from generalised agentic AI that can reliably go off and complete any task you throw at it, so don’t let FOMO lead you to invest in tools which are overpromising on their agentic capabilities. If it sounds too good to be true, it probably is.
*Dan Taylor-Watt writes a weekly Substack, Dan’s Media & AI Sandwich.