Does Netflix Use AI-Generated Photos In Documentaries?

Last year, Hollywood was brought to a standstill. Writers, actors, technicians, and others went head-to-head with the production houses over the use of AI as well as pay disparity.

A deal was eventually signed, with the production houses agreeing not to use AI in an unethical manner. But a recent allegation against Netflix might make you rethink the viability of the whole deal.

So what is actually happening?

Netflix is facing criticism for using AI-generated images in its documentary “What Jennifer Did.” The documentary tells the story of Jennifer Pan’s involvement in a murder plot.

Instead of using real photos, Netflix opted for AI-generated ones to show Jennifer. These images were supposed to depict her as cheerful and genuine, but they ended up looking strange, with distorted features and odd backgrounds.

It is a well-known fact that AI has some serious limitations. Just as text-based AI struggles to crack a good joke, image-based AI struggles to render accurate hands and ears.

The distortion is clearly visible in those pictures. See for yourself.


This decision raised concerns because it blurs the line between reality and fiction. Jennifer Pan is a real person with a real story, and altering her image like this can be misleading and unethical.

Documentaries are meant to be a factual record of real events. If Netflix is fictionalising them this way, it calls into question everything we have believed about its documentaries so far. While technology can enhance storytelling, it’s essential to use it responsibly, which Netflix doesn’t seem to be doing here.