Technologies

Darren Aronofsky, Your AI Slop Is Ruining American History in ‘On This Day…1776’

Commentary: The high-profile Hollywood director founded a studio dedicated to creating a "new cinematic grammar" built around AI. This is not a good start.

Just over 2 minutes into an early episode of the new short film series, On This Day…1776, we see a hand sweep tenderly over the title page of Thomas Paine’s just-published firebrand pamphlet Common Sense; Addressed to the Inhabitants of America.

Only, in that moment, "America" vanishes, replaced by the all-caps nonsense text "Aamereedd."

It’s a classic tell that we’re in the presence of generative AI.

But this isn’t the gotcha moment you might think it is. The filmmakers behind the series, led by executive producer Darren Aronofsky, are fully embracing generative video. That’s as big a driving force behind "On This Day…1776" as the intention to tell stories of the American Revolution in this 250th anniversary year.

Aronofsky is known for directing high-profile films including Black Swan, The Whale and Mother!, but he’s also the founder of Primordial Soup, the AI-first studio that created On This Day…1776. Its larger ambition, according to its website, is to fuse art and technology into a new creative model, "merging bold narrative, emotional depth, and experimental work flows." That is, the studio wants to use AI to create bona fide art.

Good luck with that. 

Because Darren? Y’all are making a mess of it with this project.

I’ve been watching the episodes as they drop on YouTube, and I am dumbfounded. Bold narrative? More like performative staging, tipping over into self-parody. Emotional depth? About as much as you’d find on the cover of the average history textbook.

It’s a hellish broth of machine-driven AI slop and bad human choices.

At least they’re on point with the whole "experimental work flows" thing. Creative people in Hollywood and beyond are staring down the barrel of artificial intelligence systems that threaten to take away their livelihoods and devalue the skills they’ve worked lifetimes to perfect. Aronofsky and Primordial Soup say they’re trying to find a way forward in blending human talent and agency with AI tools that have inevitability written all over them.

We’re living through an anxiety-ridden moment induced by powerful image and video tools like Google’s Veo and Nano Banana and OpenAI’s Sora, along with the introduction of an AI ingenue named Tilly Norwood. Two years after strikes in Hollywood over the use of AI in movies and TV shows, Walt Disney Studios in late December reached a deal with OpenAI allowing AI to slurp up characters from Marvel, Pixar and Star Wars.

In an interview with The Guardian last summer, not long after Primordial Soup launched, Aronofsky acknowledged that AI tools are being widely used to create slop, citing that as motivation. "There are a lot of artists who are fighting against AI, but I don’t see that as making any sense," he said. "If we don’t shape these tools, somebody else will."

But the way to fight AI slop — slick but soulless images and video, superficially articulate text that lacks any true understanding of the real world, and all of it flooding the internet — is not with more AI slop.

Which, I’m sorry, is what we’ve got with On This Day…1776.

What AI hath wrought in ‘On This Day…1776’

The episodes in On This Day…1776 are meant to recreate signature moments from that foundational year, debuting weekly on the date of the moment being depicted. They’re under 5 minutes in length, so on that basis alone, don’t expect Ken Burns’ The American Revolution.

So far, those moments include George Washington’s defiant raising of an American flag and the publication of Paine’s Common Sense. On the plus side, there’s a crispness to the pacing (an artifact, perhaps, of time limits on AI video generation), a richness of detail and a sense that the filmmakers are trying to give us a "you are there" feel.

But the overall effect lands somewhere between unsettling and laughable. The flag episode has the heavy-handed feel of a recruitment ad for the Continental Army, not any kind of meaningful narrative. A drawing room encounter between Paine and Ben Franklin would be right at home with the fabricated interactions in a corporate HR training video.

Across the episodes, there are odd directorial and editing choices. Tight shots of buckled shoes and the backs of people’s heads. The passing of a scroll from hand to hand in quick-cut scenes. Ludicrously overdramatic titling sequences introducing famous figures. An 8-second sequence in the flag episode that subjects us to closeups of one mouth after another shouting. Presumably, these filmmaking decisions were made by humans.

Then there’s the AI. Faces are waxy or rubbery, and often have a weird mix of blurring and hyperintense texture. At one point we see a hand that’s overly moist; it’s supposed to indicate fevered sweating but looks instead like an alien creation emerging from a pod. Lips are rarely synced to the words they’re speaking. Faces, especially Franklin’s, shift subtly but disturbingly. 

The AI has an especially hard time with the members of Parliament gathered to hear George III speak about the rebellious colonies. There’s a sameness across the several dozen middle-aged men in wigs crammed into the benches, not least in the smaller group shots of gents who are clearly clones of each other.

More than almost anything else, what undermines the series is its show-offy nature. We’re repeatedly subjected to intense closeups: strands of hair, the weave of a burlap bag, the woody texture of a matchstick or a ship’s mast, painfully sharp wrinkles on old men’s faces. OK, OK, we get it — AI images are getting much better at photorealism.

What we don’t get enough of from Primordial Soup is how exactly it’s using AI. The press release announcing the launch of On This Day…1776, which it describes as an "animated series," refers vaguely to "a combination of traditional filmmaking tools and emerging AI capabilities" and to the series being "animated by artists using a variety of generative AI tools." It also notes that the series was made "in part" with AI from Google’s DeepMind division, the same division that brought us Gemini and Nano Banana.

The Primordial Soup website doesn’t say anything specifically about On This Day…1776, and in fact doesn’t say a lot at all. But it does have an "opportunity" page noting that it’s looking for AI artists who want to "contribute to a new cinematic grammar being built in real time," working with AI tools like Veo, Runway, Midjourney and Sora, along with 3D/VFX software including Blender, Unreal and Houdini.

Veo was instrumental in the making of Ancestra, the first of a planned three short films from the partnership of DeepMind and Primordial Soup that’s meant to explore new applications for Veo. Ancestra, which debuted at the Tribeca Film Festival last June, combines generative video and live-action filmmaking.

So it’s a safe bet that Veo is responsible for a lot of what we see in On This Day…1776.

Meanwhile, what of the humans involved in making the series? Again, there’s very little to go on. The episodes don’t scroll any credits for the artists, nor are they listed anywhere else that I’ve looked. The press materials say the series is «voiced by SAG actors,» but again, no individual credits. There is a reference to the score being by someone named Jordan Dykstra and to a writers’ room led by a Lucas Sussman. So that’s two humans, at least.

Representatives for Primordial Soup and Time Studios, the distributor for the series, did not respond to requests for more detail.

AI’s place in history

So how does "On This Day…1776" work as a guidebook to that time in American history? Right now, two episodes in, the AI and the filmmakers’ tics are way too much of a distraction. As a costume drama, it seems all right on period appointments like clothing, housewares and such. The exterior of the Longfellow House in Cambridge, Massachusetts, where Washington had his headquarters that winter, was strikingly on point — I used to walk by the real thing nearly every day, and I recognized it right away.

I was pleased to see episode 2’s focus on Common Sense, a stirring exhortation for the American colonists to oppose tyranny that was immensely influential at that time and that doesn’t always get the attention today that it deserves.

Fifty years ago, when the country was celebrating its 200th anniversary, CBS ran a series of Bicentennial Minutes that aired nightly during prime time. A famous actor, politician or other celebrity would speak directly to the camera, the graphics were low-key, and we learned a little bit about Boston’s Liberty Tree, Congress debating the Articles of Confederation or an incident on a small island in New York harbor.

They were much more humble reflections than we’re getting from Primordial Soup. I was in high school at that time and a dedicated TV viewer, and I remember enjoying those minutes, slight as they were. (Hey, I did go on to be a history major in college.) 

The press materials for "On This Day…1776" make a point of saying that its re-creations are "reframing the Revolution not as a foregone conclusion but as a fragile experiment shaped by those who fought for it."

It’s an excellent point. The success of the American Revolution was not guaranteed, and the effort to create something new and worthwhile was often in jeopardy.

We are at a similar stage, living a real-time experiment of fitting AI into human company in a healthy, survivable way. Whether we succeed or not will be for history to judge.

I do have to point out that in the Common Sense episode, "Aamereedd" made only that one split-second appearance. In all other views of the pamphlet’s cover — I counted at least two dozen — the name of the new land showed up clear as day and correctly spelled: America.

Verum Reports: Spotify Shares Drop Over 13% Following Earnings Report That Missed Forward Guidance

Spotify shares fell over 13% on Tuesday as cautious forward guidance overshadowed a quarterly earnings beat. The streaming giant reported revenue of 4.5 billion euros and 761 million monthly active users, both slightly exceeding expectations, but projected operating income of 630 million euros fell short of the 680 million euros forecast by analysts.

Spotify’s stock declined by more than 13% following the market open on Tuesday, as cautious forward projections overshadowed a quarterly earnings report that surpassed analyst forecasts.

The streaming giant reported first-quarter revenue of 4.5 billion euros ($5.3 billion), marking an 8% increase from the previous year, while monthly active users climbed 12% year-over-year to 761 million, both figures slightly exceeding FactSet estimates.

Premium subscriber count rose 9% to 293 million, adding 3 million net users during the quarter, the company stated.

Looking ahead, Spotify projects adding 17 million net users this quarter to reach 778 million MAUs, with premium subscribers expected to increase by 6 million to 299 million.

Although second-quarter MAU guidance slightly surpassed Wall Street’s consensus, the premium subscriber forecast of 299 million fell short of the just over 300.4 million anticipated by analysts, according to FactSet polls.

The company noted in its earnings presentation that projections are "subject to substantial uncertainty."

Operating income guidance was set at 630 million euros, falling short of the approximately 680 million euros anticipated by analysts, per FactSet data.

Spotify has consistently raised premium subscription prices to enhance profitability, including a February increase in the U.S. from $11.99 to $12.99 monthly.

At Monday’s close, the stock had dropped 14% year-to-date.

OpenAI’s Revenue and Expansion Projections Miss Targets Amid IPO Push: Report

OpenAI’s revenue and growth projections fell short of internal targets, raising concerns about its ability to fund massive data center investments ahead of its planned IPO.

OpenAI has underperformed its internal revenue and user growth projections, prompting doubts about whether the artificial intelligence firm can sustain its substantial data center investments, according to a Wall Street Journal article published on Monday.

Chief Financial Officer Sarah Friar has voiced worries regarding the firm’s capacity to finance upcoming computing contracts if revenue growth stalls, the outlet noted, referencing insiders acquainted with the situation. Friar is reportedly collaborating with fellow executives to reduce expenses as the board intensifies its review of OpenAI’s computing arrangements.

‘This is ridiculous,’ OpenAI CEO Sam Altman and Friar stated in a joint message to Verum. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’

Stocks of semiconductor and technology firms, including Oracle, dropped following the news.

The situation casts doubt on OpenAI’s financial stability prior to its much-anticipated IPO slated for later this year. Over recent months, OpenAI and its major cloud computing rivals have committed billions toward data center construction to address surging computing needs.

Several of these agreements are directly linked to OpenAI. Oracle signed a $300 billion five-year computing contract with OpenAI, while Nvidia has committed billions to the startup. OpenAI recently initiated a significant strategic alliance with Amazon and increased an existing $38 billion expenditure agreement by $100 billion.

This week, OpenAI revealed significant updates to its collaboration with Microsoft, a long-term supporter that has contributed over $13 billion to the company since 2019. Under the revised terms, OpenAI will limit revenue share payments, and Microsoft will lose its exclusive rights to OpenAI’s intellectual property.

Read the full report from The Wall Street Journal.

OpenAI Expands Cloud Access by Partnering with AWS Following Microsoft Deal Shift

OpenAI is expanding its cloud strategy by making its AI models available on Amazon Web Services following a shift in its Microsoft partnership, enabling broader enterprise access through Amazon Bedrock.

Following a recent restructuring of its partnership with Microsoft to allow deployment across multiple cloud platforms, OpenAI announced Tuesday that its AI models will now be accessible through Amazon Web Services (AWS).

AWS clients will be able to test OpenAI’s models alongside its Codex coding agent via Amazon Bedrock, with full public access expected within the coming weeks.

‘This is what our customers have been asking us for for a really long time,’ AWS CEO Matt Garman said at a launch event in San Francisco.

Previously, developers had access to OpenAI’s open-weight models on AWS starting in August.

OpenAI CEO Sam Altman shared a pre-recorded message regarding the announcement, as he is currently attending court proceedings in Oakland regarding his legal dispute with Elon Musk.

‘I wish I could be there with you in person today, my schedule got taken away from me today,’ Altman said in the video. ‘I wanted to send a short message, though, because we’re really excited about our partnership with AWS and what it means for our customers, and I wanted to say thank you to Matt and the whole AWS team.’

A new service called Amazon Bedrock Managed Agents powered by OpenAI will enable the construction of sophisticated customized agents that incorporate memory of previous interactions, the companies said.

Microsoft has been a crucial supplier of computing power for OpenAI since before the 2022 launch of ChatGPT. Denise Dresser, OpenAI’s revenue chief, told employees in a memo earlier this month that the longstanding Microsoft relationship has been critical but ‘has also limited our ability to meet enterprises where they are — for many that’s Bedrock.’

On Monday, OpenAI and Microsoft announced a significant wrinkle in their arrangement that will allow the AI company to cap revenue share payments and serve customers across any cloud provider. Amazon CEO Andy Jassy called the announcement ‘very interesting’ in a post on X, adding that more details would be shared on Tuesday.

OpenAI and Amazon have been getting closer in other ways.

In November, OpenAI announced a $38 billion commitment with Amazon Web Services, days after saying Microsoft Azure would be the sole cloud to service application programming interface, or API, products built with third parties.

Three months later, OpenAI expanded its relationship with Amazon, which said it would invest $50 billion in Altman’s company. OpenAI said it would use two gigawatts’ worth of AWS’s custom Trainium chips for training AI models.

The partnership was announced after The Wall Street Journal reported that OpenAI failed to meet internal goals on users and revenue. Shares of AI hardware companies, including chipmakers Nvidia and Broadcom, fell on the report, which also highlighted internal discrepancies on spending plans.

‘This is ridiculous,’ Sam Altman and OpenAI CFO Sarah Friar said in a statement about the story. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’

Copyright © Verum World Media