How Much of Our Humanity Are We Willing to Outsource to AI?

Tech companies claim artificial general intelligence systems will propel our society forward. But the cost to our humanity may not be worth the risk.


(Photo: CFOTO / Future Publishing via Getty Images)

OpenAI’s latest system, Sora, creates videos from text prompts. You write a prompt and Sora does the rest. The results are highly photorealistic videos and endearing animations, from mammoths sauntering through a snowy meadow to a bustling market in Lagos, Nigeria, in the year 2056. Nothing seems too wild for Sora’s imagination.

Sora is not yet available to the wider public because OpenAI is first letting “red teamers” assess the product for its potential for abuse. OpenAI has also asked creative professionals for feedback on how Sora can benefit them, framing Sora as a tool that could enhance their output.

At the same time, however, technologies like Sora may render many creatives irrelevant. Just last month, famed director and producer Tyler Perry halted an $800 million studio expansion after seeing what Sora is capable of. The technology would eliminate the need for him to build sets or travel to film on location. “I can sit in an office and do this with a computer, which is shocking to me,” he told The Hollywood Reporter.

It should come as no surprise that advances in AI have sparked debates about the future of work as well as concerns about misinformation, bias, and copyright infringement. Responding to the backlash, tech companies have (virtue-)signaled consideration for these issues by stressing the importance of Ethical AI, Responsible AI, or Trustworthy AI and developing guidelines toward these noble goals. Indeed, OpenAI’s website includes the disclaimer that despite extensive testing, it cannot predict all possible benefits and harms of its technology, and that (ironically) it is critical to release (potentially harmful) AI into the real world to increase AI’s safety over time.

Policymakers are also actively discussing how to regulate AI in order to prevent, or at least limit, the aforementioned problems. In fact, earlier this month the EU passed the landmark AI Act, which includes bans and restrictions related to biometric identification systems, manipulative uses of AI, and artificial general intelligence (AGI, or AI with human-level or greater intelligence and the ability to self-teach).

While the widespread attention to AI ethics is to be applauded, it draws attention away from a deeper issue. Focusing on ways to regulate and improve AI confirms a techno-deterministic narrative, one that does not question the overall desirability of technological advancement and assumes that AI and AGI are inevitable. However, we as a society must question whether generative AI systems like Sora really bring about progress and whether these systems should be welcomed in the first place.


OpenAI claims “to ensure AGI benefits all of humanity,” but in reality it threatens the very concept of humanity. We are beyond AI just beating us at chess and Go, optimizing pricing, and recognizing faces. Although these previous milestones were impressive, shocking, and maybe even demeaning to those with a strong sense of human superiority, that technology “merely” automated routine and rule-based tasks. Now, as our preemptive concerns about superintelligence have slowly dampened, AI has begun to master a realm many long thought was uniquely human—the realm of creativity and imagination.

Creativity and imagination are related but different phenomena. To create or be creative means to materialize a vision or idea into a painting, a movie, a song, and so on. To imagine goes a step further. Imagination requires the ability to develop that vision in the first place—to see what has not yet been manifested. Sora fills the gaps in our prompts. The model not only understands prompts (a point OpenAI repeatedly emphasizes on Sora’s site), but also imagines what the scenes described could and should look like. Yes, that imagination is based on training data, but so too is ours.

For humans, imagination takes time and space to develop. But in our capitalist society, which determines our value based on productivity and volume, artists are told that AI will help them because it will allow them to create more content and work faster. These promises of productivity make it easier for people to overlook the fact that AGI-driven intelligent systems are on track to have capacities equal to or greater than humans’. AI’s imagination will likely become more grandiose in years to come.

We should not let the promise of productivity or narrow debates about AI’s ethical implications distract us from the bigger picture. Under the guise of improving humanity by increasing productivity, we risk releasing our ultimate replacement.

We should not overestimate the durability of human skills in the face of technological advancements. Calculators decreased the need for mental math. GPS has limited independent exploration and map navigation. While some might be glad to let these skills go, there must be a limit to the qualities we are willing to outsource.

Given AI’s current skills and capacity for imagination, it seems plausible that AI will not propel humanity forward, as OpenAI claims, but threaten humanity instead, by de-skilling us and rendering our unique features superfluous. That is why it’s past time that we discuss not only ethical implications like copyright breaches and algorithmic biases, but also what AI’s ability to imagine means for humanity. What does it mean to be human when our distinctive and characteristic skills and features are no longer uniquely ours?

It’s also crucial that we not allow ourselves to be misled by narratives about how ChatGPT and Sora will save humanity by making us more productive in work and everyday life, when they will likely primarily boost earnings for the one percent. Ultimately, as AI continues to develop, we may be left skill-less, uninspired, and dependent on our replacement. At that point, it will be too late to ask ourselves if building AI was worth the loss of our humanity.


Sage Cammers-Goodwin

Dr. Sage Cammers-Goodwin is a philosophy of technology researcher at the University of Twente. Her interests and expertise span corporate social responsibility, smart cities, and emerging technologies, including artificial intelligence. Her publications appear with a range of publishers, including Oxford University Press.

Rosalie Waelen

Dr. Rosalie Waelen is currently working at the Sustainable AI Lab, which is a part of the Institute for Science and Ethics at the University of Bonn, Germany. Rosalie has previously published on the importance of critical theory perspectives in the AI debate and on the ethical and societal issues related to computer vision applications.
