DTC 338 | Fall 2025

Creative Challenges in AI Art

Emotion-Evoking Media

By Chelsea Cowsert, Rachel Karls, Tucker Christensen, and Callum Robinson

D.O.U.G._1
Figure 1: In D.O.U.G._1 – Mimicry (2015), the robotic arm watches the artist’s line and immediately echoes the gesture in real time. The machine imitates the human hand but lacks the lived impulse behind the mark. The resulting drawing becomes a duet of motion, highlighting how simulation can mirror form but not necessarily the feeling that generates it.

Artificial intelligence can now generate images, narratives, and performances with remarkable aesthetic fluency, but it still lacks the most fundamental ingredient of emotional storytelling: human experience. Unlike human creators, machines have no bodies, memories, sensations, relationships, personal histories, or inner consciousness to draw from. They can analyze and reproduce patterns of emotional expression, but the feeling itself never originates within them.

This gap becomes most apparent in emerging forms of human-machine collaboration, where a machine’s output may appear expressive on the surface but remains rooted in simulation rather than in the experience of art-making. When an AI system produces a “sad” image or a “nostalgic” echo in a poem, the emotional effect stems from aesthetic pattern-matching rather than from a lived moment of loss, joy, longing, or fear. This raises an essential question for the future of creative work: If emotion in art comes from lived experience, what are the limits of machine-made emotion?

For most of human history, art has served as a vessel for lived experience. From Paleolithic cave paintings to Renaissance altarpieces to contemporary performance art, emotional expression has been inseparable from the human body and psyche. Art has always emerged from a specific context: a memory, a trauma, a joy, a cultural ritual, a political event, a deeply personal relationship with the world. Even the most abstract artistic movements, such as Expressionism or Abstract Expressionism, were built on the premise that internal human states could be externalized through gesture, color, form, and texture.

This long artistic lineage underscores a simple but profound truth: emotion in art has always been tied to embodied subjectivity. A painting or poem does not feel on our behalf; instead, we feel because we recognize something of ourselves in it. This recognition is grounded in shared human experience: the same nervous system, the same social instincts, and the same emotional development shaped by memory and culture.

Echoes of the Earth
“Echoes of the Earth” by Refik Anadol is an immersive AI-driven data sculpture that transforms vast collections of ecological and environmental information into a living, ever-shifting visual experience. Using machine learning models trained on datasets of landscapes, weather patterns, and natural phenomena, Anadol allows the system to “dream” new forms inspired by the planet’s textures and rhythms. The result is a hypnotic flow of color and motion that feels both organic and otherworldly, blurring the line between nature and machine imagination. The work invites viewers to consider how technology can deepen our connection to the natural world by revealing patterns and “memories” of the Earth that exist beyond what the human eye can perceive.

AI disrupts this lineage not because it creates “new” art, but because it creates art without experience. It produces outputs that resemble emotional expression, but without the internal states that historically generated emotional meaning. That tension is at the heart of contemporary debates about authenticity, authorship, and emotional depth in machine-made media.

The central ideological divide in AI-generated art concerns the difference between emotional simulation and emotional experience. A machine can process millions of images tagged with “sadness,” learn their statistical regularities (dark palettes, drooping forms, slowed tempos), and generate something aesthetically similar. However, the machine does not comprehend sadness as a genuine feeling; it only understands sadness as a pattern.

This pattern-learning is precisely what the Stanford ArtEmis project demonstrates. As described in the MuseumNext article “How AI Has Learned Human Emotions from Art,” researchers trained AI on over 440,000 human descriptions of emotional responses to artworks. The algorithm became remarkably capable of predicting the emotions a viewer might feel when looking at a specific painting, even generating explanations that sound convincingly human.
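To make the mechanics concrete, here is a rough sketch, in PyTorch with random stand-in data, of the general shape of such a system: a pretrained image encoder is given a new output head and trained to predict the emotion a viewer reports. Nothing below is the ArtEmis team’s actual code; only the nine-label emotion set is taken from the project.

```python
# Illustrative sketch (not the Stanford team's code) of an ArtEmis-style
# setup: a pretrained image encoder with a new head trained to predict
# which emotion a viewer reports feeling.
import torch
import torch.nn as nn
from torchvision import models

# ArtEmis annotates artworks with eight emotions plus a catch-all label.
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness", "something else"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))  # new emotion head

# For simplicity, train only the new head (a "linear probe").
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: random tensors in place of real (painting, label) pairs.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(EMOTIONS), (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # penalize wrong emotion guesses
loss.backward()
optimizer.step()

# Inference: one probability per emotion label. The output is a
# statistical pattern-match over pixels, not a felt response.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images[:1]), dim=1)[0]
print({e: round(p.item(), 3) for e, p in zip(EMOTIONS, probs)})
```

Whatever the architecture, the output is a probability distribution over emotion words, learned from other people’s reactions; the system never has a reaction of its own.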

In Swarming Emotional Pianos, Erin Gee uses biosensors to capture a performer’s heartbeat, breath, and sweat, translating those signals into the movements and tones of robotic ‘pianos.’ By converting emotion into data-driven sound, the work exposes both the expressive potential and the limitations of AI, showing how machines can perform the traces of feeling without ever experiencing emotion themselves.
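The underlying move, turning physiological signals into musical parameters, can be sketched in a few lines. The mapping below is a hypothetical illustration, not Gee’s pipeline; the clamping ranges and the choice of MIDI pitch are invented for the example.

```python
# Hypothetical biosignal-to-sound mapping in the spirit of Swarming
# Emotional Pianos (not Erin Gee's actual system). Heart rate drives
# tempo; skin conductance (a rough arousal proxy) drives pitch.

def heart_rate_to_tempo(bpm: float) -> float:
    """Map heart rate (beats/min) directly onto musical tempo."""
    return max(40.0, min(200.0, bpm))  # clamp to a playable range

def arousal_to_pitch(skin_conductance: float, lo: int = 48, hi: int = 84) -> int:
    """Map normalized skin conductance (0..1) to a MIDI note (C3..C6)."""
    s = max(0.0, min(1.0, skin_conductance))
    return round(lo + s * (hi - lo))

# Simulated sensor frames: the performer calms down over time.
readings = [(112, 0.90), (98, 0.60), (85, 0.35), (72, 0.10)]
for bpm, sc in readings:
    print(f"tempo={heart_rate_to_tempo(bpm):.0f} BPM, "
          f"midi_note={arousal_to_pitch(sc)}")
```

The sketch exposes the asymmetry the work trades on: the numbers faithfully track the performer’s body, but the mapping itself is arbitrary, chosen by a human who already knows what calm or fear should sound like.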

Yet this emotional intelligence is synthetic. The machine does not arrive at these explanations through introspection, memory, or empathy, but through a probabilistic reconstruction of human emotional language. As the article notes, the result is powerful yet uncanny: AI can articulate emotional meaning without ever having experienced it.

This distinction between feeling and imitating the language of feeling reveals the ideological core of our challenge. If machines lack embodied experience and emotional consciousness, they cannot originate emotional content. They can only reproduce its surface-level signatures.

Artist Sougwen Chung’s work makes this distinction visible. Chung collaborates with robotic arms trained on her own drawing gestures, creating hybrid performances where human motion meets machine precision. The emotional resonance of her work emerges not from the machine’s “intent,” but from the dynamic, relational space between artist and algorithm.

D.O.U.G._2
Figure 2: In D.O.U.G._2 – Memory (2017), the robot is not just mimicking; it draws from a learned bank of the artist’s gestures. The system ‘remembers’ style and returns interpreted marks. The human-machine line becomes layered: lived experience → archived data → robotic output. But the gap remains: the robot recalls style, not sensation.

The robot does not feel the line, but Chung does. The emotional depth in her pieces stems from her breath, movement, memories of drawing, and the physical tension of performing alongside a system that mirrors her. Chung’s practice demonstrates that machines can extend human expression, but they cannot originate its emotional core.

The Substitute
“The Substitute” by Alexandra Daisy Ginsberg is a video installation that digitally resurrects the extinct northern white rhino, using AI-driven behavior and evolving CGI to question how we value real versus artificial life. As the life-size virtual rhino gradually becomes more realistic, learning to move, vocalize, and eventually locking eyes with the viewer, the piece highlights the paradox of humans investing in technologies to recreate lost species while failing to protect them in the real world. It ultimately asks whether a digital or engineered substitute can ever replace the living animal we allowed to disappear.

Daisy Ginsberg’s installations take a different approach, using AI to explore themes of ecology, extinction, and environmental grief. Works like Machine Auguries and Pollinator Pathmaker use generative systems to model futures shaped by human environmental impact.

"The Substitute" Video

But the emotion in her work (nostalgia for vanished ecosystems, anxiety about the future, mourning for lost species) is entirely human. The AI models do not feel loss; they simply simulate ecological outcomes. Ginsberg’s art relies on human emotional projection, inviting viewers to confront their own fears and hopes. The machine becomes a tool for staging emotional questions, not a source of emotional content.

Refik Anadol’s large-scale immersive works push machine-generated aesthetics to their sensory limits. Installations like Machine Hallucination or Unsupervised overwhelm viewers with continuously transforming patterns derived from massive datasets. The result is moving, even awe-inducing.

Machine Hallucination
“Machine Hallucination” is one of Refik Anadol’s landmark explorations into how artificial intelligence can visualize collective memory. Using millions of images, often drawn from archives of cities, museums, or natural environments, Anadol trains neural networks to “dream” new forms based on what they have learned. The resulting visuals unfold as fluid, shifting landscapes that feel both familiar and entirely invented, as if the machine is imagining alternate versions of reality. In this work, architecture, time, and memory blend into a continuous stream of motion, inviting viewers to witness how a machine interprets the world when freed from human logic. “Machine Hallucination” asks us to consider what creativity, perception, and imagination might look like through the eyes of an AI.
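For readers curious about the mechanics, the “dreaming” effect in work like this typically comes from walking through a generative model’s latent space: nearby latent vectors decode to similar images, so a smooth path through that space yields smoothly morphing visuals. The numpy sketch below shows one such walk under that assumption; the decoder (`generator`) is deliberately left hypothetical, and none of this is Anadol’s actual system.

```python
# Schematic latent-space walk behind "dreaming" visuals (illustrative only).
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two latent vectors.

    Preferred over linear blending for Gaussian latents because the
    intermediate points stay at a plausible distance from the origin.
    """
    cos_omega = np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                        # vectors nearly parallel
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)           # latent code for one learned "memory"
z_b = rng.standard_normal(512)           # latent code for another

# Each step would be rendered by a trained generative model, e.g.
# frame = generator(z) for a GAN or diffusion decoder (hypothetical here).
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 60)]
print(len(frames), frames[0].shape)      # 60 latent steps -> 60 video frames
```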

But here again, the emotion resides in the viewer, not the system. The machine produces spectacle (swirling patterns, vibrant colors, algorithmic motion) but not emotional intention. What we feel in front of Anadol’s work is our own psychological response to scale, sound, rhythm, and novelty. The machine amplifies sensation, but not subjective sentiment.

Together, these examples support a clear claim: AI can generate emotional effects, but not emotional sources. Its creative output may provoke awe, melancholy, or wonder, but these emotional responses arise from human interpretation, human memory, and human embodied experience.

AI art is influential not because machines feel, but because humans continue to bring feeling to the encounter.

As AI becomes more integrated into creative practice, the question is not whether machines will replace human emotional expression (they cannot), but how humans will navigate a world where simulated emotion increasingly resembles the real thing. The future of artistic storytelling may depend less on what machines can generate and more on how we choose to interpret, collaborate with, and critically question the emotional simulations they produce.

Authors & Sources

  • Authors: Chelsea Cowsert, Rachel Karls, Tucker Christensen, and Callum Robinson
  • Tools Used: ChatGPT
  • AI Contribution: Text drafted collaboratively with AI and edited by the authors.
  • Prompt Log: https://chatgpt.com/share/691bdfcf-2510-8012-aaba-afc72ab37ee5
  • Sources & References:
    • Sougwen Chung, D.O.U.G._1 – Mimicry (2015)
    • Sougwen Chung, D.O.U.G._2 – Memory (2017)
    • Refik Anadol, Echoes of the Earth (2024)
    • Refik Anadol, Machine Hallucination (2020)
    • Erin Gee, Swarming Emotional Pianos (2014)
    • Alexandra Daisy Ginsberg, The Substitute (2019)
    • MuseumNext, “How AI Has Learned Human Emotions from Art”