DTC 338 · Creative Challenges in AI Art

Simulated Feelings / Real Body

An interactive field of simulated emotions and sensations.

Final project by Chelsea Cowsert · Fall 2025

Interactive Artwork

The flowing bands below are made only from numbers: grid coordinates, palette indices, and noise values that bend rectangles into rivers of color. When you choose an emotion, the system rearranges the flow into a different simulated mood, while the sentence beneath it describes how that same state feels inside my body.

Select an emotion to reshape the flow. The bands are simulated. The feeling is not.
How this feels in my body

Calm feels like my ribs widening on each inhale, then gently floating back into place as the breath leaves.

Reflection

Human emotions are among the most familiar yet mysterious aspects of our lives. They arrive in the body before they arrive in our minds or language: a tightening of the chest, the softening of a breath, the warming of a memory, the clouding of a thought, long before we can describe what is happening. In this project, I wanted to explore the contrast between lived emotional experience and the way machines simulate feeling through patterns, randomness, and mathematical structure. The result is an interactive p5.js sketch inspired by generative artists like Tyler Hobbs, a softly shifting grid of geometric tiles that “responds” to four emotional states: calm, anxious, nostalgic, and overwhelmed. But the system itself doesn’t feel anything. It only produces visual behavior that appears expressive because I designed the rules governing its motion, color, and variation.

This distinction between expression and simulation has been central in DTC 338. Machine systems can generate images, texts, and patterns that appear emotional, but they lack what gives emotion meaning: a body, memories, sensation, and a lived history. My own descriptions of each state, calm as slow breathing, anxiety as buzzing tension, nostalgia as warm ache, overwhelm as mental noise, all come from experiences I have physically felt. The machine, meanwhile, only manipulates numbers. This collaboration became a way to think about what happens when these two forms of “knowing” overlap inside a single artwork. The tension between human feeling and machine simulation also connects to other work we have explored in class. In my group project “Emotion,” we discuss how the Stanford ArtEmis project uses AI models trained to describe art emotionally. In the article, researchers trained an algorithm on over 440,000 human-written emotional responses to artworks, enabling it to predict how a viewer might feel upon viewing a painting (MuseumNext). But the AI’s emotional fluency comes entirely from human input: it learns patterns of expression without ever experiencing the sensations that inspired them, which mirrors my own project.

In “Emotion,” I also point to the work of artist Sougwen Chung, who collaborates with robotic arms trained on her own drawing gestures. While the robot can reproduce the physical traces of her lines, only Chung feels the intention behind each stroke (Chung). Across these examples, the ArtEmis research, Chung’s hybrid performances, and my own generative sketch all arrive at the same idea reiterated throughout DTC 338: machines can mirror the surface signatures of human emotion, but the source of that emotion remains firmly in the body.

In my final project, each emotion is defined by a set of numerical rules stored in an emotionSettings object. “Calm” has a low motion value, a small rotation range, and a palette of muted blues. “Anxious,” in contrast, uses a higher motion value, more rotation, and warmer, sharper colors. “Nostalgic” leans toward browns and soft orange tones with moderate motion, while “overwhelmed” uses the highest motion values and a palette of saturated purples and oranges. When the viewer chooses an emotion, the system does not access any emotional meaning; it just pulls a different group of parameters that changes how the grid behaves or “feels” mathematically. These rules affect movement using Perlin noise, rotation angles, scale jitter, and palette selection. The interesting part is that despite the machine’s lack of actual experience, the visuals can still feel expressive. The calm grid simulates a gentle kind of drift, like a quiet inhale and exhale. The anxious grid has more agitation in its motion, with shapes that tilt and shudder slightly. The nostalgic grid has a warm tone, like recalling old photographs or the color grading of older films. Lastly, the overwhelmed grid moves the most, creating a crowded, energetic feel and visual noise. The system isn’t feeling any of these things; my embodied experience is what guided those rules, with ChatGPT helping me turn it into parameters. The machine simulates these emotions only because I translated sensation into a structure it could follow.
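
To make that concrete, here is a minimal, hypothetical sketch of what an emotionSettings object like this can look like in p5.js. The property names (motion, rotation, jitter, palette) and the specific numbers and hex colors are illustrative placeholders, not the exact values in my sketch.

```javascript
// Hypothetical sketch of an emotionSettings-style object.
// Property names and values are illustrative, not the exact code of the piece.
const emotionSettings = {
  calm: {
    motion: 0.002,   // slow drift through the noise field
    rotation: 0.05,  // small tilt range (radians)
    jitter: 0.02,    // gentle scale variation
    palette: ['#aec6cf', '#7f9aa8', '#d9e4e8'] // muted blues
  },
  anxious: {
    motion: 0.02,
    rotation: 0.6,
    jitter: 0.15,
    palette: ['#e25822', '#f4a259', '#c0392b'] // warmer, sharper colors
  },
  nostalgic: {
    motion: 0.006,
    rotation: 0.2,
    jitter: 0.05,
    palette: ['#8d6e63', '#d7a86e', '#f2d0a4'] // browns and soft oranges
  },
  overwhelmed: {
    motion: 0.035,
    rotation: 1.0,
    jitter: 0.25,
    palette: ['#7b2cbf', '#ff7f11', '#c77dff'] // saturated purples and oranges
  }
};

// The active settings swap when the viewer selects an emotion.
let current = emotionSettings.calm;
```

Choosing an emotion simply reassigns current; nothing about the grid “knows” what calm or anxiety means.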

This is where embodiment enters the work. In class, we discussed how AI systems operate without bodies, how their “outputs” are patterns rather than felt perceptions. Embodiment is what grounds human meaning. Emotion arises from muscles, breath, pulse, sensory input, and memories stored in specific physical contexts. When I describe anxiety as “buzzing under my skin,” that is a physiological truth. When p5.js generates movement using noise functions, that is an approximation that mimics the visual impression of tension without any actual cause. In the end, what this project made clear to me is that meaning doesn’t come from the machine at all; it comes from us. The computer can shift tiles, adjust colors, or generate movement based on noise and parameters, but it has no sense of what those changes represent. Emotional resonance happens only when a human brings their own embodied experiences to the visuals and makes meaning from them. That gap between simulation and feeling isn’t a weakness of generative art. It’s the space where interpretation, memory, and sensation live. For me, that is what makes this kind of human–machine collaboration interesting. The visuals may be produced by code, but the emotions come from the human body.

The process of creating the piece was itself a hybrid collaboration with ChatGPT. I used ChatGPT to brainstorm visual metaphors, sketch code structures, and refine movement behavior until it looked neither chaotic nor clinical. But the emotional grounding, which mattered most, came from me. I asked myself: What does anxiety feel like in my body? What colors represent nostalgia to viewers? How does calm move? These decisions were not automated. They came from introspection, memory, and experience.

Writing the code was also an act of translating subjective sensation into parameters. Calm needed slower noise movement, so I lowered the motion variable and reduced rotation jitter. Anxiety needed more angularity and agitation, so I increased the range of the noise seeds. Nostalgia needed warmth and softness, so I chose earth-tone palettes and moderate motion. Overwhelm required a feeling of too-much-ness, so I increased motion while keeping the background gentle enough that it didn’t become visually stressful. Ultimately, this project is about the tension between appearing emotional and being emotional. The machine’s version of sadness, joy, or calm will always be an approximation built from training data, rules, or noise. It can simulate the shape of a feeling but not its source. And yet, when I interact with my own system, I still resonate with the emotional visuals. This suggests something important about the future of storytelling and emotional media in AI contexts. Machines may never feel, but they will increasingly participate in creating the visual and narrative forms through which humans express feeling. The meaning will continue to come from us and from our bodies, memories, and lived experiences, even as the medium increasingly becomes machine-generated. My project sits right inside that boundary: a digital structure animated by noise, but designed and interpreted through human emotion.
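
As a rough illustration of that translation, the loop below shows how a single noise value per tile, animated by the motion parameter, can drive tilt, scale, and color at once. It builds on the hypothetical emotionSettings object sketched above; the grid size, mappings, and canvas dimensions are stand-ins rather than the finished piece.

```javascript
// Simplified p5.js draw loop, building on the emotionSettings sketch above.
// The constants and mappings are illustrative stand-ins.
function setup() {
  createCanvas(600, 400);
  rectMode(CENTER);
  noStroke();
}

function draw() {
  background(245);
  const s = current;   // active emotion parameters
  const cell = 40;     // tile size in pixels
  for (let x = 0; x < width; x += cell) {
    for (let y = 0; y < height; y += cell) {
      // One Perlin noise value per tile, animated over time by s.motion
      const n = noise(x * 0.01, y * 0.01, frameCount * s.motion);
      const angle = map(n, 0, 1, -s.rotation, s.rotation); // tilt
      const grow = 1 + map(n, 0, 1, -s.jitter, s.jitter);  // scale jitter
      push();
      translate(x + cell / 2, y + cell / 2);
      rotate(angle);
      fill(s.palette[floor(n * s.palette.length) % s.palette.length]);
      rect(0, 0, cell * grow, cell * grow);
      pop();
    }
  }
}
```

Raising s.motion makes the field shudder faster, and widening s.rotation and s.jitter makes the tiles tilt and swell more, which is all “anxiety” ever is to the machine: bigger numbers.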

Documentation Sources & Tools

  • HTML5 & CSS for layout and page structure.
  • p5.js for generative drawing, noise-based motion, and animation.
  • Sublime Text for editing and small iterative design tweaks.
  • ChatGPT for brainstorming code, palettes, and early text drafts.
  • Course readings and discussions from DTC 338.

  • https://chatgpt.com/share/692f6b50-4764-8012-a428-07ef3e64b864
  • Chung, Sougwen. D.O.U.G._2 (Memory).
  • “How AI Has Learned Human Emotions from Art.” MuseumNext, 2023.