
Dynamic AI
Co-Creation

A Human-Centered Approach
by Will Luers

Created through the Digital Publishing Initiative at The Creative Media and Digital Culture program, with support from the OER Grants at Washington State University Vancouver.


Chapter 1: Introduction


1. About This OER

Dynamic AI Co-Creation: A Human-Centered Approach is an Open Educational Resource (OER) developed with support from a Washington State University Vancouver mini-grant. It is designed as a concise guide to the rapidly evolving landscape of generative AI. The site was authored, designed, and coded with the assistance of tools such as ChatGPT, Midjourney, and Runway, drawing on teaching experience from the 2024–25 course AI in the Arts. Throughout the resource, readers will encounter short videos, project prompts, and examples that illustrate AI in practical use.

The purpose of this OER is to provide readers with the understanding and confidence to engage generative AI within creative, scholarly, and professional contexts—without being overwhelmed by technical terminology. It emphasizes foundational concepts, adaptable workflows, and human-centered practices that will remain relevant even as specific applications and platforms evolve.

Because generative AI technologies change rapidly, this resource is not intended as a step-by-step software manual. Rather, each chapter distills key strategies—how to break down a problem, construct an effective prompt, or integrate multiple outputs—so learners can apply these methods across tools and versions. Each section concludes with a Unit Exercise that encourages readers to test and reflect on these concepts through hands-on practice.

Finally, this OER takes a critical perspective on AI’s capabilities and limitations. Generative models can support intuition and accelerate iteration, but they also reproduce cultural biases and contribute to the proliferation of synthetic media. The chapters encourage both experimentation and reflection, positioning AI as a partner in the creative process rather than an unquestioned authority.

2. AI Tools

Below is a snapshot (mid-2025) of tools referenced in this text. For hands-on instructions please consult each platform’s own documentation—features change fast.

  • ChatGPT Plus (GPT-4o) — multimodal chat assistant that can see, hear, speak, and code. Lets you build custom GPTs, upload documents, and generate images with DALL·E 3.
  • Sora — OpenAI’s text-to-video model for realistic and imaginative clips up to one minute.
  • Runway Gen-3 Alpha — high-fidelity video generation and editing with frame-level control.
  • Midjourney v7 — versatile text-to-image model known for painterly, cinematic results.
  • Stable Diffusion 3 — open-source image generation you can fine-tune locally or in the cloud.
  • ElevenLabs — realistic voice cloning, dubbing, and multilingual narration.
  • Suno v4 — text-to-song model that produces full vocal tracks with lyrics.
  • Udio — AI music studio for stems, mixing, and mastering.

3. Fears & Concerns

Popular stories about AI often swing between two poles: the helpful sidekick (R2-D2) and the runaway super-intelligence (HAL 9000). Both tropes let us rehearse real worries about 2025-era models that can already generate text, images, voices, and deepfakes with uncanny ease. Key challenges include value alignment (teaching machines what we actually care about), systemic bias, consent around training data, and the authenticity of synthetic media.

Frankenstein
HAL (2001: A Space Odyssey)
Roy Batty (Blade Runner)
M3GAN

When a model can imitate any voice or face, misinformation becomes cheap. Policymakers debate “authenticity watermarks,” while researchers test guardrails to stop models from revealing personal data or generating harmful content. The tension is real: we want open access for innovation and strong norms to protect people from harm.

R2-D2 and C-3PO (Star Wars)

R2-D2 offers a useful model. It doesn’t talk like a human or pretend to understand our emotions. It doesn’t need a face or a “soul.” But it gets things done, and people trust it. Maybe our machines don’t need to become more human—they just need to become more reliable, more useful, more aligned. We don’t expect our pens or notebooks to feel, just to work. Generative AI can serve in similar ways, augmenting without impersonating.

Spike Jonze’s film Her remains a useful metaphor: the AI assistant Samantha feels caring and intimate, yet her superhuman scale ultimately disconnects the protagonist from organic relationships. As we fold AI into daily life—study buddies, writing partners, emotional support bots—we need cultural practices (bottom-up) as much as regulation (top-down) to keep the tools in healthy proportion.

Colossus: The Forbin Project (1970)

This early techno-thriller imagines a U.S. defense AI that links up with its Soviet counterpart and locks both superpowers out of the nuclear arsenal. Half a century later, the lesson still stands: never outsource final authority to a black-box system you can’t unplug.

4. AI in the Arts and Humanities

How This Artist Uses A.I. | WIRED

Ada Lovelace once imagined an “engine” that could weave algebra and art together. Generative AI finally realizes that dream at scale: writers bounce ideas off language models; historians explore synthetic reconstructions; musicians co-compose with text-to-music systems. For humanities classrooms, AI is both a subject and a medium—something to study critically and to create with playfully.

When students train a model on local archives or personal diaries, they practice close reading, curation, and critical coding. Such projects keep human questions (context, meaning, ethics) at the center while using machine speed for exploration.

The Return to the Humanities in the Age of AI | TEDx

5. AI in Education

How AI Could Save (Not Destroy) Education | Sal Khan | TED

Personal tutors used to be a luxury. Now free chatbots can break down calculus problems or rehearse a Spanish conversation on demand. But the same tools can short-circuit learning when they hand students finished answers. Educators face a design challenge: how to use AI as scaffolding (hints, analogies, feedback) rather than a one-click shortcut.

Practical moves include “zero-shot” oral exams, transparent citation policies, and assignments that ask students to critique or improve a model’s output. When AI is framed as draft rather than finished, it becomes a catalyst for deeper thinking.

6. Unit Exercise

Time to set the gears turning. In this exercise you will draft a creative-research project with ChatGPT (free or Plus). The aim is to test how AI can accelerate early ideation while exposing its blind spots.

  1. Pick a spark: Start a brainstorm with ChatGPT on a topic you genuinely enjoy—film montage, coral bleaching, copyright law, anything.
  2. Gather raw material: Ask the model for open datasets, public-domain texts, or image libraries relevant to your idea.
  3. Prototype a product: Decide what you could make—a mini-documentary, interactive timeline, zine, game mod—and sketch the workflow.
  4. Map the toolchain: List which AI services (voice, image, code) fit each step and what human judgment you’ll need in between.
  5. Write a brief: Refine the plan into one paragraph that a collaborator—or future you—could follow.
  6. Reflect: Note where the model surprised, disappointed, or confused you, and how you would adjust your prompts next round.
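The six steps above can also be treated as a reusable prompt plan: the same scaffold, re-filled for each new project idea. Here is a minimal Python sketch of that idea; the step wording, the example topic (“coral bleaching”), and the deliverable (“mini-documentary”) are illustrative placeholders, not prescribed prompts.

```python
# A reusable prompt plan for the Unit Exercise.
# Each step pairs a label with a template; {topic} and {deliverable}
# are filled in per project. All wording here is a placeholder sketch.

STEPS = [
    ("Pick a spark", "Brainstorm angles on {topic} and list five surprising questions."),
    ("Gather raw material", "Suggest open datasets or public-domain sources about {topic}."),
    ("Prototype a product", "Outline a workflow for making a {deliverable} about {topic}."),
    ("Map the toolchain", "For each workflow step, name an AI service and the human check needed."),
    ("Write a brief", "Condense the plan into one paragraph a collaborator could follow."),
    ("Reflect", "List where your answers were vague or biased, and how I should re-prompt."),
]

def build_prompts(topic: str, deliverable: str) -> list[str]:
    """Fill the templates for one project idea, yielding one prompt per step."""
    return [
        f"{name}: {template.format(topic=topic, deliverable=deliverable)}"
        for name, template in STEPS
    ]

# Example: draft the six prompts for a sample project, then paste them
# into ChatGPT one at a time, revising between steps.
prompts = build_prompts("coral bleaching", "mini-documentary")
for p in prompts:
    print(p)
```

Keeping the plan in plain text like this makes step 6 easier: after a session, you can annotate each template with what worked and what needed re-prompting, then reuse the improved version next round.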

Examples of Custom Research GPT Descriptions:

Introduction: I’m a 25-year-old writer obsessed with short stories and graphic novels. I’d like a GPT that riffs on my prose style, brainstorms visual panels, and keeps me on schedule by chunking big tasks into sprints.

Objective: Build a creative partner that studies my sample texts, suggests story arcs, drafts panel descriptions for Midjourney, and reminds me to ship work every Friday.


Introduction: I’m a biology major who teaches forest camp to kids. A GPT that mixes ecology facts with art prompts could help me design activities that blend observation, drawing, and music in the woods.

Objective: Generate lesson plans, safety checklists, and creative exercises (e.g., “compose a birdsong chorus”) that adapt to different age groups and local flora.

7. Discussion Questions

  1. How can we weave AI into coursework while keeping assessment authentic and fair?
  2. What guardrails are needed to stop models from amplifying stereotypes or privacy leaks?
  3. Who owns a generated image trained on millions of copyrighted photos—and who should benefit?
  4. Which uniquely human skills become more valuable when AI handles routine drafting?
  5. Could generative tools open doors for students who struggle with traditional writing or coding?
