
Dynamic AI
Co-Creation

A Human-Centered Approach
by Will Luers

Created through the Digital Publishing Initiative at the Creative Media and Digital Culture program, with support from the OER Grants at Washington State University Vancouver.


Chapter 4: AI Images

1. Image Generation

AI image generation tools like Midjourney, DALL·E, and Stable Diffusion have transformed how we create and imagine images. These systems generate pictures from text prompts, drawing on what they’ve learned from large collections of online images—including public domain art, open datasets, and copyrighted works. When you type something like "a Van Gogh landscape painting," the AI produces a new image in that style, based on its understanding of Van Gogh’s work. It doesn’t copy a specific painting, but it mimics the patterns, colors, and brushstrokes.

This kind of style imitation can help us explore how AI works, but it may not always lead to meaningful art. A better use might be to remix references—asking, for example, “What would a Van Gogh sculpture look like?” That kind of question pushes the technology into creative, unfamiliar territory. AI image generation is more than imitation—it’s a tool for visual thinking, experimentation, and surprise.

These tools are also incredibly useful in professional fields. Interior designers and architects can quickly visualize styles, color palettes, and room layouts before starting a project. Clients get to see possibilities and give feedback early, making collaboration easier. In film, AI helps directors visualize costumes, lighting, and locations during pre-production, allowing them to try different looks before committing time and money.

Educators can use AI-generated visuals to make abstract ideas more engaging. Scientists might create new kinds of diagrams, visualizations, or artistic representations of their research. And artists can explore wild new ideas—experimenting with style, symbolism, and visual form in ways that traditional tools can’t easily support.

But AI image generation also raises serious legal and ethical questions. Who owns an image created by a machine? If it mimics the style of a real artist, is that fair use or is it copying? Some argue that these systems transform existing work enough to be legal. Others believe AI companies should license any copyrighted materials used in training. And what about images with no human author at all—can they even be copyrighted?

There are also risks of misuse. AI can be used to generate fake or misleading images—so-called “deepfakes”—that blur the line between real and fake. As these technologies evolve, new laws and guidelines are needed. At the same time, there’s incredible value in using AI creatively and ethically. It’s a tool that can extend human imagination, not replace it.

30X40 Design Workshop | Using AI as a Design Tool in My Architecture Practice
Figma | AI and the future of design: Designing with AI
Script to Storyboard AI

2. GAN & Diffusion Technology

IBM Cloud: What are GANs (Generative Adversarial Networks)?
AssemblyAI: Diffusion models explained in 4-difficulty levels

Generative Adversarial Networks (GANs)

Generative Adversarial Networks, or GANs, are a type of AI used to create new images. They work by combining two systems: a generator, which creates images, and a discriminator, which judges them. You can think of the generator as an artist and the discriminator as a critic. The artist tries to create realistic images, while the critic tries to spot what’s fake. Over time, the generator improves its skill by learning from this feedback.
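To make this artist-and-critic loop concrete, here is a minimal training-step sketch in PyTorch. The tiny networks, image size, and learning rates are illustrative placeholders, not a production GAN:

```python
import torch
import torch.nn as nn

# Illustrative toy networks: the generator (the "artist") maps a latent
# vector to a flattened image; the discriminator (the "critic") scores
# how real an image looks, from 0 (fake) to 1 (real).
latent_dim, image_dim = 100, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """One round of the artist-vs-critic game (real_images: batch x image_dim)."""
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)      # random points in latent space
    fake_images = G(z)

    # Critic's turn: learn to score real images as 1, generated images as 0.
    opt_D.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_D.step()

    # Artist's turn: adjust the generator so the critic scores its fakes as real.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

Run over many batches of real images, this feedback loop is exactly the game described above: the generator improves because the critic keeps catching its mistakes.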

Latent space

GANs use something called latent space—an invisible map of ideas. Each spot in this space represents a different kind of image. When the AI picks a location (called a latent vector), the generator turns that point into an image. Nearby points in latent space produce similar images, while distant points make very different ones.
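To see "nearby points produce similar images" in action, this small sketch (reusing the toy generator G from above) walks a straight line between two random latent vectors; decoding each point along the way produces a smooth morph from one image to another:

```python
import torch

@torch.no_grad()
def interpolate_latents(G, steps=8, latent_dim=100):
    """Decode evenly spaced points on a line through latent space."""
    z_start = torch.randn(1, latent_dim)   # one spot on the "map of ideas"
    z_end = torch.randn(1, latent_dim)     # another, likely very different spot
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_start + t * z_end  # a point partway between the two
        frames.append(G(z))                # the generator turns the point into an image
    return frames
```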

As the GAN trains, the generator learns to make better, more realistic images. But GANs aren’t perfect. They can run into problems like mode collapse, where the AI keeps generating the same type of image, or training instability, where learning gets stuck. Still, GANs opened the door to the idea of machines having a kind of “synthetic imagination”—generating new visuals based on what they’ve learned.

"a tree in the city"

Diffusion Models

Diffusion models are another major approach to AI image generation, and they’ve become more popular than GANs. These models work very differently. Instead of learning to trick a critic, diffusion models start with random noise—like TV static—and gradually turn it into a detailed image through a process called denoising.

This is like sculpting from chaos: the model learns how to remove bits of noise step by step, until a clear image appears. It’s based on how particles spread (or “diffuse”) in physics, but done in reverse. Because of this step-by-step method, diffusion models are very stable during training and produce high-quality, photorealistic images.
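The step-by-step method can be written down directly. The sketch below follows the sampling loop of the original DDPM paper in simplified form; `model` stands in for a trained noise-prediction network, which this sketch assumes rather than defines:

```python
import torch

@torch.no_grad()
def sample(model, steps=1000, shape=(1, 3, 64, 64)):
    """Reverse diffusion: start from static, remove a little noise each step."""
    betas = torch.linspace(1e-4, 0.02, steps)   # how much noise each step added
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                       # pure noise, like TV static
    for t in reversed(range(steps)):
        eps = model(x, t)                        # network predicts the noise in x
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])        # peel that noise away
        if t > 0:
            x += torch.sqrt(betas[t]) * torch.randn(shape)  # keep sampling stochastic
    return x                                     # a clear image emerges
```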

Popular Apps That Use Diffusion

  1. DALL·E 2: Generates imaginative, diverse images from simple text prompts.
  2. Stable Diffusion: A flexible, open-source model popular with artists and developers.
  3. Midjourney: Known for its unique, stylized outputs; based on diffusion-like methods.
  4. Runway ML: Offers tools for generating both images and video using diffusion models.

GANs vs. Diffusion: What’s Next?

Both GANs and diffusion models have their strengths. GANs are fast and good for real-time applications. Diffusion models are slower but more reliable and capable of stunning detail. In recent years, diffusion has become the go-to technique for high-end creative work because of its stability and flexibility.

Looking ahead, we may see hybrid models that combine the strengths of both systems. Diffusion models are also now being used in video generation, where their frame-by-frame refinement is ideal for motion and texture. As these tools continue to evolve, they’ll give artists and designers more powerful ways to create, imagine, and experiment with visual ideas.

3. Creative Strategies

Machine Art

Edmond de Belamy by Obvious art collective

The first AI artwork to be sold at auction, Edmond de Belamy, marked a groundbreaking moment in the art world, showcasing the novel capabilities of machine-generated creativity. Created by the Paris-based art collective Obvious, this portrait was not painted by a human hand but by a machine. Utilizing a Generative Adversarial Network (GAN), the AI was trained on a dataset of 15,000 portraits spanning six centuries. The resulting piece, characterized by its hauntingly abstract features, reflects the essence of classic portraiture while simultaneously challenging traditional notions of artistry. When Edmond de Belamy was auctioned at Christie’s in 2018, it fetched an astonishing $432,500, far exceeding initial estimates. This sale highlighted the profound potential of AI in art, demonstrating that, with the right data and algorithms, machines can produce works that are both innovative and evocative, without direct human intervention beyond their initial programming.

While AI tools like GANs and diffusion models can generate images on their own, many artists prefer to use AI as a creative partner—not just a machine that makes art for them. Here are some of the key ways artists use AI to support and expand their own creative process:

  • Image-to-Image Translation: Start with a sketch, painting, or photo. The AI transforms it into a more detailed version while keeping the original idea intact. For example, an artist might draw a rough sketch of a futuristic city. AI can fill in the details—textures, lighting, colors—offering different stylistic versions to explore. This lets the artist stay in control while expanding possibilities (a code sketch of this workflow follows the list).
  • Textual Inversion: Artists can "teach" the AI their unique style by training it on personal images. This gives the AI a better sense of their aesthetic. A photographer, for instance, might upload a collection of moody, low-light portraits. The AI can then generate new photos that reflect that same mood, allowing the artist to remix their own visual language.
  • Crafted Prompts: Well-written prompts make a big difference. By describing a scene or mood in detail—like “foggy mountains at sunrise with glowing lanterns”—an artist can guide the AI to generate images that match a specific vibe. Filmmakers often use this technique for storyboarding, helping them visualize settings, characters, and emotions before filming.
  • Iterative Curation: Artists don’t just use the first image the AI generates. They pick their favorites, tweak them, and feed them back into the model to generate improved versions. For example, a designer might create a batch of logo ideas with AI, choose one, refine it, and run it through again. This cycle continues until the final design feels just right.
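As a concrete example of image-to-image translation, here is a minimal sketch using the open-source diffusers library. The checkpoint name, file names, and parameter values are illustrative; the strength setting is what keeps the artist's original drawing in charge:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an open Stable Diffusion checkpoint (illustrative choice; needs a GPU).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The artist's rough drawing sets the composition.
sketch = Image.open("futuristic_city_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="detailed futuristic city at dusk, neon signs, cinematic lighting",
    image=sketch,
    strength=0.6,         # lower = stay closer to the original sketch
    guidance_scale=7.5,   # how strongly the prompt steers the result
).images[0]
result.save("futuristic_city_v1.png")
```

Iterative curation then becomes a loop: keep the versions that work, adjust the prompt or the strength, and feed the best result back in as the new starting image.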

In all these cases, the human artist is in the driver’s seat. AI is a powerful assistant, but it’s the artist who provides direction, selects what works, and makes the final creative choices. This kind of collaboration turns AI into a creative companion—one that supports exploration, ideation, and refinement while amplifying the artist’s unique voice.

How This Guy Uses A.I. to Create Art | Obsessed | WIRED

4. Generative Imaging Tools

AI-powered image creation has exploded in recent years, giving rise to a wide array of tools that support artists, designers, educators, and creators in transforming ideas into visual outputs. These platforms use different AI models—mostly based on diffusion techniques—and offer varying degrees of control, quality, and customization.

Text-to-Image Generators

  • Stable Diffusion – Open-source and highly customizable, used in countless creative projects for generating realistic and artistic images from prompts (a minimal scripting example follows this list).
  • DALL·E (OpenAI) – Known for creative, coherent results and integration with ChatGPT, enabling in-chat image generation and editing.
  • Midjourney – Popular among artists for its rich, stylized aesthetic and surreal imagery. Operates via Discord and prioritizes visual storytelling.
  • RunwayML – A creative suite combining AI video, image, and audio generation in a user-friendly platform. Often used for rapid concepting and media production.
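Because Stable Diffusion is open source, it is also the easiest of these to drive from code. Here is a minimal text-to-image sketch with Hugging Face's diffusers library, with an illustrative checkpoint and settings:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "foggy mountains at sunrise with glowing lanterns",
    num_inference_steps=30,  # more denoising steps = finer detail, slower run
    guidance_scale=7.5,      # higher = follow the prompt more literally
).images[0]
image.save("lanterns.png")
```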

Sketch-to-Render & Concept Design

  • Vizcom – Transforms line drawings into polished renders, ideal for product design and industrial sketching workflows.
  • PromeAI – Converts rough sketches into refined digital artwork; widely used in fashion, architecture, and concept art.
  • Stable Doodle (by Clipdrop) – Uses Stable Diffusion to turn basic doodles into detailed images with style options.
  • Scribble Diffusion – A simple, educational tool for generating images from drawings—great for creative exploration and classrooms.

AI Editing, Upscaling, and Enhancement

  • Adobe Firefly – Adobe’s generative AI engine built into Photoshop, Illustrator, and other Creative Cloud tools. Enables generative fill, object replacement, and creative variations.
  • Remove.bg – Quickly removes image backgrounds, ideal for e-commerce, design, and quick editing tasks.
  • Let’s Enhance – Upscales and improves image resolution, helpful for web publishing, print, and marketing (a code sketch of diffusion-based upscaling follows this list).
  • Topaz Gigapixel AI – Professional tool for increasing resolution while maintaining sharpness; widely used by photographers.
  • Deep Image AI – Enhances and cleans up photos for commercial use, especially in real estate, printing, and media.
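The commercial upscalers above are closed products, but the underlying idea can be sketched with an open diffusion-based super-resolution model. The checkpoint and file names here are illustrative:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Open 4x super-resolution model, guided by a short text prompt.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("photo_small.png").convert("RGB")
upscaled = pipe(prompt="a sharp, detailed photograph", image=low_res).images[0]
upscaled.save("photo_4x.png")
```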

These tools are evolving rapidly, with new features being released monthly. Because most are browser-based or app-integrated, creators at all skill levels—from students to professional designers—can now generate, edit, and experiment with high-quality visuals in just minutes.

5. AI Artists

While the AI art tools themselves are impressive technological marvels, equally essential are the pioneering human artists pushing the boundaries of how this tech can expand modes of creative expression and communication. Notable AI artists include:


Sofia Crespo

Sofia Crespo - LINK

Sofia Crespo is a pioneering artist whose work focuses on the intersection of biology and artificial intelligence. She uses AI models, particularly GANs and neural networks, to create intricate digital art pieces that explore the relationship between natural and artificial life forms. Her work often features organic shapes and textures reminiscent of biological entities, reflecting on how AI can mimic and interpret the complexities of the natural world. Crespo’s notable projects include "Neural Zoo," where she generates images of speculative creatures and plants that do not exist in reality but appear convincingly organic, challenging our perceptions of nature and machine-generated art.


Sougwen Chung

Sougwen Chung - LINK

Sougwen Chung is an interdisciplinary artist who explores the relationship between humans and machines through collaborative drawings and installations. Her work frequently involves drawing alongside robotic arms controlled by AI systems that learn from her style. This ongoing human-AI collaboration, as seen in her project "Drawing Operations," highlights the evolving nature of co-creativity and questions the role of autonomy in artistic creation. Chung’s practice bridges art, performance, and technology, offering a poetic exploration of how humans and machines can create together.


Refik Anadol

Refik Anadol - LINK

Refik Anadol is an artist known for his immersive installations that transform data into visually stunning and thought-provoking art pieces. He leverages AI and machine learning algorithms to process large datasets, such as urban landscapes, social media interactions, and cultural archives, turning them into dynamic visualizations and media sculptures. Anadol's work often involves projecting these data-driven visuals onto architectural surfaces, creating a seamless blend of the physical and digital realms. His projects like "Infinity Room" and "Melting Memories" push the boundaries of how data can be experienced aesthetically, offering a glimpse into the future of media art and the potential of AI to reshape our interaction with information.


Anna Ridler

Anna Ridler - LINK

Anna Ridler is an artist who works with both AI and data to explore themes of narrative, storytelling, and ownership of information. She often creates hand-annotated datasets, which are then used to train AI models, allowing her to maintain creative control over the outputs. One of her most well-known works is "Mosaic Virus," where she used a dataset of tulip images to comment on historical tulip mania and contemporary financial bubbles. Ridler’s work offers a thoughtful examination of how humans can shape the datasets that AI learns from, turning the creative process into a more personal and intentional act.


Ben Snell

Ben Snell - LINK

Ben Snell is an artist who uses artificial intelligence to explore the intersection of human creativity and machine learning. He trains custom AI models on datasets of sculptures, letting the machine generate new forms that he then brings into physical existence. By transforming AI-generated concepts into tangible objects, Snell's work investigates the evolving relationship between creator and tool, and the ways in which machines can co-author the artistic process. His project "Dio" involved destroying the computer that designed the sculpture and casting the final object from its ground-up remains, creating a self-referential loop of artistic creation.


Roman Lipski

Roman Lipski - LINK

Roman Lipski is a Polish painter who collaborates with artificial intelligence to push the boundaries of his artistic practice. Through a constant exchange between the artist and an AI system, Lipski's work has evolved into a process of co-creation where the machine suggests novel ideas based on his past works. His "Unfinished" series is an ongoing dialogue between human intuition and machine learning, where AI not only inspires but also influences the creative process. This partnership explores the potential of AI to expand human imagination and challenges traditional notions of authorship in the arts.

6. Unit Exercise: AI-Assisted Comic Creation

This exercise guides you through using AI tools to help design, develop, and reflect on a short comic. It combines sketching, prompting, editing, and artistic decision-making.

  1. Sketch Your Characters: Start by drawing your comic characters on paper. Create several versions to explore different looks. Photograph or scan your favorite sketches to bring them into the digital workflow.
  2. Generate Comic Panels with AI: Use a sketch-to-image tool like Vizcom or Clipdrop Stable Doodle to turn your sketches into stylized panels. Add descriptive prompts to guide scene generation, e.g., “Low-angle shot of a pirate captain silhouetted against a glowing moon.” Experiment with different angles, expressions, and environments. Aim to generate at least three distinct panels for a basic comic strip.
  3. Apply and Refine a Visual Style: Decide on an overall art style for your comic (e.g., watercolor, cyberpunk, manga). Adjust prompts and regenerate as needed to match this aesthetic across your characters and scenes. Save multiple versions for comparison.
  4. AI-Human Collaboration: Import the AI-generated panels into a design program like Photoshop, Illustrator, or Procreate. Edit, arrange, or redraw elements to ensure consistency. Add speech bubbles, captions, or hand-drawn details. If needed, generate AI backgrounds separately and integrate them with your characters. Finish a complete comic strip of at least three connected panels.
  5. Reflect on the Process: Write a short reflection (about 200–300 words) on your experience. Consider:
    • How did AI change your normal creative workflow?
    • What surprised you or frustrated you about using AI tools?
    • Where did your own creative choices feel most important?
    • What might you try differently next time?

7. Discussion Questions

8. Bibliography
