
Dynamic AI
Co-Creation

A Human-Centered Approach
by Will Luers

Created through the Digital Publishing Initiative at The Creative Media and Digital Culture program, with the support of the OER Grants at Washington State University Vancouver.


Chapter 2: AI Foundations

1. Types of Intelligence

AI systems, particularly those based on machine learning and neural networks, rely on algorithms and structured processes to learn from data and make decisions. These processes follow clear rules and mathematical models to produce outcomes from specific inputs. Even probabilistic AI models still follow patterns based on statistical laws and require a lot of data to improve.

Human intelligence, on the other hand, comes from the brain—a complex and still not fully understood network of neurons. It involves not just electrical signals and chemistry, but also emotions, experience, memory, and possibly even unconscious thought. Human thinking is shaped by our senses, our past, our emotions, and our social and cultural environments. This allows for creativity, self-awareness, and adaptability that go beyond what AI can currently do. Human learning is deeply personal and influenced by society in ways that structured AI training is not. This makes human and machine intelligence fundamentally different, yet potentially complementary.

As we explore the basics of machine intelligence, it’s important to understand that the word "intelligence" means something very different for machines than for people. While AI is great at handling specific tasks quickly and with lots of data, human intelligence involves awareness, emotion, ethics, and imagination. Learning about the technology behind AI can help us appreciate both the limits and the strengths of these tools—and how they can work alongside human intelligence.

Machine Intelligence

The field of AI has developed various ways to categorize machine intelligence:

  • Reactive Machines can react to situations but have no memory. IBM's Deep Blue, a chess-playing computer, is an example.
  • Limited Memory systems like self-driving cars use recent data (like sensor input) to make decisions in real time.
  • Theory of Mind AI is a future goal: systems that understand human emotions, beliefs, and intentions. Research is ongoing.

Human Intelligence

Psychologists have proposed different models to explain how human intelligence works in real life. These frameworks help us understand how thinking involves more than just solving problems.

Sternberg's Triarchic Theory:

  • Analytical Intelligence: Logical reasoning and problem-solving. Like taking a test.
  • Creative Intelligence: Generating new ideas. Like writing a poem or inventing something.
  • Practical Intelligence: Applying knowledge to everyday situations. Like managing time or social dynamics.

Gardner's Multiple Intelligences:

  • Linguistic Intelligence: Sensitivity to language. Seen in poets and writers.
  • Logical-Mathematical Intelligence: Skill in reasoning and numbers. Scientists and engineers.
  • Spatial Intelligence: Thinking in images and space. Architects and visual artists.
  • Bodily-Kinesthetic Intelligence: Using the body skillfully. Dancers and athletes.
  • Musical Intelligence: Understanding sound and rhythm. Musicians and composers.
  • Interpersonal Intelligence: Understanding others’ emotions. Teachers and counselors.
  • Intrapersonal Intelligence: Self-awareness. Philosophers and psychologists.
  • Naturalist Intelligence: Recognizing patterns in nature. Biologists and gardeners.

PASS Theory:

  • Planning: Setting goals and solving problems.
  • Attention: Focusing on tasks and ignoring distractions.
  • Simultaneous Processing: Understanding how things fit together.
  • Successive Processing: Following steps in a specific order.

2. Machine Learning

The modern AI boom comes from a major shift in the late 20th century—from rule-based programs to systems that learn from data. This shift gave us today's powerful tools that improve through experience rather than hard-coded logic. Machine learning, a branch of AI, uses algorithms and statistical models to learn from data and make predictions or decisions.

The Turing Test, proposed in 1950 by Alan Turing, asked whether a machine could behave so intelligently that people couldn’t tell it apart from a human. Early AI tried to pass this test using Symbolic AI, or “GOFAI” (good old-fashioned AI), which relied on fixed rules and logic to simulate thinking. These systems were good at solving specific problems but couldn’t adapt easily to new ones.

Eliza

Eliza was an early program made in the 1960s by Joseph Weizenbaum. It mimicked a conversation with a therapist using simple pattern matching. While it could simulate conversation, it didn’t understand the meaning behind the words. Eliza showed both the potential and limits of early AI.

Eliza: natural language processing program by Joseph Weizenbaum
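Eliza’s pattern matching can be sketched in a few lines of Python. The rules below are a hypothetical subset for illustration; the original program used a much larger script of patterns and responses:

```python
import re

# A few Eliza-style rules (a hypothetical subset; the real script had many more).
# Each rule pairs a regex pattern with a response template.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def eliza_reply(text):
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, text.strip(), re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(eliza_reply("I need a vacation"))  # → Why do you need a vacation?
```

Notice that the program never understands “vacation”; it only reflects the user’s words back, which is exactly the limitation Weizenbaum pointed out.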

Symbolic AI eventually reached a limit. It couldn’t deal with the messiness of the real world. That’s when machine learning took over. In the 1990s and 2000s, AI began learning from data instead of depending on fixed rules. This opened up new possibilities—from recognizing speech to predicting medical outcomes.

A landmark example:

AlphaGo

AlphaGo showed that AI could master games of deep strategy. It learned not just from rules, but from experience—studying thousands of games and improving by playing against itself. Its success accelerated AI research across many fields.

AlphaGo vs Lee Sedol: Move 78 reaction and analysis

3. Neural Networks

Neural networks are the core technology behind modern AI. Inspired by the human brain, they are made up of layers of connected nodes—or artificial neurons—that pass information forward and adjust based on what they learn. Each connection has a "weight" that changes as the system trains, helping the network recognize patterns, make predictions, or generate new outputs.


What makes neural networks powerful is their ability to learn from data—not by following a fixed program, but by adjusting their inner structure based on feedback. This learning allows them to perform a huge variety of tasks, from recognizing faces and voices to generating text and music.
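This “adjusting weights from feedback” idea can be shown with a single artificial neuron. The numbers in this toy example are made up; it simply nudges one weight to shrink the error on one training pair:

```python
# One artificial neuron with one input: output = weight * input.
# Training means adjusting the weight to shrink the error on known examples.
weight = 0.2                 # arbitrary starting value
x, target = 3.0, 6.0         # training pair: we want f(3.0) == 6.0

for step in range(100):
    prediction = weight * x
    error = prediction - target
    weight -= 0.01 * error * x    # gradient step: move the weight against the error

print(round(weight, 2))  # → 2.0, the weight that maps 3.0 to 6.0
```

A real network repeats this same correction across millions of weights and examples, but the principle is identical: predict, measure the error, adjust.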

The most influential neural network architecture today is the Transformer. Transformers were introduced in 2017 and revolutionized how machines understand sequences of data like language, audio, or even images. Unlike earlier models that processed input one step at a time, transformers take in the whole input at once and decide which parts are most important. This is called self-attention.

Why Transformers Matter:

  • They can handle long passages of text and keep track of meaning over time.
  • They are fast and efficient to train, using parallel processing instead of sequential steps.
  • They work across different kinds of data—text, images, audio—enabling true multimodal AI.
  • They are the foundation for models like GPT (for language), DALL·E (for images), and Whisper (for audio).
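The self-attention idea above can be sketched with NumPy. This is a toy single-head version in which random projection matrices stand in for the learned ones a trained transformer would use:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                          # embedding size (toy value)
X = rng.normal(size=(3, d))    # three "tokens", each a d-dimensional vector

# Projection matrices: random here, but learned during training in a real model.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                    # how strongly each token attends to each other
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
out = weights @ V                                # every output mixes all tokens at once

print(weights.round(2))                          # 3x3 attention matrix
```

Because every token looks at every other token in a single pass, the whole sequence can be processed in parallel, which is what makes transformers fast to train.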

Transformers allow AI systems to do more than just analyze—they can generate and communicate. They read prompts, understand instructions, and even compose stories or build software. By learning patterns in human expression, they’ve become powerful collaborators in creative and intellectual work.

4. LLMs, GPTs, GANs, and Diffusion Models

Large Language Models (LLMs) are AI systems trained on vast amounts of text from books, articles, websites, and other sources. They use this knowledge to understand and generate human-like language. LLMs can help with tasks like translating languages, summarizing text, answering questions, and creating content.

Introduction to large language models

Generative Pre-trained Transformer (GPT) models are a specific kind of LLM that use a neural network architecture called a transformer. GPTs have been at the forefront of recent AI advances. They can generate fluent, context-aware text by analyzing the relationships between words in a sentence or paragraph.

Recent versions like GPT-4 can process and generate not just text, but also images and audio, and newer systems are extending this to video. This ability is called multimodal AI. It means one model can read, write, and "see" or "hear" at the same time—useful for creative projects, accessibility tools, or educational simulations.

Key AI Technologies

  • LLMs (Large Language Models): Trained on huge text datasets to understand and generate human-like language. Examples include GPT-4, Claude, and LLaMA.
  • GPT (Generative Pre-trained Transformer): A type of LLM built on transformers. Pre-trained on general text, then fine-tuned for tasks like writing, coding, or tutoring.
  • GANs (Generative Adversarial Networks): Two networks, a generator and a discriminator, compete with each other: the generator creates images, video, or audio, while the discriminator judges whether they look real. This adversarial feedback pushes the generator toward more realistic output.
  • Diffusion Models: Start with random noise and "de-noise" step by step to produce detailed, realistic images. Used in tools like Midjourney, DALL·E 3, and Stable Diffusion.
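The de-noising loop behind diffusion models can be caricatured in a few lines. This is only a toy illustration: the "denoiser" below nudges noise toward a known target, whereas a real model learns to predict the noise from millions of training images:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 0.5, 1.0])   # stand-in for a real image's pixel values
x = rng.normal(size=3)               # step 0: pure random noise

for step in range(50):
    predicted_noise = x - target     # a trained network would estimate this
    x = x - 0.1 * predicted_noise    # remove a little of the noise each step

print(np.allclose(x, target, atol=0.01))  # → True: the noise has become the "image"
```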

How do language models guide image generators? Think of text as the brain and images as the hands. When you type a prompt like “a fox jumping over a fence in moonlight,” the language model interprets the sentence and translates it into a detailed internal structure—something the image model can understand. It’s language that organizes and controls the creation of visual, auditory, and interactive content.

Multimodal AI builds on this by letting different types of data (text, image, audio) flow together through the same system. The result? Unified tools that can read an article, answer questions about a chart, describe an image, or generate a soundscape—all from your prompt.

5. Prompts and Contexts

To get the most out of large language models and other generative AI tools, it’s important to learn how to write effective prompts. A prompt is the instruction or input you give to the AI. The clearer and more thoughtfully crafted the prompt, the better the result.

In generative AI systems, language is more than just input—it’s the interface. Whether you're generating text, images, code, or sound, prompts act as the control mechanism. They help you guide what the model does, how it does it, and who it does it for.

Good prompting is not just about being precise. It’s also about being expressive and strategic. Sometimes breaking a complex task into steps helps. Other times, using creative or poetic language can guide the AI toward more imaginative results. Language is powerful, and how you use it shapes how the AI responds.

People with backgrounds in writing, art, history, or philosophy often excel at prompt engineering because they understand how language works at many levels. Their ability to express complex or nuanced ideas helps them craft prompts that produce more meaningful results.

What Makes a Good Prompt?

  • Role: Ask the AI to take on a persona or expertise.
    Example: “Act as a museum curator…”
  • Task: Describe exactly what you want it to do.
    Example: “Summarize this article…”
  • Tone: Indicate the style or voice.
    Example: “Explain it like I’m five…”
  • Format: Request a specific structure.
    Example: “Use bullet points and headings.”
  • Audience: State who the output is for.
    Example: “For a beginner-level student.”
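These five components can be combined mechanically. The helper below is a hypothetical template, but it shows how role, task, tone, format, and audience slot together into one prompt:

```python
def build_prompt(role, task, tone, fmt, audience):
    """Assemble the five prompt components into a single instruction string."""
    return (f"Act as {role}. {task} {tone} "
            f"Format the answer as {fmt}, written for {audience}.")

prompt = build_prompt(
    role="a museum curator",
    task="Summarize this article about Impressionism.",
    tone="Explain it like I'm five.",
    fmt="bullet points with headings",
    audience="a beginner-level student",
)
print(prompt)
```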

One advanced method is meta-prompting, where you ask the AI to help generate or structure other prompts. This is useful for setting up a chain of tasks or guiding the AI through a multi-step project.

For example, here’s how you might frame a meta‑prompt for a digital publishing workflow. This isn’t just a single request—it defines an entire process for the AI to follow step by step:

WITH ANY UPLOADED MANUSCRIPT I SUBMIT IN THIS CHAT, PERFORM THE FOLLOWING DIGITAL PUBLISHING WORKFLOW:

STEP 1 – PROOFREAD & COPYEDIT: Identify and list all spelling, grammar, and formatting issues. Provide a clear correction list for my approval.

STEP 2 – APPROVAL CHECK: WAIT for me to approve or revise the corrections before continuing.

STEP 3 – HTML FORMATTING: Once approved, format the text into a clean HTML file. USE proper headings, paragraph tags, and semantic markup.

STEP 4 – METADATA CREATION: Write a metadata file with SEO‑friendly keywords, a meta description, and author details for web publishing.

STEP 5 – SOCIAL POSTS: Draft 3–5 short social media posts to promote the published content, each one emphasizing a different key idea or audience.

STEP 6 – SUMMARY REPORT: Provide a final step‑by‑step summary of what was done in each stage of the workflow.

This example demonstrates two important ideas:

  • Multitasking: Instead of asking for just “an edit” or “an HTML page,” this prompt chains several related tasks—proofing, formatting, metadata writing, promotion—into one workflow. The AI handles the process in sequence rather than treating each request as disconnected.
  • Process‑oriented prompts: A meta‑prompt doesn’t just request a single output; it defines an ongoing approach. By telling the AI to pause for approval before moving on, it shifts from “one‑and‑done” answers to a collaborative back‑and‑forth. That’s how you get better results—and keep control of the work.

Segmenting the instructions with steps, headings, and short explanations also helps the AI follow the logic without skipping ahead. The model “sees” where each task begins and ends, which makes the whole workflow more reliable—and easier for you to review.

To fulfill this task, a language model would need to:

  • Proofread and copyedit the manuscript, listing all errors and corrections.
  • Format the revised text into a clean HTML document.
  • Create metadata for SEO.
  • Write promotional social media posts.
  • Provide a summary report of the entire process.

This kind of prompt gives the AI a roadmap—a series of linked steps to follow, like a "prompt for prompts" that can guide an extended workflow.

Here are some common prompting techniques:

Prompt Techniques with Examples

  • Few-Shot Prompting: Show the AI a few examples to guide its style or logic.
    Example: “Translate: ‘Good morning’ → ‘Buenos días’, ‘Thank you’ → ‘Gracias’, ‘Please’ →”
  • Chain-of-Thought Prompting: Ask the AI to reason step-by-step.
    Example: “Explain step by step how to bake a cake.” → “1. Preheat oven…”
  • Meta Prompting: Ask the AI to generate its own best prompt for a goal.
    Example: “How should I ask you to summarize a book?”
  • Prompt Chaining: Use one prompt’s response as the setup for the next.
    Example: Prompt: “What’s the capital of France?” → “Paris” → “Describe Paris as a tourist destination.”
  • Retrieval-Augmented Generation (RAG): Combine AI with real-time external data.
    Example: “Summarize today’s climate news.” → [Fetch and summarize current news articles]
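Prompt chaining in particular is easy to automate. In the sketch below, `ask` is a stand-in for any chat-model API call (the canned answer keeps the example self-contained):

```python
def ask(prompt):
    """Placeholder for a real language-model API call."""
    canned = {"What's the capital of France?": "Paris"}
    return canned.get(prompt, f"[model response to: {prompt}]")

# Step 1: get a fact. Step 2: feed that answer into the next prompt.
step1 = ask("What's the capital of France?")                 # → "Paris"
step2 = ask(f"Describe {step1} as a tourist destination.")
print(step2)
```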

Prompting Principles

  • Clarity: Make your prompt easy to understand.
  • Context: Give enough background to frame the task.
  • Conciseness: Avoid unnecessary words.
  • Relevance: Stay on topic.
  • Detail: Be specific about what you want.
  • Creativity: Use open-ended language to invite varied results.
  • Iterative Refinement: Adjust prompts to improve the response.
  • Tone: Match the tone to the audience.
  • Experimentation: Try different phrasing and structures.
  • Feedback: Use AI responses to refine the next prompt.

Many AI tools today offer built-in prompt libraries or templates, making it easier for beginners to get started. These provide helpful starting points for everything from lesson planning to design ideation and marketing strategy.

Context Engineering goes beyond single prompts to shape the entire information environment. Context is what the model “knows,” “sees,” and “remembers” when it’s generating a response to a prompt. Context Engineering is about feeding the model the right knowledge, structuring that knowledge effectively, and managing how information flows over time. Done well, it makes AI output more reliable, relevant, and creative.

Context Engineering Strategies

  • Providing the Right Information: Load up relevant documents, facts, and examples into the prompt so the AI has the necessary “ingredients” for accurate answers.
  • Structuring the Context: Use clear formatting, stepwise instructions, tables, or even custom markup to pack more meaning into the AI’s limited context window.
  • Using Tools Effectively: Integrate resources like search engines, calculators, or APIs so the AI can access extra knowledge or perform actions it couldn’t on its own.
  • Managing the Flow of Information: Include mechanisms for “memory” and “forgetting”—remind the AI of key details, but also trim old or irrelevant info to keep the session sharp.
  • Role Framing & Persona Setting: Ask the AI to “be” an expert, editor, or coach, shaping its voice and focus.
  • Perspective Switching: Have the AI reframe content from multiple viewpoints to explore alternatives.
  • Constraints & Quality Filters: Set rules for tone, accuracy, and length—and emphasize quality over just stuffing more data into the window.
  • Handling Errors Gracefully: Anticipate mistakes; prompt the AI to review, correct, or re-try when something goes wrong.
  • Reflection & Self-Check: Ask the AI to critique or improve its own answers, creating a loop of refinement.
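Several of these strategies (providing the right information, structuring the context, role framing, and constraints) can be combined in a simple context builder. The layout below is one hypothetical convention, not a required format:

```python
def build_context(role, documents, constraints, question):
    """Pack role framing, source documents, and quality constraints into one structured prompt."""
    sources = "\n\n".join(f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents))
    return (
        f"ROLE: {role}\n\n"
        f"SOURCES:\n{sources}\n\n"
        f"CONSTRAINTS: {constraints}\n\n"
        f"QUESTION: {question}"
    )

ctx = build_context(
    role="You are a careful research editor.",
    documents=["(Example source text would go here.)"],
    constraints="Cite only the sources above; keep the answer under 100 words.",
    question="What do the sources say about the topic?",
)
print(ctx)
```

Keeping the sections labeled and separated helps the model treat sources, rules, and the actual question as distinct parts of its context window.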

By combining prompting and context engineering, you shift from giving instructions to designing environments for AI thinking. Prompts are the questions you ask; context is the stage you set for the answers.

Finally, remember that prompts and contexts don’t just shape the AI’s output—they also teach it about you. Every choice you make in wording, structure, and examples is a clue about how you think, write, and approach problems. By sharing your reasoning, style, and process in the prompt and context, you invite the model to mirror not just what you want, but how you would do it yourself.

6. Unit Exercise

To develop a grounded understanding of prompt engineering, this unit will have you engage in a hands-on exercise with a language model like ChatGPT or Claude. Here are the steps:

As you complete these exercises, reflect on how AI thinks, what it gets right, and where it needs guidance. Document your process and insights.

7. Discussion Questions

8. Bibliography