
Dynamic AI
Co-Creation
A Human-Centered Approach
Created through the Digital Publishing Initiative at The Creative Media and Digital Culture program, with support of the OER Grants at Washington State University Vancouver.

AI systems, particularly those based on machine learning and neural networks, rely on algorithms and structured processes to learn from data and make decisions. These processes follow clear rules and mathematical models to produce outcomes from specific inputs. Even probabilistic AI models still follow patterns based on statistical laws and require a lot of data to improve.
Human Intelligence, on the other hand, comes from the brain—a complex and still not fully understood network of neurons. It includes not just electrical signals and chemistry, but also emotions, experience, memory, and possibly even unconscious thought. Human thinking is shaped by our senses, our past, our emotions, and our social and cultural environments. This allows for creativity, self-awareness, and adaptability that goes beyond what AI can currently do. Human learning is deeply personal and influenced by society in ways that structured AI training is not. This makes human and machine intelligence fundamentally different, yet potentially complementary.
As we explore the basics of machine intelligence, it’s important to understand that the word "intelligence" means something very different for machines than for people. While AI is great at handling specific tasks quickly and with lots of data, human intelligence involves awareness, emotion, ethics, and imagination. Learning about the technology behind AI can help us appreciate both the limits and the strengths of these tools—and how they can work alongside human intelligence.
The field of AI has developed various ways to categorize machine intelligence:
Psychologists have proposed different models to explain how human intelligence works in real life. These frameworks help us understand how thinking involves more than just solving problems.
The modern AI boom comes from a major shift in the late 20th century—from rule-based programs to systems that learn from data. This shift gave us today's powerful tools that improve through experience rather than hard-coded logic. Machine learning, a branch of AI, uses algorithms and statistical models to learn from data and make predictions or decisions.
The Turing Test, proposed in 1950 by Alan Turing, asked if a machine could behave so intelligently that people couldn’t tell it apart from a human. Early AI tried to pass this test using Symbolic AI, or “GOFAI,” which used fixed rules and logic to simulate thinking. These systems were good at solving specific problems but couldn’t adapt to new ones easily.
Eliza was an early program made in the 1960s by Joseph Weizenbaum. It mimicked a conversation with a therapist using simple pattern matching. While it could simulate conversation, it didn’t understand the meaning behind the words. Eliza showed both the potential and limits of early AI.
Symbolic AI eventually reached a limit. It couldn’t deal with the messiness of the real world. That’s when machine learning took over. In the 1990s and 2000s, AI began learning from data instead of depending on fixed rules. This opened up new possibilities—from recognizing speech to predicting medical outcomes.
Some milestones:
AlphaGo showed that AI could master games of deep strategy. It learned not just from rules, but from experience—studying thousands of games and improving by playing against itself. Its success accelerated AI research across many fields.
Neural networks are the core technology behind modern AI. Inspired by the human brain, they are made up of layers of connected nodes—or artificial neurons—that pass information forward and adjust based on what they learn. Each connection has a "weight" that changes as the system trains, helping the network recognize patterns, make predictions, or generate new outputs.
What makes neural networks powerful is their ability to learn from data—not by following a fixed program, but by adjusting their inner structure based on feedback. This process is called training. During training, the network compares its predictions with the correct answers, calculates the error, and updates its weights to do better next time. After many cycles, the network becomes skilled at recognizing patterns such as the shapes of letters, the tone of a voice, or the structure of a sentence.
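The training cycle described above can be sketched in a few lines of code. This is a toy illustration, not a real neural-network library: a single artificial "neuron" with one weight learns the made-up rule y = 2x by repeatedly comparing its prediction with the correct answer and nudging its weight.

```python
# Minimal sketch of training: predict, measure the error, adjust the weight.
# (Illustrative toy example with one neuron and one weight.)

def train(samples, epochs=100, learning_rate=0.1):
    weight = 0.0  # the connection "weight" starts untrained
    for _ in range(epochs):
        for x, target in samples:
            prediction = weight * x              # forward pass
            error = prediction - target          # compare with the correct answer
            weight -= learning_rate * error * x  # update to do better next time
    return weight

data = [(1, 2), (2, 4), (3, 6)]  # examples of the hidden pattern y = 2x
learned = train(data)
print(round(learned, 2))  # converges close to 2.0
```

After many cycles the weight settles near 2.0: the network has "learned" the pattern from examples rather than from a hard-coded rule, which is exactly the shift from symbolic AI to machine learning.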
You can think of a neural network like a team of musicians in an orchestra. Each “neuron” is one instrument. At first, they are out of sync, but through practice (training), they learn how to adjust to each other until the whole orchestra produces a harmonious performance that fits the piece of music (the task).
The most influential neural network architecture today is the Transformer. Transformers were introduced in 2017 and revolutionized how machines understand sequences of data like language, audio, or even images. Unlike earlier models that processed input one step at a time, transformers take in the whole input at once and decide which parts are most important. This is called self-attention.
Why Transformers Matter:
Transformers allow AI systems to do more than just analyze—they can generate and communicate. They read prompts, understand instructions, and even compose stories or build software. By learning patterns in human expression, they’ve become powerful collaborators in creative and intellectual work.
Example of a Transformer in Action: Imagine you type the sentence: “The cat sat on the mat.” A transformer doesn’t just read it left to right. Instead, it looks at all the words at once and calculates how strongly each word relates to the others. For example, “cat” is strongly connected to “sat,” and “mat” is connected to “on.” This web of connections allows the model to understand not only the meaning of each word but also the relationships that give the sentence its sense. That’s why transformers can keep track of meaning across whole paragraphs or even pages.
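The "web of connections" idea can be made concrete with a toy calculation. The two-dimensional word vectors below are invented for illustration (real models learn hundreds of dimensions), but the mechanics are the same: each word scores every word by a dot product, and a softmax turns those scores into attention weights that sum to 1.

```python
# Toy self-attention: score every word against a query word, then
# normalize the scores into weights with a softmax.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical embeddings, hand-picked so that "cat" and "sat" are similar.
words = {"cat": [1.0, 0.2], "sat": [0.9, 0.3], "mat": [0.1, 1.0]}

def attention_weights(query_word):
    query = words[query_word]
    scores = [sum(q * k for q, k in zip(query, words[w])) for w in words]
    return dict(zip(words, softmax(scores)))

weights = attention_weights("cat")
# "cat" attends most strongly to itself and "sat", least to "mat"
print(max(weights, key=weights.get))
```

Because the weights are computed for every pair of words at once, the model can relate "cat" to "sat" no matter how far apart they sit in the sentence.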
Large Language Models (LLMs) are AI systems trained on vast amounts of text from books, articles, websites, and other sources. They use this knowledge to understand and generate human-like language. LLMs can help with tasks like translating languages, summarizing text, answering questions, and creating content.
When we interact with an LLM, the basic unit of information is a token. A token is not exactly a word—it can be a whole word, part of a word, or even punctuation. For example, “unbelievable” might be split into the tokens [un], [believe], [able]. The model reads text as a sequence of tokens, not letters or whole words. Understanding tokens is crucial because the model predicts the next token in a sequence, step by step, to build sentences, paragraphs, or even entire essays.
An LLM is like an autocomplete system on steroids. When you start typing in your phone, it guesses the next word. An LLM works similarly, but instead of just finishing “Happy…” with “birthday,” it can generate entire paragraphs, poems, or computer programs by making the most likely prediction at each step.
Generative Pre-trained Transformer (GPT) models are a specific kind of LLM that use the transformer architecture. The term “pre-trained” means the model first learns general language patterns from huge datasets, and then it can be fine-tuned for specialized tasks like coding, tutoring, or creative writing. GPTs generate fluent, context-aware text by calculating which token is most likely to come next based on the tokens that came before.
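The "autocomplete on steroids" idea can be sketched directly. The probability table below is invented for illustration; a real GPT computes these probabilities from billions of learned weights, but the loop—pick the most likely next token, append it, repeat—is the same.

```python
# Sketch of next-token prediction. NEXT_TOKEN is a hand-written stand-in
# for a trained model's probability estimates.

NEXT_TOKEN = {
    "Happy":    {"birthday": 0.7, "holidays": 0.3},
    "birthday": {"to": 0.8, "!": 0.2},
    "to":       {"you": 0.9, "me": 0.1},
}

def generate(start, steps=3):
    tokens = [start]
    for _ in range(steps):
        choices = NEXT_TOKEN.get(tokens[-1])
        if not choices:
            break
        # pick the most likely continuation, just as autocomplete does
        tokens.append(max(choices, key=choices.get))
    return " ".join(tokens)

print(generate("Happy"))  # "Happy birthday to you"
```

Real models also sample from these probabilities rather than always taking the top choice, which is why the same prompt can produce different responses.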
Recent versions like GPT-4 can process and generate not just text, but also images, audio, and even video. This ability is called multimodal AI. It means one model can read, write, and "see" or "hear" at the same time—useful for creative projects, accessibility tools, or educational simulations.
How do language models guide image generators? Think of text as the brain and images as the hands. When you type a prompt like “a fox jumping over a fence in moonlight,” the language model interprets the sentence and translates it into a detailed internal structure—something the image model can understand. It’s language that organizes and controls the creation of visual, auditory, and interactive content.
Example of an LLM in Action: Suppose you ask an LLM: “Write a short story about a dragon who learns to cook.” The model breaks your sentence into tokens and looks at all the relationships between them (“dragon” relates to “fantasy,” “cook” relates to “food” and “kitchen”). Then, step by step, it predicts the next most likely token: [Once], [upon], [a], [time]… and so on, each choice influenced by the full context of the prompt. In this way, the model builds a coherent response that matches your request.
For now, our focus is on language. But keep in mind: the same underlying ideas about tokens, patterns, and prediction extend to other forms of media. In later chapters, we’ll explore GANs and diffusion models in detail to see how machines learn to generate images, sound, and beyond.
To get the most out of large language models and other generative AI tools, it’s important to learn how to write effective prompts. A prompt is the instruction or input you give to the AI. The clearer and more thoughtfully crafted the prompt, the better the result.
In generative AI systems, language is more than just input—it’s the interface. Whether you're generating text, images, code, or sound, prompts act as the control mechanism. They help you guide what the model does, how it does it, and who it does it for.
Good prompting is not just about being precise. It’s also about being expressive and strategic. Sometimes breaking a complex task into steps helps. Other times, using creative or poetic language can guide the AI toward more imaginative results. Language is powerful, and how you use it shapes how the AI responds.
People with backgrounds in writing, art, history, or philosophy often excel at prompt engineering because they understand how language works at many levels. Their ability to express complex or nuanced ideas helps them craft prompts that produce more meaningful results.
One advanced method is meta-prompting, where you first ask the AI to design the prompts you will use for a complex, multi‑stage project. Instead of jumping into the work, you begin by co‑creating a plan of prompts that you’ll follow in sequence. This is especially useful when each step depends on the previous one.
For example, here’s how you might frame a meta‑prompt for a digital publishing workflow. First, you work with the AI to create a single reusable master prompt that lays out a logical sequence of prompts.
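One way to sketch this idea in code: a small helper that assembles a meta-prompt from a list of project stages. The wording and the four stages below are illustrative assumptions, not a prescribed template.

```python
# Sketch of a meta-prompt builder: instead of asking for the work itself,
# you ask the model to design the sequence of prompts you will run.

def make_meta_prompt(project, stages):
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(stages))
    return (
        f"You are helping me plan a {project} project.\n"
        "Before we do any of the work, write the exact prompt I should\n"
        "give you for each stage below, and note what output from the\n"
        "previous stage each prompt depends on.\n"
        f"Stages:\n{steps}"
    )

prompt = make_meta_prompt(
    "digital publishing",
    ["Outline the chapter",
     "Draft each section",
     "Revise for clarity",
     "Format for the web"],
)
print(prompt)
```

Sending a prompt like this first turns the AI into a planning partner: you review and refine the sequence it proposes before running any individual step.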
This format demonstrates four important points:
Segmenting the instructions with steps, headings, and short explanations also helps the AI follow the logic without skipping ahead. The model “sees” where each task begins and ends, which makes the whole workflow more reliable—and easier for you to review.
Here are some common prompting techniques that help you get the outputs you want:
Many AI tools today offer built-in prompt libraries or templates, making it easier for beginners to get started. These provide helpful starting points for everything from lesson planning to design ideation and marketing strategy.
Context Engineering goes beyond single prompts to shape the entire information environment. Context is what the model “knows,” “sees,” and “remembers” when it’s generating a response to a prompt. Context Engineering is about feeding the model the right knowledge, structuring that knowledge effectively, and managing how information flows over time. Done well, it makes AI output more reliable, relevant, and creative.
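Context engineering can be pictured as assembling everything the model will "see" before the prompt is sent. The section names and the simple trimming rule below are assumptions for illustration; real systems use token budgets and retrieval pipelines, but the principle—structure the knowledge, manage what fits—is the same.

```python
# Sketch of context assembly: system role, background knowledge, recent
# conversation, and the question are combined into one structured context.

def build_context(system_role, documents, history, question, max_chars=2000):
    # keep only the most recent conversation turns that fit the budget
    recent, used = [], 0
    for turn in reversed(history):
        if used + len(turn) > max_chars:
            break
        recent.insert(0, turn)
        used += len(turn)
    sections = [
        "SYSTEM: " + system_role,
        "KNOWLEDGE:\n" + "\n".join(documents),
        "CONVERSATION:\n" + "\n".join(recent),
        "QUESTION: " + question,
    ]
    return "\n\n".join(sections)

context = build_context(
    system_role="You are a patient writing tutor.",
    documents=["Style guide: prefer active voice."],
    history=["User: Can you review my draft?", "AI: Of course."],
    question="How can I tighten this paragraph?",
)
print(context.startswith("SYSTEM:"))  # True
```

Notice that the prompt (the question) is only one section; the rest of the context determines what the model knows and remembers while answering it.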
By combining prompting, meta-prompting, and context engineering, you shift from giving instructions to designing environments for AI thinking. Prompts are the questions you ask; context is the stage you set for the answers.
Finally, remember that prompts and contexts don’t just shape the AI’s output—they also teach it about you. Every choice you make in wording, structure, and examples is a clue about how you think, write, and approach problems. By sharing your reasoning, style, and process in the prompt and context, you invite the model to mirror not just what you want, but how you would do it yourself.
To develop a grounded understanding of prompt engineering, this unit will have you engage in a hands-on exercise with a language model like ChatGPT or Claude. Here are the steps:
As you complete these exercises, reflect on how AI thinks, what it gets right, and where it needs guidance. Document your process and insights.