
Dynamic AI
Co-Creation
A Human-Centered Approach
Created through the Digital Publishing Initiative at The Creative Media and Digital Culture program, with the support of the OER Grants at Washington State University Vancouver.

AI systems, particularly those based on machine learning and neural networks, rely on algorithms and structured processes to learn from data and make decisions. These processes follow clear rules and mathematical models to produce outcomes from specific inputs. Even probabilistic AI models still follow patterns based on statistical laws and require a lot of data to improve.
Human intelligence, on the other hand, comes from the brain—a complex and still not fully understood network of neurons. It includes not just electrical signals and chemistry, but also emotions, experience, memory, and possibly even unconscious thought. Human thinking is shaped by our senses, our past, our emotions, and our social and cultural environments. This allows for creativity, self-awareness, and adaptability that go beyond what AI can currently do. Human learning is deeply personal and influenced by society in ways that structured AI training is not. This makes human and machine intelligence fundamentally different, yet potentially complementary.
As we explore the basics of machine intelligence, it’s important to understand that the word "intelligence" means something very different for machines than for people. While AI is great at handling specific tasks quickly and with lots of data, human intelligence involves awareness, emotion, ethics, and imagination. Learning about the technology behind AI can help us appreciate both the limits and the strengths of these tools—and how they can work alongside human intelligence.
The field of AI has developed various ways to categorize machine intelligence, from today's narrow systems built for a single task to hypothetical general intelligence that could match human versatility.
Psychologists have proposed different models to explain how human intelligence works in real life. These frameworks help us understand how thinking involves more than just solving problems.
The modern AI boom comes from a major shift in the late 20th century—from rule-based programs to systems that learn from data. This shift gave us today's powerful tools that improve through experience rather than hard-coded logic. Machine learning, a branch of AI, uses algorithms and statistical models to learn from data and make predictions or decisions.
The Turing Test, proposed in 1950 by Alan Turing, asked if a machine could behave so intelligently that people couldn’t tell it apart from a human. Early AI tried to pass this test using Symbolic AI, or “GOFAI,” which used fixed rules and logic to simulate thinking. These systems were good at solving specific problems but couldn’t adapt to new ones easily.
ELIZA was an early program created in the 1960s by Joseph Weizenbaum. It mimicked a conversation with a therapist using simple pattern matching. While it could simulate conversation, it didn't understand the meaning behind the words. ELIZA showed both the potential and the limits of early AI.
Symbolic AI eventually reached a limit. It couldn’t deal with the messiness of the real world. That’s when machine learning took over. In the 1990s and 2000s, AI began learning from data instead of depending on fixed rules. This opened up new possibilities—from recognizing speech to predicting medical outcomes.
One milestone stands out:
AlphaGo showed that AI could master games of deep strategy. It learned not just from rules, but from experience—studying thousands of games and improving by playing against itself. Its success accelerated AI research across many fields.
Neural networks are the core technology behind modern AI. Inspired by the human brain, they are made up of layers of connected nodes—or artificial neurons—that pass information forward and adjust based on what they learn. Each connection has a "weight" that changes as the system trains, helping the network recognize patterns, make predictions, or generate new outputs.
What makes neural networks powerful is their ability to learn from data—not by following a fixed program, but by adjusting their inner structure based on feedback. This learning allows them to perform a huge variety of tasks, from recognizing faces and voices to generating text and music.
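To make "adjusting weights based on feedback" concrete, here is a minimal sketch of a single artificial neuron learning by trial and error. All the names and numbers here are illustrative; real networks stack millions of these units, but the core idea is the same: compare the output to a target, and nudge each weight to shrink the error.

```python
# A single artificial neuron "learns" by adjusting its weights to
# reduce the error between its output and a target value.

def train_neuron(samples, lr=0.1, epochs=200):
    """samples: list of ((x1, x2), target) pairs."""
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = w1 * x1 + w2 * x2 + bias   # forward pass
            error = out - target             # feedback signal
            # Nudge each weight against the error (gradient descent)
            w1 -= lr * error * x1
            w2 -= lr * error * x2
            bias -= lr * error
    return w1, w2, bias

# Learn the pattern y = 2*x1 + 1*x2 from four example pairs
data = [((1, 0), 2), ((0, 1), 1), ((1, 1), 3), ((2, 1), 5)]
w1, w2, b = train_neuron(data)
# After training, w1 and w2 approach 2.0 and 1.0
```

Notice that no one programmed the rule "multiply the first input by two." The weights settled there because that is what the feedback kept pushing them toward—learning from data rather than fixed instructions.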
The most influential neural network architecture today is the Transformer. Transformers were introduced in 2017 and revolutionized how machines understand sequences of data like language, audio, or even images. Unlike earlier models that processed input one step at a time, transformers take in the whole input at once and decide which parts are most important. This is called self-attention.
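Self-attention can be sketched in a few lines. This toy version simplifies a real transformer in one big way: in practice, each token is projected into separate "query," "key," and "value" vectors, while here the raw vectors play all three roles for clarity. The mechanics are otherwise the same: every position scores every other position, the scores become weights via softmax, and each position's new representation is a weighted mix of the whole sequence.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq):
    """seq: list of equal-length vectors, one per token."""
    output = []
    for query in seq:
        # Score this position against every position at once
        scores = [sum(q * k for q, k in zip(query, key)) for key in seq]
        weights = softmax(scores)
        # Build the new representation as a weighted mix of all tokens
        mixed = [sum(w * v[i] for w, v in zip(weights, seq))
                 for i in range(len(query))]
        output.append(mixed)
    return output

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(tokens)
```

Because every position looks at every other position in one pass, the model can relate a word at the start of a sentence to one at the end without stepping through everything in between—the key advantage over older one-step-at-a-time models.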
Why Transformers Matter:
Transformers allow AI systems to do more than just analyze—they can generate and communicate. They read prompts, understand instructions, and even compose stories or build software. By learning patterns in human expression, they’ve become powerful collaborators in creative and intellectual work.
Large Language Models (LLMs) are AI systems trained on vast amounts of text from books, articles, websites, and other sources. They use this knowledge to understand and generate human-like language. LLMs can help with tasks like translating languages, summarizing text, answering questions, and creating content.
Generative Pre-trained Transformer (GPT) models are a specific kind of LLM that use a neural network architecture called a transformer. GPTs have been at the forefront of recent AI advances. They can generate fluent, context-aware text by analyzing the relationships between words in a sentence or paragraph.
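A drastically simplified picture of "generating text from word relationships" is a bigram model: count which word follows which in some training text, then continue a prompt by repeatedly picking the most common successor. Real LLMs learn vastly richer relationships across whole passages, but the generate-one-word-at-a-time loop is the same shape. The corpus here is a made-up toy example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """Extend a starting word by repeatedly picking the likeliest successor."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no known successor; stop generating
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
```

Calling `generate(model, "sat", 2)` yields "sat on the": each new word is predicted from the statistics of what came before. GPT models do the same thing at a far deeper level, conditioning each next word on everything earlier in the prompt.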
Recent versions like GPT-4 can process and generate not just text but also images and audio, and newer systems extend this toward video. This ability is called multimodal AI. It means one model can read, write, and "see" or "hear" at the same time—useful for creative projects, accessibility tools, or educational simulations.
How do language models guide image generators? Think of text as the brain and images as the hands. When you type a prompt like “a fox jumping over a fence in moonlight,” the language model interprets the sentence and translates it into a detailed internal structure—something the image model can understand. It’s language that organizes and controls the creation of visual, auditory, and interactive content.
Multimodal AI builds on this by letting different types of data (text, image, audio) flow together through the same system. The result? Unified tools that can read an article, answer questions about a chart, describe an image, or generate a soundscape—all from your prompt.
To get the most out of large language models and other generative AI tools, it’s important to learn how to write effective prompts. A prompt is the instruction or input you give to the AI. The clearer and more thoughtfully crafted the prompt, the better the result.
In generative AI systems, language is more than just input—it’s the interface. Whether you're generating text, images, code, or sound, prompts act as the control mechanism. They help you guide what the model does, how it does it, and who it does it for.
Good prompting is not just about being precise. It’s also about being expressive and strategic. Sometimes breaking a complex task into steps helps. Other times, using creative or poetic language can guide the AI toward more imaginative results. Language is powerful, and how you use it shapes how the AI responds.
People with backgrounds in writing, art, history, or philosophy often excel at prompt engineering because they understand how language works at many levels. Their ability to express complex or nuanced ideas helps them craft prompts that produce more meaningful results.
One advanced method is meta-prompting, where you ask the AI to help generate or structure other prompts. This is useful for setting up a chain of tasks or guiding the AI through a multi-step project.
For example, here’s how you might frame a meta‑prompt for a digital publishing workflow. This isn’t just a single request—it defines an entire process for the AI to follow step by step:
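One way to see the shape of such a meta-prompt is to build it programmatically. This sketch is purely illustrative—the function name, wording, and workflow steps are invented for the example—but it shows the pattern: rather than asking for a finished product, you ask the model to write the prompts for each later step of the project.

```python
def build_meta_prompt(goal, steps):
    """Assemble a meta-prompt: ask the AI to write prompts for later steps."""
    lines = [
        f"You are helping me plan a project: {goal}.",
        "For each step below, write one clear, self-contained prompt "
        "I can give you later to complete that step.",
    ]
    for i, step in enumerate(steps, start=1):
        lines.append(f"Step {i}: {step}")
    return "\n".join(lines)

meta = build_meta_prompt(
    "publish a short illustrated e-book",
    ["Outline the chapters",
     "Draft each chapter",
     "Suggest image prompts for the illustrations",
     "Write back-cover marketing copy"],
)
print(meta)
```

The resulting text is what you would paste into a chat: a goal, a standing instruction, and a numbered sequence of steps the model can work through one prompt at a time.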
This example demonstrates two important ideas:
Segmenting the instructions with steps, headings, and short explanations also helps the AI follow the logic without skipping ahead. The model “sees” where each task begins and ends, which makes the whole workflow more reliable—and easier for you to review.
To fulfill this kind of task, a language model would need to work through each step in order, carry the results of one step into the next, and keep the overall goal in view throughout.
This kind of prompt gives the AI a roadmap—a series of linked steps to follow, like a "prompt for prompts" that can guide an extended workflow.
Common prompting techniques include assigning the AI a role ("Act as an editor"), providing a few examples of the output you want, asking the model to reason step by step, and specifying the format, tone, or audience of the response.
Many AI tools today offer built-in prompt libraries or templates, making it easier for beginners to get started. These provide helpful starting points for everything from lesson planning to design ideation and marketing strategy.
Context Engineering goes beyond single prompts to shape the entire information environment. Context is what the model “knows,” “sees,” and “remembers” when it’s generating a response to a prompt. Context Engineering is about feeding the model the right knowledge, structuring that knowledge effectively, and managing how information flows over time. Done well, it makes AI output more reliable, relevant, and creative.
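Managing what the model "knows, sees, and remembers" can itself be sketched as a small program. This illustrative function (the name and strategy are invented for the example) assembles a context from three layers—standing instructions, retrieved reference notes, and recent conversation—and trims to a budget so the most important material survives. Word count stands in for real token counting, which would use the model's own tokenizer.

```python
def build_context(system, notes, history, budget=200):
    """Assemble one context string under a rough word budget."""
    def cost(text):
        return len(text.split())          # crude stand-in for a tokenizer

    remaining = budget - cost(system)     # instructions are always kept
    kept_notes = []
    for note in notes:                    # knowledge: most relevant first
        if cost(note) <= remaining:
            kept_notes.append(note)
            remaining -= cost(note)
    kept_turns = []
    for turn in reversed(history):        # memory: favor the newest turns
        if cost(turn) <= remaining:
            kept_turns.append(turn)
            remaining -= cost(turn)
    kept_turns.reverse()                  # restore chronological order
    return "\n\n".join([system] + kept_notes + kept_turns)

system = "You are a careful copy editor. Keep the author's voice."
notes = ["Style guide: use serial commas.", " ".join(["filler"] * 300)]
history = ["User: Here is my draft.", "AI: Thanks, send the text.",
           "User: Please tighten the opening paragraph."]
context = build_context(system, notes, history, budget=60)
```

In this run the oversized filler note is dropped while the instructions, the style note, and the recent turns all fit—the same triage a production system performs before every model call.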
By combining prompting and context engineering, you shift from giving instructions to designing environments for AI thinking. Prompts are the questions you ask; context is the stage you set for the answers.
Finally, remember that prompts and contexts don’t just shape the AI’s output—they also teach it about you. Every choice you make in wording, structure, and examples is a clue about how you think, write, and approach problems. By sharing your reasoning, style, and process in the prompt and context, you invite the model to mirror not just what you want, but how you would do it yourself.
To develop a grounded understanding of prompt engineering, this unit will have you engage in a hands-on exercise with a language model like ChatGPT or Claude. Here are the steps:
As you complete these exercises, reflect on how AI thinks, what it gets right, and where it needs guidance. Document your process and insights.