Creative Artifact: Before / After Diptychs
This series of five diptychs contrasts real photographs of Vancouver, WA, with AI-generated interpretations of the same locations imagined in the year 2050. Each pair invites viewers to compare the present city with a speculative “machine tomorrow”.
Reflection Essay: Entangled Futures of Vancouver
At the core of my final project is a simple but important observation: every city exists in several versions at once — in our memory, in the present, and in countless possible futures. Vancouver, the city where I live and which I see every day, also consists of many layers. We perceive it through our own experiences, emotional associations, familiar routes, and visual habits. However, artificial intelligence offers a completely different kind of vision. It does not know this city and does not live in it, but it can generate endless hypothetical versions of it. My project, “The City That Does Not Exist: Vancouver–2050 Through the Eyes of AI,” emerges from the collision of these two kinds of perception: human and machine.
My goal was not just to “make cool AI images,” but to create a dialogue between reality and machine imagination, between documentary photography and synthetic fantasy. I wanted to see how my own view of familiar spaces would change when I confront not only their present, but also what a machine might propose as their future. In this sense, the project became not only a series of images, but also a way to see the city again for the first time.
The project grew out of class discussions, reflections in my journal, and multiple conversations with AI. In our aesthetic group we talked about “entanglement” — the idea that humans and machines are intertwined in hybrid creative processes. I became interested in the question: can AI do more than simply decorate or enhance reality? Can it offer its own perspective that still remains in dialogue with mine?
From this question the direction of the project emerged: I decided to take real photographs of Vancouver — places I see every day — and explore how an AI system would interpret them if I asked it to imagine the year 2050. My project extends the ideas from Project 4, but in a more concrete and focused way. Instead of just exploring an abstract aesthetic, I am now creating a specific artistic artifact where each frame becomes a point of contact between two intelligences.
My process unfolded in three main stages: collecting images, generating machine interpretations, and then comparing and selecting pairs. First, I photographed real locations that I consider meaningful for Vancouver and for my personal relationship to the city: WSU Vancouver, nearby residential neighborhoods, forest trails and walking areas, road intersections and bridges, and city streets at night. These are ordinary, everyday places. It was important for me to show my Vancouver — not as a tourist, but as a resident. I photographed them at different times of day to capture natural light, textures, and atmosphere.
Next, I used a generative AI image tool to create speculative futures of these same locations. At first the AI mostly tried to “enhance” or “beautify” the original photos, but that was not what I wanted. I was not looking for an improved present; I was looking for possible futures that might never actually exist. So I started giving the system more complex and sometimes contradictory prompts. I asked it to imagine Vancouver as a dense cyberpunk city full of neon and vertical structures; then as a place overtaken by nature, where plants and trees swallowed the built environment; then as a city after a climate disaster; and finally as an eco-utopia where advanced technology and green spaces coexist in harmony.
Each AI output was not a final result, but a draft. I refined the images, asked the system to adjust lighting, architecture, and atmosphere, or returned to my original photos to try another approach. In this way, the AI functioned not just as a tool but as a collaborator. It offered variations and directions, while I decided which of them matched my concept and which did not.
The final form of the work became a series of five diptychs. In each diptych, the left image is my photograph and the right image is the AI’s interpretation of the same place in 2050. These pairs emphasize the difference between reality and fantasy, invite viewers to compare the two visions directly, and create a kind of tension between the familiar and the strange.
In the beginning, I thought of this project almost as a formal exercise: compare a photo with an AI-generated version of the same scene. But as I continued, my thinking changed. I noticed that the AI generations were reflecting not only the training data of the model, but also my own fears, hopes, and cultural references about the future. When I asked for cyberpunk, the model gave me dark, crowded, highly saturated cities. When I asked for an eco-utopia, it produced idealized green environments where nature and technology felt perfectly balanced. When I requested “after a catastrophe,” it turned bridges into ruins and streets into overgrown paths.
This made me question what exactly I was seeing in these images. Was the AI truly “imagining” the future, or was it recombining visual clichés from films, games, and concept art that our culture already associates with the future? In other words, AI is not predicting the year 2050. It is constructing a visual mirror of our own expectations — our collective dreams and nightmares about what might come next. Each AI-generated Vancouver says more about the present and about me as the author than about the real 2050.
My project is tightly connected to several key ideas from the course. First, it engages directly with entanglement between human and machine creativity. The project would not exist without my photography, my prompts, and my curatorial decisions, but it also would not exist without the AI’s ability to synthesize new visual forms. The authorship here is shared and layered.
Second, it reflects our discussions about information structures and hidden infrastructures. Generative AI models rely on massive datasets and complex internal architectures. These invisible structures shape what the model can and cannot imagine. By placing my documentary photos next to AI-generated futures, the project makes those hidden structures more visible. We see how the model tends to dramatize, exaggerate, or standardize certain visual motifs.
Third, the project connects to speculative futures. It does not document reality; it constructs alternative worlds. Some of them feel hopeful, others feel dystopian, and some are simply impossible. This speculative element invites viewers to think about how images influence our sense of what is possible or desirable. Finally, the project engages with questions of hybrid authorship and creativity in the age of AI. Who is the author of the AI images — the model, its developers, the dataset, or the person who prompts and selects? My project does not give a simple answer, but it tries to show the complexity of this question in a concrete visual form.
This project taught me several things about creativity, authorship, and AI. First, I realized that AI mostly reveals our own expectations rather than any objective truth about the future. The “Vancouver 2050” that the model generates is built from cultural patterns, visual tropes, and my own instructions. It is as much about the present as it is about tomorrow.
Second, I learned that human control and intention remain central. The AI can generate many variations, but meaning appears only when a human chooses, arranges, and contextualizes them. AI does not remove authorship; it redistributes and complicates it. Third, I came to see creativity as a dialogue. In this project, I was constantly responding to the images produced by the model, and the model was constantly responding to my prompts, corrections, and selections. The final work is the record of this back-and-forth conversation.
Finally, I realized that hybrid art — combining documentary images with synthetic ones — can create a new kind of visual language. It allows us to hold reality and speculation together in one frame, without forcing us to choose which one is “more true.” In conclusion, “The City That Does Not Exist: Vancouver–2050 Through the Eyes of AI” is not just a sequence of paired images. It is an attempt to look at the city through two lenses at once: a human lens grounded in lived experience, and a machine lens grounded in data and pattern recognition. Between these two lenses, a space of imagination opens up. For me, this project became a way to get to know my own city again, and at the same time to explore what happens when creative thinking becomes a shared process between a person and an algorithm.
AI Tools and Chat Documentation
AI Tools Used
- ChatGPT — brainstorming concepts, refining prompts, and drafting/refining text.
- DALL·E 3, Midjourney — generating AI interpretations of Vancouver photos.
Key AI Chat URLs
Below are links to the primary AI chat sessions that shaped this project, including prompt development, conceptual framing, and reflection writing.