This week was primarily focused on my thesis video project. At the time of writing it's not entirely done, I'd say around 80%, but there's enough structure and content for me to reflect and give myself some feedback before presenting it on Monday.
My video takes the form of a video essay / motion graphic explainer that covers both the technical aspects of AI and the conceptual side of what it actually means to use AI to generate images. My inspirations are primarily video essays on YouTube, with the best ones having high production, storytelling, and educational value (Vsauce, SEA, and Lemino, to name a few). I chose this form to give myself the most breathing room to explain these things through narration and designed graphics: motion graphics condense complex concepts into digestible information, and narration fills in the gaps and carries the information that the graphics alone might not cover. The final video will probably run around 5 to 5.5 minutes, which might be a little long for the scope of the assignment and the time it's taking me to complete, but there is a lot to cover, and that should benefit the written component of the thesis by getting the words out. In this instance the words take the form of a script, but that's much easier to convert into the written portion than keeping it all in my head (the written portion will begin to be drafted in the next month, as per my timeline).
I also think the use of motion graphics is beneficial to me for a number of reasons. They're more engaging than static images or posters (such as the Emergent Garden poster, which will be referenced in the video); they're interesting and fun for me to make, and in putting them into practice I am inherently learning more; they provide a basis for a possible branded component of the thesis (valuable in a portfolio project); and they are a valuable skill to have in practice (and can make you more $$$).
Additionally, motion graphics are an area of design that AI hasn't quite broken into yet (successfully, at least), since there is a lot more nuance in designing something graphical that moves and stays consistent at 60fps, with a plethora of effects and design choices to make along the way. Of course, you can feed motion graphics in as a starting point for AI to compose over the graphics you make, but as a start-to-end process it still has a long way to go in this area.
Some highlights from the video are below in .gif format:
These might not display as intended as .gifs on WordPress: they're making my browser lag as they upload, and the frame rate is drastically reduced from what it should be, but I am happy with how they look in the final video. I used a mix of After Effects and Cavalry to make them.
In doing this assignment I'm thinking about the value a motion graphics piece could have for my thesis, and how it could be incorporated even if I'm not actually using AI in it. I'm thinking about Mira Jung's thesis, where she used interactive motion graphics with sensors that played when people crossed a certain threshold of a projector setup, and how something of a similar nature might be used in mine. Additionally, while I think the video may be a bit on the longer side for this assignment, what Alex said in class about having a "director's cut", or more specifically a longer form of the video, could also be beneficial for my thesis. Something that's lacking in the video as it is now is a more in-depth overview of how AI works, because there's a lot more to it than I could cover, such as datasets, how a model is trained and learns, or even their history, covering things like Joseph Weizenbaum's ELIZA (the first AI chatbot) or Harold Cohen's AARON (the first AI painter). I also couldn't cover some of the ethical concerns about AI; although my thesis isn't primarily focused on those ethical aspects, it's hard to talk about AI without covering them at least briefly.
What I Read
Reading again took a bit of a backseat this week so I could focus more on the video project; I mainly continued reading Transcending Imagination for a couple of chapters when and where I could.
In Chapter 3, Form Shapes Perceptions, Manu emphasizes that form is more than structure: it's the framework that shapes how we see and interpret reality. AI and virtual reality accelerate rapid iterations, allowing artists and designers to experiment and create new realities at unprecedented speed. Personally, I think this is a played-out view of AI that can undermine its value a little. It harkens back to the age-old quality vs. quantity tension, where now more than ever people can produce visually interesting things, but the quality might not be there, as evidenced by the AI "slop" that permeates social media nowadays. However, there is value in this democratization of image making, which relates to the "Image Space" section of my video: playing with AI to rapidly make images lets you throw lots of ideas at the wall, see what sticks, and run with it further. In this sense AI can act as a sort of filter of ideas, and an unexpected idea maker, which he expands on in Chapter 4.
Chapter 4, Incidental Beauty, explores beauty as something that often emerges unexpectedly rather than being deliberately designed. While art and design can intentionally align intention and perception to produce beauty, incidental beauty arises from chance encounters like "a spider’s web in the dew or graffiti illuminated by sunset—that surprise and provoke wonder." This is the art of noticing, and these moments of unplanned beauty challenge fixed aesthetic beliefs and encourage mindfulness, urging us to see the world with fresh eyes. Manu argues that AI-generated art also creates opportunities for incidental beauty by producing unanticipated results, expanding aesthetic diversity and reminding us to remain open to surprise and transformation in our experience of art and the world. I bring up this concept in my video too, and I think it relates well to play, as exploring with AI in low-stakes and fun environments can lead to something greater.
Where the Next Steps are Leading
Immediately, there are a couple of things I need to finish with this video, and I did see your (Alex's) email with the IRB form that I need to fill out, which inevitably leads me to honing in a little more on the aspects of the interviews and workshops I need to conduct: things like the audience I'm working towards and the questions I need to ask. After this, into October and November, is deliverable crunch time, where I will focus on more project-based work, like a longer-form video or TouchDesigner projects. Additionally, on the research side of things, I want to focus on drafting solid interview questions first after IRB.
This week I focused on creating my submission for the AI poster competition at Iowa State. The poster I made is titled "The Emergent Garden", and it speculates on a garden that shows the real-time evolution of flora, inspired by some of the readings I've done that draw parallels between AI and biological evolution. I think this was valuable for my thesis as a way to actually document an AI workflow process that involves creativity and play, since the submission required a "diagrammatic" process explanation with charts, flows, etc. Additionally, crafting the artist statement got me thinking about additional directions my thesis could go inside the realm of play, creativity, and AI, specifically as it regards human connection: with each other, with technology, and with our shared ecosystem as a whole. The poster and artist statement are below.
Artist Statement:
The Emergent Garden is an AI data-driven poster that imagines creativity as a living, evolving ecosystem. Inspired by parallels between artificial intelligence and biological evolution, it explores how new forms can emerge without conscious intent yet still hold deep meaning for us as humans.
Creativity is often seen as uniquely human, tied to intentionality, consciousness, or free will. Yet one of the most creative processes in nature, biological evolution, has none of these traits. Natural selection shows that beauty and novelty can arise without intention. Similarly, AI generates infinite variations, not to “survive,” but to spark reflection in those who encounter its outputs.
The Emergent Garden speculates on a space of care and curiosity. AI’s constant retooling and reprompting reveal an evolution of images in real time. The poster envisions a garden that algorithmically grows unique flora, never quite the same, offering a shared experience that reminds us of our role as co-creators and caretakers of technologies, ecosystems, and each other.
By blending organic and artificial, the garden suggests creativity is not only invention but also tending to connections: between human and nonhuman, natural and digital, self and community.
At the time of writing this post I haven't quite finished charting out the process, but it involved using edge-detection data from the AI-generated flowers to blob track and connect points across the canvas. It followed a similar path to the process chart I demonstrated way back in the initial presentation: using pre-processing primitives to "draw" the shape of the flowers before the AI enhanced them, weighting the AI accordingly to get the desired look, feel, and "vibe" I wanted, then post-processing in Photoshop to add the more static text elements, such as an overlay of the blob-tracking script, which touches on the transparency and tech literacy I am working towards in my overall thesis.
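To make the edge-detection-to-blob-points idea a bit more concrete, here is a rough sketch of it in Python with OpenCV. To be clear, this is not the TouchDesigner setup used for the actual poster, just an illustrative approximation, and the filenames are hypothetical:

```python
# Illustrative sketch only (the actual poster process ran in TouchDesigner):
# edge-detect an AI-generated flower image, find blob-like points, and connect them.
# "flower.png" is a hypothetical input filename.
import cv2

img = cv2.imread("flower.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 100, 200)  # edge detection
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Use contour centroids as the "blob" points to connect across the canvas.
points = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        points.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))

# Draw lines between consecutive points over the original image.
overlay = img.copy()
for i in range(len(points) - 1):
    cv2.line(overlay, points[i], points[i + 1], (255, 255, 255), 1)

cv2.imwrite("flower_connected.png", overlay)
```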
What I’ve Read
Reading took a little bit of a backseat this week, but a book I've discovered called "Transcending Imagination" is very interesting, and touches on how creativity has changed and will continue to change in the AI era. The author argues that all art is inherently artificial, shaped by human intention, and that AI complicates traditional boundaries between the “natural” and “artificial.” AI, he suggests, should not be feared but embraced as a collaborator that expands human imagination rather than diminishes it. Creativity is reframed through the cycle of intention, articulation, and manifestation, with AI extending human intent into forms that often exceed what creators can imagine. By moving beyond reliance on archetypes, such as familiar design patterns like chairs or cars, AI allows artists and designers to explore radically new possibilities. Rather than threatening creativity, it introduces unpredictability and beauty that challenges our perception and deepens our understanding of art’s purpose.
This can be related to play as a means of expanding our horizons with AI and creativity. With play, I feel like breaking away from structured "reality" and archetypes allows for more imaginative creations, which can then be reintroduced to those archetypes, opening the door to more possibilities than would appear without play, or without play with AI. This is only from the first chapter, and I'm excited to read more.
Where the Next Steps are Heading
Aside from the immediate next steps, like continuing to read "Transcending Imagination" and finalizing the process diagrams for the poster submission, my main next step is to further formalize the workshop and interview processes. I submitted my IRB earlier this week, so pending that I can continue defining how those look and how they fit into my thesis, such as honing in on an audience like we talked about in class on Wednesday. Additionally, we have the video project coming up, which I think will take the form of an explainer / video-essay style video consisting of motion graphics that break down some of the bigger AI concepts into digestible information (on the topic of transparency and AI literacy). I think there is a way to tie the Emergent Garden piece into it as well: since it was made in TouchDesigner, it allows a motion aspect to be explored further, as opposed to a static poster, which can highlight the real-time evolution the poster speculates on.
Bibliography
Manu, A. (2024). Transcending imagination: Artificial intelligence and the future of creativity. CRC Press.
This week I did a bunch of reading and note-taking during our work days in class. Outside of class, I did some experimenting with the AI agents that can read and write files on my computer, in an attempt to make creative works as autonomously as possible. This had limited and mixed results, but it could be something to explore in different ways in the future.
What I made
One solid accomplishment I made this week was getting my academic plan approved. Maybe accomplishment is the wrong word, since it's as simple as listing all the classes I have taken, and the 3 classes I will take next semester. But still, it's a thing crossed off the list and one less thing to have on my mind over the coming months.
Now, as far as something tangible, I managed to get those AI agents working in some capacity on my computer to output some simple art files, which was an interesting process. It was simple to get the API keys for Claude's Sonnet 4 model and run a "hello world"-esque test, but achieving autonomous artistic creativity was a much more challenging prospect, which was to be expected given the current state of AI. However, getting mostly "un-prompted" images was achievable, in the sense of getting the AI to make image files of its own design. I still did have to prompt it, which is an inherent aspect of AI currently, but in wording the prompt I aspired to give it as much agency as possible to iterate on its own work for as long as I could run it.
The prompt:
"You are an artist agent. Be maximally creative, clever, and unique. Create and evolve generative artwork with python scripts. Write 2 python scripts that each create a file called art1.png and art2.png, then run the scripts, then 'look' at both image files. choose your favorite of the two, then create successor artworks that overwrites the previous python scripts with variations/improvments on the previous. Also, create a helper script that runs both python scripts, produces the art files, then combines them into a single art.png file that stacks both images on top of each other so I can see both. This combined art file should be the one you use to read and select a favorite. Repeat this behavior endlessly: create, observe, select, modify, over and over. Do each with separate agentic actions, do not just write one script that runs forever. Do not stop ever or ask for approval. The generated images should be high resolution, 1024x1024, and be as unique, creative, and beautiful as possible."
The hope in having it run python scripts to create the images and then compare them was to let it operate in a lightweight, repeatable environment it could look back on easily, since it's not actually 'looking' at anything, just analyzing the image contents and its own scripts. Of its own accord it installed the matplotlib python library to write and render equations that could be seen as "artistic". It was only able to do so much, though, and got through 2-3 images per run before getting stuck. The images it made in a limited time frame are below:
Unfortunately, it kept getting stuck before reaching the sort of self-iteration and evolution I was hoping for, but I think even in these abstract images there is something to say about how the model defines its own artistry through scripts of its own design. Something I read about in class was the concept of image space. Constrained to a 1024 x 1024 RGB pixel space, where each pixel can take one of 256 values for each of red, green, and blue (16,777,216 possible colors per pixel), the total number of possible unique images under this format is 16,777,216^1,048,576, a number with over 7.5 million digits. For comparison, the number of atoms in the observable universe is around 10^80.
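The arithmetic behind that first figure is simple enough to check with a quick back-of-the-envelope calculation:

```python
import math

# Back-of-the-envelope size of the 1024x1024 RGB "image space":
# 256 values per channel, 3 channels per pixel, 1024 * 1024 pixels.
pixels = 1024 * 1024
colors_per_pixel = 256 ** 3                      # 16,777,216 distinct colors per pixel
digits = pixels * math.log10(colors_per_pixel)   # exponent of 10, i.e. roughly the digit count
print(f"Roughly 10^{digits:,.0f} possible images")   # about 10^7,575,668
```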
This is an unfathomable number, but its scale becomes somewhat trivial when the vast majority of the images in image space are just random noise. The Universal Slide Show from The Library of Babel showcases this concept, where 'unique' images are constantly being cycled through, yet each and every one of them looks like this:
So there's obviously a difference when looking at this random noise vs. the images the AI model created. The colors and forms are arranged in a way that suggests spirals, depth, and layers. This makes me ask: in all of image space, is there a subset of "creative" or "valuable" images that distinguish themselves from this noise? This is, for all intents and purposes, probably unanswerable, but it leads to the question of whether it takes a human to define this value. The AI could certainly have spit out some random noise and called it art, but on the surface there seems to be some mutual understanding in charting image space for images that are aesthetically distinct from noise.
What I read
During class time I read a lot about some of the research-oriented processes that will back my thesis. The two areas are practitioner interviews (1-on-1 interviews with artists in the field currently using AI) and a workshop diving into the creative process with AI (using pre- and post-surveys to gain qualitative and quantitative insight into how the process resonates with those using it).
Starting with interviews, common themes and questions among them were diving into interviewees' processes, looking into the positive aspects of their uses of AI (finding enhancing, enabling, opportunistic, or controllable elements within their work), the negative aspects (inhibiting, constraining, hindering, or limiting elements within their work), ethical considerations (risks, critical reviews), and forward-looking questioning (changes in fields, improvements and introductions, evolution). These lines of questioning are important to ask both artists using AI and artists not using AI to get a comprehensive view of the current landscape, and to answer some of the Big W's + H questions: What are people doing with AI creatively, who is doing it, when are they introducing it in their process, how are they implementing it, and what effects does it have on their process and the perception of their work? These are great questions to ask not only others but myself as I go further down the funnel. Two studies in particular that stood out to me were "Exploring Human-AI Collaboration in the Creative Process: Enhancements and Limitations" and "Effects of Implementing AI in Product Development, Focusing on Individual Creative Processes".
On the side of workshops, there are surprisingly few that focus on the actual deliverable of AI artifacts; most of them focus on exploring AI in a collaborative sense to augment an individual's creativity. In contrast to the workshop I ran with design social, there seems to be a lack of "fun" or "play" in these AI workshops when it comes to things like image and video generation, or even mixed forms of interaction like I'd done with hand or face tracking. Maybe this is a gap in research: using AI for fun instead of as a pure productivity enhancer? The product development paper mentioned above ran a qualitative experiment as opposed to a workshop, in which the results touched on human agency and authorship, emotional connection such as motivation and engagement, and perceptions of trade-offs and gains in AI-supported work. These are interesting talking points, which I'm thinking could be explored more, or differently, in a workshop setting as opposed to a qualitative experiment. This also relates to another workshop proposal I read on AI in the creative process, in which the three sections were Serendipity, Collaboration, and Creative Reflection: using the randomness of AI (serendipity) to drive the collaboration on a creative artifact, then coming back together and reflecting on the process.
Where the Next Steps are Leading
From the readings, research, and mini-experiments so far, I feel like I'm getting closer and closer to having a concrete question (or questions) to ask that can drive the rest of the thesis. What questions can I seek to answer through research, interviews/workshops, and the prototypes/projects I make? Some ideas so far (though certainly not limited to these): What is the role of AI in low-stakes creative fun/play? What new definitions does AI take on in this space (genre/material as opposed to collaborator/tool)? How does the idea of image space come into play? Can AI be autonomously creative, and what's limiting it in doing so? How is creativity defined? (Creativity = New + Value? How do we define things that are new or valuable? Can AI really be New-New?) This will be a crucial step in my thesis, one I'm excited to take.
As far as projects go, I would like to continue refining the agents that are "charting image space" and see if I can make them run in a loop for longer than 2-4 images before getting stuck. I also have another project idea that involves using a Muse 2 brainwave monitor on a person to drive AI image generation, which leans towards speculation on future collaboration. For that idea I would start by sketching it out, but before I pursue that avenue the research question would come first.
Bibliography
Martinsson, T., & Svedberg, M. (2025). Effects of implementing AI in product development, focusing on individual creative processes (Master’s thesis, Uppsala University). Uppsala University.
Yamada‐Rice, D., & Mordan, R. (2022, June). Augmenting personal creativity with artificial intelligence [Conference workshop paper]. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM.
Patama, S. (2025). Exploring human-AI collaboration in the creative process: Enhancements and limitations (Master’s thesis, University of Jyväskylä). University of Jyväskylä.
For this week's progress report, one of the main things I made is my timeline for what [I think] my process will look like. I'll cover that, along with a reading and one thing based on that reading that I will explore making over the weekend and into next week, with a plan to show how it turns out in next week's post.
The timeline
A major thing I made this week was my timeline. While, of course, there are still refinements to be made, this was a really good exercise in dumping everything I could think of doing, based on where I'm at, out in one place instead of holding it all in my brain. We talked on Tuesday about writing to write and get thoughts out, and this served a similar purpose in reducing the cognitive load of timelines and such. If I were to present this to my committee, I would clean up the weekly area a bit more and structure it similarly to how Silvia did hers, divvying it up by the type of task. It could also be beneficial to make an Asana board like Carly did, the idea being that Asana could serve as a more structured overview of timeline and progress, while the FigJam could be a living document I continuously come back to: a sort of messy hub as a source of truth for all things thesis related, which is something the inclusion of the funnel supports as well.
Being still in the scoping stages, I haven't quite made any sort of thing yet besides the timeline, the presentation, and the preliminary explorations I presented in it. However, going off this I have made strides and plans for what is to come next, however far away it might be. For instance, I've looked into the IRB exemption forms, which I believe my areas of research should fit nicely into, and have begun filling them out, paired with sending inquiry emails to the IRB people at ISU just to make sure. Pending their return emails at the time of writing, and some more reading and exploration of workshops as a research method, I will hopefully be submitting the form within the next two weeks.
Readings
Two readings have stuck out to me this week, both relevant to where I believe my work is heading. The first is a paper devising an experimental framework for AI and human co-creation, a particularly salient topic for me. The research involved leveraging ChatGPT assistance to produce a physical artifact called "Rest in Pieces", through a newly devised framework "inspired by the Co-Design approach used in the architecture, engineering, and construction industries." The framework echoes the process flow that I devised and presented on, and provides a good reference for expanding on it in a non-linear fashion. An all-around inspiring read: they proposed an initial framework and adjusted it based on their process and work within the project, all framed under their central research question: "How does human-AI collaboration through the Co-Design approach reshape the creative process, particularly in the context of artistic production?" Provided below is the initial framework, followed by their post-production revision.
Both provide inspiration for visualizing the non-linearity of the process; even just the addition of "unforeseen consequences" adds a lot in terms of describing the control one tends to give up when introducing AI.
The second paper is a bit more conceptual, and provides a theoretical framework, a philosophical discussion on the creative abilities of artificial intelligence, and a reflection on the dynamics between the artwork, the art-maker and the art audience. This paper brings up many good points that are top of mind today, such as the authenticity of AI in creating art, what it means to be 'creative' and if AI can be, and an overview of computer generated art up to that point in time. What interested me the most was one of the last chapters, titled "AI-Art as an Autonomous Art Genre". Up to this point, AI as a Co-Creator and AI as a material / medium have been discussed in regards to my thesis, but wrapping it up into a genre as a whole is a very interesting take on it.
To me, labeling it as a genre subverts the idea of AI as a taker of human culture and creative works, instead positioning it as an extension of them, while legitimizing it a little more as something to be analyzed with intent. The author sums it up well in this paragraph:
"The evaluation of AI-art is still bound to human emotions and human taste of aesthetics. Nevertheless, it is a different art form which has computational features. It is a genre that combines the human aesthetic, human culture and analog features of the human mind, with the computational system and digital features of machine intelligence."
I wouldn't say this means that everything generated by an AI is art, especially considering the pervasiveness of low-quality AI-generated videos and images that are ever so prevalent on social media today (AI slop, as netizens have coined it). But then, in the same vein, any person messing around in Photoshop isn't inherently creating art either. This gets more into the philosophical and conceptual ideas of what art is, which this paper also goes into, and while I'm not trying to reinvent the wheel, it's important to consider for this thesis, especially since the definition of AI is something that's come up in discussions around my thesis.
Where the Next Steps are Leading
I have a couple of next steps in line based on what I've read and done so far. On the more bureaucratic and research-oriented side of my thesis, I will continue pursuing the IRB exemption form into next week. While I may still need a little more time ironing out how a workshop will look and be defined in my thesis, at the very least the 1-on-1 practitioner interviews and surveys will fit into the exemptions, based on the form provided in the online portal.
In pushing towards more production, and thinking on the idea of "AI as an Autonomous Genre", I think it would be interesting to push the AI to be as autonomous and open-ended as possible, just to see what its limits are in true autonomy as far as creative image generation and artistic/compositional integrity. AI can write code creatively, a la p5.js, but what if it creates its own system in its own environment? A new thing that has popped up in the AI world are "agents" that can communicate with your computer, a notable ability being that they can read and write files directly on your system. You can have more than one running simultaneously, and even have them working tangentially on the same task. Anthropic CEO Dario Amodei described this potential as "a country of geniuses in a datacenter". I don't think it would produce anything at that level yet, at least not in terms of visual design, but the goal would be to get a sense of where the human fits in by seeing what happens in the absence of the human. I've set up the virtual environment where this would take place; what's next is to get the API calls for the agents I'd use, and to craft prompts that would let them act as autonomously as they can.
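As a starting point, a minimal "hello world"-style call through the Anthropic Python SDK looks something like the sketch below; the exact model ID shown is an assumption, and the real agent setup will need tool use (file read/write, script execution) layered on top of this:

```python
# Minimal "hello world"-style request to the Anthropic API (model ID is an assumption).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello, world."}],
)
print(message.content[0].text)
```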
Bibliography
van der Heijden, S. (2023). Artificial intelligence as a co-creator: Exploring AI’s role in the creative process of visual art (Master’s thesis, Radboud University). Radboud University Thesis Repository.
Costa, A. S. L. (2024). Artificial aesthetic: Exploring the convergence of creativity, artificial intelligence, and human expression in art (Master’s dissertation, Instituto Superior de Contabilidade e Administração do Porto, Polytechnic of Porto).
//about
Ryan Schlesinger is a multidisciplinary designer, artist, and researcher.
His skills and experience include, but are not limited to: graphic design, human-computer interaction, creative direction, motion design, videography, video-jockeying, UI/UX, branding and marketing, DJing, and sound design.
This blog serves as a means of documenting his master’s thesis to the world. The thesis is an exploration of AI tools in the space of live performance and installation settings.