This week I did not do a ton of design work; instead, I focused on finalizing my research protocols for the workshop and interviews, getting them submitted, and having them almost immediately approved through the IRB exemption review wizard. Compared to the versions of the interview and pre/post survey protocols I shared in the last report, I was able to refine and trim the content considerably. For the interviews, I reduced the master list down to 11 questions that vary in the time they take to answer, ranging from longer-form process explanations to questions that can be answered relatively quickly. This is both to be respectful of participants' time and to give me room to find a suitable number of practitioners using AI; a 30-45 minute interview should let me recruit more participants than one running an hour or more. I am aiming for 3-5 practitioners across various fields in order to get varied responses about their outcomes in playing with AI and learning about AI and technology in their creative process of making.

For the pre/post workshop surveys, I restructured the questions to make them more suitable for comparing data before and after the workshop (some questions are 1-to-1 between the pre and post surveys, making direct comparison easy), and I rewrote them to be more neutral and less leading, so as not to assume outcomes from the workshop itself. Alex also suggested using a 1-4 scale for the Likert-style questions to actually push participants over a threshold, as opposed to offering a neutral option that leaves little room to measure baseline improvement or regression. The baseline would be a participant choosing the same answer to a question before and after the workshop. Now that this is out of the way, I can start planning more of the workshop itself: sourcing materials, refining the slide/workshop content, and finding participant candidates for the interviews.
With that out of the way, I also began sketching for the design project portion of my thesis. I completed a couple of sketches, maybe a few less than I was hoping for, but it's still good to get out the ideas I've been putting off for the sake of scoping down. I'll post the ones I got the furthest with at the time of writing and briefly explain the ideas behind them; keep in mind they are really rough. As a whole, these prototypes and what comes after will seek to communicate and teach the people engaging with them about aspects of AI that aren't necessarily technical. While each one so far still includes technical aspects, in the larger conversation around AI it is important to learn about its effects, whether psychological, cognitive, or conceptual. In addition to the pieces themselves, I think having some standees or something equivalent can help communicate the ideas, similar to the artist statements you'd see in a museum or gallery. Lastly, with all of these I think it is important to include some form of process within the final design, either as a side-by-side network that showcases what's going on or as an abstracted visual. This would further increase transparency and learning about what the models and processes are actually doing behind the scenes, by bringing them to the foreground.
First is the idea of using a brainwave monitor as a conduit for driving AI image generation. The monitor (a Muse 2) would connect to a phone app that produces OSC data for the four main brainwave bands (alpha, beta, delta, and gamma), which can be wirelessly communicated to TouchDesigner and, from there, to an AI image model. Aside from being an interesting and interactive installation idea, this could communicate some of the cognitive effects AI can have on the people using it, and show that the beginning of the cognitive chain is the user and how they interact, or don't interact, with an AI. This could also be expanded with more interaction modes beyond the brainwave monitor: with a webcam and machine vision setup, body and motion detection could be combined with it to provide whole-body interaction, mind and movement together in combination with an AI model. In a bento-like layout, you'd see your brainwaves next to the visual they are helping to create (and possibly the body-tracking nodes, if that is included).
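To make the pipeline a little more concrete, here is a rough Python sketch of just the mapping step: it pretends one sample of the four band powers has already arrived over OSC and maps it to image-model controls. The parameter names ("guidance", "denoise", "seed_drift") are placeholders I'm making up for illustration, not a real TouchDesigner or model API.

```python
# Hypothetical sketch: map one sample of Muse band powers to image-model
# controls. The OSC transport and the image model itself are not shown.

def map_bands_to_params(alpha: float, beta: float, delta: float, gamma: float) -> dict:
    # Normalize the four bands so the mapping is relative, not absolute.
    total = (alpha + beta + delta + gamma) or 1.0
    a, b, d, g = (x / total for x in (alpha, beta, delta, gamma))
    return {
        "guidance": round(4.0 + 8.0 * b, 2),   # focused beta pushes prompt adherence up
        "denoise": round(0.3 + 0.5 * d, 2),    # drowsy delta blurs the output more
        "seed_drift": round(100 * g),          # bursts of gamma jump the seed around
        "calm": round(a, 2),                   # alpha as an overall "calmness" signal
    }

# One fake sample: a calm, alpha-dominant reading.
print(map_bands_to_params(alpha=0.6, beta=0.2, delta=0.1, gamma=0.1))
```

The point of the sketch is the foregrounded process: the same numbers shown in the bento layout next to the image would be the ones literally steering it.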
Second is the concept of a "non-sentences" project. This one would not inherently use an AI image generation model, as it is primarily focused on pulling back the curtain and making the processes of text generation more apparent. The idea is to gather a database of words, separate them by where they would fall in a common sentence structure (articles, nouns, verbs, adjectives, prepositions, etc.), and have them semi-randomly form a sentence. This would visualize, in an abstract way, how text generators work by predicting words and characters to make things that sound like sentences to us, pretty convincingly for the most part, while also showing where things can go wrong with hallucinations that half make sense or are just nonsense, revealing that behind convincing words and images is just an algorithm at heart. Again, there would be an accompanying visual of the process behind the generation, envisioned in the sketch as a network that links the words together as they are spit out. While there is a structured process, its nature can still output nonsense, or "non-sentences".
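As a quick feasibility check, the semi-random assembly step can be sketched in a few lines of Python. The vocabulary and sentence skeletons here are tiny made-up stand-ins; the real piece would pull from a much larger word database.

```python
import random

# Hypothetical mini-vocabulary, bucketed by part of speech.
VOCAB = {
    "article":     ["the", "a", "every"],
    "adjective":   ["latent", "quiet", "recursive", "broken"],
    "noun":        ["garden", "sentence", "algorithm", "mirror"],
    "verb":        ["generates", "forgets", "predicts", "dreams"],
    "preposition": ["beneath", "inside", "against"],
}

# Common sentence skeletons: each slot gets filled semi-randomly,
# mimicking how a text generator strings plausible tokens together.
TEMPLATES = [
    ["article", "adjective", "noun", "verb", "article", "noun"],
    ["article", "noun", "verb", "preposition", "article", "adjective", "noun"],
]

def non_sentence(rng: random.Random) -> str:
    slots = rng.choice(TEMPLATES)
    words = [rng.choice(VOCAB[slot]) for slot in slots]
    return " ".join(words).capitalize() + "."

rng = random.Random(42)
for _ in range(3):
    print(non_sentence(rng))
```

The output is grammatical in shape but indifferent to meaning, which is exactly the gap the piece wants to expose; the accompanying network visual would light up each slot-to-word link as it fires.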
The last sketch, which I did not get too far into, involved creating a more interactive form of the Emergent Garden (or Emergent Ecosystem) through things like MIDI pads or a drawing tablet. I'm on the fence about making this piece interactive, however, as part of the idea behind it was drawing parallels to the unconscious algorithm of biological evolution, where the only influence is the underlying search for favorable mutations for survival. On the other hand, an interactive component could lend itself to the idea of cultivating curiosity and care, highlighting where our choices cause chain reactions that evolve depending on the circumstances. Regardless, I think the Emergent Garden poster series as an evolving standalone piece with no interaction can still fit well within my thesis as a conceptual learning piece on what value AI images can have in the context of image space and the evolution and iteration of ideas. Even that alone would involve a little more work, as the pieces seen in the video I made involved a bit of trickery, due to time constraints, to showcase the concept; making them actually run live for viewing will require some reformatting.
Overall, I think there are some good ideas here that I will have fun exploring next week. I plan to tackle the "non-sentences" project first, as I think I can build it relatively quickly with the resources I currently have on hand.
What I Read
Reading is taking a further backseat to the actual creation of things, and will probably continue to do so, but I have kept up slightly with "Transcending Imagination," getting through chapters 5-6.
In Chapter 5, Bias and Creative Intent, the book explores how artistic creation is a constant negotiation between skill, intention, and limitation. Every decision, whether of form, color, or concept, involves compromise and reflects the artist's biases, shaped by experience and perception. The chapter contrasts this with AI-generated art, where algorithms operate free from personal tendencies but are guided by data and semantic intent, forming their own kind of systematic bias. It argues that AI changes the artist's role from sole creator to facilitator, fostering a symbiotic relationship in which humans guide and interpret machine output. This challenges traditional notions of authorship and originality while expanding the scope of creativity. Bias, both human and algorithmic, is reframed not as a flaw but as a force that can inspire transformation and reveal hidden assumptions within creative intent. This dual bias is something I would definitely like to explore in future sketches for one of my prototypes.
Chapter 6, Maximizing Creativity, focuses on how AI enhances the articulation of human intent and amplifies creative capacity, discussing how generative systems allow for deeper and more precise expression of ideas, acting as bridges between abstract thought and manifestation. The author argues that by expanding the language of creation, AI strengthens our ability to communicate emotion, philosophy, and narrative with greater clarity and depth. The chapter concludes that AI doesn't diminish creativity but multiplies it, transforming how humans express themselves and reinforcing the shared relationship between technology, thought, and imagination. I think there is some validity to this, but in a way all of AI is predicated on human ideas, so I am unsure to what extent it multiplies human creativity; that is an area of thinking my workshop plans to tackle.
Where the Next Steps are Leading
I've outlined a couple of next steps already, as they follow pretty naturally from where I'm at right now. To summarize: I want to confirm the practitioners using AI that I would like to interview. I already have a list of people I can reach out to, but there's never any guarantee they will respond or be interested in an interview, which is why the participant target is 3-5, to account for the possibility of people not responding. Aside from that, my main focus for the rest of the month is developing these prototypes further from their sketches, so I can enter November with solid pieces to include in the thesis. I will also begin drafting the written portion of the thesis, starting with the literature review and working from there. Nothing super finalized, just putting the pieces in place.
Ryan Schlesinger is a multidisciplinary designer, artist, and researcher.
His skills and experience include, but are not limited to: graphic design, human-computer interaction, creative direction, motion design, videography, video-jockeying, UI/UX, branding, marketing, DJing, and sound design.
This blog serves as a means of documenting his master’s thesis to the world. The thesis is an exploration of AI tools in the space of live performance and installation settings.