What I Made
This week I primarily focused on narrowing down my research protocol for the expert interviews and the workshop, the latter including a pre- and post-survey to assess participants' outcomes around the interpretability and explainability of AI tools and systems. I needed to do this soon for the IRB and for my thesis as a whole, but it also led me to rethink my research questions and refine them into something "stickier" that elicits and reinforces the two big aspects of my thesis: the body of work I'll make, and the workshop itself. This felt like a more natural way of arriving at refined, solid questions than brute-force thinking up questions for the sake of it: what questions can I actually answer by interviewing practicing artists who use AI and by conducting a workshop centered on playing with AI?
The question I think will be answered mostly by the interviews and workshop is: Can experimental play and creation with artificial intelligence (AI) tools and systems make said tools more interpretable, explainable, and transparent?
I think, firstly, by learning about the processes of current practitioners, I can gauge how much they "play" with AI and what outcomes they've had with it, such as how much they've learned about AI and technology by making with it, or any hard or soft skills they've gained or deem important when using AI. A preview of a few of the questions:
Describe your process from start to finish, from conception to final output. Explain when, where, how, and what AI was used.
What, if anything, have you learned about AI itself (its limits, strengths, inner workings) from making creative work with it?
Did engaging with AI change your understanding of technology or computational systems more broadly? If so, how?
I'll need to reduce the number of questions for this protocol. I plan for the final protocol and interview to take about 30-45 minutes, so I'll either combine or cut some of the weaker questions; the master list so far can be found here.
The pre- and post-surveys for the workshop came much more easily to me. They assess the outcomes participants will have in the workshop, such as their knowledge and perceptions of AI and play coming into the workshop, and their reflection on AI systems after the workshop - including a creative reflection and artist statement for their body of work. With just a little tidying up, these should be good to go; next I'll have to think about how to deliver them in the final workshop (a Google Form is probably easiest). The pre-survey and post-survey protocols are linked respectively.
The second research question, which I seek to explore in the actual body of work through prototypes and creative pieces, still needs a little work. Generally, it would be along the lines of: How can experiential design and artistic practice using AI systems... answer or reveal something about AI. There are a couple of ways I could go about the ending of that question. One is to follow the first question about making AI more transparent and "giving it back to the people," separating it from the institutions that push out these systems, like we discussed briefly in reflecting on the video I made. An alternate leg would be an investigation into things like biases, distortions, and ethical challenges in AI, which are hard to separate from the medium/material/tool/genre and the discussions surrounding it; or AI's potential as a teaching tool for different areas (I'm thinking of a cross-section between the sciences and the arts); or as a means of bringing awareness to social issues. All of these are potentially viable directions, and all are the basis of previously existing research, which I'll cover shortly. I also think the "experiential" design aspect is important to include in what I plan to make, which will involve things like machine vision and body/hand/face tracking, and ultimately a piece using a brainwave monitor - another aspect of research I'll cover below.
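To make the "experiential" part a bit more concrete, here's a minimal sketch of the kind of hand-tracking entry point I have in mind, assuming Python with OpenCV and MediaPipe; the specific libraries, the single-hand setup, and the fingertip-driven parameter are placeholders for sketching, not the final piece:

```python
import cv2
import mediapipe as mp

# Rough sketch: read the webcam, track one hand, and expose the index
# fingertip position as a normalized (x, y) value that a visual or sonic
# parameter in a later prototype could respond to.
mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV frames come in as BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
            print(f"fingertip at ({tip.x:.2f}, {tip.y:.2f})")  # placeholder for driving a visual
        cv2.imshow("hand sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```

The point of a sketch like this is less the tracking itself and more making the mapping from body to model output something a participant can see and play with.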
What I Read
Two papers I read this week, in parallel with the protocol writing, helped me think about my research questions. They primarily focus on the experiential aspect of AI as a way to explain AI (and introduced a term I didn't know before: explainable AI, or XAI) and to raise awareness of larger issues.
The first was simply called Experiential AI, and it set up the term "experiential AI" as a way for artists and scientists to come together and make algorithms tangible, visible, and more interpretable. It argues that art can serve as a bridge between opaque computational processes and human understanding, offering new ways to make reasoning processes decipherable. The emphasis on art's ability to map the "inter-agencies" relates well to my own work and thinking so far, connecting directly to my focus on embodied and playful experiences with AI - things like tracking movement or brainwaves as an entry point into these systems. Published in 2019, it was more of a proposal than a body of work, and the next paper builds on it further.
The second, Experiential AI: Between Arts and Explainable AI, written by the same author(s), builds on that foundation. It critiques the limits of technical "explainability" and suggests that experiential, arts-based methods can go further in making AI interpretable. Their "4As" framework (aspect, algorithm, affect, apprehension) was particularly interesting, and I think it can be used in the further development of the workshop, as opposed to isolated installations or works. It frames experiential AI not just as an artistic add-on but as a methodology: one that connects technical models to embodied human experience and allows people to make sense of AI on multiple levels. Their case studies, like Jake Elwes' The Zizi Show or Anna Ridler and Caroline Sinders' AI is Human After All, are great examples of how art can expose bias, hidden labor, or socio-technical entanglements in AI.
Reading these papers on experiential AI helps me situate my research in the context of what's going on right now in similar veins: I'm not just trying to make AI "explainable" in a narrow sense, but to create experiences, through interviews, workshops, and creative works, where participants can get hands-on with AI through play and alternative interaction and, in that process, make its mechanisms and implications more transparent.
Where the Next Steps are Leading
Immediately next, I need to finish up the protocols to submit to the IRB and get interviews going this month as soon as possible. Like I said, the interviews will be valuable for learning what practitioners have learned about AI through their own experience working with it, over a longer period than the workshop participants will have, assuming participants come in with zero knowledge. After submitting the protocols, while the IRB looks them over, I plan to work on the experiential prototypes, starting out with sketches and then moving into the actual making.