Weekly Report 11

What I Made

This week I made more progress on the Non-Sentences project and took further steps on the other two projects for my thesis: the Emergent Garden Interactive Edition and the Cognition project using the Muse 2 brainwave headband.

First, for Non-Sentences, I've taken more steps toward solidifying its visual aesthetic, which "vibes" toward an old computer monitor: a front-end "terminal" on the right and the network of words on the left. I added some feedback and gamma adjustments to make it more "glowy," and I'll explore further effects like scanlines, warping, and grain to pull it all together.
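
To make that effects exploration a little more concrete, here is a minimal sketch of gamma, scanlines, and grain layered onto a single frame, assuming an RGB numpy array in [0, 1]; in the actual project these passes will live in TouchDesigner TOPs rather than Python, and the strength values here are placeholders, but the underlying math is the same.

```python
import numpy as np

def crt_pass(frame: np.ndarray, scanline_strength=0.25, grain_strength=0.04, gamma=1.4):
    """Apply a rough CRT look to an RGB frame (H x W x 3, floats in [0, 1])."""
    out = frame.astype(np.float32)

    # Gamma lift to push the "glow" -- values below 1.0 get brightened.
    out = np.power(out, 1.0 / gamma)

    # Darken every other row to fake scanlines.
    out[::2, :, :] *= 1.0 - scanline_strength

    # Additive monochrome grain, broadcast across all three channels.
    noise = np.random.normal(0.0, grain_strength, out.shape[:2])
    out += noise[:, :, None]

    return np.clip(out, 0.0, 1.0)
```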

Another addition to this project, which Alex brought up and which I think can tie it together and place it better in the installation space, is a receipt printer that physically prints each sentence and its network. This would primarily take the form of a monochrome black-and-white version of the current landscape layout rendered on the printer (a rough sketch of the printing side follows the list below), and in physical space this achieves a few different things.

1) It adds a tactile layer of interactivity, letting audiences physically see what manifests out of a non-sentence, and it lends the sentences a small permanence: as they fade out of existence on the monitor, they are preserved in their printed forms.

2) It serves as a physical representation of the scale of the "slop" the AI produces. While people would be free to take a receipt with them as they please, I foresee many choosing not to, or no one being around to take them if the system runs autonomously, resulting in a pileup of paper on the ground. This pileup would translate the speed of the potential non-sense filling up the digital realm into physical space, and stand as an image of the environmental consequences of this nonsensical generation. As for the content on the paper: as the system runs and runs, the words would eventually start to homogenize, which mirrors a real concern as the internet fills with AI content. AI is trained on human content, but as more AI generations fill the internet in writing and images, a feedback loop (sometimes called model collapse) can occur where AI endlessly echoes and repeats itself based on what it has already generated.

3) In the same spirit as number 1, the permanence also opens the potential for something else to be created: I can see a poster that lays out the receipts as something even longer lasting, one that touches on calls for sustainability in AI. Turning nonsense into sense, and meaninglessness into meaning.
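
As for the printing side mentioned above, most receipt printers speak ESC/POS, which the python-escpos library wraps. Here is a minimal sketch of what driving one could look like; the USB IDs and the idea of a pre-rendered 1-bit network image are assumptions for illustration, not the final pipeline.

```python
from escpos.printer import Usb

# Hypothetical USB IDs -- replace with the actual printer's vendor/product IDs.
printer = Usb(0x04b8, 0x0202)

def print_non_sentence(sentence: str, network_image_path: str):
    """Print one generated non-sentence plus a mono rendering of its word network."""
    printer.text(sentence + "\n")
    # network_image_path is assumed to be a pre-rendered 1-bit PNG of the network.
    printer.image(network_image_path)
    printer.cut()
```
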
For the Emergent Garden Interactive Edition, I've taken steps to add this level of interactivity to my previously made TouchDesigner network. From what I've already done, I would say the project is about a third complete. The outputs of blob tracking, prompting, and visual code already exist in the project file; the next steps are the gestural interactivity of cultivating the garden with users' hands (drawing shapes with their fingers and hands) and the physical hardware interaction of changing prompt parameters and the AI's behavior with a MIDI pad (a sketch of that mapping follows below).

This dual interaction takes into account the hands that shape and the tools that build. The hands that shape the world around us form the backbone of the garden, with the original primitives that can be physically placed and manipulated, drawn and erased, planted and uprooted. The tools that build, through hardware and software, show the calculable effects the introduction of these tools can have on the garden, ecosystem, and world around us, but are ultimately driven by human intention and intervention. These interactions are the other two-thirds of the project still to be done, and they will take primary focus in the coming weeks.

As for the physical space, I'll have the monitor set up with the interactions showing the final evolving output live, with the three printed posters next to it, both to fill out the exhibition space and, again, to add some permanence to the outputs.
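
For the MIDI pad mapping, here is a minimal sketch using the mido library; in practice TouchDesigner's own MIDI CHOPs would handle this, and the CC numbers and parameter names are hypothetical stand-ins, but the mapping logic is the same.

```python
import mido

# Hypothetical mapping from MIDI CC numbers to prompt/AI parameters.
CC_MAP = {
    20: "growth_rate",
    21: "prompt_strength",
    22: "decay",
}

params = {name: 0.5 for name in CC_MAP.values()}

with mido.open_input() as port:  # opens the default MIDI input port
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_MAP:
            # MIDI CC values run 0-127; normalize to 0-1 for the garden's parameters.
            params[CC_MAP[msg.control]] = msg.value / 127.0
            print(params)
```
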
Finally, for the Cognitive Control project, I just got my Muse 2 brainwave monitor in the mail, so I'm excited to explore it and combine it with the face-tracking / emotional-control project I already have. What I want to touch on with this one is the element of the known and unknown: since AI is so new, it is not fully understood how it can affect our brains, but there is some research supporting its effects on cognition, thinking, etc., both positively and negatively. For a visual, I'd like to overlay some real images of brain scans with a user's real brainwaves, their face tracking, and the AI visual that ultimately comes out of it, drawing from highlighted neurons to create a pseudo data visualization / visual commentary.
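
Since the Muse 2 exploration is just starting, here is a rough sketch of what reading its EEG stream and pulling out an alpha-band value to drive visuals could look like, assuming the headband is already streaming over Lab Streaming Layer (e.g. via muselsl); the window length and single-channel choice are placeholder simplifications.

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop

# Assumes `muselsl stream` is already pushing Muse 2 EEG onto LSL.
streams = resolve_byprop("type", "EEG", timeout=10)
inlet = StreamInlet(streams[0])

FS = 256          # Muse 2 EEG sample rate (Hz)
WINDOW = 2 * FS   # 2-second analysis window

buffer = []
while True:
    sample, _ = inlet.pull_sample()
    buffer.append(sample[0])       # first channel only, for simplicity
    if len(buffer) >= WINDOW:
        freqs = np.fft.rfftfreq(WINDOW, d=1.0 / FS)
        power = np.abs(np.fft.rfft(buffer[:WINDOW])) ** 2
        alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
        print(f"alpha power: {alpha:.1f}")  # a value like this could drive the visuals
        buffer = buffer[FS:]       # slide the window forward by one second
```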

What I’ve Read

Reading this week focused more on play, interaction, and experience as defining terms for what I'm working on as a whole body of work: installations and workshops. The more I read into the following research papers and into Homo Ludens, the more it seems that play is naturally making its way out of my thesis in favor of the cross-section of interaction and experience; some of the ideas of how I personally (and informally) would define play remain, but they are more focused and intentional under these terms.

Starting with interaction: as I read more into it, it seems to share a similarity with play as a concept in being somewhat ambiguously defined in research, which is surprising to me given its inclusion in the name of the field of HCI. There are just as many "towards a definition of interaction," "defining interaction," and "in support of a definition of interaction" research papers as there are ones that substitute interaction with play. Two older but relevant papers, "Towards a Definition of the Term and Concept of Interaction" (Schwaber) and "In Support of a Functional Definition of Interaction" (Wagner), each work toward more explicit, operational definitions of interaction.

Wagner grounds the concept in instructional and learning theory, defining basic interaction in learning contexts as "reciprocal events that require at least two objects and two actions," while also distinguishing interaction from interactivity: interaction as a learning-oriented process of this reciprocal influence between learner and environment, and interactivity as a technological affordance that enables such exchange.

Schwaber takes a more psychological approach, stepping a little away from a strict learning environment, with interaction as "intersubjective co-creation and reflection," again raising these ideas of reciprocal engagement between two or more parties but through a less rigid and pedagogical lens. Her definition emphasizes the co-creation of meaning through mutual influence and recognition between entities; in the case of my thesis, this would be between a person and an AI, either through direct making in the workshop or through experience with the installations.

These definitions are all well and good, but a bit old and not directly related to technology or HCI, which is where Hornbæk & Oulasvirta's "What is Interaction?" fits in nicely, making up for what these older definitions lack beyond their "folk notions." A direct HCI paper, it positions interaction as "conceptual diversity and human-computer coupling," proposing not one but seven sub-definitions under the umbrella of interaction:

Dialogue (turn-taking communication)
Transmission (information exchange)
Tool use (human action mediated through artifacts)
Optimal behavior (adaptive goal pursuit within constraints)
Embodiment (being and acting within a socio-material context)
Experience (emotional and experiential flow)
Control (continuous feedback systems minimizing error)

All of these relate in one way or another to aspects I'm covering in the workshop, or at least one (if not all) of the installations, especially experience, dialogue, tool use, and embodiment. I want to take special note of experience and dive into its own definition as it relates to my thesis, via one of my main source papers, which I brought up in my midterm presentation and which looks at experience as it relates to explaining AI: the aptly named "Experiential AI: Between Arts & Explainable AI."

An overall definition of "experiential AI" in this paper is “An approach to the design, use, and evaluation of AI in cultural or other real-world settings that foregrounds human experience and context. It combines arts and engineering to support rich and intuitive modes of model interpretation and interaction, making AI tangible and explicit.” In other words, experiential AI makes the invisible operations of AI visible through human-centered experience by blending artistic methods with technical insight so people can feel, see, and manipulate how AI systems work.

The definition of experience in this paper is twofold. One is derived from experiential learning theory (Kolb, 2014), where experience becomes a medium for knowledge creation, as people learn through active involvement, reflection, and feedback (seen in what my workshop is trying to achieve).

The second definition is experience as aesthetic encounter (seen in my installations). This frames experience as directly graspable engagement, where audiences emotionally and cognitively interact with AI systems and artworks. These experiences foster understanding, critical reflection, and emotional connection.

Taking all of this together, I've arrived at a working synthesized definition of these concepts as they relate to my thesis:

Interaction and experience are understood through the lens of experiential AI, where understanding AI arises through embodied engagement, creative co-creation, and affective reflection. Experience is both process and outcome, a space where literacy is gained through exploration, experimentation, and encounter. Experiential methods make AI systems legible, tangible, and emotionally resonant, transforming technical clarity into situated learning. Interaction functions as the medium through which this learning unfolds, a feedback loop that connects human curiosity, aesthetic interpretation, and algorithmic behavior, ultimately fostering technical and critical AI literacy.

This is very much a working definition, and something I will likely bring up in my committee meeting; since the workshop and installation projects will be happening regardless, this remains one insecurity in shaping and defining their overall shared theme.

Where the Next Steps are Leading

Next steps are to continue the installation works. Now that I have the Muse 2, I can work on that project directly instead of just sketching it out; I'll start with the basic interaction and build up from there (as seen in the process diagram I've continued to bring up). I'll need to reach out to see if I can secure the space in the College of Design by next week for sure, and I'll need to ask Carly how to do that. Then I'll round out my presentation for the committee meeting and gather any lingering questions or insecurities about the thesis itself.

Sources

Hemment, D., Murray-Rust, D. S., Belle, V., Aylett, R. S., Vidmar, M., & Broz, F. (2024). Experiential AI: Between arts and explainable AI. Leonardo, 57(3), 298–306.

Hornbæk, K., & Oulasvirta, A. (2017). What is interaction? Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 5040–5052.

Kolb, D. A. (2014). Experiential learning: Experience as the source of learning and development (2nd ed.). FT Press.

Schwaber, E. A. (1995). Towards a definition of the term and concept of interaction. The International Journal of Psycho-Analysis, 76(3), 557–566.

Tawfik, A. A., Gatewood, J., Gish-Lieberman, J. J., & Kinzie, M. B. (2021). Toward a definition of learning experience design. Educational Technology Research and Development, 69(6), 2941–2962.

Wagner, E. D. (1994). In support of a functional definition of interaction. The American Journal of Distance Education, 8(2), 6–29.
