This week was largely focused on getting the Non-Sentences project up and running, and I made really good progress: most of the backend is working, minus one piece to tie it all together. Quickly, on the experience of making it: it's always nice to learn more about design programs, and TouchDesigner is one I am particularly fond of these days. Learning more about it will of course help me talk about it and teach it at a basic level for the workshop, and while some parts of it are easy for newcomers to pick up and start making things with right away, especially with AI involved, this project tested my knowledge on the actual coding and data-structure side of things, which was a nice challenge I've mostly overcome.
To refresh on the beginning of the project vision, the sketch I made initially is just below. The idea is that sentences are generated in real time on a loop (with the words chosen at random, but in a fixed order that fits a "grammatically correct" sentence structure) and displayed on the right side of the screen, while the left side shows the table of words the system can choose from and the network connecting all the words together.
A pretty basic sketch, but it's beginning to take form in the short recording below, where I have the table of all possible choices next to the generated sentences running in real time on a loop.
I think the most interesting one in this video just so happens to relate to the project itself: "Our intelligent network expands into your facilitated language."
Of the progress I made this week, I wasn't quite able to get the network of lines from word to word on the table. What took the most time was getting the sentences to generate word by word and then erase in a loop. As I said before, TouchDesigner gives newcomers the ability to make pretty interesting things relatively fast, but with the nature of this project I ended up using a mostly node-less system, with the bulk of the sentence generation living in a single Text DAT, which took more time than a pure node-based system would have. Below is a screenshot of the tables and the small number of nodes used, and a screenshot of the 'sentence_controller' text file.
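To give a sense of the logic living in that Text DAT, here is a simplified sketch of the build-and-erase loop rather than my actual 'sentence_controller' script; the table names, the 'sentence_out' display DAT, and the slot order are placeholders for illustration.

```python
# Simplified sketch of the word-by-word build / erase loop.
# Assumes one Table DAT of choices per grammatical slot and a Text DAT
# named 'sentence_out' feeding the on-screen text.
import random

# Fixed "grammatically correct" order the slots are filled in
SLOT_TABLES = ['words_determiner', 'words_adjective', 'words_noun',
               'words_verb', 'words_object']

state = {'slots': [], 'index': 0}

def pick(table_name):
    """Pick a random word from column 0 of a Table DAT."""
    table = op(table_name)
    return str(table[random.randint(0, table.numRows - 1), 0])

def start_sentence():
    """Choose one word per slot, in fixed order, before revealing them one by one."""
    state['slots'] = [pick(name) for name in SLOT_TABLES]
    state['index'] = 0
    op('sentence_out').text = ''

def step():
    """Called on each timer tick: reveal the next word, or erase and start over."""
    if not state['slots'] or state['index'] >= len(state['slots']):
        start_sentence()  # erase the finished sentence and roll a new one
        return
    state['index'] += 1
    op('sentence_out').text = ' '.join(state['slots'][:state['index']])
```

In practice step() would be driven by a Timer CHOP callback or an Execute DAT, so the reveal pace (and the pause before erasing) can be tuned from the network.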
While not specifically using any AI model (for now; I'll discuss near-future plans for this a little further down), the conceptual basis of this project is to shed some light on how a chatbot comes up with strings of characters that we interpret as sentences. The combination of a database-driven algorithm and human pattern-seeking intuition gives these strings meaning for us in the form of sentences. Even if they are semi-random (true randomness being a contentious topic in the world of computers), our meaning-making machines of brains decide when these random strings make sense, and even whether they provide some value in the form of poetry. On the other hand, the random nature of the algorithm can spit out sentences that don't make sense: "Non-Sentences", which is where a conversation on AI hallucinations comes in. Chatbot hallucinations make sense in that they are grammatically correct sentences you can read, but the information can be wildly and totally incorrect, which leads back to us and makes our ability to detect and interpret these hallucinations for what they are much more important. This is AI literacy not in the technical sense but in the interpretive sense, and the ability to read AI bogus will become a more important skill to have now that scammers and misleading ads on social media and in the news have already begun working these chatbots into their writing. That's why in the tables I made it a point to use words that AI prefers: Latin-derived English buzzwords of higher-than-normal complexity, like "delve", "underscore", "bolster", and "transformative", to name a few.
To add onto this, the obvious next step in my vision for this project is to have the "network" lines generate from word to word along with the sentence, then erase and start over again with it. This could be a fairly complex step, but from how the tables are set up in TouchDesigner I think I'm in a good position to make it happen relatively quickly. To hone in more on the linguistic side of it (and maybe on how AI is changing our language, as per The Verge's recent blog post and the research behind it (paper 1, paper 2)), I've dabbled with adding a text-to-speech model to read the sentences as they come out. I've experimented with gTTS, a Python library that wraps Google's text-to-speech, which is easy to implement within TouchDesigner since it's a Python-based environment. I've gotten it to "work", but it's pretty glitchy with how the sentence is spoken, currently stumbling over the sentence over and over again at a fast, unintelligible pace. A next step could be getting it to slow down and read word for word, using logic similar to the sentence generation, just going word by word as opposed to combining them all. Another option could be ElevenLabs for speech, as they have plugin APIs that work in TouchDesigner and may offer more intuitive, quicker, and more extensive control.
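For the word-for-word idea, one likely fix for the stumbling is to stop re-synthesizing the whole sentence every time it changes and instead render each word to its own clip once, cache it, and let TouchDesigner pace the playback. Below is a rough sketch of that approach; the cache folder, the 'speech_audio' Audio File In CHOP, and the wiring back into the sentence loop are all assumptions, not what's currently in the project.

```python
# Sketch: pre-render one audio clip per word with gTTS and cache it, so playback
# is paced by TouchDesigner instead of re-running TTS on every sentence update.
# Paths and operator names are assumptions.
import os
from gtts import gTTS

CACHE_DIR = project.folder + '/tts_cache'   # assumed cache location next to the .toe

def word_clip(word):
    """Return the path to an mp3 for this word, synthesizing it only once."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, word.lower() + '.mp3')
    if not os.path.exists(path):
        gTTS(text=word, lang='en').save(path)  # one short network call per new word
    return path

def speak(word):
    """Point an Audio File In CHOP (assumed name 'speech_audio') at the word's clip."""
    op('speech_audio').par.file = word_clip(word)
```

speak() could then be called from the same place each word is revealed, which would naturally slow the voice down to the on-screen pace; ElevenLabs could slot into word_clip() later if its output turns out cleaner.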
Looking briefly towards other projects: for the Emergent Garden I've been thinking about modes of interaction, and I think hand tracking as a form of activation and control would be a good way to expand on the idea of cultivating a technological ecosystem. Using motions like pinching the fingers or flicking the hand, you could place or remove primitive shapes to add to the ecosystem and change the AI's parameters to create new evolutions of plant and animal organisms (see the sketch after this paragraph). Adding this semi-tactile form of interaction lends more to the idea of human intervention in the ethical use of AI, speaking to the balance between control and collaboration. The act of “tending” to the system through motion could represent the ways humans guide, nurture, or even disrupt technological growth. It would also introduce a more embodied and intuitive relationship with the work, allowing the audience to feel physically connected to the generative processes at play. This connection can enhance immersion and reinforce the metaphor of co-creation, where human gestures influence the evolution of an AI-driven environment, mirroring the ongoing negotiation between human intention and machine autonomy.
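To make the gesture side a little more concrete (nothing is built yet, and this assumes a MediaPipe hand-tracking feed, which is a common way to get hand landmarks into TouchDesigner), a pinch can be read as the thumb tip and index fingertip coming close together:

```python
# Sketch: detecting a pinch from MediaPipe hand landmarks.
# Assumed tracking pipeline; the Emergent Garden doesn't have this built yet.
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
PINCH_THRESHOLD = 0.05  # normalized distance; would need tuning per camera setup

def detect_pinch(frame_bgr):
    """Return True if the thumb tip (landmark 4) and index tip (landmark 8) are close."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return False
    lm = results.multi_hand_landmarks[0].landmark
    thumb, index = lm[4], lm[8]
    return math.hypot(thumb.x - index.x, thumb.y - index.y) < PINCH_THRESHOLD
```

A detected pinch (or its release) could then trigger whatever "place a shape" or "remove a shape" logic the garden ends up using, with a hand flick handled similarly by watching landmark velocity over a few frames.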
What I’ve Read
For readings I've continued with Transcending Imagination, which is proving valuable for this project-based portion of my thesis in investigating AI literacy as it pertains to AI's effects outside of technology, especially in this week's chapter (only one this week, as most of my time is now spent on making, as per my thesis timeline).
In Chapter 7, the term “narrated economy” is introduced, which reframes the designer’s role from fabricator to narrator, putting more emphasis on the articulation of intent and meaning as a "new" creative act. I think that ties into where I'm going with my projects and prototypes: the idea that interaction itself can become a form of inquiry, with storytelling, gesture, and experience as tools for making AI’s invisible processes and effects visible. By positioning AI as an active participant in creation, Manu says the designer’s task is no longer to simply use AI but to converse with it, and to narrate, test, and reflect on its responses. Through this thinking, the interface becomes a "site of revelation", where AI’s biases, limitations, and assumptions can be surfaced experientially rather than explained abstractly. This recentering on human-centered, narrative-driven design also reveals the ethical and aesthetic dimensions of the shift. If designers are now “architects of experience,” as he describes, then we must build spaces where people don’t just observe AI outputs but feel and reflect on how those outputs are shaped, which I can see in my own approach to experiential design as a reflective medium, using interaction, visual form, and sensory engagement to expose the underlying mechanics and ideologies of AI systems.
Where the Next Steps are Heading
Of course, in the immediate future I'll need to put together my presentation, which I've outlined fully; it's just a matter of getting everything onto slides and writing a pseudo-script to follow the beats of. After that I'll continue the Non-Sentences project to at least what I had in my original vision of the sketch, expanding on it if it doesn't take too much time in areas like relevant text effects or text-to-speech plugins in TouchDesigner. Then I plan on ordering my Muse 2 for the brainwave project and reaching out to my shortlist of creatives who implement AI technologies in their work for interview times, continuing to work on other projects in the meantime, such as refining the emotive-control project I've already done or adding hand-tracking capabilities to the Emergent Garden.