This week I have a bit of progress from last week to cover as well, since I missed the post deadline (oops!), plus a full committee meeting to discuss my thesis with my committee members. I'll start with the two projects I made a lot of headway on and what I still need to do for them, then close the post by summarizing and reflecting on the committee meeting.
How Original
On the How Original project, a lot of progress was made on the logic and performance of the interaction itself. Every iteration whittles down errors and increases the fidelity of the matching system, making it more and more accurate to the live user's poses. There is also a clear, pretty much causal relationship between the quality of the dataset and the quality of the match-up performance, so at this point in the project the dataset is what needs the most focus: gathering a great deal more images (aiming for a couple thousand) to allow breathing room for all the possible poses, and accounting for the reality that many historical photos are low resolution, sometimes don't include a full body, and come with variables like lighting and multiple people that make it hard for machine vision to accurately find all the joints on a body. I've basically automated the process of finding joints, but I wonder if it will be necessary to build some sort of manual process for me to place joints on a photo when the automation goes wrong.
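The post doesn't specify how the matching is implemented, but the core idea of comparing a live pose against a dataset of archival poses can be sketched roughly like this. This is a hypothetical illustration, not the project's actual code: it assumes each pose arrives as an array of (x, y) joint coordinates (as a detector like MediaPipe or OpenPose would produce), and it normalizes away position and scale so a match depends only on the shape of the pose.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center the pose on its mean keypoint and scale to unit size,
    so the score ignores where the person stands and how large they
    appear in frame."""
    kps = np.asarray(keypoints, dtype=float)
    centered = kps - kps.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def pose_distance(live_kps, archive_kps):
    """Mean per-joint distance between two normalized poses;
    lower means a closer match."""
    a = normalize_pose(live_kps)
    b = normalize_pose(archive_kps)
    return float(np.linalg.norm(a - b, axis=1).mean())

def best_match(live_kps, dataset):
    """Index of the dataset pose closest to the live pose."""
    scores = [pose_distance(live_kps, d) for d in dataset]
    return int(np.argmin(scores))
```

Under this kind of scheme, a low-resolution photo where the detector misplaces even a couple of joints shifts the whole normalized shape, which is one concrete way the dataset quality ends up bounding the match quality.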
Malicious Sycophancy
In contrast to the sort of collective connection and shared experience with humanity as a whole in "How Original", the "Malicious Sycophancy" project takes the opposite direction, creating an experience that uses a user's brainwaves to produce an over-generalization in the guise of a solely individual experience.
This project centers a more intimate—and potentially harmful—dynamic: how AI can manipulate perception and mental health. Cases of “GPT psychosis” are growing, with AI gaslighting people into believing they’ve unlocked universe-altering equations or other delusions. This risk is amplified by AI sycophancy, where models over-agree to maintain engagement and positive experience, even at the expense of user well-being—now potentially affecting nearly half of adults and a majority of teens who report using AI for emotional support.
The project draws from things like Rorschach tests and astrology, practices that are not valid and are generally considered pseudoscience yet still shape people's lives and decision making, with AI now being added to that list. In this instance, the user puts on an EEG monitor that scans their brainwaves in real time, mapping them to the "emotions" they are feeling. That data and those emotions are fed into a custom ChatGPT instance, which gives them a totally unreliable psychological analysis. Every aspect of this is initiated when a user makes the choice to put on the headband, with the visuals all being drawn from that data.
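The brainwaves-to-"emotions"-to-prompt pipeline could be sketched roughly as follows. Everything here is a hypothetical stand-in, since the post doesn't name the EEG device or the mapping: it assumes the headset reports relative power in the standard frequency bands (alpha, beta, theta), and the thresholds are arbitrary placeholders with no clinical meaning, which is rather the point of the piece.

```python
def label_emotion(bands):
    """Map relative EEG band powers to a coarse 'emotion' label.
    `bands` maps band name -> power. The cutoffs are arbitrary
    placeholders, not a validated model of emotion."""
    total = sum(bands.values()) or 1.0
    rel = {name: power / total for name, power in bands.items()}
    if rel.get("beta", 0) > 0.4:    # high-frequency activity
        return "stressed"
    if rel.get("alpha", 0) > 0.4:   # resting-state activity
        return "relaxed"
    if rel.get("theta", 0) > 0.4:   # slow-wave activity
        return "drowsy"
    return "neutral"

def build_prompt(emotion):
    """Fold the label into the text sent to the chat model."""
    return (f"The participant's brainwaves suggest they feel {emotion}. "
            "Give them a sweeping, confident psychological analysis.")
```

A shaky signal gets flattened into a single confident word, and the language model then elaborates on that word with total assurance; the over-generalization is baked in before the chat model ever responds.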
Visually speaking, this one has the most stylistic work left to do. As Alex said in the meeting, exploring things like color, motion, and type will be important as this takes more shape over time, specifically adding more distinction in the output / takeaway to give it some variability compared to the other projects. This could be highlighting the chat output? Adding an audio element? Things to consider, but importantly the backend interactions are solid and I have a good understanding of how the brainwaves appear on screen and how that data can be manipulated visually.
Committee Meeting
Rounding out this post, I had my first, very late, committee meeting today, which I think went very well. On a spectrum from "the entire thesis unravelling and falling apart" to "no changes, see you at your oral defense", the sense I got is that the direction is good, it makes sense, it's clear what I am doing next, and the meeting covered some of the lingering questions I still had. We covered good ground on what research looks like in these circumstances with the workshop, the newly defined expert interviews, and beginning to plan out in more detail what the exhibition looks like and how to lay it out as a physical space.
On the interviews, I initially thought that I had to get them through IRB to be anonymized, but Alex made it clear that I can use them much more openly, being able to quote them and actually show their work to make some of my points and positions clearer to the thesis audience, which is definitely very helpful for me.
Second, the framing of the workshop: the actual research data generated by participants is taking less of a central role, while what I'll actually present for my thesis focuses more on how it went, an assessment of the approach, what worked, what didn't, and how future iterations could be improved upon. That is much clearer to me now as I plan the workshop over winter break.
We also talked about some considerations for the exhibition: planning out the space, thinking about how certain things will be presented and where they will be placed, and considering where backdrops can / should go in order to limit the number of people who could potentially interfere with the motion tracking. I'll cover this more in the final reflections / next steps post, since I'm writing this pretty much immediately after the meeting; with a little more time to think and reflect I can be much more prescriptive about the next steps.
Next Steps
In this sort of transitional period between design finals week and break, I want to gather and reflect on everything I've done and what I will do. I have the workshop outline, so I know at a high level what that structure will look like, and I have an outline of my written portion that, while I haven't looked at it for a little while, is more of a starting point than starting from scratch. This will also cover all the materials I'll need for both the exhibition and the workshop: things like cameras, cables, mats / tape, and monitors for the exhibition, and USB devices for the workshop so people can take their projects with them. Again, I'll expand on this more in next week's post, but the synthesis is that I have a game plan I am ready to act on for winter break and into spring.
Ryan Schlesinger is a multidisciplinary designer, artist, and researcher.
His skills and experience include, but are not limited to: graphic design, human-computer interaction, creative direction, motion design, videography, video-jockeying, UI/UX, branding, marketing, DJ-ing, and sound design.
This blog serves as a means of documenting his master’s thesis to the world. The thesis is an exploration of AI tools in the space of live performance and installation settings.