Dates: 20 April 2023 – 08 June 2023
Group Members: Rebecca Hodge, Tanya Singh, Jakob Prufer, Charlie Hou, Ruoxi Song and myself.
WEEK 1
This project was developed in collaboration with the design studio RGA, London. We were required to design a way for a person to pass as a ‘generative AI’ in an everyday setting. As a head start, RGA provided us with some thought-provoking questions to consider as we moved forward.
- What are the key features of AI-generated content?
- What process of behaviour and interactions would a human follow to imitate an AI output?
- How do you train a human with data sets to influence outputs?
Our first instinct as a group was to understand what AI is, so we conducted secondary research, diving into a number of topics around AI before coming together to share our findings.
Alongside this, we did some bodystorming to gain a more practical understanding of what an interaction between a human and an AI might look like:
This helped us understand how indistinguishable human and AI responses actually were, and I was especially interested in the way the prompts were framed. For questions such as “What are the potential difficulties for a pregnant woman?”, it was interesting to see the responses reflect the ‘politeness’ of an AI model and the ‘humanness’ of a human.
We wanted to take this a step further and also try out some research methods on Siri as an AI-based voice assistant. We conducted two exercises, namely:
- Artefact Analysis: We studied Siri as an artefact while collecting secondary research data on the material, aesthetic, interactive, social, political, psychological, economic and historical aspects of the voice assistant.
- Siri sketches: We took turns drawing on a folded sheet of paper without seeing what the previous person was drawing and the general depiction of Siri was feminine in nature – a 70s wife, colourful, stylish, and old-fashioned.
Feedback received at the end of the week appreciated the number of research methods we had used. We were advised to look at a particular kind of AI tool and then pick apart what our relationship with that tool is like. Instead of focusing on how AI works, we were asked to focus on its implications for society and how it affects people’s lives. Going ahead, we needed to frame our questions well to get the answers we needed, and to identify which characteristics set humans and AI apart. Throughout the process, we were also asked to document the project thoroughly and keep asking where this work lives in the world.
Looking back
- The exploratory start on the brief surfaced some interesting themes, and I was particularly interested in exploring Siri further.
WEEK 2
Our research from the first week helped us come up with a few interesting themes but also made us aware of the broad spectrum of AI tools available. At this point in the project, our larger team of 10 split, and the six of us began working together based on a shared interest in pursuing audio-based AI tools. This led us to begin experimenting with music, sound and voice AI tools.
To understand voice assistants better, we also did another bodystorming exercise:
In this exercise, a human pretends to be an AI without using any assistive AI tools. What we observed was the monotonous tone, respectful responses and composure of the human pretending to be an AI; eventually, when the participant began to tire, he resorted to more human responses since they came more naturally to him.
Intrigued by what we were finding through these research methods, we also began looking at the literature, especially around the distinguishing factors between AI and humans. Some interesting insights from the literature review were:
- A computer can be viewed as a logic device lacking a grounding element; Braga and Logan regard AGI computers as “directionless intelligences because AI systems are tools not organisms” (Braga & Logan, 2017).
- AI Chatbots don’t have any inherent goals they want to accomplish through conversation and aren’t motivated by what others think or how they are reacting (Browning, 2023).
Feedback received at this week’s presentation was that our process was good, but it was not very clear that we were now focusing on sound and audio AI tools. We were nudged to explore people’s understanding of AI, to look at extreme usage scenarios of AI and perhaps attempt to normalise them, and to look for believable scenarios in which a person would actually want to be an AI. Our presentation skills needed work: we were asked to explore ways of sharing media through transcriptions, audio pieces, video and so on, and also to document the analysis of our research well.
Looking back
- At this point we were behind on our research and should have already begun primary research, because we did not get much done this week.
- It was a good decision to separate into two groups and focus on one medium, i.e. audio. It helped us narrow our research towards understanding it better.
WEEK 3
At this point in the project, we went through our brief again and broke it down into two phases, namely:
- Phase 1: Understanding how people perceive AI.
- Phase 2: Deceiving people through that understanding of perception.
After spending the first couple of weeks understanding AI through secondary research and primary research carried out within the group, we realised it was time to venture out for some opinions, perspectives and feelings around artificial intelligence.
Our first instinct was to approach people on the street with some basic questions about AI and voice assistants, and their feelings and thoughts around them. So we went to Granary Square, London, and offered people a cookie for a chat – and yes, it worked! We were able to speak to 20–25 people in total.
After the survey, we realised we were not sure whether the questions we were asking were getting us the information we needed. So we started mapping out our process as a branching topic guide, which helped us identify the areas that needed more research and frame questions accordingly.
Having identified the areas lacking research, we developed a set of questions for directed storytelling to get a deeper qualitative understanding of how people perceive AI.
Since the number of people we could have long conversations with was fairly limited, we condensed our time-consuming questions into shorter ones and circulated them as an online survey. By this point in our research, some recurring themes and patterns had started to emerge amongst our insights:
- A human did not compare to an AI.
- AI would always be a tool, not a companion.
- Most people anthropomorphise voice assistants.
This week we were told that our scope needed further defining, and that we should ask: how much do people actually know? Were people speaking from a place of ignorance? We were encouraged to place this research in everyday scenarios and to start looking at the way we treat AI versus the human attributes that AI mimics. The research methods we had been practising also needed to be communicated visually for better clarity.
Looking back
- The primary research insights were very helpful and pivotal for our entire project. I would have liked to continue doing primary research, but my group mates taught me the importance of knowing when to pause research and start ideating.
- Dividing the project into two phases was very helpful because it made clear what my focus was at any given point in the project.
WEEK 4
The research we had was really good, but it still needed some refining and analysis, although I could also see that we needed to begin prototyping our ideas. At this point, we decided to work in parallel within the group to divide the tasks and keep to our timeline.
We then sat down to analyse and draw insights from all of our research over the past three weeks. Having individual insights from each research method was helpful but also confusing, so using thematic networks as one of our methods proved valuable for collating the research:
After discussing the insights amongst the group, we began ideating and prototyping.
We did a Crazy 4s exercise to get a jumpstart:
A few of the ideas that stood out for us were:
- An AI world – blurring the line between human and AI.
- A dating scenario where you practise your dating skills.
- Soundscapes in a home setting – disguising everyday sounds using AI.
- Cooking with AI – using food as a metaphor for algorithms.
At this point in the week, we had a check-in call with RGA. Their feedback was that bodystorming seemed like an interesting research method and gave unexpected insights. They asked us to try setting up a fake trial test in which you cannot see the human, making it easier for the human to pass as an AI. Another thought-provoking insight from them was that AI voices usually keep the same pitch, whereas human voices vary from moment to moment – so it might be interesting to spot the difference. We were encouraged to explore questions like: Why would a human want to be an AI? In which situation would you trust an AI more than a human? Is there a metric for trust?
Prototyping helped us get experimental and playful with sound and voice. These are some of the attributes of AI that we tried to replicate through physical and digital means:
After this, we decided to explore the attributes of voice itself, such as speed, volume, glitches and repetition.
After these explorations, we wondered how we could take advantage of the physical and digital worlds and merge them to create intrigue and humour. We also created a low-fidelity prototype for acting out voice assistants by means of buttons, along the lines of the sketch below.
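For a flavour of how simple such a button rig can be, here is a minimal Arduino-style sketch of the general approach – the pin numbers, clip mapping and serial messages are hypothetical illustrations, not our actual prototype:

```cpp
// Hypothetical Wizard-of-Oz button rig: each button signals a canned
// voice-assistant response for a hidden operator (or audio player) to
// play back. Pin numbers and clip mapping are placeholders.
const int BUTTON_PINS[] = {2, 3, 4};  // e.g. "greeting", "weather", "joke"
const int NUM_BUTTONS = 3;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_BUTTONS; i++) {
    pinMode(BUTTON_PINS[i], INPUT_PULLUP);  // buttons wired to ground
  }
}

void loop() {
  for (int i = 0; i < NUM_BUTTONS; i++) {
    if (digitalRead(BUTTON_PINS[i]) == LOW) {  // LOW = pressed
      Serial.print("PLAY_CLIP ");
      Serial.println(i);  // the operator plays pre-recorded clip i
      delay(300);         // crude debounce to avoid repeat triggers
    }
  }
}
```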
The idea at this stage was to make a toolkit for a human to disguise themselves as an AI voice assistant, so we made this video to introduce the idea while presenting:
The feedback from this week was that the research was commendable, and the reviewers really liked the depth to which we had gone to gain the required understanding, although some felt we could have done less research and begun prototyping sooner. Nonetheless, there was emphasis on making the scenario believable and situated. Humanity and humour were suggested as strong tools to use going forward. We were asked to take the future of AI into account and make valuable judgements, and advised to continue exploring questions like: What meaning does humour have? Can AI sing? Are there ways to push the limits of audio-based AI?
Looking back
- Personally, I was content with the time we gave to research; it was purely a time-planning decision, and when we did run over by a bit, we decided to divide and conquer by doing research analysis and prototyping side by side.
- The ideation could have been a little more fleshed out: we just sprang ideas from Crazy 4s and did not take enough time to test different scenarios before finalising one.
WEEK 5
This week we conducted a workshop at Central Saint Martins to test our prototype with people outside the college and to try out our dating scenario, which was slowly catching the group’s interest.
We ran a few planned exercises with the participants. One involved listening to a set of audio clips and guessing whether each was human or AI. Another was building a scenario in which AI might be used, and the last was a bodystorming exercise to mimic our dating scenario.
The rest of the week involved a lot of brainstorming, sketching and discussion around what experience to design, how and why.
We concluded by planning to build an experience in which the participant sits in a ‘Dating Booth’ and practises their dating skills with an ‘AI Mirror’. The setting would be casual – a person getting ready for a date in front of the mirror – and this was planned as a live demo as part of the final presentation.
The feedback we received at the end of this week was to make the interaction uniquely human so that it fits into an everyday setting, which would help us draw on our research into how we perceive AI. The dating scenario itself was well received, since it tied the research and the brief together and had humour in it. We had to test this with a live participant and audience. Furthermore, we were asked to document thoroughly and compare this idea with tools that already exist.
Looking back
- The workshop facilitation definitely required more time and planning than we gave it. More of both would have allowed us to build a more engaging experience for the participants and given us enough time to advertise the workshop as well.
- Even though the turnout for the workshop was small, we gained a lot of understanding from those few interactions, and it helped me realise that quantity is not always a marker of the quality of feedback received.
WEEK 6
With just one more week to go, we had to begin testing our outcome, so we invited a few participants to a low-fidelity set-up where we tested the feasibility of what we had planned.
The response from the testing was mostly positive, but there were concerns: participants could hear the sound of the typing, which was distracting; they needed a more suitable environment to immerse themselves in a dating scenario; and the delays between answers were really long.
We tried to tackle these problems through the booth design, while keeping the design simple.
However, just sketching out how we imagined the booth would look was not enough, since the group was not on the same page about how the demo would flow. For this reason, we made a storyboard next:
We received feedback that the demonstration had too many moving parts and that our outcome needed simplifying. The concern was that the execution had become too complex and unnecessarily so, especially considering the time frame left. Wizard of Oz testing was suggested as an alternative, and instead of focusing on ‘practising dating’ we were asked to consider other use cases and a variety of environments in which this mirror could sit. Instead of doing a live demo, we were advised to perhaps shoot a video explaining how this piece of tech works – maybe imagining it already exists among us.
Looking back
- I was quite nervous receiving this feedback with just one week to go. There were just too many last-minute changes, and I was not sure this was a good idea at the time, but I decided to put my trust in the judgement of my groupmates and tutors.
- Looking back, I definitely feel that we should have done more testing with higher-fidelity versions of our prototype as well, especially when the idea changed.
WEEK 7
After the previous week’s feedback, we sat down and brainstormed as a group about how to simplify. We made a priority list of what was essential to the experience and what was not, and got rid of the extra add-ons. Slowly but surely, we arrived at a much simpler outcome.
A few key things about the outcome changed at this point in the project:
- Instead of a live demo, we decided to convey our main concept through a short film to minimise complications.
- Instead of an AI mirror, we decided on an AI light strip that would adapt to any mirror shape.
- We also decided to make the installation super simple yet suggestive and got rid of all the extra parts.
Taking inspiration from the previous demonstration script, we drafted another one for the film, including details of the shots, sequence of events, cuts, dialogue and set design.
The next few days were spent coding the light strip and making the installation ready for the live demo.
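The code itself is not part of this journal, but as a rough illustration of the kind of thing we were writing, here is a minimal sketch for a ‘breathing’ glow on an addressable LED strip – the library, pin, LED count and animation are assumptions rather than our exact implementation:

```cpp
// Hypothetical sketch: a slow cyan "breathing" pulse to suggest the
// AI is listening. Assumes a WS2812 (NeoPixel) strip of 60 LEDs on
// pin 6 and the Adafruit_NeoPixel library – all placeholder choices.
#include <Adafruit_NeoPixel.h>

const int LED_PIN = 6;
const int LED_COUNT = 60;

Adafruit_NeoPixel strip(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show();  // initialise with all LEDs off
}

void loop() {
  // Map time onto a sine wave so brightness rises and falls smoothly.
  float phase = (millis() % 4000) / 4000.0 * TWO_PI;
  uint8_t brightness = (uint8_t)(127.5f * (1.0f + sin(phase)));
  for (int i = 0; i < LED_COUNT; i++) {
    strip.setPixelColor(i, strip.Color(0, brightness, brightness));
  }
  strip.show();
}
```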
The feedback received at the final presentation from classmates, course professors and invited project partners, including RGA, was inspiring to say the least. Our storytelling was applauded, and the skills that went into making the short film were also well received, although some felt that more of the human involvement could have been included in the video. The presentation itself, disguised as a demo of sAIge, helped the audience understand its workings and experience it in person. The links between our research and the outcome were not well presented and needed some more work. The branding of sAIge was also appreciated by some, especially for its consistency throughout the presentation and video. Largely, the audience felt that we answered the brief very well and that we should continue to push our project, and along with it the questions around human and AI interaction, going forward.
Looking back
- Within the past week, I really understood the value of simplification and how that can sometimes really elevate a project. The changes also made the project achievable in the time frame left.
- Shifting from a live demo to shooting a short film, we were able to get past the technical difficulty we were facing with the delays between responses.
- Another thing I learnt is that it is okay to change outcomes and ideas last minute, as long as they are achievable.
- I am really quite pleased with the outcome, but at the same time I find it difficult to stop being critical of it. There is always this lingering feeling of ‘I could have made it better’.