Walt

In a team of three, we designed and prototyped a Conversational User Interface (CUI) for a hotel setting, without relying on screens. This project was completed over the course of three weeks.

How Might We:
Use voice as a form of delightful interface to help guests and provide value to the hotel brand.

Leverage natural conversation and smart error recovery for a stress-free, hands-free experience.

Strictly use language and sound to provide feedforward and feedback, without needing a display as augmentation.

My Role:
Voice Designer, Voice Actor, Interaction Designer.

First, the final prototype of Walt! Then I'll show you how we arrived at this solution.

Initial Assumptions and Collective Brain Dump

We began our first foray into the world of CUIs by seeing what we ourselves knew about hotels. We used an Ecosystem Collection map for this.

Why?
This was essentially a brain dump from each of us to get an idea of what we knew or assumed about the space we were working in.

Insights
Notably, we found that hotel guests are very diverse; many may be accustomed to Siri or Alexa and would expect the system to behave like them.

Could there be a multi-user context rather than a single user? Should the system be PROACTIVE, unlike Siri or Alexa?

Group thought exercise: possible activities in a hotel stay

In this activity, we did a “weekend in the life” of a hotel guest. We drew on our experiences of traveling with our families, as well as our research. We kept our scope broad and tried to explore families, individuals, business travelers, etc. We then narrowed our field down to two types of hotels that could benefit from CUI technology, and what kind of people might stay in them.

Insights
High-end hotels and resorts could very likely be multi-user centric. Mid-range hotels would likely be more of a mix.

To drive repeat business and add brand value, a CUI could leverage the hotel's current brand image.


Ideation

Exploratory Scenarios

We each brainstormed various scenarios based on our research. We then evaluated each scenario to narrow down what we wanted to focus on.

Insights: Common Themes
- System should be proactive (when appropriate)
- It should have some level of intelligence, e.g. adjusting volume and tone
- Feedback and feedforward could come through a physical object, e.g. a light orb

Multi-User Considerations:
- Families may find this useful in a resort context.
- System knows who’s who via voice recognition.
- Priority / authorization based on pre-trained voices
- AI will pause the conversation when multiple loud voices are speaking (see the sketch after this list)
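
The priority and pause behavior stayed at the level of an ideation note, but a minimal sketch helps show what we meant. Everything below is hypothetical: it assumes some upstream voice-recognition service labels each utterance with a speaker ID and a loudness value, and the names and thresholds are illustrative, not part of the prototype we built.

```python
# Hypothetical sketch of the multi-user logic above (not our actual prototype).
from dataclasses import dataclass

# Guests whose voices were pre-trained at check-in, with a simple priority level
# (e.g. the booking adult outranks the kids for requests like room charges).
AUTHORIZED_SPEAKERS = {"parent_1": 2, "parent_2": 2, "child_1": 1}

LOUD_THRESHOLD_DB = 65  # assumed loudness above which a voice counts as "loud"


@dataclass
class Utterance:
    speaker_id: str    # output of voice recognition ("unknown" if no match)
    loudness_db: float
    text: str


def choose_action(active: list[Utterance]) -> str:
    """Decide whether the assistant should respond, pause, or stay quiet."""
    loud = [u for u in active if u.loudness_db >= LOUD_THRESHOLD_DB]
    if len(loud) > 1:
        return "pause"  # several people talking loudly at once: wait it out
    known = [u for u in active if u.speaker_id in AUTHORIZED_SPEAKERS]
    if not known:
        return "ignore"  # e.g. housekeeping, the TV, or an unrecognized voice
    # Answer the highest-priority recognized speaker.
    top = max(known, key=lambda u: AUTHORIZED_SPEAKERS[u.speaker_id])
    return f"respond_to:{top.speaker_id}"


if __name__ == "__main__":
    print(choose_action([
        Utterance("parent_1", 60, "Book us a table for four at seven."),
        Utterance("child_1", 70, "Can we go to the pool first?"),
    ]))  # -> respond_to:parent_1
```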

Blind Guests:
- Using various sounds as cues for actions.
- Sounds as a way to pinpoint locations for a blind guest.

We arrived at two possible ways we could move forward with our CUI:
- A CUI using sound and voice to guide a newly blind person in a hotel.
- A CUI that can recognize individual voices and can listen in on a conversation to make plans according to what different people say.

Trying It Out

We jumped in and rapidly prototyped how sounds could guide a blind hotel guest around the room:

Why?
Better to test it on ourselves first to see whether the idea would actually work!

Insights
It was relatively successful: Palmer's BING sound guided me around the room.

Acting as a newly blind guest, I felt the room was much bigger, and I was extremely cautious when following the sounds.


Implementation

We felt pretty good about the early prototype, so we decided to go with the Blind Guest CUI and planned out a more sophisticated experience prototype. We also synthesized different sounds for the system’s feedback and feedforward.

We sketched out a map for how we wanted to arrange the room according to where the sounds would appear. The user would be given three tasks: open the door, drop off the luggage by the bed, and wash their hands in the bathroom.

With our teammate Palmer D'Orazio leading the sound design, we crafted several feedforward and feedback tones to guide the blind guest: a confirmation sound, an error sound, and a guiding sound.

Why?
Our research and early prototype showed that a voice guiding the user became annoying and creepy (it made a newly blind person feel like someone was watching them).

Insights:
When we were designing these sounds, we tried to stick with current sound design patterns - upbeat for confirmation, and more monotone for error.
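
To illustrate that pattern (rather than reproduce Palmer's actual sounds), here is a small Python sketch that writes a rising two-note "confirmation" chirp and a flat, lower "error" tone as WAV files using only the standard library. The frequencies, durations, and file names are made up for the example.

```python
# Illustrative sketch of the sound pattern (not the actual production sounds):
# a rising two-note chirp for confirmation, a single flat low tone for error.
import math
import struct
import wave

SAMPLE_RATE = 44100


def tone(freq_hz: float, seconds: float, volume: float = 0.4) -> list[int]:
    """Generate one sine-wave tone as 16-bit samples."""
    n = int(SAMPLE_RATE * seconds)
    return [int(volume * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]


def write_wav(path: str, samples: list[int]) -> None:
    with wave.open(path, "w") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack(f"<{len(samples)}h", *samples))


# Confirmation: an "upbeat" rising pair of notes.
write_wav("confirm.wav", tone(660, 0.12) + tone(880, 0.18))

# Error: a single flat, lower tone.
write_wav("error.wav", tone(220, 0.35))
```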

Testing Prototype 2 and Failing Spectacularly

Our prototype came crashing down when we tested it with our new sounds on someone not on our team! We originally didn't plan on using voice, but we had to improvise and start talking back to the user.

Insights
Even though we had done research and an early prototype, this failure highlighted the flaws in our assumptions.

Voice is still important: the user constantly tried to talk to the system, and receiving only sounds as replies left them very confused.

Pivoting to a Disney Resort Voice Assistant

After this failure, we felt it was not worth pursuing further, as the deadline was only a week away. We also hadn't yet tested our Disney Resort Voice Assistant idea: a CUI that can recognize individual voices and can listen in on a conversation to make plans according to what different people say.

We sat down and wrote out a script we had planned earlier, as well as the scenario. We then immediately tested it with three classmates.

I stood behind a wall and acted out the system's voice; unfortunately, we were not able to get a video of this prototype. However, the sound clip above shows the type of theme we were going for with the voice: fun, cheerful, and inviting.

Insights
Our users responded with great delight when they heard the system's upbeat voice, and it played well with the Disney theme.

The jolly voice kept the user's attention and was different enough from everyday speech to retain a "Disney uniqueness".

Final Iteration and Production

Palmer and I recorded my voice so that it could be played back through a speaker and combined with the sounds we had designed. We also added custom Disney-themed sounds as feedback and feedforward cues.




In our final demo, we recruited two classmates alongside our teammate Sally to act out a scenario based on this script.



Value
Walt acts as the medium through which guests can "brain dump" their plans by talking about them with each other. Instead of one person browsing online, everyone can participate.

It provides both "Disney Delight" and efficiency for the guests and the hotel. It makes the stay and trip planning enjoyable, and spurs return visits in the future.

Here's a summary of what I personally contributed to this project:

What I Did

Designed and provided the voice of the CUI, Walt.

Led brainstorm sessions and tried to champion my own idea.

What I Learned

A fun and jolly voice is one way to design a delightful CUI! If users have fun talking to a machine, they are more willing to continue the interaction.

My favorite idea is not always the best idea; my team and I took elements from each other's ideas and combined them into a much better product.
