
“My Kitchen Did What?”: Demystifying the Smart Home with Explainable AI

An image of the virtual Smart Home used in this study.

This was a semester-long group project dedicated to investigating the difficulties users have with understanding the behavior of Smart Home technology. In particular, my group investigated the potential for AI-generated explanations to help users understand the behavior of Smart Home technologies. Because COVID-19 forced the study to be conducted remotely, we created a virtual Smart Home environment to show participants. We also researched what features make explanations helpful, and wrote formulaic explanations that could plausibly be generated by AI for our Wizard of Oz prototype.

The Virtual Smart Home

Part of the virtual Smart Home's kitchen.

Rather than letting our participants loose in a virtual Smart Home environment, we decided to record videos of a virtual Smart Home for the six scenarios we wanted to show participants. This eliminated the need for participants to download any specialized software to run the virtual Smart Home. We also wanted to avoid needing to moderate each individual participant’s journey through a virtual Smart Home environment. Showing participants videos embedded in a survey gave us the maximum amount of control over how participants interacted with the virtual Smart Home environment, limiting the possibility of extraneous variables.

We used HomeIO to simulate our virtual Smart Home because it offers a free trial and has a variety of Smart Home interactions built in. Video editing allowed us to simulate scenarios that HomeIO was not built for, since we could layer our own animations and sounds on top of what HomeIO provided.

All About Explanations

For our survey, we used three different types of explanations: selective, contrastive, and social. When developing these explanations, we kept in mind four takeaways from our research about what makes a good explanation:

  1. Contrastive explanations are more helpful than non-contrastive explanations
  2. Good explanations are focused and limited in scope
  3. Explanations that focus on cause-and-effect are easier for people to understand than explanations focusing on probabilities
  4. Explanations are a social phenomenon – the explainer is trying to show the listener part of their mental model for understanding the world

Social explanations were created with the third and fourth principles in mind. Each social explanation taught the participant about a machine learning principle specific to the scenario they had just watched, and then generalized this information to how Smart Homes operate as a whole. An example of a social explanation used in the study is: “The oven was turned off because in the Smart Home training data, if the oven was left idle for 15 minutes and was left empty, users were no longer using the oven. Smart Homes remember your habits, and adjust to them.”

The contrastive explanations were based on the first and second principles of good explanations. Each contrastive explanation provides an alternate action the Smart Home did not take, and explains why. An example used in the study is as follows: “The oven was turned off, instead of being left on, because it has been idle for 15 minutes and is empty.”

Selective explanations were based on the second principle of a good explanation – all of them were short and sweet. They were also written to be identical to the contrastive explanations, except for the clause providing the alternate action for the Smart Home. This gave us additional insight into the importance of having that contrastive clause in the explanation. An example of a selective explanation is: “The oven was turned off because it has been left idle for 15 minutes and is empty.” Notice how closely it mirrors the contrastive explanation example given above.
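
To make the three formats concrete, here is a minimal sketch of how templated explanations like ours could be generated programmatically. It is an illustration only – the Scenario fields and function names are hypothetical, and the explanations used in the actual study were written by hand for our Wizard of Oz prototype.

```python
# Hypothetical sketch of template-driven explanations for a Wizard of Oz prototype.
# Field names and wording are illustrative; the study's explanations were handwritten.

from dataclasses import dataclass


@dataclass
class Scenario:
    device: str        # e.g. "oven"
    action: str        # what the Smart Home did, e.g. "turned off"
    alternative: str   # what it could have done instead, e.g. "left on"
    condition: str     # the trigger, e.g. "it has been idle for 15 minutes and is empty"
    lesson: str        # a generalization about how Smart Homes behave


def selective(s: Scenario) -> str:
    # Short and focused: only the cause of the observed action.
    return f"The {s.device} was {s.action} because {s.condition}."


def contrastive(s: Scenario) -> str:
    # Adds the alternative action the Smart Home did not take.
    return f"The {s.device} was {s.action}, instead of being {s.alternative}, because {s.condition}."


def social(s: Scenario) -> str:
    # Ties the cause to a general principle; the study's social explanations also
    # referenced what the Smart Home learned from its training data.
    return f"The {s.device} was {s.action} because {s.condition}. {s.lesson}"


if __name__ == "__main__":
    oven = Scenario(
        device="oven",
        action="turned off",
        alternative="left on",
        condition="it has been idle for 15 minutes and is empty",
        lesson="Smart Homes remember your habits, and adjust to them.",
    )
    print(selective(oven))
    print(contrastive(oven))
    print(social(oven))
```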

Our Survey

Our survey contained six different Smart Home scenarios, in the kitchen, bedroom, and bathroom of the virtual Smart Home. Two scenario videos were shot in each location. We chose six scenarios to give us enough data to perform a meaningful analysis while being mindful of preventing survey fatigue in our participants. 

For each scenario, participants viewed the scenario video and then answered questions about the Smart Home’s behavior shown in the video. They also answered questions about how confident they were in their assessment of what happened. Participants in the control group, who did not receive any explanation, then moved on to the next scenario. Participants in the experimental groups (one for each of the three explanation types) watched the scenario video again, received an explanation, and answered the same questions about the Smart Home’s behavior and their confidence. Participants in the experimental groups were also asked questions about the explanations themselves: how human-like they seemed, how much confidence they gave the user, whether they adequately justified the Smart Home’s actions, and how helpful they were.

All participants answered demographic questions about their age group, gender, familiarity with Smart Homes, and comfort level with technology at the end of the survey.

What We Learned

After surveying 22 participants and analyzing the results, we drew five major conclusions from the study.

A graph showing how participants shown each different kind of explanation compared to each other on comprehension questions, and compared to those who did not receive an explanation. The social explanation ranks the highest, followed by the contrastive explanation, then the selective explanation. Those who did not receive an explanation did the worst out of all groups.
  1. Any explanation is better than no explanation.
    • All of the experimental groups performed better than the control group on multiple choice questions testing user understanding of Smart Home behavior. Even the worst explanation type, selective, performed 15% better than the control group.
  2. Social explanations were the most effective explanation type, followed by contrastive.
    • The social explanations did the best at improving user understanding, human-likeness, adequate justification, and confidence. Contrastive explanations performed slightly better than social explanations in ease of understanding.
  3. There was variance in understanding across all scenarios.
    • Some of our scenarios were more difficult to understand than others. In particular, our scenarios about the kitchen oven shutting off after being left on unused and the lights in the bedroom shutting themselves off at the user’s bedtime were difficult scenarios for our participants. On the other hand, our scenario with the kitchen blinds closing themselves received a 100% correct response rate with no explanation.
  4. More difficult scenarios had the most change in perception.
    • Scenarios where participants struggled showed more change in perception after an explanation than the rest of the scenarios. The two more difficult scenarios mentioned above had a 50% average change in perception, compared to a 30% average change in perception for the other four scenarios.
  5. The contrastive explanation type resulted in the most change in perception.
    • Participants who received contrastive explanations reported that their perceptions changed 50% of the time – more than double the change in perception reported by the other two explanation types. However, the social explanation still outpaced the contrastive explanation in terms of best user understanding, so change in perception isn’t everything.