UX Research

BacPack for New Frontiers: A Tangible Tabletop Museum Exhibit Exploring Synthetic Biology

How do we best teach the complex subject of synthetic biology to young children? Do different modes of interaction affect learning?

As a research assistant in the Wellesley HCI Lab, my team and I conducted user testing and analyzed data for an interactive tabletop museum exhibit that engages young children in learning bio-design concepts and skills.

We used our findings to better understand how children interacted with the exhibit and to answer the questions posed above.

TIMELINE

Summer 2016 - 9 weeks

LOCATION

Wellesley, MA

ROLE

User research, usability testing, development

TOOLS

Atlas.ti, JavaScript

Check out my work below

Introducing BacPack

The BacPack project aims to create interactive, exploratory educational experiences that expose wide audiences to the promise and limitations of synthetic biology. In the iteration I worked on, an interactive tabletop museum exhibit, children work together to create resources like food, water, and soil through bio-design for an astronaut on Mars.

When I joined the lab, there were two prototypes of the exhibit: one driven by touch interactions and one using tangible pieces.

The goal of the summer was to design and conduct user research to understand how the two designs (tangible pieces vs. touch) promote engagement, learning, and collaboration.

A quick video that shows the interaction flow:

Personal Contributions

01. Conducted user testing in the museum where the exhibit would be deployed

02. Defined the codes (key behaviors and interactions) to look for while video coding

03. Analyzed and synthesized qualitative data from video footage of user testing

04. Proposed improvements to current design based on research and data

User Testing & Interviews

While designing our study, I kept these questions in mind:

The main question we were trying to answer – which interaction type (tangible tokens vs. multitouch tokens) better promotes learning and collaboration?

How do we measure learning?

How do we conduct our study in a way that captures both quantitative and qualitative data?

To understand how these design decisions influenced interactions with the exhibit, we began with an observational study at the Tech Museum of Innovation.

Why? What better place to test than the actual setting where the exhibit would live! We would be testing with real museum-goers, many of them young children.

To capture a diverse user group, we tested over four days. For each visitor group, a facilitator seated behind the tabletop invited the visitors to build bacteria for Mars. The facilitator then pointed to the tokens around the table, explaining that they were genes to choose from and that these genes would tell the bacteria what to do. The facilitator then encouraged the visitors to choose two genes and insert them into the plasmid, stepping in to help when needed.

    • 67 groups interacted with the exhibit
    • The largest group had 7 active participants, while over half of the groups (55%) consisted of a single user
    • Adults not directly engaged with the exhibit often provided direct and indirect support to groups with children
    • User ages ranged from 4 to adulthood

To qualitatively capture data: After visitors completed the activity, the facilitator conducted a debrief – asking child visitors:

    • their age
    • what they understood of the biological process (“Could you tell me what you did in this activity?”)
    • how difficult they found the exhibit, on a scale of 1 to 10
    • how much they enjoyed interacting with it

With these questions, we hoped to capture a sense of how much visitors learned, through their ability to explain what they did, and to assess whether the difficulty of the activity should be tweaked in future iterations.

To qualitatively and quantitatively capture data: We used a video camera on a tripod to record the interactions on and around the tabletop surface. With these recordings, my labmate and I could later reference the videos to analyze visitors’ behaviors and interactions both quantitatively and qualitatively.

Analysis

The facilitator split the video recordings into segments by visitor group. My team and I used Atlas.ti to analyze just over 16 hours (16:16:42) of clips with a video coding scheme informed by existing frameworks and developed iteratively from the interactions we observed in the videos. Based on emerging themes, we eventually consolidated our observations into 13 codes, each representing a higher-level theme.

I collaborated with my labmate and a postdoc to define these codes. They encompass ideas such as collaboration with other participants, total interaction time, and parent intervention. The final codes were:

01 Group change, 02 Touch piece, 03 Start station, 04 System interaction start, 05 Pointing start, 06 Pointing end, 07 Intervention, 08 Station ID, 09 Exit interview, 10 Tangible Interaction – Subconscious, 11 Parents, 12 First success, 13 Active interaction.
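To give a sense of how coded segments turn into numbers, here is a minimal JavaScript sketch (JavaScript was one of our tools) of how a coded segment could be represented and tallied. The field names and sample rows are hypothetical, not our actual Atlas.ti export:

    // Hypothetical representation of coded video segments.
    const segments = [
      { group: 14, code: '02 Touch piece', start: '00:03:12', end: '00:03:15' },
      { group: 14, code: '07 Intervention', start: '00:04:01', end: '00:04:20' },
      { group: 15, code: '12 First success', start: '00:01:45', end: '00:01:47' },
    ];

    // Count how often each code appears across all groups.
    const counts = {};
    for (const { code } of segments) {
      counts[code] = (counts[code] || 0) + 1;
    }
    console.log(counts);
    // { '02 Touch piece': 1, '07 Intervention': 1, '12 First success': 1 }

Per-code frequencies like these are the kind of quantitative measure we later compared across the two prototypes.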

Video coding in action:

Findings & Key Problems Identified

After observing and coding all the video (16 hours!), we did not find statistically significant quantitative differences between the two prototypes of BacPack. However, we identified more nuanced qualitative differences between the tangible and multitouch prototypes.

01. Tangible Tokens Offered More Opportunities for Collaboration. In general, the exhibit effectively facilitated a variety of collaboration styles and fluid role switching. The tangible tokens in particular created opportunities for collaboration beyond those afforded by the multitouch-only version. For example, because the tokens were spread around the table and often out of immediate reach, users asked one another for help. Placing the tokens around the surface also encouraged observers (often parents) to reach for the tokens and suggest them to users.

02. Tangible Tokens Allowed for Tinkering and Experimentation. We identified several epistemic actions users took on the tangible prototype, including spatially arranging tokens and comparing alternative combinations of tokens.

03. Differences in Learning Concepts. We found strong evidence of learning and inquiry in both versions, based on the exit interviews and on listening for bio-design terms the young participants used throughout their interaction. However, we saw differences between the prototypes in which learning concepts visitors retained. With the tangible version, visitors more readily used terminology like “gene” and “bacteria” and commented on the combination of genes they made, the process of inserting a genetic program into the bacterial cells, and the effects of particular gene combinations. Visitors who engaged with the multitouch version focused more on “Mars,” commenting more broadly on the impact of the products they made on Mars. We hypothesize that through physical interaction with the tangible tokens, visitors developed a conceptual model of the roles of genes and bacteria, which the tokens represent.

Our findings, while not conclusive, indicate that the tangible tokens provided stronger support for learning as well as more opportunities for collaboration.

Design Improvements

After observing users interact with both prototypes in a museum setting, I provided some suggestions for design improvements for future iterations.

01. With the multitouch version, young children in particular have small fingers that the surface often failed to detect. Especially when swiping the bacteria to deploy them to Mars, children had to swipe multiple times before the surface registered the interaction. I suggested adding another tangible token, a 3D-printed rocket ship that children would drag upward to transport the bacteria to Mars, to promote a smoother interaction (see the sketch after this list).

02. Currently, the exhibit has no endgame. When the astronaut’s resources deplete, she is only left with a sad face. To promote more urgency and reward for users interacting with the exhibit, we could add more gamification elements, such as a game-over state.
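On the first point, the detection issue comes down to how far a finger must travel before a swipe registers. Below is a minimal JavaScript sketch of a more forgiving swipe detector, assuming a web-based touch surface; the element, threshold value, and deployBacteriaToMars callback are hypothetical, not the exhibit’s actual code:

    // Hypothetical swipe detector; threshold and callback are assumptions.
    // A lower minimum distance helps register short swipes from small fingers.
    const MIN_SWIPE_DISTANCE = 30; // px; a strict value like 100 misses short swipes

    let startY = null;

    function onTouchStart(event) {
      startY = event.touches[0].clientY;
    }

    function onTouchEnd(event) {
      if (startY === null) return;
      const endY = event.changedTouches[0].clientY;
      // An upward swipe moves toward the top of the surface, so y decreases.
      if (startY - endY >= MIN_SWIPE_DISTANCE) {
        deployBacteriaToMars(); // hypothetical exhibit callback
      }
      startY = null;
    }

    // `surface` stands in for the tabletop's interactive DOM element.
    surface.addEventListener('touchstart', onTouchStart);
    surface.addEventListener('touchend', onTouchEnd);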

Challenges & Reflections

Although we had a great sample of users to analyze, ~16 hours of video footage is a lot! Even though the task was split among three people, it took up a great deal of time.

My team and I coded the actions of each child, whom we differentiated through characteristics like clothing and hair. It was difficult to keep track of each child in the spreadsheet we maintained. It would have been better to create an identification system for each video snippet, so we could more easily locate and reference the correct clip later.
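For example, a simple scheme (the names here are hypothetical) could combine the testing day, group number, and child label into one stable ID used in both the spreadsheet and the clip filenames:

    // Hypothetical ID scheme for video snippets and spreadsheet rows.
    function segmentId(day, group, child) {
      return `D${day}-G${String(group).padStart(2, '0')}-${child}`;
    }

    console.log(segmentId(2, 14, 'childA')); // "D2-G14-childA"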