Summary

Touchable Tales is an accessible museum browsing experience that is equally engaging for sighted, blind, and low-vision users.

It leverages the latest 3D printing, machine learning, and computer vision technologies to demonstrate a new way to hear, feel, and learn at a museum.

Type

University Study

Duration

Jan 2022 - May 2022

Tools

HTML/CSS/JS
p5.js
Teachable Machine

Team

Sammy Levin (Developer)
Weixi Huang (Researcher)
Elizabeth Lewis (Researcher)
Craig Kapp (Supervisor)
Rosanna Flouty (Supervisor)

The Morris Jumel Mansion's rich history is told through the lens of its historic artifacts. Hold a 3D-printed artifact replica up to the camera, and the mansion's intricate story reveals itself moment by moment.

Storytelling through artifacts

Eliza's tea would have been boiled in this simple tea kettle by Anne Northup, a paid servant working in the mansion's kitchen. Anne was handpicked by Eliza after her husband was kidnapped and sold into slavery.

This delicate porcelain tea cup would have been used to serve tea to Eliza Jumel, one of the mansion's early residents, in the 19th century. 

Scattered amongst Eliza's possessions is this inkwell set, which was used by George Washington when he occupied the house during the Revolutionary War.

Feel it and believe it

Artifacts photographed on site were later modeled as 3D files that could be easily printed.

Discovering the narrative

1

Present an artifact to the device's camera so the software can recognize it

2

See a photo of the historic object, accompanied by an audible historical description

3

Respond to prompts in the storytelling flow with a thumbs-up or thumbs-down gesture (one way to wire this up is sketched below)
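
As an illustration of steps 2 and 3, the snippet below shows one way the spoken description and gesture response could be handled with the browser's Web Speech API. It is a minimal sketch: the labels, description text, and helper functions are hypothetical placeholders, not the project's actual code.

// Hypothetical sketch of steps 2 and 3: speak a historical description
// and react to a thumbs-up / thumbs-down classification.
// Labels and copy below are illustrative placeholders.
const DESCRIPTIONS = {
  "tea kettle": "Eliza's tea would have been boiled in this simple tea kettle by Anne Northup.",
  "tea cup": "This delicate porcelain tea cup would have been used to serve tea to Eliza Jumel.",
  "inkwell": "This inkwell set was used by George Washington during the Revolutionary War."
};

// Step 2: read the historical description aloud via the Web Speech API
function describeArtifact(label) {
  const text = DESCRIPTIONS[label];
  if (!text) return;
  window.speechSynthesis.cancel();                 // stop any earlier narration
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9;                            // slightly slower for clarity
  window.speechSynthesis.speak(utterance);
}

// Step 3: advance or repeat the storytelling flow based on a gesture label
function handleGesture(gestureLabel) {
  if (gestureLabel === "thumbs up") {
    console.log("Affirmed: continue to the next prompt");
  } else if (gestureLabel === "thumbs down") {
    console.log("Declined: repeat or skip the current prompt");
  }
}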

Making it possible with machine learning and computer vision

Model Training

Google's Teachable Machine was used to train an image recognition model on a sample set of images for each 3D-printed artifact.

The original sample set includes over 500 labeled images of artifacts.
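
Once trained, the model can be exported for the web and loaded in the browser. The snippet below is a minimal sketch assuming the ml5.js imageClassifier wrapper is used to consume that export; the model URL is a placeholder, not the project's real model.

// Hypothetical sketch: load a Teachable Machine image model exported for the
// web, assuming the ml5.js imageClassifier wrapper. The URL is a placeholder.
let classifier;

const MODEL_URL = "https://teachablemachine.withgoogle.com/models/PLACEHOLDER/";

// p5.js preload() runs before setup(), so the model is ready before drawing starts
function preload() {
  classifier = ml5.imageClassifier(MODEL_URL + "model.json");
}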

Realtime Classification

With a trained model, Teachable Machine can estimate which class in the sample set a live camera image most closely resembles.

The response is nearly instantaneous: an artifact is classified almost as soon as it enters the frame.
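
Building on the two sketches above, a classification loop of roughly this shape, assuming ml5.js's (error, results) callback style and an illustrative confidence threshold, would re-classify the webcam feed continuously and trigger the audio description whenever a new artifact appears:

// Hypothetical sketch: classify the live webcam feed in a p5.js loop.
// Depends on `classifier` and `describeArtifact` from the earlier snippets;
// the 0.8 confidence threshold is an illustrative choice.
let video;
let currentLabel = "";

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifyFrame();                         // start the classify -> callback loop
}

function classifyFrame() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (!error && results[0].confidence > 0.8) {
    if (results[0].label !== currentLabel) {
      currentLabel = results[0].label;     // a new artifact entered the frame
      describeArtifact(currentLabel);      // speak its historical description
    }
  }
  classifyFrame();                         // keep classifying
}

function draw() {
  image(video, 0, 0);                      // show the live camera feed
  text(currentLabel, 10, height - 10);     // overlay the current label
}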

Lessons learned

Touchable Tales is the culmination of learnings I have acquired across a range of disciplines, including rapid web prototyping, 3D modeling, accessibility, and experience design.

  • I was the key technologist and strategist in a group of researchers

  • I executed strategic design decisions based on product objectives

  • I architected a tech stack that allowed for rapid development and scalability

  • I designed from an accessibility-first standpoint

Further documentation
