A paper on skin reading by Know-Center authors has been accepted at the renowned A* conference “International Symposium on Wearable Computers” (ISWC 2016), which will be held in Heidelberg, Germany, from September 12 to 16, 2016.

The “International Symposium on Wearable Computers” (ISWC) is the premier event for wearable computing and technology and for issues related to on-body and worn mobile technologies. ISWC brings together researchers, product vendors, fashion designers, textile manufacturers, users, and related professionals to share information and advances in wearable computing.

The Ubiquitous Personal Computing and Knowledge Visualisation team are proud that the following paper has been accepted at ISWC 2016:
Skin Reading: Encoding Text in a 6-Channel Haptic Display. 

In this paper, the Know-Center authors Granit Luzhnica, Eduardo Veas and Viktoria Pammer investigate the communication of natural language messages using a wearable haptic display. The research spans both the design of the haptic display and the methods of communication that use it.

First, three wearable configurations are proposed based on the fundamentals of haptic perception. To encode symbols, the authors devised an overlapping spatiotemporal stimulation (OST) method that distributes stimuli spatially and temporally with a minimal gap. An empirical study shows that, compared with spatial stimulation, OST is preferred in terms of recall.
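To give a rough picture of the idea, the following minimal Python sketch shows how a symbol could be spread over a 6-channel vibrotactile display by starting consecutive stimuli with a small onset gap so that they overlap in time. This is only an illustration, not the authors' implementation: the channel layout, the timings and the example symbol table are hypothetical.

```python
# Illustrative sketch of overlapping spatiotemporal stimulation (OST)
# on a 6-channel vibrotactile display. Channel assignments, timings and
# the symbol table are hypothetical, not the encoding from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class Stimulus:
    channel: int      # which of the 6 actuators to drive (0..5)
    onset_ms: int     # when the actuator starts vibrating
    duration_ms: int  # how long it vibrates


# Hypothetical symbol table: each letter maps to a sequence of channels.
SYMBOLS = {
    "a": [0],
    "b": [1, 4],
    "c": [2, 5, 0],
}


def encode_ost(letter: str,
               duration_ms: int = 300,
               onset_gap_ms: int = 100) -> List[Stimulus]:
    """Spread a letter's channels over time: each stimulus starts a small
    gap after the previous one, before that stimulus has ended, so the
    stimuli overlap spatially and temporally."""
    return [
        Stimulus(channel=ch,
                 onset_ms=i * onset_gap_ms,
                 duration_ms=duration_ms)
        for i, ch in enumerate(SYMBOLS[letter])
    ]


if __name__ == "__main__":
    # For "c", three actuators start 100 ms apart and each vibrates 300 ms,
    # so their activations overlap in time.
    for stim in encode_ost("c"):
        print(stim)
```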

Second, the authors propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately five hours per participant. Results reveal that after one hour of training, participants were able to discern 16 letters and to identify two- and three-letter words. Participants could discern the full English alphabet (26 letters) with 92% accuracy after approximately three hours of training, and after five hours they were able to interpret words transmitted at an average duration of 0.6 s per word.

Congratulations on this great success!

The photo below shows the wearable haptic device developed by the team: