Luzhnica Granit, Veas Eduardo Enrique
2019
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words because they develop strategies that enable them to read faster. This paper describes a study investigating the recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of word exposure. Our study shows that participants' recognition of exposed words improved significantly. The findings suggest that such a word exposure technique could be used during the training of novice users to boost word recognition for a particular dictionary of words.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open-source handcrafted bots but also generalize to unknown tracks.
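As a rough illustration of the method family named in the abstract, the sketch below implements a single DDPG update step for a continuous-action, telemetry-style observation space; the telemetry size, network sizes and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal DDPG update sketch (assumptions, not the paper's implementation):
# deterministic actor, Q critic, target networks with soft updates.
import copy
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 29, 3   # assumed telemetry size; steer/throttle/brake
GAMMA, TAU = 0.99, 0.005   # assumed discount factor and soft-update rate

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACT_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(obs, act, rew, next_obs, done):
    # Critic: regress Q(s, a) towards r + gamma * Q'(s', mu'(s'))
    with torch.no_grad():
        target_q = rew + GAMMA * (1 - done) * critic_tgt(
            torch.cat([next_obs, actor_tgt(next_obs)], dim=1))
    q = critic(torch.cat([obs, act], dim=1))
    critic_loss = nn.functional.mse_loss(q, target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: deterministic policy gradient, maximise Q(s, mu(s))
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Soft updates of the target networks
    with torch.no_grad():
        for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
            for p, p_t in zip(net.parameters(), tgt.parameters()):
                p_t.mul_(1 - TAU).add_(TAU * p)

# Example update on a random batch of 32 transitions
batch = (torch.randn(32, OBS_DIM), torch.rand(32, ACT_DIM) * 2 - 1,
         torch.randn(32, 1), torch.randn(32, OBS_DIM), torch.zeros(32, 1))
ddpg_update(*batch)
```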
Luzhnica Granit, Veas Eduardo Enrique
2019
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in the recognition of both individual letters and words. To avoid such issues, a two-step optimisation method for the symbol encoding is proposed and validated in a second user study, in which eight participants used the optimised encoding with a seven-vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Luzhnica Granit, Veas Eduardo Enrique, Caitlyn Seim
2018
This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.
Luzhnica Granit, Veas Eduardo Enrique
2018
Sensory substitution has been a research subject for decades, and yet its applicability outside of research settings is very limited. This has created scepticism among researchers as to whether full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach which reduces the captured information drastically. We present the concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
Luzhnica Granit, Veas Eduardo Enrique
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for this drawback, this paper investigates what kinds of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that, in sentence reading, word-based interactions are used more often and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
Müller-Putz G. R., Ofner P., Schwarz Andreas, Pereira J., Luzhnica Granit, di Sciascio Maria Cecilia, Veas Eduardo Enrique, Stein Sebastian, Williamson John, Murray-Smith Roderick, Escolano C., Montesano L., Hessing B., Schneiders M., Rupp R.
2017
The aim of the MoreGrasp project is to develop a non-invasive, multimodal user interface including a brain-computer interface (BCI) for intuitive control of a grasp neuroprosthesis to support individuals with high spinal cord injury (SCI) in everyday activities. We describe the current state of the project, including the EEG system, preliminary results of decoding natural movements in people with SCI, the new electrode concept for the grasp neuroprosthesis, the shared control architecture behind the system and the implementation of a user-centered design.
Luzhnica Granit, Veas Eduardo Enrique
2017
This paper investigates sensitivity-based prioritisation in the construction of tactile patterns. Our evidence is obtained from three studies using a wearable haptic display with vibrotactile motors (tactors). Haptic displays intended to transmit symbols often suffer from a trade-off between throughput and accuracy. For a symbol encoded with more than one tactor, simultaneous onset (spatial encoding) yields the highest throughput at the expense of accuracy. Sequential onset increases accuracy at the expense of throughput. To overcome these issues, we investigate aspects of prioritisation based on sensitivity applied to the encoding of haptic patterns. First, we investigate an encoding method using mixed intensities, where different body locations are simultaneously stimulated with different vibration intensities. We investigate whether prioritising the intensity based on sensitivity improves identification accuracy when compared to simple spatial encoding. Second, we investigate whether prioritising onset based on sensitivity affects the identification of overlapped spatiotemporal patterns. A user study shows that this method significantly increases the accuracy. Furthermore, in a third study, we identify three locations on the hand that lead to accurate recall. Based on these findings, we design the layout of a haptic display equipped with eight tactors, capable of encoding 36 symbols with only one or two locations per symbol.
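The reported capacity of 36 symbols is consistent with simple counting over the eight-tactor layout, assuming each symbol activates either one or two distinct tactors:

\[ \binom{8}{1} + \binom{8}{2} = 8 + 28 = 36. \]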
Luzhnica Granit, Veas Eduardo Enrique, Stein Sebastian, Pammer-Schindler Viktoria, Williamson John, Murray-Smith Roderick
2017
Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic and power model. We show that these generic models lead to limited decoding precision, and propose a method for model personalisation adjusting to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), circular layouts performed particularly well, and personalisation through sensitivity adjustment increased decoding precision.
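As a minimal sketch of how such phantom-sensation models map a continuous value onto a pair of adjacent vibromotors, the function below implements a linear and a power (energy-preserving) amplitude split; the tactor count, maximum amplitude and exact model forms are illustrative assumptions, not the paper's calibrated models.

```python
# Illustrative phantom-sensation split (assumptions, not the paper's exact
# formulation): a value v in [0, 1] is mapped to a virtual position between
# two adjacent tactors, whose intensities are set so the sensation appears
# in between them.
import math

def phantom_intensities(v, n_tactors=8, a_max=1.0, model="linear"):
    """Return (tactor index, intensity) pairs for a value v in [0, 1]."""
    pos = v * (n_tactors - 1)          # virtual position along the layout
    i = min(int(pos), n_tactors - 2)   # index of the left neighbour
    beta = pos - i                     # relative position between i and i+1
    if model == "linear":              # proportional amplitude split
        a1, a2 = (1 - beta) * a_max, beta * a_max
    elif model == "power":             # energy-preserving split
        a1, a2 = math.sqrt(1 - beta) * a_max, math.sqrt(beta) * a_max
    else:
        raise ValueError(model)
    return [(i, a1), (i + 1, a2)]

print(phantom_intensities(0.3, model="power"))
```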
Luzhnica Granit, Öjeling Christoffer, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe the elements relevant to such a gaming concept: input, output, virtual objects, physical objects, activity tracking and a personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe the elements relevant to such a gaming concept: input, output, virtual objects, physical objects, activity tracking and a personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper investigates the communication of natural language messages using a wearable haptic display. Our research spans both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed based on haptic perception fundamentals. To encode symbols, we devise an overlapping spatiotemporal stimulation (OST) method, which distributes stimuli spatially and temporally with a minimal gap. An empirical study shows that, compared with spatial stimulation, OST is preferred in terms of recall. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately 5 hours per participant. Results reveal that after one hour of training, participants were able to discern 16 letters and identify two- and three-letter words. They could discern the full English alphabet (26 letters, 92% accuracy) after approximately three hours of training, and after five hours participants were able to interpret words transmitted at an average duration of 0.6 s per word.
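A small sketch of the overlapping onset idea (timings and tactor indices are illustrative assumptions, not the encoding from the paper): each tactor of a symbol starts a short gap after the previous one but keeps vibrating long enough that the stimuli overlap, rather than firing fully simultaneously (spatial encoding) or strictly one after another (sequential encoding).

```python
# Illustrative OST scheduling sketch: staggered onsets with overlapping
# vibration intervals for the tactors that encode one symbol.
def ost_schedule(tactors, onset_gap_ms=50, duration_ms=200):
    """Return (tactor, start_ms, stop_ms) triples for one symbol."""
    return [(t, i * onset_gap_ms, i * onset_gap_ms + duration_ms)
            for i, t in enumerate(tactors)]

# A symbol assumed (for illustration) to be encoded with tactors 2, 5 and 6
for tactor, start, stop in ost_schedule([2, 5, 6]):
    print(f"tactor {tactor}: on at {start} ms, off at {stop} ms")
```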
Luzhnica Granit, Pammer-Schindler Viktoria, Fessl Angela, Mutlu Belgin, Veas Eduardo Enrique
2016
Especially in lifelong or professional learning, the picture of a continuous learning analytics process emerges. In this process, heterogeneous and changing data source applications provide data relevant to learning, at the same time as the questions of learners to data change. This reality challenges designers of analytics tools, as it requires analytics tools to deal with data and analytics tasks that are unknown at application design time. In this paper, we describe a generic visualization tool that addresses these challenges by enabling the visualization of any activity log data. Furthermore, we evaluate how well participants can answer questions about underlying data given such generic versus custom visualizations. Study participants performed better in 5 out of 10 tasks with the generic visualization tool, worse in 1 out of 10 tasks, and without significant difference when compared to the visualizations within the data-source applications in the remaining 4 of 10 tasks. The experiment clearly showcases that, overall, generic, standalone visualization tools have the potential to support analytical tasks sufficiently well.
Luzhnica Granit, Simon Jörg Peter, Lex Elisabeth, Pammer-Schindler Viktoria
2016
This paper explores the recognition of hand gestures based on a data glove equipped with motion, bending and pressure sensors. We selected 31 natural and interaction-oriented hand gestures that can be adopted for general-purpose control of and communication with computing systems. The data glove is custom-built, and contains 13 bend sensors, 7 motion sensors, 5 pressure sensors and a magnetometer. We present the data collection experiment, as well as the design, selection and evaluation of a classification algorithm. As we use a sliding window approach to data processing, our algorithm is suitable for stream data processing. Algorithm selection and feature engineering resulted in a combination of linear discriminant analysis and logistic regression with which we achieve an accuracy of over 98.5% on a continuous data stream scenario. When removing the computationally expensive FFT-based features, we still achieve an accuracy of 98.2%.
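A hedged sketch of such a sliding-window pipeline follows; the window length, feature set and component count are illustrative assumptions rather than the paper's engineered features, but the classifier combination (linear discriminant analysis followed by logistic regression) matches the one named in the abstract.

```python
# Sliding-window features over multichannel glove data, classified with an
# LDA projection followed by logistic regression (illustrative sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

def window_features(stream, win=32, step=8):
    """Mean/std/min/max per channel over a sliding window."""
    feats = []
    for start in range(0, len(stream) - win + 1, step):
        w = stream[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Synthetic stand-in for glove streams: 26 sensor channels, 31 gesture classes
rng = np.random.default_rng(0)
X = window_features(rng.normal(size=(5000, 26)))
y = rng.integers(0, 31, size=len(X))

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=20),
                    LogisticRegression(max_iter=1000))
clf.fit(X[:500], y[:500])
print(clf.predict(X[500:505]))   # per-window predictions on the stream
```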
Lacic Emanuel, Luzhnica Granit, Simon Jörg Peter, Traub Matthias, Lex Elisabeth, Kowald Dominik
2015
In this paper, we present work-in-progress on a recommender system based on Collaborative Filtering that exploits location information gathered by indoor positioning systems. This approach allows us to provide recommendations for "extreme" cold-start users with absolutely no item interaction data available, where methods based on Matrix Factorization would not work. We simulate and evaluate our proposed system using data from the location-based FourSquare system and show that we can provide substantially better recommender accuracy results than a simple MostPopular baseline that is typically used when no interaction data is available.
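A minimal sketch of the idea, assuming the system can at least observe which venues a cold-start user has visited: location overlap serves as the similarity signal in a neighbourhood-based collaborative filter, and items are ranked by the interactions of location-similar users. The data structures and weighting are illustrative, not the authors' implementation.

```python
# Location-aware neighbourhood recommendation for a cold-start user
# (illustrative sketch with toy data structures).
from collections import Counter

def recommend(new_user_locations, user_locations, user_items, k=10):
    # Score existing users by location overlap with the cold-start user
    overlap = {u: len(new_user_locations & locs)
               for u, locs in user_locations.items()}
    neighbours = [u for u, s in sorted(overlap.items(),
                                       key=lambda x: -x[1]) if s > 0][:k]
    # Aggregate the neighbours' items, weighted by their location overlap
    scores = Counter()
    for u in neighbours:
        for item in user_items.get(u, []):
            scores[item] += overlap[u]
    return [item for item, _ in scores.most_common()]

# Toy example: the new user has only check-in data, no item interactions
user_locations = {"u1": {"cafe", "gym"}, "u2": {"gym", "park"}}
user_items = {"u1": ["i1", "i2"], "u2": ["i2", "i3"]}
print(recommend({"gym"}, user_locations, user_items))
```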