Wimmer Michael, Weidinger Nicole, ElSayed Neven, Müller-Putz Gernot, Veas Eduardo Enrique
2023
Error perception is known to elicit distinct brain patterns, which can be used to improve the usability of systems facilitating human-computer interactions, such as brain-computer interfaces. This requires high-accuracy detection of erroneous events, e.g., misinterpretations of the user’s intention by the interface, to allow for suitable reactions of the system. In this work, we concentrate on steering-based navigation tasks. We present a combined electroencephalography-virtual reality (VR) study investigating different approaches for error detection and simultaneously exploring the corrective human behavior to erroneous events in a VR flight simulation. We could classify different errors, allowing us to analyze neural signatures of unexpected changes in the VR. Moreover, the presented models could detect errors faster than participants naturally responded to them. This work could contribute to developing adaptive VR applications that exclusively rely on the user’s physiological information.
Barreiros Carla, Silva Nelson, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2020
Guerra Torres Jorge, Veas Eduardo Enrique, Catania Carlos
2019
Labeling a real network dataset is especially expensive in computer security, as an expert has to ponder several factors before assigning each label. This paper describes an interactive intelligent system to support the task of identifying hostile behavior in network logs. The RiskID application uses visualizations to graphically encode features of network connections and promote visual comparison. In the background, two algorithms are used to actively organize connections and predict potential labels: a recommendation algorithm and a semi-supervised learning strategy. These algorithms, together with interactive adaptations to the user interface, constitute a behavior recommendation. A study is carried out to analyze how the algorithms for recommendation and prediction influence the workflow of labeling a dataset. The results of a study with 16 participants indicate that the behaviour recommendation significantly improves the quality of labels. Analyzing interaction patterns, we identify a more intuitive workflow used when behaviour recommendation is available.
Luzhnica Granit, Veas Eduardo Enrique
2019
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words, because they develop strategies that enable them to read faster. This paper describes a study investigating recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of exposure to words. Our study shows that, with exposure to words, participants did significantly improve on recognition of exposed words. The findings suggest that such a word exposure technique could be used during the training of novice users in order to boost the word recognition of a particular dictionary of words.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
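To make the continuous-control framing concrete, the following is a minimal sketch of a DDPG-style actor-critic in PyTorch. The telemetry dimension, action set (steering, throttle), network sizes and the soft-update rate are illustrative assumptions, not the configuration used in the paper.

# Minimal DDPG-style actor-critic sketch (illustrative, not the paper's exact setup).
import torch
import torch.nn as nn

STATE_DIM = 29    # assumed telemetry size (speed, track rangefinders, ...)
ACTION_DIM = 2    # assumed actions: steering and throttle, bounded by tanh

class Actor(nn.Module):
    """Deterministic policy: maps telemetry to continuous actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),  # bounded continuous actions
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-function: scores a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target networks, as used in DDPG."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

# Example: query the (untrained) policy for one telemetry sample.
actor = Actor()
state = torch.randn(1, STATE_DIM)
print(actor(state))  # e.g. tensor([[ 0.03, -0.12]])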
Luzhnica Granit, Veas Eduardo Enrique
2019
Luzhnica Granit, Veas Eduardo Enrique
2019
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Mutlu Belgin, Simic Ilija, Cicchinelli Analia, Sabol Vedran, Veas Eduardo Enrique
2018
Learning dashboards (LD) are commonly applied for monitoring and visual analysis of learning activities. The main purpose of LDs is to increase awareness, to support self-assessment and reflection and, when used in collaborative learning platforms (CLP), to improve the collaboration among learners. Collaborative learning platforms serve as tools to bring learners together who share the same interests and ideas and are willing to work and learn together – a process which, ideally, leads to effective knowledge building. However, there are collaboration and communication factors which affect the effectiveness of knowledge creation – human, social and motivational factors, design issues, technical conditions, and others. In this paper we introduce a learning dashboard – the Visualizer – that serves the purpose of (statistically) analyzing and exploring the behaviour of communities and users. Visualizer allows a learner to become aware of other learners with similar characteristics and also to draw comparisons with individuals having similar learning goals. It also helps a teacher become aware of how individuals working in the groups (learning communities) interact with one another and across groups.
Luzhnica Granit, Veas Eduardo Enrique, Caitlyn Seim
2018
This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.
Luzhnica Granit, Veas Eduardo Enrique
2018
Sensory substitution has been a research subject for decades, and yet its applicability outside of research remains very limited. This has created scepticism among researchers as to whether full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach which reduces the captured information drastically. We present concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text and then convey its textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2018
In the context of the Internet of Things (IoT), every device has sensing and computing capabilities to enhance many aspects of human life. There are more and more IoT devices in our homes and at our workplaces, and they still depend on human expertise and intervention for tasks such as maintenance and (re)configuration. Using biophilic design and calm computing principles, we developed a nature-inspired representation, BioIoT, to communicate sensor information. This visual language contributes to the users’ well-being and performance while being as easy to understand as traditional data representations. Our work is based on the assumption that if machines are perceived to be more like living beings, users will take better care of them, which ideally would translate into better device maintenance. In addition, the users’ overall well-being can be improved by bringing nature to their lives. In this work, we present two use case scenarios under which the BioIoT concept can be applied and demonstrate its potential benefits in households and at workplaces.
Gursch Heimo, Silva Nelson, Reiterer Bernhard, Paletta Lucas, Bernauer Patrick, Fuchs Martin, Veas Eduardo Enrique, Kern Roman
2018
The project Flexible Intralogistics for Future Factories (FlexIFF) investigates human-robot collaboration in intralogistics teams in the manufacturing industry, which form a cyber-physical system consisting of human workers, mobile manipulators, manufacturing machinery, and manufacturing information systems. The workers use Virtual Reality (VR) and Augmented Reality (AR) devices to interact with the robots and machinery. The right information at the right time is key for making this collaboration successful. Hence, task scheduling for mobile manipulators and human workers must be closely linked with the enterprise’s information systems, offering all actors on the shop floor a common view of the current manufacturing status. FlexIFF will provide useful, well-tested, and sophisticated solutions for cyber-physical systems in intralogistics, with humans and robots making the most of their strengths, working collaboratively and helping each other.
Cicchinelli Analia, Veas Eduardo Enrique, Pardo Abelardo, Pammer-Schindler Viktoria, Fessl Angela, Barreiros Carla, Lindstaedt Stefanie
2018
This paper aims to identify self-regulation strategies from students' interactions with the learning management system (LMS). We used learning analytics techniques to identify metacognitive and cognitive strategies in the data. We define three research questions that guide our studies, analyzing i) self-assessments of motivation and self-regulation strategies using standard methods to draw a baseline, ii) interactions with the LMS to find traces of self-regulation in observable indicators, and iii) self-regulation behaviours over the course duration. The results show that the observable indicators can better explain self-regulatory behaviour and its influence on performance than preliminary subjective assessments.
Silva Nelson, Schreck Tobias, Veas Eduardo Enrique, Sabol Vedran, Eggeling Eva, Fellner Dieter W.
2018
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye-tracker. Mouse selections are also considered. The system provides an overlay visualization with recommended patterns and an eye-history graph that supports the users in the data exploration process. We conducted an experiment with 5 tasks where 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features and discusses the precision/recall of our model in comparison to final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.
di Sciascio Maria Cecilia, Brusilovsky Peter, Veas Eduardo Enrique
2018
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial-and-error and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control have been developed to support the exploratory search process. In this work we present our attempt to increase the power of exploratory search interfaces by using ideas of social search, i.e., leveraging information left by past users of information systems. Social search technologies are highly popular nowadays, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This paper presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. In an online study investigating system accuracy and subjective aspects with a structural model we found that, when users actively interacted with all its control features, the hybrid system outperformed a baseline content-based-only tool and users were more satisfied.
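As an illustration of user-controlled blending of content and social relevance, the sketch below combines a keyword-based content score with tag-matching and user-similarity scores using weights exposed to the user. The scoring components and weight names are hypothetical placeholders, not the system's actual ranking formula.

# Hypothetical sketch of user-controlled hybrid ranking (content + social signals).
from dataclasses import dataclass

@dataclass
class Scores:
    content: float          # keyword match against user-selected keywords, in [0, 1]
    tag_match: float        # overlap with tags left by past users, in [0, 1]
    user_similarity: float  # similarity to users who kept the item, in [0, 1]

def hybrid_score(s: Scores, w_content: float, w_tags: float, w_users: float) -> float:
    """Blend content and social evidence with user-controlled weights."""
    total = w_content + w_tags + w_users
    if total == 0:
        return 0.0
    return (w_content * s.content + w_tags * s.tag_match + w_users * s.user_similarity) / total

# Example: the user dials social signals up or down via interface controls.
item = Scores(content=0.8, tag_match=0.4, user_similarity=0.6)
print(hybrid_score(item, w_content=1.0, w_tags=0.0, w_users=0.0))  # content only
print(hybrid_score(item, w_content=0.5, w_tags=0.3, w_users=0.2))  # blended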
Luzhnica Granit, Veas Eduardo Enrique
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for such a drawback, this paper investigates what kind of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are more often used and preferred by users compared to character-based interactions and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
d'Aquin Mathieu, Adamou Alessandro, Dietze Stefan, Fetahu Besnik, Gadiraju Ujwal, Hasani-Mavriqi Ilire, Holz Peter, Kümmerle Joachim, Kowald Dominik, Lex Elisabeth, Lopez Sola Susana, Mataran Ricardo, Sabol Vedran, Troullinou Pinelopi, Veas Eduardo Enrique
2017
More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project. In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects that are required to enable this scenario, as well as the current state of the research in each aspect within the AFEL project. We conclude with a discussion of the ongoing plans from the project to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.
Müller-Putz G. R., Ofner P., Schwarz Andreas, Pereira J., Luzhnica Granit, di Sciascio Maria Cecilia, Veas Eduardo Enrique, Stein Sebastian, Williamson John, Murray-Smith Roderick, Escolano C., Montesano L., Hessing B., Schneiders M., Rupp R.
2017
The aim of the MoreGrasp project is to develop a non-invasive, multimodal user interface including a brain-computer interface (BCI) for intuitive control of a grasp neuroprosthesis to support individuals with high spinal cord injury (SCI) in everyday activities. We describe the current state of the project, including the EEG system, preliminary results of natural movements decoding in people with SCI, the new electrode concept for the grasp neuroprosthesis, the shared control architecture behind the system and the implementation of a user-centered design.
Mohr Peter, Mandl David, Tatzgern Markus, Veas Eduardo Enrique, Schmalstieg Dieter, Kalkofen Denis
2017
A video tutorial effectively conveys complex motions, but may be hard to follow precisely because of its restriction to a predetermined viewpoint. Augmented reality (AR) tutorials have been demonstrated to be more effective. We bring the advantages of both together by interactively retargeting conventional, two-dimensional videos into three-dimensional AR tutorials. Unlike previous work, we do not simply overlay video, but synthesize 3D-registered motion from the video. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the experience. This approach applies to many styles of video tutorials. In this work, we concentrate on a class of tutorials which alter the surface of an object.
Guerra Jorge, Catania Carlos, Veas Eduardo Enrique
2017
This paper presents a graphical interface to identify hostile behavior in network logs. The problem of identifying and labeling hostile behavior is well known in the network security community. There is a lack of labeled datasets, which makes it difficult to deploy automated methods or to test the performance of manual ones. We describe the process of searching for and identifying hostile behavior with a graphical tool derived from an open source Intrusion Prevention System, which graphically encodes features of network connections from a log file. A design study with two network security experts illustrates the workflow of searching for patterns descriptive of unwanted behavior and labeling occurrences therewith.
Veas Eduardo Enrique
2017
In our goal to personalize the discovery of scientific information, we built systems using visual analytics principles for exploration of textual documents [1]. The concept was extended to explore information quality of user generated content [2]. Our interfaces build upon a cognitive model, where awareness is a key step of exploration [3]. In education-related circles, a frequent concern is that people increasingly need to know how to search, and that knowing how to search leads to finding information efficiently. The ever-growing information overabundance right at our fingertips needs a natural skill to develop and refine search queries to get better search results, or does it? Exploratory search is an investigative behavior we adopt to build knowledge by iteratively selecting interesting features that lead to associations between representative items in the information space [4,5]. Formulating queries was proven more complicated for humans than recognizing information visually [6]. Visual analytics takes the form of an open-ended dialog between the user and the underlying analytics algorithms operating on the data [7]. This talk describes studies on exploration and discovery with visual analytics interfaces that emphasize transparency and control features to trigger awareness. We will discuss the interface design and the studies of visual exploration behavior.
di Sciascio Maria Cecilia, Mayr Lukas, Veas Eduardo Enrique
2017
Knowledge work such as summarizing related research in preparation for writing typically requires the extraction of useful information from scientific literature. Nowadays the primary source of information for researchers comes from electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet, the vast amount of resources makes retrieving only the most relevant results a difficult task. As a consequence, researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system, which combines a rich, interactive Web-based user interface and different visualization approaches. This system enables researchers to identify key phrases matching current information needs and spot potentially relevant literature within hierarchical document collections. The chosen context was the collection and summarization of related work in preparation for scientific writing, thus the system supports features such as bibliography and citation management, document metadata extraction and a text editor. This paper introduces the design rationale and components of PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Luzhnica Granit, Veas Eduardo Enrique
2017
This paper investigates sensitivity-based prioritisation in the construction of tactile patterns. Our evidence is obtained from three studies using a wearable haptic display with vibrotactile motors (tactors). Haptic displays intended to transmit symbols often suffer a tradeoff between throughput and accuracy. For a symbol encoded with more than one tactor, simultaneous onsets (spatial encoding) yield the highest throughput at the expense of accuracy. Sequential onset increases accuracy at the expense of throughput. To overcome these issues, we investigate aspects of prioritisation based on sensitivity applied to the encoding of haptic patterns. First, we investigate an encoding method using mixed intensities, where different body locations are simultaneously stimulated with different vibration intensities. We investigate whether prioritising the intensity based on sensitivity improves identification accuracy when compared to simple spatial encoding. Second, we investigate whether prioritising onset based on sensitivity affects the identification of overlapped spatiotemporal patterns. A user study shows that this method significantly increases the accuracy. Furthermore, in a third study, we identify three locations on the hand that lead to an accurate recall. Thereby, we design the layout of a haptic display equipped with eight tactors, capable of encoding 36 symbols with only one or two locations per symbol.
Luzhnica Granit, Veas Eduardo Enrique, Stein Sebastian, Pammer-Schindler Viktoria, Williamson John, Murray-Smith Roderick
2017
Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic and power model. We show that these generic models lead to limited decoding precision, and propose a method for model personalization adjusting to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), circular layouts performed particularly well, and personalisation through sensitivity adjustment increased decoding precision.
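As a rough illustration of how such perceptual models map a continuous value onto two neighbouring tactors, the sketch below implements commonly cited linear and power (energy-based) interpolation rules for phantom sensations. These formulas and the omission of the personalisation step are assumptions for illustration; they are not necessarily the exact models evaluated in the paper.

# Sketch of phantom-sensation intensity interpolation between two adjacent tactors.
# The linear and power rules below are common formulations from the literature,
# used here as assumptions rather than the paper's exact models.
import math

def linear_model(beta: float, amplitude: float = 1.0):
    """beta in [0, 1]: virtual position between tactor A (0.0) and tactor B (1.0)."""
    return (1.0 - beta) * amplitude, beta * amplitude

def power_model(beta: float, amplitude: float = 1.0):
    """Energy-based interpolation: keeps perceived intensity roughly constant."""
    return math.sqrt(1.0 - beta) * amplitude, math.sqrt(beta) * amplitude

# Example: encode the value 0.25 on the segment between two tactors.
print(linear_model(0.25))  # (0.75, 0.25)
print(power_model(0.25))   # (~0.866, 0.5)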
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph
2017
In today's digital age, with an increasing number of websites, social/learning platforms, and different computer-mediated communication systems, finding valuable information is a challenging and tedious task, regardless of a person's discipline. However, visualizations have been shown to be effective in dealing with huge datasets: because they are grounded in visual cognition, people understand them and can naturally perform visual operations such as clustering, filtering and comparing quantities. But creating appropriate visual representations of data is also challenging: it requires domain knowledge, understanding of the data, and knowledge about task and user preferences. To tackle this issue, we have developed a recommender system that (i) generates visualizations based on a set of visual cognition rules/guidelines and (ii) filters a subset considering user preferences. A user places interest on several aspects of a visualization: the task or problem it helps to solve, the operations it permits, or the features of the dataset it represents. This paper concentrates on characterizing user preferences, in particular: i) the sources of information used to describe the visualizations (the content descriptors), and ii) the methods to produce the most suitable recommendations thereby. We consider three sources corresponding to different aspects of interest: a title that describes the chart, a question that can be answered with the chart (and the answer), and a collection of tags describing features of the chart. We investigate user-provided input based on these sources collected in a crowd-sourced study. Firstly, information-theoretic measures are applied to each source to determine the efficiency of the input in describing user preferences and visualization contents (user and item models). Secondly, the practicability of each input is evaluated with a content-based recommender system. The overall methodology and results contribute methods for the design and analysis of visual recommender systems. The findings in this paper highlight the inputs which can (i) effectively encode the content of the visualizations and the user's visual preferences/interests, and (ii) are more valuable for recommending personalized visualizations.
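To make the information-theoretic step tangible, here is a small sketch that computes the Shannon entropy of the term distribution for each descriptor source (title, question, tags). The whitespace tokenisation, the toy data and the choice of plain entropy as the measure are illustrative assumptions, not the exact procedure of the paper.

# Sketch: Shannon entropy of the term distribution per descriptor source.
from collections import Counter
import math

def term_entropy(texts):
    """Entropy (in bits) of the term distribution over a list of descriptor strings."""
    terms = [t for text in texts for t in text.lower().split()]
    counts = Counter(terms)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical descriptor sources for two charts.
sources = {
    "title":    ["Monthly sales by region", "Average temperature per city"],
    "question": ["Which region sold the most in March?", "Which city was warmest?"],
    "tags":     ["bar chart sales region", "line chart temperature city"],
}
for name, texts in sources.items():
    print(name, round(term_entropy(texts), 2))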
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2017
In our research we explore representing the state of production machines using a new nature metaphor, called BioIoT. The underlying rationale is to represent relevant information in an agreeable manner and to increase the machines’ appeal to operators. In this paper we describe a study with twelve participants in which sensory information of a coffee machine is encoded in a virtual tree. All participants considered the interaction with the BioIoT pleasant, and most reported feeling more inclined to perform machine maintenance, to take “care” of the machine, than with a classic state representation. The study highlights personalization, intelligibility vs. representational power, limits of the metaphor, and immersive visualization as directions for follow-up research.
Strohmaier David, di Sciascio Maria Cecilia, Errecalde Marcelo, Veas Eduardo Enrique
2017
Innovations in digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success, but also a hindrance to good quality: contributions can be of poor quality because everyone, even anonymous users, can participate. Though Wikipedia has defined guidelines as to what makes the perfect article, authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever-growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. However, little has been done to support quality assessment of user-generated content through interactive tools that allow for combining automatic methods and human intelligence. We developed WikiLyzer, a toolkit comprising three Web-based interactive graphic tools designed to assist (i) knowledge discovery experts in creating and testing metrics for quality measurement, (ii) users searching for good articles, and (iii) users that need to identify weaknesses to improve a particular article. A case study suggests that experts are able to create complex quality metrics with our tool, and a user study reports on its usefulness for identifying high-quality content.
Luzhnica Granit, Öjeling Christoffer, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
The movement towards cyber-physical systems and Industry 4.0 promises to imbue each and every stage of production with a myriad of sensors. The open question is how people are to comprehend and interact with data originating from industrial machinery. We propose a metaphor that compares machines with natural beings that appeal to people by representing machine states with patterns occurring in nature. Our approach uses augmented reality (AR) to represent machine states as trees of different shapes and colors (BioAR). We performed a study on pre-attentive processing of visual features in AR to determine if our BioAR metaphor conveys fast changes unambiguously and accurately. Our results indicate that the visual features in our BioAR metaphor are processed pre-attentively. In contrast to previous research, for the BioAR metaphor, variations in form induced fewer errors than variations in hue in the target detection task.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper investigates the communication of natural language messages using a wearable haptic display. Our research spans both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed based on haptic perception fundamentals. To encode symbols, we devise an overlapping spatiotemporal stimulation (OST) method that distributes stimuli spatially and temporally with a minimal gap. An empirical study shows that, compared with spatial stimulation, OST is preferred in terms of recall. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately 5 hours per participant. Results reveal that after one hour of training, participants were able to discern 16 letters, and identify two- and three-letter words. They could discern the full English alphabet (26 letters, 92% accuracy) after approximately three hours of training, and after five hours participants were able to interpret words transmitted at an average duration of 0.6 s per word.
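The scheduling idea behind overlapping spatiotemporal stimulation can be sketched as staggering the onsets of the tactors that encode a symbol by a small gap, so that stimuli overlap in time instead of firing either simultaneously or strictly one after another. The durations, gap and tactor names below are made-up values for illustration, not the study's parameters.

# Sketch of overlapping spatiotemporal stimulation (OST) onset scheduling.
# Durations and gap are illustrative values, not the parameters used in the study.
def ost_schedule(tactors, duration_ms=200, onset_gap_ms=60):
    """Return (tactor, start_ms, end_ms) tuples with staggered, overlapping onsets."""
    return [(t, i * onset_gap_ms, i * onset_gap_ms + duration_ms)
            for i, t in enumerate(tactors)]

# Example: a symbol encoded on three (hypothetical) tactor locations.
for tactor, start, end in ost_schedule(["index", "middle", "wrist"]):
    print(f"{tactor}: {start}-{end} ms")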
Luzhnica Granit, Pammer-Schindler Viktoria, Fessl Angela, Mutlu Belgin, Veas Eduardo Enrique
2016
Especially in lifelong or professional learning, the picture of a continuous learning analytics process emerges. In this process, heterogeneous and changing data source applications provide data relevant to learning, at the same time as questions of learners to data change. This reality challenges designers of analytics tools, as it requires analytics tools to deal with data and analytics tasks that are unknown at application design time. In this paper, we describe a generic visualization tool that addresses these challenges by enabling the visualization of any activity log data. Furthermore, we evaluate how well participants can answer questions about underlying data given such generic versus custom visualizations. Study participants performed better in 5 out of 10 tasks with the generic visualization tool, worse in 1 out of 10 tasks, and without significant difference when compared to the visualizations within the data-source applications in the remaining 4 of 10 tasks. The experiment clearly showcases that overall, generic, standalone visualization tools have the potential to support analytical tasks sufficiently well.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2016
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce uRank and investigate interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. uRank includes views summarizing the contents of a recommendation set and interactive methods conveying the role of users' interests through a recommendation ranking. A formal evaluation showed that gathering items relevant to a particular topic of interest with uRank incurs lower cognitive load compared to a traditional ranked list. A second study, consisting of an ecological validation, reports on usage patterns and usability of the various interaction techniques within a free, more natural setting.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2015
uRank is a Web-based tool combining lightweight text analytics and visual methods for topic-wise exploration of document sets. It includes a view summarizing the content of the document set in meaningful terms, a dynamic document ranking view and a detailed view for further inspection of individual documents. Its major strength lies in how it supports users in reorganizing documents on-the-fly as their information interests change. We present a preliminary evaluation showing that uRank helps to reduce cognitive load compared to a traditional list-based representation.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2015
Whenever we gather or organize knowledge, the task of searching inevitably takes precedence. As exploration unfolds, it becomes cumbersome to reorganize resources along new interests, as any new search brings new results. Despite huge advances in retrieval and recommender systems from the algorithmic point of view, many real-world interfaces have remained largely unchanged: results appear in an infinite list ordered by relevance with respect to the current query. We introduce uRank, a user-driven visual tool for exploration and discovery of textual document recommendations. It includes a view summarizing the content of the recommendation set, combined with interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. We provide a formal experiment showing that uRank users can browse the document collection and efficiently gather items relevant to particular topics of interest with significantly lower cognitive load compared to traditional list-based representations.
Veas Eduardo Enrique, di Sciascio Maria Cecilia
2015
This paper presents a visual interface developed on the basis of control and transparency to elicit preferences in the scientific and cultural domain. Preference elicitation is a recognized challenge in user modeling for personalized recommender systems. The amount of feedback the user is willing to provide depends on how trustworthy the system seems to be and how invasive the elicitation process is. Our approach ranks a collection of items with a controllable text analytics model. It integrates control with the ranking and uses it as implicit preference for content-based recommendations.
Veas Eduardo Enrique, di Sciascio Maria Cecilia
2015
The ability to analyze and organize large collections, to draw relations between pieces of evidence, to build knowledge, are all part of an information discovery process. This paper describes an approach to interactive topic analysis, as an information discovery conversation with a recommender system. We describe a model that motivates our approach, and an evaluation comparing interactive topic analysis with state-of-the-art topic analysis methods.
Veas Eduardo Enrique, Sabol Vedran, Singh Santokh, Ulbrich Eva Pauline
2015
An information landscape is commonly used to represent relatedness in large, high-dimensional datasets, such as text document collections. In this paper we present interactive metaphors, inspired by map reading and visual transitions, that enhance the landscape representation for the analysis of topical changes in dynamic text repositories. The goal of interactive visualizations is to elicit insight, to allow users to visually formulate hypotheses about the underlying data and to prove them. We present a user study that investigates how users can elicit information about topics in a large document set. Our study concentrated on building and testing hypotheses using the map reading metaphors. The results show that people indeed relate topics in the document set from spatial relationships shown in the landscape, and capture the changes to topics aided by map reading metaphors.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph, Sabol Vedran
2015
Visualizations have a distinctive advantage when dealing with the information overload problem: being grounded in basic visual cognition, many people understand visualizations. However, when it comes to creating them, specific expertise of the domain and underlying data is required to determine the right representation. Although there are rules that help generate them, the results are too broad, as these methods hardly account for varying user preferences. To tackle this issue, we propose a novel recommender system that suggests visualizations based on (i) a set of visual cognition rules and (ii) user preferences collected via Amazon Mechanical Turk. The main contribution of this paper is the introduction and evaluation of a novel approach called VizRec that is able to suggest an optimal list of top-n visualizations for heterogeneous data sources in a personalized manner.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph, Sabol Vedran
2015
Identifying and using the information from distributed and heterogeneous information sources is a challenging task in many application fields. Even with services that offer well-defined structured content, such as digital libraries, it becomes increasingly difficult for a user to find the desired information. To cope with an overloaded information space, we propose a novel approach – VizRec – combining recommender systems (RS) and visualizations. VizRec suggests personalized visual representations for recommended data. One important aspect of our contribution and a prerequisite for VizRec are user preferences that build a personalization model. We present a crowd-based evaluation and show how such a model of preferences can be elicited.
Veas Eduardo Enrique, Mutlu Belgin, di Sciascio Maria Cecilia, Tschinkel Gerwald, Sabol Vedran
2015
Supporting individuals who lack the experience or competence to evaluate an overwhelming amount of information, such as cultural, scientific and educational content, makes recommender systems invaluable for coping with the information overload problem. However, even recommended information scales up and users still need to consider a large number of items. Visualization takes a foreground role, letting the user explore possibly interesting results. It leverages the high bandwidth of the human visual system to convey massive amounts of information. This paper argues the need to automate the creation of visualizations for unstructured data, adapting it to the user’s preferences. We describe a prototype solution, taking a radical approach considering both grounded visual perception guidelines and personalized recommendations to suggest the proper visualization.
Rauch Manuela, Wozelka Ralph, Veas Eduardo Enrique, Sabol Vedran
2014
Graphs are widely used to represent relationships between entities. Indeed, their simplicity in depicting connectedness, backed by a mathematical formalism, makes graphs an ideal metaphor to convey relatedness between entities irrespective of the domain. However, graphs pose several challenges for visual analysis. A large number of entities or a densely connected set quickly render the graph unreadable due to clutter. Typed relationships leading to multigraphs cannot clearly be represented in hierarchical layouts or edge bundling, common clutter reduction techniques. We propose a novel approach to visual analysis of complex graphs based on two metaphors: semantic blossom and selective expansion. Instead of showing the whole graph, we display only a small representative subset of nodes, each with a compressed summary of relations in a semantic blossom. Users apply selective expansion to traverse the graph and discover the subset of interest. A preliminary evaluation showed that our approach is intuitive and useful for graph exploration and provided insightful ideas for future improvements.
Tschinkel Gerwald, Veas Eduardo Enrique, Mutlu Belgin, Sabol Vedran
2014
Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the “Vis Wizard”, which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using the brushing and linking method.
Sabol Vedran, Albert Dietrich, Veas Eduardo Enrique, Mutlu Belgin, Granitzer Michael
2014
Linked Data has grown to become one of the largest available knowledge bases. Unfortunately, this wealth of data remains inaccessible to those without in-depth knowledge of semantic technologies. We describe a toolchain enabling users without semantic technology background to explore and visually analyse Linked Data. We demonstrate its applicability in scenarios involving data from the Linked Open Data Cloud, and research data extracted from scientific publications. Our focus is on the Web-based front-end consisting of querying and visualisation tools. The performed usability evaluations unveil mainly positive results confirming that the Query Wizard simplifies searching, refining and transforming Linked Data and, in particular, that people using the Visualisation Wizard quickly learn to perform interactive analysis tasks on the resulting Linked Data sets. In making Linked Data analysis effectively accessible to the general public, our tool has been integrated in a number of live services where people use it to analyse, discover and discuss facts with Linked Data.
Granitzer Michael, Veas Eduardo Enrique, Seifert C.
2014
In an interconnected world, Linked Data is more important than ever before. However, it is still quite difficult to access this new wealth of semantic data directly without having in-depth knowledge about SPARQL and related semantic technologies. Also, most people are currently used to consuming data as 2-dimensional tables. Linked Data is by definition always a graph, and not that many people are used to handling data in graph structures. Therefore we present the Linked Data Query Wizard, a web-based tool for displaying, accessing, filtering, exploring, and navigating Linked Data stored in SPARQL endpoints. The main innovation of the interface is that it turns the graph structure of Linked Data into a tabular interface and provides easy-to-use interaction possibilities by using metaphors and techniques from current search engines and spreadsheet applications that regular web users are already familiar with.
Mutlu Belgin, Tschinkel Gerwald, Veas Eduardo Enrique, Sabol Vedran, Stegmaier Florian, Granitzer Michael
2014
Research papers are published in various digital libraries, which deploy their own meta-models and technologies to manage, query, and analyze scientific facts therein. Commonly they only consider the meta-data provided with each article, but not the contents. Hence, reaching into the contents of publications is inherently a tedious task. On top of that, scientific data within publications are hardcoded in a fixed format (e.g. tables). So, even if one manages to get a glimpse of the data published in digital libraries, it is close to impossible to carry out any analysis on them other than what was intended by the authors. More effective querying and analysis methods are required to better understand scientific facts. In this paper, we present the web-based CODE Visualisation Wizard, which provides visual analysis of scientific facts with emphasis on automating the visualisation process, and present an experiment of its application. We also present the entire analytical process and the corresponding tool chain, including components for extraction of scientific data from publications, an easy to use user interface for querying RDF knowledge bases, and a tool for semantic annotation of scientific data sets.
Tatzgern Markus, Grasset Raphael, Veas Eduardo Enrique, Kalkofen Denis, Schmalstieg Dieter
2013
Augmented reality (AR) enables users to retrieve additional information about real world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely using a virtual copy metaphor to explore large real world objects. We evaluated our interfaces in controlled conditions and collected first experiences in a real world pilot study. Based on our findings, we put forward design recommendations that should be considered by future generations of location-based AR browsers, 3D tourist guides, or in situated urban planning.
Kalkofen Denis, Veas Eduardo Enrique, Zollmann Stefanie, Steinberger Markus, Schmalstieg Dieter
2013
In Augmented Reality (AR), ghosted views allow a viewer to explore hidden structure within the real-world environment. A body of previous work has explored which features are suitable to support the structural interplay between occluding and occluded elements. However, the dynamics of AR environments pose serious challenges to the presentation of ghosted views. While a model of the real world may help determine distinctive structural features, changes in appearance or illumination detriment the composition of occluding and occluded structure. In this paper, we present an approach that considers the information value of the scene before and after generating the ghosted view. Hereby, a contrast adjustment of preserved occluding features is calculated, which adaptively varies their visual saliency within the ghosted view visualization. This allows us to not only preserve important features, but to also support their prominence after revealing occluded structure, thus achieving a positive effect on the perception of ghosted views.