Breitfuß Gert, Kaiser Rene, Kern Roman, Kowald Dominik, Lex Elisabeth, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2017
Proceedings of the Workshop Papers of i-Know 2017, co-located with International Conference on Knowledge Technologies and Data-Driven Business 2017 (i-Know 2017), Graz, Austria, October 11-12, 2017.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce an interactive user-driven tool that aims at supporting users in understanding, refining, and reorganizing documents on the fly as information needs evolve. Decisions regarding visual and interactive design aspects are tightly grounded in a conceptual model for exploratory search. In other words, the different views in the user interface address the stages of awareness, exploration, and explanation unfolding along the discovery process, supported by a set of text-mining methods. A formal evaluation showed that gathering items relevant to a particular topic of interest with our tool incurs a lower cognitive load than a traditional ranked list. A second study reports on usage patterns and usability of the various interaction techniques within a free, unsupervised setting.
d'Aquin Mathieu, Adamou Alessandro, Dietze Stefan, Fetahu Besnik, Gadiraju Ujwal, Hasani-Mavriqi Ilire, Holz Peter, Kümmerle Joachim, Kowald Dominik, Lex Elisabeth, Lopez Sola Susana, Mataran Ricardo, Sabol Vedran, Troullinou Pinelopi, Veas Eduardo Enrique
2017
More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project. In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects that are required to enable this scenario, as well as the current state of the research in each aspect within the AFEL project. We conclude with a discussion of the ongoing plans from the project to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever we gather or organize knowledge, the task of searching inevitably takes precedence. As exploration unfolds, it becomes cumbersome to reorganize resources along new interests, as any new search brings new results. Despite huge advances in retrieval and recommender systems from the algorithmic point of view, many real-world interfaces have remained largely unchanged: results appear in an infinite list ordered by relevance with respect to the current query. We introduce uRank, a user-driven visual tool for exploration and discovery of textual document recommendations. It includes a view summarizing the content of the recommendation set, combined with interactive methods for understanding, refining and reorganizing documents on the fly as information needs evolve. We provide a formal experiment showing that uRank users can browse the document collection and efficiently gather items relevant to particular topics of interest with significantly lower cognitive load compared to traditional list-based representations.
Müller-Putz G. R., Ofner P., Schwarz Andreas, Pereira J., Luzhnica Granit, di Sciascio Maria Cecilia, Veas Eduardo Enrique, Stein Sebastian, Williamson John, Murray-Smith Roderick, Escolano C., Montesano L., Hessing B., Schneiders M., Rupp R.
2017
The aim of the MoreGrasp project is to develop a non-invasive, multimodal user interface including a brain-computer interface (BCI) for intuitive control of a grasp neuroprosthesis to support individuals with high spinal cord injury (SCI) in everyday activities. We describe the current state of the project, including the EEG system, preliminary results of natural movement decoding in people with SCI, the new electrode concept for the grasp neuroprosthesis, the shared control architecture behind the system and the implementation of a user-centered design.
Mohr Peter, Mandl David, Tatzgern Markus, Veas Eduardo Enrique, Schmalstieg Dieter, Kalkofen Denis
2017
A video tutorial effectively conveys complex motions, but may be hard to follow precisely because of its restriction to a predetermined viewpoint. Augmented reality (AR) tutorials have been demonstrated to be more effective. We bring the advantages of both together by interactively retargeting conventional, two-dimensional videos into three-dimensional AR tutorials. Unlike previous work, we do not simply overlay video, but synthesize 3D-registered motion from the video. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the experience. This approach applies to many styles of video tutorials. In this work, we concentrate on a class of tutorials which alter the surface of an object.
Guerra Jorge, Catania Carlos, Veas Eduardo Enrique
2017
This paper presents a graphical interface to identify hostile behavior in network logs. The problem of identifying and labeling hostile behavior is well known in the network security community. There is a lack of labeled datasets, which makes it difficult to deploy automated methods or to test the performance of manual ones. We describe the process of searching for and identifying hostile behavior with a graphical tool derived from an open source Intrusion Prevention System, which graphically encodes features of network connections from a log file. A design study with two network security experts illustrates the workflow of searching for patterns descriptive of unwanted behavior and labeling occurrences therewith.
Veas Eduardo Enrique
2017
In our goal to personalize the discovery of scientific information, we built systems using visual analytics principles for the exploration of textual documents [1]. The concept was extended to explore the information quality of user-generated content [2]. Our interfaces build upon a cognitive model in which awareness is a key step of exploration [3]. In education-related circles, a frequent concern is that people increasingly need to know how to search, and that knowing how to search leads to finding information efficiently. The ever-growing information overabundance right at our fingertips demands a natural skill to develop and refine search queries to get better search results, or does it? Exploratory search is an investigative behavior we adopt to build knowledge by iteratively selecting interesting features that lead to associations between representative items in the information space [4,5]. Formulating queries has proven more complicated for humans than recognizing information visually [6]. Visual analytics takes the form of an open-ended dialog between the user and the underlying analytics algorithms operating on the data [7]. This talk describes studies on exploration and discovery with visual analytics interfaces that emphasize transparency and control features to trigger awareness. We will discuss the interface design and the studies of visual exploration behavior.
di Sciascio Maria Cecilia, Mayr Lukas, Veas Eduardo Enrique
2017
Knowledge work, such as summarizing related research in preparation for writing, typically requires the extraction of useful information from scientific literature. Nowadays the primary source of information for researchers comes from electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet, the vast amount of resources makes retrieving only the most relevant results a difficult task. As a consequence, researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system, which combines a rich, interactive Web-based user interface and different visualization approaches. This system enables researchers to identify key phrases matching current information needs and spot potentially relevant literature within hierarchical document collections. The chosen context was the collection and summarization of related work in preparation for scientific writing, thus the system supports features such as bibliography and citation management, document metadata extraction and a text editor. This paper introduces the design rationale and components of PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Luzhnica Granit, Veas Eduardo Enrique
2017
This paper investigates sensitivity-based prioritisation in the construction of tactile patterns. Our evidence is obtained from three studies using a wearable haptic display with vibrotactile motors (tactors). Haptic displays intended to transmit symbols often suffer a tradeoff between throughput and accuracy. For a symbol encoded with more than one tactor, simultaneous onset (spatial encoding) yields the highest throughput at the expense of accuracy. Sequential onset increases accuracy at the expense of throughput. In the desire to overcome these issues, we investigate aspects of prioritisation based on sensitivity applied to the encoding of haptic patterns. First, we investigate an encoding method using mixed intensities, where different body locations are simultaneously stimulated with different vibration intensities. We investigate whether prioritising the intensity based on sensitivity improves identification accuracy when compared to simple spatial encoding. Second, we investigate whether prioritising onset based on sensitivity affects the identification of overlapped spatiotemporal patterns. A user study shows that this method significantly increases accuracy. Furthermore, in a third study, we identify three locations on the hand that lead to accurate recall. Thereby, we design the layout of a haptic display equipped with eight tactors, capable of encoding 36 symbols with only one or two locations per symbol.
Luzhnica Granit, Veas Eduardo Enrique, Stein Sebastian, Pammer-Schindler Viktoria, Williamson John, Murray-Smith Roderick
2017
Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic and power model. We show that these generic models lead to limited decoding precision, and propose a method for model personalization adjusting to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), circular layouts performed particularly well, and personalisation through sensitivity adjustment increased decoding precision.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph
2017
In today's digital age, with an increasing number of websites, social/learning platforms and computer-mediated communication systems, finding valuable information is a challenging and tedious task, regardless of a person's discipline. However, visualizations have been shown to be effective in dealing with huge datasets: because they are grounded in visual cognition, people understand them and can naturally perform visual operations such as clustering, filtering and comparing quantities. But creating appropriate visual representations of data is also challenging: it requires domain knowledge, understanding of the data, and knowledge about task and user preferences. To tackle this issue, we have developed a recommender system that (i) generates visualizations based on a set of visual cognition rules/guidelines, and (ii) filters a subset of them considering user preferences. A user places interest on several aspects of a visualization: the task or problem it helps to solve, the operations it permits, or the features of the dataset it represents. This paper concentrates on characterizing user preferences, in particular (i) the sources of information used to describe the visualizations (the content descriptors), and (ii) the methods to thereby produce the most suitable recommendations. We consider three sources corresponding to different aspects of interest: a title that describes the chart, a question that can be answered with the chart (and the answer), and a collection of tags describing features of the chart. We investigate user-provided input based on these sources, collected in a crowd-sourced study. Firstly, information-theoretic measures are applied to each source to determine the efficiency of the input in describing user preferences and visualization contents (user and item models). Secondly, the practicability of each input is evaluated with a content-based recommender system.
The overall methodology and results contribute methods for the design and analysis of visual recommender systems. The findings in this paper highlight the inputs that (i) effectively encode the content of the visualizations and users' visual preferences/interests, and (ii) are most valuable for recommending personalized visualizations.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2017
In our research we explore representing the state of production machines using a new nature metaphor, called BioIoT. The underlying rationale is to represent relevant information in an agreeable manner and to increase machines' appeal to operators. In this paper we describe a study with twelve participants in which sensory information from a coffee machine is encoded in a virtual tree. All participants considered the interaction with the BioIoT pleasant, and most reported feeling more inclined to perform machine maintenance, to "care" for the machine, than with a classic state representation. The study highlights personalization, intelligibility versus representational power, limits of the metaphor, and immersive visualization as directions for follow-up research.
Strohmaier David, di Sciascio Maria Cecilia, Errecalde Marcelo, Veas Eduardo Enrique
2017
Innovations in digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success, but also a hindrance to good quality: contributions can be of poor quality because everyone, even anonymous users, can participate. Though Wikipedia has defined guidelines as to what makes the perfect article, authors find it difficult to assert whether their contributions comply with them, and reviewers cannot cope with the ever-growing number of articles pending review. Great efforts have been invested in algorithmic methods for the automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. However, little has been done to support the quality assessment of user-generated content through interactive tools that combine automatic methods and human intelligence. We developed WikiLyzer, a toolkit comprising three Web-based interactive graphic tools designed to assist (i) knowledge discovery experts in creating and testing metrics for quality measurement, (ii) users searching for good articles, and (iii) users who need to identify weaknesses in order to improve a particular article. A case study suggests that experts are able to create complex quality metrics with our tool, and a user study reports on its usefulness for identifying high-quality content.