Chiancone Alessandro, Cuder Gerald, Geiger Bernhard, Harzl Annemarie, Tanzer Thomas, Kern Roman
2019
This paper presents a hybrid model for the prediction of magnetostriction in power transformers by leveraging the strengths of a data-driven approach and a physics-based model. Specifically, a non-linear physics-based model for magnetostriction as a function of the magnetic field is employed, the parameters of which are estimated as linear combinations of electrical coil measurements and coil dimensions. The model is validated in a practical scenario with coil data from two different suppliers, showing that the proposed approach captures the different magnetostrictive properties of the two suppliers and provides an estimation of magnetostriction in agreement with the measurement system in place. It is argued that the combination of a non-linear physics-based model with few parameters and a linear data-driven model to estimate these parameters is attractive both in terms of model accuracy and because it allows training the data-driven part with comparably small datasets.
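The hybrid pattern described above, a nonlinear physics model with few parameters that are in turn estimated as linear combinations of measured features, can be sketched as follows. The tanh saturation curve, the feature dimensions and the least-squares fit are illustrative assumptions, not the model from the paper.

```python
import numpy as np

def physics_model(H, a, b):
    """Toy magnetostriction-like saturation curve: lambda(H) = a * tanh(b * H).
    Stand-in for the (unspecified) non-linear physics-based model."""
    return a * np.tanh(b * H)

def fit_parameter_maps(features, a_obs, b_obs):
    """Estimate linear maps w_a, w_b by least squares so that
    a ≈ features @ w_a and b ≈ features @ w_b."""
    w_a, *_ = np.linalg.lstsq(features, a_obs, rcond=None)
    w_b, *_ = np.linalg.lstsq(features, b_obs, rcond=None)
    return w_a, w_b

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 3))   # e.g. coil measurements and dimensions
w_a_true = np.array([1.0, 0.5, -0.2])
w_b_true = np.array([0.3, -0.1, 0.8])
a, b = features @ w_a_true, features @ w_b_true

w_a, w_b = fit_parameter_maps(features, a, b)

# Predict the curve for a new coil from its features alone.
new_coil = np.array([0.2, -1.0, 0.5])
H = np.linspace(0.0, 2.0, 5)
pred = physics_model(H, new_coil @ w_a, new_coil @ w_b)
```

Because the physics part has only two parameters per coil, the linear maps can be fitted from comparably small datasets, which is the attractiveness the abstract argues for.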
Stanisavljevic Darko, Cemernek David, Gursch Heimo, Urak Günter, Lechner Gernot
2019
Additive manufacturing is becoming an increasingly important production technology, mainly driven by the ability to realise extremely complex structures using multiple materials but without assembly or excessive waste. Nevertheless, like any high-precision technology, additive manufacturing is sensitive to interferences during the manufacturing process. These interferences, like vibrations, might lead to deviations in product quality, becoming manifest for instance in a reduced lifetime of a product or application issues. This study targets the issue of detecting such interferences during a manufacturing process in an exemplary experimental setup. Collecting data with current sensor technology directly on a 3D printer enables a quantitative detection of interferences. The evaluation provides insights into the effectiveness of the realised application-oriented setup, the effort required for equipping a manufacturing system with sensors, and the effort for acquiring and processing the data. These insights are of practical utility for organisations dealing with additive manufacturing: the chosen approach for detecting interferences shows promising results, reaching interference detection rates of up to 100% depending on the applied data processing configuration.
Santos Tiago, Schrunner Stefan, Geiger Bernhard, Pfeiler Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2019
Semiconductor manufacturing is a highly innovative branch of industry, where a high degree of automation has already been achieved. For example, devices tested to be outside of their specifications in electrical wafer test are automatically scrapped. In this paper, we go one step further and analyze test data of devices still within the limits of the specification, by exploiting the information contained in the analog wafermaps. To that end, we propose two feature extraction approaches with the aim to detect patterns in the wafer test dataset. Such patterns might indicate the onset of critical deviations in the production process. The studied approaches are: 1) classical image processing and restoration techniques in combination with sophisticated feature engineering and 2) a data-driven deep generative model. The two approaches are evaluated on both a synthetic and a real-world dataset. The synthetic dataset has been modeled based on real-world patterns and characteristics. We found both approaches to provide similar overall evaluation metrics. Our in-depth analysis helps to choose one approach over the other depending on data availability as a major aspect, as well as on available computing power and required interpretability of the results.
Lacic Emanuel, Reiter-Haas Markus, Duricic Tomislav, Slawicek Valentin, Lex Elisabeth
2019
In this work, we present the findings of an online study, where we explore the impact of utilizing embeddings to recommend job postings under real-time constraints. On the Austrian job platform Studo Jobs, we evaluate two popular recommendation scenarios: (i) providing similar jobs and (ii) personalizing the job postings that are shown on the homepage. Our results show that for recommending similar jobs, we achieve the best online performance in terms of Click-Through Rate when we employ embeddings based on the most recent interaction. To personalize the job postings shown on a user's homepage, however, combining embeddings based on the frequency and recency with which a user interacts with job postings results in the best online performance.
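The frequency-and-recency combination described above can be sketched as follows; the geometric recency decay and the cosine-similarity ranking are illustrative assumptions, not the Studo Jobs implementation.

```python
import numpy as np

def user_profile(item_vecs, interactions, decay=0.8):
    """Build a user vector from item embeddings. interactions is a list of
    item ids, oldest first: recency is a geometric decay, frequency is
    captured because repeated items contribute repeatedly."""
    n = len(interactions)
    profile = np.zeros(item_vecs.shape[1])
    for pos, item in enumerate(interactions):
        profile += decay ** (n - 1 - pos) * item_vecs[item]
    return profile

def recommend(item_vecs, profile, exclude, k=2):
    """Rank unseen items by cosine similarity to the user profile."""
    sims = item_vecs @ profile / (
        np.linalg.norm(item_vecs, axis=1) * np.linalg.norm(profile) + 1e-12)
    order = [i for i in np.argsort(-sims) if i not in exclude]
    return order[:k]

rng = np.random.default_rng(1)
item_vecs = rng.normal(size=(6, 4))       # hypothetical job-posting embeddings
clicks = [0, 2, 2, 5]                     # the user clicked jobs 0, 2, 2, 5
top = recommend(item_vecs, user_profile(item_vecs, clicks), exclude=set(clicks))
```

Setting `decay=1.0` would weight purely by frequency, while keeping only the last interaction corresponds to the recency-only variant the abstract found best for similar-job recommendations.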
Duricic Tomislav, Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2019
User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process, such as explicit trust scores that are assigned by users to others, or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of regular equivalence applied to a trust network to generate a similarity matrix that is used to select the k-nearest neighbors for item recommendation. Two vertices in a network are regularly equivalent if their neighbors are themselves equivalent; by using the iterative approach of calculating regular equivalence, we can study the impact of strong and weak ties on item recommendation. We evaluate our approach on cold-start users on a dataset crawled from Epinions and find that by using weak ties in addition to strong ties, we can improve the performance of a trust-based recommender in terms of recommendation accuracy.
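The iterative regular-equivalence computation can be sketched as below. The particular fixed-point form (S updated as alpha * A S A^T + I) and the value of alpha are assumptions following a standard formulation, not necessarily the exact variant used in the paper.

```python
import numpy as np

def regular_equivalence(A, alpha=0.1, iters=20):
    """Iterate S <- alpha * A @ S @ A.T + I: two nodes become similar
    if their neighbors are similar (regular equivalence)."""
    n = A.shape[0]
    S = np.eye(n)
    for _ in range(iters):
        S = alpha * A @ S @ A.T + np.eye(n)
    return S

def k_nearest_trusted(S, user, k=2):
    """Select the k most similar other users as CF neighbors."""
    sims = S[user].copy()
    sims[user] = -np.inf
    return list(np.argsort(-sims)[:k])

# Tiny directed trust network: user 0 trusts 1 and 2; user 3 also trusts 1 and 2.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)
S = regular_equivalence(A)
neighbours = k_nearest_trusted(S, user=0)
```

Users 0 and 3 never interact directly, yet they come out as most similar because they trust the same people, which is exactly the kind of weak-tie signal the abstract exploits for cold-start users.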
Lassnig Markus, Stabauer Petra, Breitfuß Gert, Müller Julian
2019
Numerous research results in the field of business model innovation have shown that over 90 percent of all business models of the past 50 years arose from a recombination of existing concepts. In principle, this also holds for digital business model innovations. Given the breadth of potential digital business model innovations, the authors wanted to know which model patterns carry which significance in business practice. The digital transformation with new business models was therefore examined in an empirical study based on qualitative interviews with 68 companies. Seven suitable business model patterns were identified, classified by their disruption potential from evolutionary to revolutionary, and their degree of realisation in the companies was analysed. The highly condensed conclusion is that the topic of business model innovation through Industry 4.0 and digital transformation has arrived in the companies. However, there are very different speeds of implementation and degrees of novelty among the business model ideas. Most companies prefer the step-by-step further development of business models (evolutionary), since the fundamental nature of the value proposition remains intact. In contrast, there are also companies that are already making radical changes affecting the entire business logic (revolutionary business model innovations). Accordingly, this article presents a clustering of business model innovators, from Hesitators to Followers to Optimizers to Leaders in business model innovation.
Wolfbauer Irmtraud
2019
Presentation of PhD.
Use case: an online learning platform for apprentices.
Research opportunities: the target group is under-researched.
1. Computer usage & ICT self-efficacy
2. Communities of practice, identities as learners
Reflection guidance technologies:
3. Rebo, the reflection guidance chatbot
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Vagliano Iacopo, Fessl Angela, Günther Franziska, Köhler Thomas, Mezaris Vasileios, Saleh Ahmed, Scherp Ansgar, Simic Ilija
2019
The MOVING platform enables its users to improve their information literacy by training how to exploit data and text mining methods in their daily research tasks. In this paper, we show how it can support researchers in various tasks, and we introduce its main features, such as text and video retrieval and processing, advanced visualizations, and the technologies to assist the learning process.
Fessl Angela, Apaolaza Aitor, Gledson Ann, Pammer-Schindler Viktoria, Vigo Markel
2019
Searching on the web is a key activity for working and learning purposes. In this work, we aimed to motivate users to reflect on their search behaviour and to experiment with different search functionalities. We implemented a widget that logs user interactions within a search platform, mirrors search behaviours back to users, and prompts users to reflect on them. We carried out two studies to evaluate the impact of such a widget on search behaviour: in Study 1 (N = 76), participants received screenshots of the widget including reflection prompts, while in Study 2 (N = 15), participants conducted a maximum of 10 search tasks over a period of two weeks on a search platform that contained the widget. Study 1 shows that reflection prompts induce meaningful insights about search behaviour. Study 2 suggests that, when using a novel search platform for the first time, those participants who had the widget prioritised search behaviours over time. The incorporation of the widget into the search platform after users had become familiar with it, however, was not observed to impact search behaviour. While the potential to support un-learning of routines could not be shown, the two studies suggest the widget’s usability, perceived usefulness, potential to induce reflection and potential to impact search behaviour.
Kopeinik Simone, Seitlinger Paul, Lex Elisabeth
2019
Kopeinik Simone, Lex Elisabeth, Kowald Dominik, Albert Dietrich, Seitlinger Paul
2019
When people engage in Social Networking Sites, they influence one another through their contributions. Prior research suggests that the interplay between individual differences and environmental variables, such as a person’s openness to conflicting information, can give rise to either public spheres or echo chambers. In this work, we aim to unravel critical processes of this interplay in the context of learning. In particular, we observe high school students’ information behavior (search and evaluation of Web resources) to better understand a potential coupling between confirmatory search and polarization and, in further consequence, improve learning analytics and information services for individual and collective search in learning scenarios. In an empirical study, we had 91 high school students performing an information search in a social bookmarking environment. Gathered log data was used to compute indices of confirmatory search and polarization as well as to analyze the impact of social stimulation. We find confirmatory search and polarization to correlate positively and social stimulation to mitigate, i.e., reduce, the two variables’ relationship. From these findings, we derive practical implications for future work that aims to refine our formalism to compute confirmatory search and polarization indices and to apply it for depolarizing information services.
Fruhwirth Michael, Pammer-Schindler Viktoria, Thalmann Stefan
2019
Data plays a central role in many of today's business models. With the help of advanced analytics, knowledge about real-world phenomena can be discovered from data. This may lead to unintended knowledge spillover through a data-driven offering. To properly consider this risk in the design of data-driven business models, suitable decision support is needed. Prior research on approaches that support such decision-making is scarce. We frame designing business models as a set of decision problems with the lens of Behavioral Decision Theory and describe a Design Science Research project conducted in the context of an automotive company. We develop an artefact that supports identifying knowledge risks, concomitant with design decisions, during the design of data-driven business models and verify knowledge risks as a relevant problem. In further research, we explore the problem in-depth and further design and evaluate the artefact within the same company as well as in other companies.
Silva Nelson, Madureira Luis
2019
Uncovering hidden suppliers and their complex relationships across the entire supply chain is quite difficult. Unexpected disruptions, e.g. earthquakes, volcanic eruptions, bankruptcies or nuclear disasters, have a huge impact on major supply chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely affect the delivery of a product. Therefore, it is crucial to constantly monitor both direct and indirect suppliers for problems.
Schlager Elke, Gursch Heimo, Feichtinger Gerald
2019
Poster presenting the final implementation of the "Data Management System" at Know-Center for the COMFORT project.
Feichtinger Gerald, Gursch Heimo
2019
Poster: general project presentation.
Monsberger Michael, Koppelhuber Daniela, Sabol Vedran, Gursch Heimo, Spataru Adrian, Prentner Oliver
2019
A lot of research is currently focused on studying user behavior indirectly by analyzing sensor data. However, only little attention has been given to the systematic acquisition of immediate user feedback to study user behavior in buildings. In this paper, we present a novel user feedback system which allows building users to provide feedback on the perceived sense of personal comfort in a room. To this end, a dedicated easy-to-use mobile app has been developed; it is complemented by a supporting infrastructure, including a web page for an at-a-glance overview. The obtained user feedback is compared with sensor data to assess whether building services (e.g., heating, ventilation and air-conditioning systems) are operated in accordance with user requirements. This serves as a basis to develop algorithms capable of optimizing building operation by providing recommendations to facility management staff or by automatic adjustment of operating points of building services. In this paper, we present the basic concept of the novel feedback system for building users and first results from an initial test phase. The results show that building users utilize the developed app to provide both positive and negative feedback on room conditions. They also show that it is possible to identify rooms with non-ideal operating conditions and that reasonable measures to improve building operation can be derived from the gathered information. The results highlight the potential of the proposed system.
Fuchs Alexandra, Geiger Bernhard, Hobisch Elisabeth, Koncar Philipp, Saric Sanja, Scholger Martina
2019
with contributions from Denis Helic and Jacqueline More
Lindstaedt Stefanie, Geiger Bernhard, Pirker Gerhard
2019
Big Data and data-driven modeling are receiving more and more attention in various research disciplines, where they are often considered as universal remedies. Despite their remarkable records of success, in certain cases a purely data-driven approach has proven to be suboptimal or even insufficient. This extended abstract briefly defines the terms Big Data and data-driven modeling and characterizes scenarios in which a strong focus on data has proven to be promising. Furthermore, it explains what progress can be made by fusing concepts from data science and machine learning with current physics-based concepts to form hybrid models, and how these can be applied successfully in the field of engine pre-simulation and engine control.
di Sciascio Maria Cecilia, Strohmaier David, Errecalde Marcelo Luis, Veas Eduardo Enrique
2019
Digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success but also a hindrance to good quality. Although Wikipedia has established guidelines for the “perfect article,” authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever-growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. Instead, our contribution is an interactive tool that combines automatic classification methods and human interaction in a toolkit, whereby experts can experiment with new quality metrics and share them with authors that need to identify weaknesses to improve a particular article. A design study shows that experts are able to effectively create complex quality metrics in a visual analytics environment. In turn, a user study evidences that regular users can identify flaws, as well as high-quality content based on the inspection of automatic quality scores.
di Sciascio Maria Cecilia, Brusilovsky Peter, Trattner Christoph, Veas Eduardo Enrique
2019
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial and error, and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control has been developed to support the exploratory search process. In this work, we present our attempt to increase the power of exploratory search interfaces by using ideas of social search, for instance, leveraging information left by past users of information systems. Social search technologies are highly popular today, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This article presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. This work presents the full series of evaluations conducted to, first, assess the value of the social models in contexts independent of the user interface, in terms of objective and perceived accuracy. Then, in a study with the full-fledged system, we investigated system accuracy and subjective aspects with a structural model, revealing that when users actively interacted with all of its control features, the hybrid system outperformed a baseline content-based-only tool and users were more satisfied.
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Such catchphrases as “Industry 4.0” and “smart manufacturing” reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaboration across existing department and process boundaries. Converting the process input, internal and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop-floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management, which is formalised through various approaches, e.g., Six Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be improved continuously.
Geiger Bernhard, Koch Tobias
2019
In 1959, Rényi proposed the information dimension and the d-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the quantizer step size 1/m in the limit as m → ∞. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function R(D) of the stochastic process divided by - log(D) in the limit as D ↓ 0. It is further shown that among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate and the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.
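The two quantities the abstract compares can be written compactly; the notation below follows standard conventions and is an assumption, not copied from the paper.

```latex
% Information dimension rate: entropy rate H'(\cdot) of the process
% uniformly quantized with step size 1/m, normalized by \log m:
d(\{X_t\}) \;=\; \lim_{m\to\infty} \frac{H'\big([X_t]_m\big)}{\log m}
% Rate-distortion dimension: twice the rate-distortion function R(D),
% normalized by -\log D as the distortion vanishes:
d_R(\{X_t\}) \;=\; \lim_{D\downarrow 0} \frac{2\,R(D)}{-\log D}
% The paper shows that the two limits coincide:
d(\{X_t\}) \;=\; d_R(\{X_t\})
```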
Kaiser René
2019
Video content and technology is an integral part of our private and professional lives. We consume news and entertainment content, and besides communication and learning there are many more significant application areas. One area, however, where video content and technology is not (yet) utilized and exploited to a large extent are production environments in factories of the producing industries like the semiconductor and electronic components and systems (ECS) industries. This article outlines some of the opportunities and challenges towards better exploitation of video content and technology in such contexts. An understanding of the current situation is the basis for future socio-technical interventions where video technology may be integrated in work processes within factories.
Schweimer Christoph, Geiger Bernhard, Suleimenova Diana, Groen Derek, Gfrerer Christine, Pape David, Elsaesser Robert, Kocsis Albert Tihamér, Liszkai B., Horváth Zoltan
2019
Guerra Torres Jorge, Catania Carlos, Veas Eduardo Enrique
2019
Modern Network Intrusion Detection systems depend on models trained with up-to-date labeled data. Yet, the process of labeling a network traffic dataset is especially expensive, since expert knowledge is required to perform the annotations. Visual analytics applications exist that claim to considerably reduce the labeling effort, but the expert still needs to ponder several factors before issuing a label. Moreover, the effect of bad labels (noise) on the final model is most often not evaluated. The present article introduces a novel active learning strategy that learns to predict labels in (pseudo) real-time as the user performs the annotation. The system, called RiskID, presents several innovations: i) a set of statistical methods summarize the information, which is illustrated in a visual analytics application; ii) this application interfaces with the active learning strategy for building a random forest model as the user issues annotations; iii) the (pseudo) real-time predictions of the model are fed back visually to scaffold the traffic annotation task. Finally, iv) an evaluation framework is introduced that represents a complete methodology for evaluating active learning solutions, including resilience against noise.
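The label-as-you-annotate loop can be sketched as follows. This is not the RiskID code: the synthetic two-class "connections", the random stand-in for the analyst's choices, and the nearest-centroid model (a dependency-free substitute for the paper's random forest) are all illustrative assumptions.

```python
import numpy as np

def fit_centroids(X, y):
    """Fit one centroid per class; a minimal stand-in for the classifier."""
    return {c: X[np.array(y) == c].mean(axis=0) for c in set(y)}

def predict(centroids, X):
    """Assign each row to the class of its nearest centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return [classes[i] for i in np.argmin(dists, axis=0)]

rng = np.random.default_rng(2)
# Synthetic "connections": two well-separated behaviour classes.
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(4, 1, (30, 4))])
y_true = np.array([0] * 30 + [1] * 30)

labeled, labels = [], []
for step in range(10):                    # the analyst annotates 10 connections
    i = int(rng.integers(len(X)))         # random stand-in for the user's pick
    labeled.append(i)
    labels.append(int(y_true[i]))
    if len(set(labels)) > 1:              # need at least two classes to fit
        centroids = fit_centroids(X[labeled], labels)
        pool = [j for j in range(len(X)) if j not in labeled]
        hints = predict(centroids, X[pool])   # fed back to the UI as hints
```

The point of the sketch is the feedback structure: every issued label triggers a refit, and the refreshed predictions on the unlabeled pool scaffold the next annotation.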
Guerra Torres Jorge, Veas Eduardo Enrique, Catania Carlos
2019
Labeling a real network dataset is especially expensive in computer security, as an expert has to ponder several factors before assigning each label. This paper describes an interactive intelligent system to support the task of identifying hostile behavior in network logs. The RiskID application uses visualizations to graphically encode features of network connections and promote visual comparison. In the background, two algorithms are used to actively organize connections and predict potential labels: a recommendation algorithm and a semi-supervised learning strategy. These algorithms, together with interactive adaptions to the user interface, constitute a behavior recommendation. A study is carried out to analyze how the algorithms for recommendation and prediction influence the workflow of labeling a dataset. The results of a study with 16 participants indicate that the behavior recommendation significantly improves the quality of labels. Analyzing interaction patterns, we identify a more intuitive workflow used when behavior recommendation is available.
Luzhnica Granit, Veas Eduardo Enrique
2019
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words because they develop strategies that enable them to read faster. This paper describes a study investigating recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of exposure to words. Our study shows that, with exposure to words, participants significantly improved on recognition of exposed words. The findings suggest that such a word exposure technique could be used during the training of novice users in order to boost the word recognition of a particular dictionary of words.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
Barreiros Carla, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2019
We present a visual interface for communicating the internal state of a coffee machine via a tree metaphor. Nature-inspired representations have a positive impact on human well-being. We also hypothesize that representing the coffee machine as a tree stimulates emotional connection to it, which leads to better maintenance performance. The first study assessed the understandability of the tree representation, comparing it with icon-based and chart-based representations. An online survey with 25 participants indicated no significant mean error difference between representations. A two-week field study assessed the maintenance performance of 12 participants, comparing the tree representation with the icon-based representation. Based on 240 interactions with the coffee machine, we concluded that participants understood the machine states significantly better in the tree representation. Their comments and behavior indicated that the tree representation encouraged an emotional engagement with the machine. Moreover, the participants performed significantly more optional maintenance tasks with the tree representation.
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie, Kern Roman, Lex Elisabeth
2019
Kowald Dominik, Lacic Emanuel, Theiler Dieter, Traub Matthias, Kuffer Lucky, Lindstaedt Stefanie, Lex Elisabeth
2019
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Lex Elisabeth, Kowald Dominik
2019
Toller Maximilian, Santos Tiago, Kern Roman
2019
Season length estimation is the task of identifying the number of observations in the dominant repeating pattern of seasonal time series data. As such, it is a common pre-processing task crucial for various downstream applications. Inferring season length from a real-world time series is often challenging due to phenomena such as slightly varying period lengths and noise. These issues may, in turn, lead practitioners to dedicate considerable effort to preprocessing of time series data since existing approaches either require dedicated parameter-tuning or their performance is heavily domain-dependent. Hence, to address these challenges, we propose SAZED: spectral and average autocorrelation zero distance density. SAZED is a versatile ensemble of multiple, specialized time series season length estimation approaches. The combination of various base methods selected with respect to domain-agnostic criteria and a novel seasonality isolation technique allows a broad applicability to real-world time series of varied properties. Further, SAZED is theoretically grounded and parameter-free, with a computational complexity of O(n log n), which makes it applicable in practice. In our experiments, SAZED was statistically significantly better than every other method on at least one dataset. The datasets we used for the evaluation consist of time series data from various real-world domains, sterile synthetic test cases and synthetic data that were designed to be seasonal and yet have no finite statistical moments of any order.
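A minimal sketch of one autocorrelation-based base method of the kind SAZED combines (the full ensemble and its seasonality-isolation step are not reproduced here): report the lag of the first interior peak of the sample autocorrelation as the season length.

```python
import numpy as np

def season_length_acf(x):
    """Estimate season length as the lag of the first local
    autocorrelation maximum after lag 1. Returns None if no peak exists."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Unbiased sample autocorrelation for lags 0 .. n//2 - 1.
    acf = np.correlate(x, x, mode="full")[n - 1:] / np.arange(n, 0, -1)
    acf = acf[: n // 2]
    for lag in range(2, len(acf) - 1):
        if acf[lag - 1] < acf[lag] >= acf[lag + 1]:
            return lag
    return None

t = np.arange(240)
estimate = season_length_acf(np.sin(2 * np.pi * t / 12))
```

A single base method like this is exactly what breaks down under varying period lengths or heavy noise, which motivates combining several such estimators the way the abstract describes.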
Toller Maximilian, Geiger Bernhard, Kern Roman
2019
Distance-based classification is among the most competitive classification methods for time series data. The most critical component of distance-based classification is the selected distance function. Past research has proposed various different distance metrics or measures dedicated to particular aspects of real-world time series data, yet there is an important aspect that has not been considered so far: robustness against arbitrary data contamination. In this work, we propose a novel distance metric that is robust against arbitrarily “bad” contamination and has a worst-case computational complexity of O(n log n). We formally argue why our proposed metric is robust, and demonstrate in an empirical evaluation that the metric yields competitive classification accuracy when applied in k-Nearest Neighbor time series classification.
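The abstract does not give the metric itself, so the sketch below substitutes a simple robust dissimilarity of the same flavour: the median of pointwise absolute differences, which ignores arbitrarily bad contamination in up to half of the observations and costs O(n log n) via sorting. It illustrates the kNN usage, not the paper's actual metric.

```python
import numpy as np

def robust_distance(x, y):
    """Median of pointwise absolute differences: a robust dissimilarity
    (note: not a true metric, chosen here only for illustration)."""
    return float(np.median(np.abs(np.asarray(x) - np.asarray(y))))

def knn_classify(train, labels, query, k=1):
    """k-Nearest Neighbor classification under the robust dissimilarity."""
    d = [robust_distance(t, query) for t in train]
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return vals[np.argmax(counts)]

train = [np.zeros(50), np.ones(50)]
labels = ["flat", "high"]
query = np.zeros(50)
query[:5] = 1e9          # five arbitrarily contaminated observations
pred = knn_classify(train, labels, query)
```

Under a Euclidean distance the five corrupted points would dominate and misclassify the query; the median-based dissimilarity discards them entirely, which is the robustness property the abstract targets.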
Breitfuß Gert, Berger Martin, Doerrzapf Linda
2019
The Austrian Federal Ministry for Transport, Innovation and Technology created an initiative to fund the setup and operation of Living Labs to provide a vital innovation ecosystem for mobility and transport. Five Urban Mobility Labs (UML) located in four urban areas have been selected for funding (duration four years) and started operation in 2017. In order to cover the risk of a high dependency on public funding (which is mostly limited in time), the lab management teams face the challenge of developing a viable and future-proof UML Business Model. The overall research goal of this paper is to gain empirical insights into how a UML Business Model evolves from a long-term perspective and which success factors play a role. To answer the research question, a mix of desk research and qualitative methods has been selected. In order to gain insight into the UML Business Models, two rounds of 10 semi-structured interviews (with two responsible persons from each UML) are planned. The first round of interviews took place between July 2018 and January 2019. The second round of interviews is planned for 2020. Between the two rounds of the survey, a Business Model workshop is planned to share and create ideas for future Business Model developments. Based on the gained research insights, a comprehensive list of success factors and hands-on recommendations will be derived. This should help UML organisations in developing a viable Business Model in order to support sustainable innovations in transport and mobility.
Geiger Bernhard
2019
joint work with Tobias Koch, Universidad Carlos III de Madrid
Silva Nelson, Blascheck Tanja, Jianu Radu, Rodrigues Nils, Weiskopf Daniel, Raubal Martin, Schreck Tobias
2019
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
Kaiser Rene
2019
This paper gives a comprehensive overview of the Virtual Director concept. A Virtual Director is a software component automating the key decision-making tasks of a TV broadcast director. It decides how to mix and present the available content streams on a particular playout device, most essentially deciding which camera view to show and when to switch to another. A Virtual Director makes it possible to take decisions respecting individual user preferences and playout device characteristics. In order to take meaningful decisions, a Virtual Director must be continuously informed by real-time sensors which emit information about what is happening in the scene. From such low-level 'cues', the Virtual Director infers higher-level events, actions, facts and states, which in turn trigger the real-time processes deciding on the presentation of the content. The behaviour of a Virtual Director, the 'production grammar', defines how decisions are taken, generally encompassing two main aspects: selecting what is most relevant, and deciding how to show it while applying cinematographic principles.
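The cue-to-event-to-decision pipeline described above can be sketched as a tiny rule system. This is a hedged illustration only: all cue, event, and camera names below are invented for the example, and the paper's actual production grammar formalism is not reproduced.

```python
# Illustrative production grammar: higher-level events mapped to
# camera decisions. All names here are hypothetical examples.
GRAMMAR = {
    "goal_chance": "goal_cam",
    "foul": "close_up_cam",
}

def infer_events(cues):
    """Lift low-level sensor cues to higher-level events, as a
    Virtual Director's inference stage might."""
    events = []
    if {"ball_near_goal", "crowd_noise_rising"} <= set(cues):
        events.append("goal_chance")
    if "whistle" in cues:
        events.append("foul")
    return events

def select_camera(cues, default="wide_cam"):
    """Decide which camera view to show for the current cues,
    falling back to a default view when no rule fires."""
    for event in infer_events(cues):
        if event in GRAMMAR:
            return GRAMMAR[event]
    return default
```

A real Virtual Director would additionally weigh user preferences and playout device characteristics in `select_camera`; the fallback default stands in for the "always have a valid view" requirement of live broadcast.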
Thalmann Stefan, Gursch Heimo, Suschnigg Josef, Gashi Milot, Ennsbrunner Helmut, Fuchs Anna Katharina, Schreck Tobias, Mutlu Belgin, Mangler Jürgen, Huemer Christian, Lindstaedt Stefanie
2019
Current trends in manufacturing lead to more intelligent products, produced in global supply chains in shorter cycles, taking more and more complex requirements into account. To manage this increasing complexity, cognitive decision support systems, building on data-analytic approaches and focusing on the product life cycle stages, seem a promising approach. With two high-tech companies (world market leaders in their domains) from Austria, we are approaching this challenge and jointly developing cognitive decision support systems for three real-world industrial use cases. In this position paper, we introduce our understanding of cognitive decision support and present three industrial use cases, focusing on the requirements for cognitive decision support. Finally, we describe our preliminary solution approach for each use case and our next steps.
Stepputat Kendra, Kienreich Wolfgang, Dick Christopher S.
2019
With this article, we present the ongoing research project “Tango Danceability of Music in European Perspective” and the transdisciplinary research design it is built upon. Three main aspects of tango argentino are in focus—the music, the dance, and the people—in order to understand what is considered danceable in tango music. The study of all three parts involves computer-aided analysis approaches, and the results are examined within ethnochoreological and ethnomusicological frameworks. Two approaches are illustrated in detail to show initial results of the research model. Network analysis based on the collection of online tango event data and quantitative evaluation of data gathered by an online survey showed significant results, corroborating the hypothesis of gatekeeping effects in the shaping of musical preferences. The experiment design includes incorporation of motion capture technology into dance research. We demonstrate certain advantages of transdisciplinary approaches in the study of Intangible Cultural Heritage, in contrast to conventional studies based on methods from just one academic discipline.
Pammer-Schindler Viktoria
2019
This is a commentary of mine, created in the context of an open review process, selected for publication in a juried process, and published alongside the accepted original paper at the given DOI.
Xie Benjamin, Harpstead Erik, DiSalvo Betsy, Slovak Petr, Kharuffa Ahmed, Lee Michael J., Pammer-Schindler Viktoria, Ogan Amy, Williams Joseph Jay
2019
Winter Kevin, Kern Roman
2019
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish respectively.
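The dilated convolutions the abstract mentions differ from traditional ones only in the spacing of the kernel taps. As a minimal sketch (not the submitted system's architecture, and with illustrative names), a single 1D dilated convolution over a sequence of embedding vectors can be written as:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """1D dilated convolution (valid padding) over a sequence of
    embedding vectors x of shape (seq_len, emb_dim) with a kernel
    w of shape (k, emb_dim). dilation=1 is a traditional convolution;
    larger dilations widen the receptive field without extra weights."""
    seq_len, emb_dim = x.shape
    k = w.shape[0]
    span = (k - 1) * dilation + 1       # receptive field of one output
    out_len = seq_len - span + 1
    out = np.empty(out_len)
    for t in range(out_len):
        taps = x[t : t + span : dilation]  # every `dilation`-th position
        out[t] = np.sum(taps * w)
    return out
```

Stacking layers with growing dilation lets a text classifier see long spans of a tweet while keeping the kernels small, which is the usual motivation for mixing dilated and traditional layers.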
Maritsch Martin, Diana Suleimenova, Geiger Bernhard, Derek Groen
2019
Geiger Bernhard, Schrunner Stefan, Kern Roman
2019
Schrunner and Geiger have contributed equally to this work.
Adolfo Ruiz Calleja, Dennerlein Sebastian, Kowald Dominik, Theiler Dieter, Lex Elisabeth, Tobias Ley
2019
In this paper, we propose the Social Semantic Server (SSS) as a service-based infrastructure for workplace and professional Learning Analytics (LA). The design and development of the SSS has evolved over 8 years, starting with an analysis of workplace learning inspired by knowledge creation theories and its application in different contexts. The SSS collects data from workplace learning tools, integrates it into a common data model based on a semantically-enriched Artifact-Actor Network and offers it back for LA applications to exploit the data. Further, the SSS design promotes its flexibility in order to be adapted to different workplace learning situations. This paper contributes by systematizing the derivation of requirements for the SSS according to the knowledge creation theories, and the support offered across a number of different learning tools and LA applications integrated to it. It also shows evidence for the usefulness of the SSS extracted from four authentic workplace learning situations involving 57 participants. The evaluation results indicate that the SSS satisfactorily supports decision making in diverse workplace learning situations and allow us to reflect on the importance of the knowledge creation theories for such analysis.
Renner Bettina, Wesiak Gudrun, Pammer-Schindler Viktoria, Prilla Michael, Müller Lars, Morosini Dalia, Mora Simone, Faltin Nils, Cress Ulrike
2019
Fessl Angela, Simic Ilija, Barthold Sabine, Pammer-Schindler Viktoria
2019
Information literacy, the access to and use of knowledge, is becoming a precondition for individuals to actively take part in social, economic, cultural and political life. Information literacy must be considered a fundamental competency like the ability to read, write and calculate. Therefore, we are working on automatic learning guidance with respect to three modules of the information literacy curriculum developed by the EU (DigComp 2.1 Framework). In prior work, we have laid out the essential research questions from a technical side. In this work, we follow up by specifying the concept with respect to micro learning and micro-learning content units. This means that the overall intervention we design is concretized as follows: the widget is initialized by assessing the learner's competence with the help of a knowledge test. This is the basis for recommending suitable micro-learning content, adapted to the identified competence level. After the learner has read or worked through the content, the widget asks the learner a reflective question. The goal of the reflective question is to deepen the learning. In this paper we present the concept of the widget and its integration into a search platform.
Fruhwirth Michael, Breitfuß Gert, Müller Christiana
2019
The use of data in companies to analyse and answer diverse questions is "daily business". However, data holds far more potential beyond process optimisation and business intelligence applications. This article gives an overview of the most important aspects of transforming data into value, i.e., of developing data-driven business models. The characteristics of data-driven business models and the competencies they require are examined in detail. Four case studies of Austrian companies provide insights into practice, and finally current challenges and developments are discussed.
Luzhnica Granit, Veas Eduardo Enrique
2019
Luzhnica Granit, Veas Eduardo Enrique
2019
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in the recognition of both individual letters and words. To avoid such issues, a two-step optimisation method for the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a wearable layout of seven vibromotors on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Breitfuß Gert, Fruhwirth Michael, Pammer-Schindler Viktoria, Stern Hermann, Dennerlein Sebastian
2019
Increasing digitization is generating more and more data in all areas of business. Modern analytical methods open up these large amounts of data for business value creation. Expected business value ranges from process optimization such as reduction of maintenance work and strategic decision support to business model innovation. In the development of a data-driven business model, it is useful to conceptualise elements of data-driven business models in order to differentiate and compare between examples of a data-driven business model and to think of opportunities for using data to innovate an existing or design a new business model. The goal of this paper is to identify a conceptual tool that supports data-driven business model innovation in a similar manner: We applied three existing classification schemes to differentiate between data-driven business models based on 30 examples for data-driven business model innovations. Subsequently, we present the strength and weaknesses of every scheme to identify possible blind spots for gaining business value out of data-driven activities. Following this discussion, we outline a new classification scheme. The newly developed scheme combines all positive aspects from the three analysed classification models and resolves the identified weaknesses.
Clemens Bloechl, Rana Ali Amjad, Geiger Bernhard
2019
We present an information-theoretic cost function for co-clustering, i.e., for simultaneous clustering of two sets based on similarities between their elements. By constructing a simple random walk on the corresponding bipartite graph, our cost function is derived from a recently proposed generalized framework for information-theoretic Markov chain aggregation. The goal of our cost function is to minimize relevant information loss, hence it connects to the information bottleneck formalism. Moreover, via the connection to Markov aggregation, our cost function is not ad hoc, but inherits its justification from the operational qualities associated with the corresponding Markov aggregation problem. We furthermore show that, for appropriate parameter settings, our cost function is identical to well-known approaches from the literature, such as “Information-Theoretic Co-Clustering” by Dhillon et al. Hence, understanding the influence of this parameter admits a deeper understanding of the relationship between previously proposed information-theoretic cost functions. We highlight some strengths and weaknesses of the cost function for different parameters. We also illustrate the performance of our cost function, optimized with a simple sequential heuristic, on several synthetic and real-world data sets, including the Newsgroup20 and the MovieLens100k data sets.
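The notion of "relevant information loss" under aggregation can be made concrete with a small numeric sketch. The code below is a hedged illustration of the underlying idea (information retained about one variable when the other is clustered), not the paper's exact cost function or its connection to Markov aggregation; all function names are invented.

```python
import numpy as np

def mutual_information(p):
    """I(X;Y) in bits for a joint distribution p(x, y) given as a
    2D array that sums to 1."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

def coclustering_info_loss(p, row_clusters, col_clusters):
    """Information lost when rows and columns of the joint
    distribution are aggregated into clusters: I(X;Y) - I(Cx;Cy)."""
    q = np.zeros((max(row_clusters) + 1, max(col_clusters) + 1))
    for i, ci in enumerate(row_clusters):
        for j, cj in enumerate(col_clusters):
            q[ci, cj] += p[i, j]
    return mutual_information(p) - mutual_information(q)
```

For a block-diagonal joint distribution, the co-clustering that matches the blocks loses no information, while a clustering that mixes the blocks destroys all of it; minimizing this loss is the information-bottleneck-flavoured goal the abstract refers to.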
Lovric Mario, Molero Perez Jose Manuel, Kern Roman
2019
The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge in Python programming and understanding of the resilient distributed datasets (RDD). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity and calculating molecular descriptors. The source code for the PySpark-RDKit implementation is provided. The use cases showed that Spark provides a reasonable scalability depending on the use case and can be a suitable choice for datasets too big to be processed with current low-end workstations.
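The fingerprint-similarity use case is embarrassingly parallel, which is what makes it a natural fit for an RDD `map`. As a dependency-free sketch (not the paper's RDKit/PySpark code), Tanimoto similarity over fingerprints represented as sets of on-bit indices looks like this; `similarity_search` is the per-partition work Spark would distribute:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two fingerprints given as
    sets of on-bit indices: |A ∩ B| / |A ∪ B|."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def similarity_search(query_fp, fingerprints, threshold=0.7):
    """Screen a fingerprint collection against a query. In the Spark
    setting this loop would run as a map over RDD partitions, with
    each worker computing similarities for its slice of the data."""
    return [i for i, fp in enumerate(fingerprints)
            if tanimoto(query_fp, fp) >= threshold]
```

In the actual implementation the fingerprints would come from RDKit (e.g. Morgan fingerprints) rather than hand-built sets; only the similarity arithmetic is shown here.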