Schmid Fabian, Mukherjee Shibam, Picek Stjepan, Stoettinger Mark, De Santis Fabrizio, Rechberger Christian
2024
Side-channel analysis certification is a process designed to certify the resilience of cryptographic hardware and software implementations against side-channel attacks. In certain cases, third-party evaluations by external companies or departments are necessary due to limited budget, time, or even expertise, with the penalty of a significant exchange of sensitive information during the evaluation process. In this work, we investigate the potential of Homomorphic Encryption (HE) in performing side-channel analysis on HE-encrypted measurements. With HE applied to side-channel analysis (SCA), a third party can perform SCA on encrypted measurement data and provide the outcome of the analysis without gaining insights about the actual cryptographic implementation under test. To this end, we evaluate its feasibility by analyzing the impact of AI-based side-channel analysis using HE (private SCA) on accuracy and execution time and compare the results with an ordinary AI-based side-channel analysis (plain SCA). Our work suggests that both unprotected and protected cryptographic implementations can be successfully attacked already today with standard server equipment and modern HE protocols/libraries, while the traces are HE-encrypted.
Müllner Peter, Lex Elisabeth, Schedl Markus, Kowald Dominik
2024
Collaborative filtering-based recommender systems leverage vast amounts of behavioral user data, which poses severe privacy risks. Thus, often random noise is added to the data to ensure Differential Privacy (DP). However, to date it is not well understood in which ways this impacts personalized recommendations. In this work, we study how DP affects recommendation accuracy and popularity bias when applied to the training data of state-of-the-art recommendation models. Our findings are three-fold: First, we observe that nearly all users' recommendations change when DP is applied. Second, recommendation accuracy drops substantially while recommended item popularity experiences a sharp increase, suggesting that popularity bias worsens. Finally, we find that DP exacerbates popularity bias more severely for users who prefer unpopular items than for users who prefer popular items.
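The noise-addition step described above can be sketched as follows. This is a minimal illustration assuming the Laplace mechanism is applied independently per rating with sensitivity 1; the exact mechanism, sensitivity, and privacy budget used in the paper may differ:

```python
import numpy as np

def laplace_dp(ratings, epsilon, sensitivity=1.0, seed=0):
    """Add Laplace noise with scale sensitivity/epsilon to each rating,
    the standard way to make a numeric release epsilon-differentially
    private (illustrative sketch, not the paper's exact setup)."""
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(ratings))
    return np.asarray(ratings, dtype=float) + noise

ratings = [4.0, 3.5, 5.0, 2.0]
private = laplace_dp(ratings, epsilon=1.0)
# A smaller epsilon means stronger privacy but larger expected distortion,
# which is exactly the accuracy/privacy trade-off studied above.
```
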
Rohrhofer Franz Martin, Posch Stefan, Gößnitzer Clemens, Geiger Bernhard
2023
Physics-informed neural networks (PINNs) have emerged as a promising deep learning method, capable of solving forward and inverse problems governed by differential equations. Despite their recent advance, it is widely acknowledged that PINNs are difficult to train and often require a careful tuning of loss weights when data and physics loss functions are combined by scalarization of a multi-objective (MO) problem. In this paper, we aim to understand how parameters of the physical system, such as characteristic length and time scales, the computational domain, and coefficients of differential equations affect MO optimization and the optimal choice of loss weights. Through a theoretical examination of where these system parameters appear in PINN training, we find that they effectively and individually scale the loss residuals, causing imbalances in MO optimization with certain choices of system parameters. The immediate effects of this are reflected in the apparent Pareto front, which we define as the set of loss values achievable with gradient-based training and visualize accordingly. We empirically verify that loss weights can be used successfully to compensate for the scaling of system parameters, and enable the selection of an optimal solution on the apparent Pareto front that aligns well with the physically valid solution. We further demonstrate that by altering the system parameterization, the apparent Pareto front can shift and exhibit locally convex parts, resulting in a wider range of loss weights for which gradient-based training becomes successful. This work explains the effects of system parameters on MO optimization in PINNs, and highlights the utility of proposed loss weighting schemes.
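The scaling effect described above can be made concrete with a toy residual computation (our construction, not the paper's code) for the ODE du/dt = -u/τ: a small characteristic time scale τ inflates the physics residual, and a loss weight of roughly τ² approximately compensates for it:

```python
import numpy as np

def physics_loss(u, dudt, t, tau, weight=1.0):
    """Weighted mean squared residual of du/dt = -u/tau at collocation points t."""
    residual = dudt(t) + u(t) / tau
    return weight * float(np.mean(residual ** 2))

t = np.linspace(0.0, 1.0, 101)
u = lambda t: 1.0 - t                  # some candidate network output
dudt = lambda t: -np.ones_like(t)      # its exact derivative

loss_slow = physics_loss(u, dudt, t, tau=1.0)
loss_fast = physics_loss(u, dudt, t, tau=0.01)   # small time scale inflates the loss
# A loss weight ~ tau^2 approximately undoes the 1/tau residual scaling.
rebalanced = physics_loss(u, dudt, t, tau=0.01, weight=0.01 ** 2)
```

With uniform weights, the same candidate function incurs a loss several orders of magnitude larger for the fast system, which is the kind of imbalance in multi-objective optimization that the paper attributes to system parameters.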
Ross-Hellauer Anthony, Klebel Thomas, Knoth Petr, Pontika Nancy
2023
There are currently broad moves to reform research assessment, especially to better incentivize open and responsible research and avoid problematic use of inappropriate quantitative indicators. This study adds to the evidence base for such decision-making by investigating researcher perceptions of current processes of research assessment in institutional review, promotion, and tenure processes. Analysis of an international survey of 198 respondents reveals a disjunct between personal beliefs and perceived institutional priorities (‘value dissonance’), with practices of open and responsible research, as well as ‘research citizenship’, comparatively poorly valued by institutions at present. Our findings hence support current moves to reform research assessment. But we also add crucial nuance to the debate by discussing the relative weighting of open and responsible practices and suggesting that fostering research citizenship activities like collegiality and mentorship may be an important way to rebalance criteria towards environments which better foster quality, openness, and responsibility.
Wimmer Michael, Weidinger Nicole, ElSayed Neven, Müller-Putz Gernot, Veas Eduardo Enrique
2023
Error perception is known to elicit distinct brain patterns, which can be used to improve the usability of systems facilitating human-computer interactions, such as brain-computer interfaces. This requires a high-accuracy detection of erroneous events, e.g., misinterpretations of the user’s intention by the interface, to allow for suitable reactions of the system. In this work, we concentrate on steering-based navigation tasks. We present a combined electroencephalography-virtual reality (VR) study investigating different approaches for error detection and simultaneously exploring the corrective human behavior to erroneous events in a VR flight simulation. We could classify different errors, allowing us to analyze neural signatures of unexpected changes in the VR. Moreover, the presented models could detect errors faster than participants naturally responded to them. This work could contribute to developing adaptive VR applications that exclusively rely on the user’s physiological information.
Razouk Houssam, Liu Xinglan, Kern Roman
2023
The Failure Mode Effect Analysis (FMEA) process is widely used in industry for risk assessment, as it effectively captures and documents domain-specific knowledge. This process is mainly concerned with causal domain knowledge. In practical applications, FMEAs encounter challenges in terms of comprehensibility, particularly related to inadequate coverage of listed failure modes and their corresponding effects and causes. This can be attributed to the limitations of the traditional brainstorming approaches typically employed in the FMEA process. Depending on the size of the team conducting the analysis and its diversity in terms of disciplines, these approaches may not adequately capture a comprehensive range of failure modes, leading to gaps in coverage. To this end, methods for improving FMEA knowledge comprehensibility are highly needed. A potential approach to address this gap is rooted in recent advances in common-sense knowledge graph completion, which have demonstrated the effectiveness of text-aware graph embedding techniques. However, the applicability of such methods in an industrial setting is limited. This paper addresses this issue for FMEA documents in an industrial environment. Specifically, the application of common-sense knowledge graph completion methods to FMEA documents from semiconductor manufacturing is studied. These methods achieve over 20% MRR on the test set, and 70% of the top-10 predictions were manually assessed to be plausible by domain experts. Based on this evaluation, the paper confirms that text-aware knowledge graph embeddings for common-sense knowledge graph completion are more effective than structure-only knowledge graph embeddings for improving FMEA knowledge comprehensibility. Additionally, we found that in-domain fine-tuning of the language model is beneficial for extracting more meaningful embeddings, thus improving the overall model performance.
Krajnc Aleksandra, Iacono Lucas, Kirschbichler Stephan, Klein Christoph, Breitfuss D, Steidl T, Pucher J
2023
This study investigates the kinematics of vehicle occupants on the passenger seat in reclined and upright seated positions. Thirty-nine volunteers (12 female and 27 male) were tested in 30 kph and 50 kph braking and steering manoeuvres. Eleven manoeuvres were conducted with each volunteer in aware and unaware states. A sedan modified with a belt-integrated seat was used. The kinematics was recorded with a video-based system and, additionally, with acceleration / angular velocity sensors. Interaction with the seat was measured with pressure mats, and the muscle activity was recorded in the upper body and in the lower body muscles. This publication focuses on the occupant kinematics and its processing with a linear mathematical model. Kinematics and respective corridors are predicted for certain age, gender, and anthropometric data.
Malinverno Luca, Barros Vesna, Ghisoni Francesco, Visonà Giovanni, Kern Roman, Nickel Philip, Ventura Barbara Elvira, Simic Ilija, Stryeck Sarah, Manni Francesca, Ferri Cesar, Jean-Quartier Clair, Genga Laura, Schweikert Gabriele, Lovric Mario, Rosen-Zvi Michal
2023
Understanding the inner workings of machine-learning models has become a crucial point of discussion in fairness and reliability of artificial intelligence (AI). In this perspective, we reveal insights from recently published scientific works on explainable AI (XAI) within the biomedical sciences. Specifically, we speculate that the COVID-19 pandemic is associated with the rate of publications in the field. Current research efforts seem to be directed more toward explaining black-box machine-learning models than designing novel interpretable architectures. Notably, an inflection period in the publication rate was observed in October 2020, when the quantity of XAI research in biomedical sciences surged upward significantly. While a universally accepted definition of explainability is unlikely, ongoing research efforts are pushing the biomedical field toward improving the robustness and reliability of applied machine learning, which we consider a positive trend.
Repolusk Tristan, Veas Eduardo Enrique
2023
Suzipu notation, also called banzipu notation, is a notation that was predominantly used in the Song dynasty in China and is still actively performed in the Xi’an Guyue music tradition. In this paper, the first tool for creating a machine-readable digital representation of suzipu notation with a focus on optical music recognition (OMR) is proposed. This contribution serves two purposes: i) creating the basis for the future development of OMR methods with respect to suzipu notation; and ii) the facilitated digitization of musical sources written in suzipu notation. In summary, these purposes promote the preservation and understanding of cultural heritage through digitization.
Geiger Bernhard, Schuppler Barbara
2023
Given the development of automatic speech recognition-based techniques for creating phonetic annotations of large speech corpora, there has been a growing interest in investigating the frequencies of occurrence of phonological and reduction processes. Since most studies have analyzed these processes separately, they did not provide insights about their co-occurrences. This paper contributes by introducing graph theory methods for the analysis of pronunciation variation in a large corpus of Austrian German conversational speech. More specifically, we investigate how reduction processes that are typical for spontaneous German in general co-occur with phonological processes typical for the Austrian German variety. Whereas our concrete findings are of special interest to scientists investigating variation in German, the approach presented opens new possibilities to analyze pronunciation variation in large corpora of different speaking styles in any language.
Grill-Kiefer Gerhard, Schröcker Stefan, Krasser Hannes, Körner Stefan
2023
The optimal and sustainable design of complex processes in manufacturing companies requires a structured approach to problem solving. Due to the broad range of tasks and the competing objectives involved, this prerequisite applies in particular to supply chain management in the automotive industry. Using a process structured into several steps, a computational model for optimizing the total costs of the parts supply process is successfully developed and applied. With the involvement of the participating departments, this model ensures the holistic optimization of total supply costs and the execution of efficient planning loops in day-to-day operations. Data quality is of particular importance in this context.
Siddiqi Shafaq, Qureshi Faiza, Lindstaedt Stefanie, Kern Roman
2023
Outlier detection in non-independent and identically distributed (non-IID) data refers to identifying unusual or unexpected observations in datasets that do not follow an independent and identically distributed (IID) assumption. This presents a challenge in real-world datasets where correlations, dependencies, and complex structures are common. In recent literature, several methods have been proposed to address this issue; each has its own strengths and limitations, and the selection depends on the data characteristics and application requirements. However, there is a lack of a comprehensive categorization of these methods in the literature. This study addresses this gap by systematically reviewing methods for outlier detection in non-IID data published from 2015 to 2023. The study focuses on three major aspects: data characteristics, methods, and evaluation measures. Regarding data characteristics, we discuss the differentiating properties of non-IID data. We then review the recent methods proposed for outlier detection in non-IID data, covering their theoretical foundations and algorithmic approaches. Finally, we discuss the evaluation metrics proposed to measure the performance of these methods. Additionally, we present a taxonomy for organizing these methods and highlight the application domains of outlier detection in non-IID categorical data, outlier detection in federated learning, and outlier detection in attributed graphs. We provide a comprehensive overview of the datasets used in the selected literature. Moreover, we discuss open challenges in outlier detection for non-IID data to shed light on future research directions. By synthesizing the existing literature, this study contributes to advancing the understanding and development of outlier detection techniques in non-IID data settings.
Müllner Peter , Lex Elisabeth, Schedl Markus, Kowald Dominik
2023
State-of-the-art recommender systems produce high-quality recommendations to support users in finding relevant content. However, through the utilization of users' data for generating recommendations, recommender systems threaten users' privacy. To alleviate this threat, often, differential privacy is used to protect users' data via adding random noise. This, however, leads to a substantial drop in recommendation quality. Therefore, several approaches aim to improve this trade-off between accuracy and user privacy. In this work, we first overview threats to user privacy in recommender systems, followed by a brief introduction to the differential privacy framework that can protect users' privacy. Subsequently, we review recommendation approaches that apply differential privacy, and we highlight research that improves the trade-off between recommendation quality and user privacy. Finally, we discuss open issues, e.g., considering the relation between privacy and fairness, and the users' different needs for privacy. With this review, we hope to provide other researchers an overview of the ways in which differential privacy has been applied to state-of-the-art collaborative filtering recommender systems.
Duricic Tomislav, Kowald Dominik, Emanuel Lacic, Lex Elisabeth
2023
By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering approaches, particularly graph-based ones using Graph Neural Networks (GNNs), have demonstrated great results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating recommender systems' performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction. This review paper focuses on addressing these dimensions in GNN-based recommender systems, going beyond the conventional accuracy-centric perspective. We begin by reviewing recent developments in approaches that improve not only the accuracy-diversity trade-off, but also promote serendipity and fairness in GNN-based recommender systems. We discuss different stages of model development including data preprocessing, graph construction, embedding initialization, propagation layers, embedding fusion, score computation, and training methodologies. Furthermore, we present a look into the practical difficulties encountered in assuring diversity, serendipity, and fairness, while retaining high accuracy. Finally, we discuss potential future research directions for developing more robust GNN-based recommender systems that go beyond the unidimensional perspective of focusing solely on accuracy. This review aims to provide researchers and practitioners with an in-depth understanding of the multifaceted issues that arise when designing GNN-based recommender systems.
Müllner Peter , Lex Elisabeth, Schedl Markus, Kowald Dominik
2023
User-based KNN recommender systems (UserKNN) utilize the rating data of a target user’s k nearest neighbors in the recommendation process. This, however, increases the privacy risk of the neighbors since their rating data might be exposed to other users or malicious parties. To reduce this risk, existing work applies differential privacy by adding randomness to the neighbors’ ratings, which reduces the accuracy of UserKNN. In this work, we introduce ReuseKNN, a novel differentially-private KNN-based recommender system. The main idea is to identify small but highly reusable neighborhoods so that (i) only a minimal set of users requires protection with differential privacy, and (ii) most users do not need to be protected with differential privacy, since they are only rarely exploited as neighbors. In our experiments on five diverse datasets, we make two key observations: Firstly, ReuseKNN requires significantly smaller neighborhoods, and thus, fewer neighbors need to be protected with differential privacy compared to traditional UserKNN. Secondly, despite the small neighborhoods, ReuseKNN outperforms UserKNN and a fully differentially private approach in terms of accuracy. Overall, ReuseKNN leads to significantly less privacy risk for users than in the case of UserKNN.
Marta Moscati, Christian Wallman, Markus Reiter-Haas, Kowald Dominik, Elisabeth Lex, Markus Schedl
2023
Integrating the ACT-R Framework with Collaborative Filtering for Explainable Sequential Music Recommendation
Geiger Bernhard, Jahani Alireza, Hussain Hussain, Groen Derek
2023
In this work, we investigate Markov aggregation for agent-based models (ABMs). Specifically, if the ABM models agent movements on a graph, if its ruleset satisfies certain assumptions, and if the aim is to simulate aggregate statistics such as vertex populations, then the ABM can be replaced by a Markov chain on a comparably small state space. This equivalence between a function of the ABM and a smaller Markov chain allows us to reduce the computational complexity of the agent-based simulation from being linear in the number of agents to being constant in the number of agents and polynomial in the number of locations. We instantiate our theory for a recent ABM for forced migration (Flee). We show that, even though the rulesets of Flee violate some of our necessary assumptions, the aggregated Markov chain-based model, MarkovFlee, achieves comparable accuracy at substantially reduced computational cost. Thus, MarkovFlee can help NGOs and policy makers forecast forced migration in certain conflict scenarios in a cost-effective manner, contributing to fast and efficient delivery of humanitarian relief.
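The aggregation idea can be sketched on a toy movement model (our three-location example, not Flee's ruleset): instead of stepping every agent individually, the expected vertex populations are propagated with a single transition matrix, at a cost independent of the number of agents:

```python
import numpy as np

# Toy 3-location movement model; each row is a probability distribution
# over next locations (illustrative numbers, not Flee's rules).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.1, 0.9]])

def aggregate_step(pop, P, steps=1):
    """Propagate expected vertex populations with the transition matrix:
    cost is polynomial in the number of locations, constant in agents."""
    for _ in range(steps):
        pop = pop @ P
    return pop

def agent_step(counts, P, steps, rng):
    """Reference agent-based simulation: cost is linear in the number of agents."""
    locs = np.repeat(np.arange(len(counts)), counts)
    for _ in range(steps):
        locs = np.array([rng.choice(len(counts), p=P[l]) for l in locs])
    return np.bincount(locs, minlength=len(counts))

rng = np.random.default_rng(0)
expected = aggregate_step(np.array([1000.0, 0.0, 0.0]), P, steps=5)
sampled = agent_step(np.array([1000, 0, 0]), P, steps=5, rng=rng)
# 'expected' matches 'sampled' up to multinomial sampling noise.
```
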
Rohrhofer Franz Martin, Posch Stefan, Gößnitzer Clemens, Geiger Bernhard
2023
This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Networks (PINNs) on dynamical systems. Our results indicate that fixed points which are inherent to these systems play a key role in the optimization of the physics loss function embedded in PINNs. We observe that the loss landscape exhibits local optima that are shaped by the presence of fixed points. We find that these local optima contribute to the complexity of the physics loss optimization, which can explain common training difficulties and resulting nonphysical predictions. Under certain settings, e.g., initial conditions close to fixed points or long simulation times, we show that these optima can even attain a lower loss than the desired solution.
Posch Stefan, Gößnitzer Clemens, Rohrhofer Franz Martin, Geiger Bernhard, Wimmer Andreas
2023
The turbulent jet ignition concept using prechambers is a promising solution to achieve stable combustion at lean conditions in large gas engines, leading to high efficiency at low emission levels. Due to the wide range of design and operating parameters for large gas engine prechambers, the preferred method for evaluating different designs is computational fluid dynamics (CFD), as testing in test bed measurement campaigns is time-consuming and expensive. However, the significant computational time required for detailed CFD simulations due to the complexity of solving the underlying physics also limits its applicability. In optimization settings similar to the present case, i.e., where the evaluation of the objective function(s) is computationally costly, Bayesian optimization has largely replaced classical design-of-experiments. Thus, the present study deals with the computationally efficient Bayesian optimization of large gas engine prechamber designs using CFD simulation. Reynolds-averaged Navier-Stokes simulations are used to determine the target values as a function of the selected prechamber design parameters. The results indicate that the chosen strategy is effective in finding a prechamber design that achieves the desired target values.
Rohrhofer Franz Martin, Posch Stefan, Gößnitzer Clemens, García-Oliver José M., Geiger Bernhard
2023
Flamelet models are widely used in computational fluid dynamics to simulate thermochemical processes in turbulent combustion. These models typically employ memory-expensive lookup tables that are predetermined and represent the combustion process to be simulated. Artificial neural networks (ANNs) offer a deep learning approach that can store this tabular data using a small number of network weights, potentially reducing the memory demands of complex simulations by orders of magnitude. However, ANNs with standard training losses often struggle with underrepresented targets in multivariate regression tasks, e.g., when learning minor species mass fractions as part of lookup tables. This paper seeks to improve the accuracy of an ANN when learning multiple species mass fractions of a hydrogen (H2) combustion lookup table. We assess a simple, yet effective loss weight adjustment that outperforms the standard mean-squared error optimization and enables accurate learning of all species mass fractions, even of minor species where the standard optimization completely fails. Furthermore, we find that the loss weight adjustment leads to more balanced gradients in the network training, which explains its effectiveness.
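The problem with underrepresented targets can be illustrated with a per-target weighted MSE. This is a sketch of one plausible weighting (inverse squared target scale); the paper's exact adjustment may differ:

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Per-target weighted MSE; larger weights boost underrepresented outputs."""
    return float(np.mean(weights * (pred - target) ** 2))

# Synthetic numbers: major species ~1e-1, minor species ~1e-6 mass fractions.
target = np.array([[1.2e-1, 3.0e-6],
                   [1.0e-1, 2.0e-6]])
pred = target * 1.1            # 10% relative error on every output

uniform = weighted_mse(pred, target, np.ones(2))
# Weight each output column by the inverse squared scale of its targets,
# so a 10% error on a minor species counts like a 10% error on a major one.
w = 1.0 / np.max(np.abs(target), axis=0) ** 2
balanced = weighted_mse(pred, target, w)
```

Under the uniform loss, the minor-species column contributes a vanishing fraction of the total, so the optimizer can ignore it entirely; the weighted loss makes both columns contribute comparably.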
Hoffer Johannes G., Ranftl Sascha, Geiger Bernhard
2023
We consider the problem of finding an input to a stochastic black box function such that the scalar output of the black box function is as close as possible to a target value in the sense of the expected squared error. While the optimization of stochastic black boxes is classic in (robust) Bayesian optimization, the current approaches based on Gaussian processes predominantly focus either on (i) maximization/minimization rather than target value optimization or (ii) on the expectation, but not the variance of the output, ignoring output variations due to stochasticity in uncontrollable environmental variables. In this work, we fill this gap and derive acquisition functions for common criteria such as the expected improvement, the probability of improvement, and the lower confidence bound, assuming that aleatoric effects are Gaussian with known variance. Our experiments illustrate that this setting is compatible with certain extensions of Gaussian processes, and show that the acquisition functions derived in this way can outperform classical Bayesian optimization even if these assumptions are violated. An industrial use case in billet forging is presented.
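For a Gaussian posterior, the expected squared error to a target decomposes into squared bias plus variance, E[(f(x) - t)²] = (μ(x) - t)² + σ²(x), which already shows why output variance cannot be ignored in target value optimization. A minimal sketch with a made-up posterior (not the paper's acquisition functions):

```python
import numpy as np

def expected_squared_error(mu, sigma, target):
    """E[(f(x) - t)^2] for f(x) ~ N(mu, sigma^2): squared bias plus variance."""
    return (mu - target) ** 2 + sigma ** 2

# Toy GP posterior over 5 candidate inputs (illustrative numbers).
mu = np.array([0.2, 0.9, 1.1, 2.0, 1.0])
sigma = np.array([0.5, 0.4, 0.05, 0.3, 0.6])
target = 1.0

scores = expected_squared_error(mu, sigma, target)
best = int(np.argmin(scores))
# Candidate 4 hits the target mean exactly but has high variance;
# candidate 2 wins because it combines a close mean with low variance.
```
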
Disch Leonie, Pammer-Schindler Viktoria
2023
Many knowledge-intensive tasks - where learning is required and expected - are now computer-supported. Consequently, interaction design has the opportunity to support the learning that is necessary to complete a task. In our work, we specifically use knowledge construction theory to model learning. In this position paper, we elaborate on three overarching goals: I) identifying (computational) measurement methods that operationalize knowledge construction theory, II) using these measurement methods to evaluate and compare user interface design elements, and III) user interface adaptation using knowledge about which design elements support what step of knowledge construction - gained through II) together with user models. Our prior and ongoing work targets two areas, namely open science (knowledge construction is necessary to understand scientific texts) and data analytics (knowledge construction is necessary to develop insights based on data).
Wolfbauer Irmtraud, Bangerl Mia Magdalena, Maitz Katharina, Pammer-Schindler Viktoria
2023
In Rebo at Work, chatbot Rebo helps apprentices to reflect on a work experience and associate it with their training’s learning objectives. Rebo poses questions that motivate the apprentice to look at a work experience from different angles, pondering how it went, the problems they encountered, what they learned from it, and what they take away for the future. We present preliminary results of a 9-month field study (analysis of 90 interactions of the first 6 months) with 51 apprentices in the fields of metal technology, mechatronics, and electrical engineering. During reflection with Rebo at Work, 98% of apprentices were able to identify their work experience as a learning opportunity and reflect on that, and 83% successfully connected it with a learning objective. This shows that self-monitoring of learning objectives and reflection on work tasks can be guided by a conversational agent and motivates further research in this area.
Adilova Linara, Geiger Bernhard, Fischer Asja
2023
The information-theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of the MI. The problem is amplified for deterministic networks if the MI between input and representation is infinite. Thus, the estimated values are defined by the different approaches for estimation, but do not adequately represent the training process from an information-theoretic perspective. In this work, we show that dropout with continuously distributed noise ensures that MI is finite. We demonstrate in a range of experiments that this enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice.
Berger Katharina, Rusch Magdalena, Pohlmann Antonia, Popowicz Martin, Geiger Bernhard, Gursch Heimo, Schöggl Josef-Peter, Baumgartner Rupert J.
2023
Digital product passports (DPPs) are an emerging technology and are considered enablers of sustainable and circular value chains, as they support sustainable product management (SPM) by gathering and containing product life cycle data. However, some life cycle data are considered sensitive by stakeholders, resulting in a reluctance to share such data. This contribution provides a concept illustrating how data science and machine learning approaches enable electric vehicle battery (EVB) value chain stakeholders to carry out confidentiality-preserving data exchange via a DPP. This, in turn, can support overcoming data sharing reluctance, consequently facilitating sustainability data management on a DPP for an EVB. The concept development comprised a literature review to identify data needs for sustainable EVB management, data management challenges, and potential data science approaches for data management support. Furthermore, three explorative focus group workshops and follow-up consultations with data scientists were conducted to discuss the identified data science approaches. This work complements the emerging literature on digitalization and SPM by exploring the specific potential of data science and machine learning approaches for enabling sustainability data management and reducing data sharing reluctance. Furthermore, practical relevance is given, as this concept may provide practitioners with new impulses regarding DPP development and implementation.
Hobisch Elisabeth, Völkl Yvonne, Geiger Bernhard, Saric Sanja, Scholger Martina, Helic Denis, Koncar Philipp, Glatz Christina
2023
(extended abstract)
Kowald Dominik, Gregor Mayr, Markus Schedl, Elisabeth Lex
2023
A Study on Accuracy, Miscalibration, and Popularity Bias in Recommendation
Iacono Lucas, Pacios David, Vázquez-Poletti José Luis
2023
A sustainable agricultural system focuses on technologies and methodologies applied to supply a variety of sufficient, nutritious, and safe foods at an affordable price to feed the world population. To meet this goal, farmers and agronomists need crop health metrics to monitor the farms and to detect problems such as diseases or droughts early. Then, they can apply the necessary measures to correct crops' problems and maximize yields. Large datasets of multispectral images and cloud computing are a must to obtain such metrics. Cameras placed in drones and satellites collect large multispectral image datasets. The cloud allows for storing the image datasets and for executing services that extract crops' health metrics such as the Normalized Difference Vegetation Index (NDVI). NDVI cloud computation generates new research challenges, such as which cloud service would allow paying the minimum cost to compute a certain amount of images. This article presents Serverless NDVI (SNDVI), a novel serverless computing-based framework for NDVI computation. The main goal of our framework is to minimize the economic costs related to the use of a public cloud while computing NDVI from large datasets. We deployed our application using Amazon Lambda and Amazon S3, and then we performed a validation experiment. The experiment consisted of the execution of the framework to extract NDVI from a dataset of multispectral images collected with the Landsat 8 satellite. We then evaluated the overall framework performance in terms of execution time and economic costs. Finally, the experiment results allowed us to determine that the framework fulfils its objective and that serverless computing services are a potentially convenient option for NDVI computation from large image datasets.
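The per-pixel computation behind the metric is the standard NDVI band math, (NIR - Red) / (NIR + Red). A minimal sketch on synthetic bands (the framework's actual Lambda code is not shown in the abstract):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index, in [-1, 1].
    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Tiny synthetic 2x2 reflectance bands: healthy vegetation reflects
# near-infrared strongly while absorbing red light.
nir = np.array([[0.8, 0.7], [0.3, 0.5]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
index = ndvi(nir, red)
# High values indicate dense vegetation; values near 0 indicate bare soil.
```
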
Jantscher Michael, Gunzer Felix, Kern Roman, Hassler Eva, Tschauner Sebastian, Reishofer Gernot
2023
Recent advances in deep learning and natural language processing (NLP) have opened many new opportunities for automatic text understanding and text processing in the medical field. This is of great benefit as many clinical downstream tasks rely on information from unstructured clinical documents. However, for low-resource languages like German, the use of modern text processing applications that require a large amount of training data proves to be difficult, as only a few data sets are available, mainly due to legal restrictions. In this study, we present an information extraction framework that was initially pre-trained on real-world computed tomographic (CT) reports of head examinations, followed by domain-adaptive fine-tuning on reports from different imaging examinations. We show that in the pre-training phase, the semantic and contextual meaning of one clinical reporting domain can be captured and effectively transferred to foreign clinical imaging examinations. Moreover, we introduce an active learning approach with an intrinsic strategic sampling method to generate highly informative training data with low human annotation cost. We see that the model performance can be significantly improved by an appropriate selection of the data to be annotated, without the need to train the model on a specific downstream task. With a general annotation scheme that can be used not only in the radiology field but also in a broader clinical setting, we contribute to a more consistent labeling and annotation process that also facilitates the verification and evaluation of language models in the German clinical setting.
Gabler Philipp, Geiger Bernhard, Schuppler Barbara, Kern Roman
2023
Superficially, read and spontaneous speech—the two main kinds of training data for automatic speech recognition—appear as complementary, but are equal: pairs of texts and acoustic signals. Yet, spontaneous speech is typically harder for recognition. This is usually explained by different kinds of variation and noise, but there is a more fundamental deviation at play: for read speech, the audio signal is produced by recitation of the given text, whereas in spontaneous speech, the text is transcribed from a given signal. In this review, we embrace this difference by presenting a first introduction of causal reasoning into automatic speech recognition, and describing causality as a tool to study speaking styles and training data. After breaking down the data generation processes of read and spontaneous speech and analysing the domain from a causal perspective, we highlight how data generation by annotation must affect the interpretation of inference and performance. Our work discusses how various results from the causality literature regarding the impact of the direction of data generation mechanisms on learning and prediction apply to speech data. Finally, we argue how a causal perspective can support the understanding of models in speech processing regarding their behaviour, capabilities, and limitations.
Trügler Andreas, Scher Sebastian, Kopeinik Simone, Kowald Dominik
2023
The use of data-driven decision support by public agencies is becoming more widespread and already influences the allocation of public resources. This raises ethical concerns, as it has adversely affected minorities and historically discriminated groups. In this paper, we use an approach that combines statistics and data-driven approaches with dynamical modeling to assess long-term fairness effects of labor market interventions. Specifically, we develop and use a model to investigate the impact of decisions caused by a public employment authority that selectively supports job-seekers through targeted help. The selection of who receives what help is based on a data-driven intervention model that estimates an individual’s chances of finding a job in a timely manner and rests upon data that describes a population in which skills relevant to the labor market are unevenly distributed between two groups (e.g., males and females). The intervention model has incomplete access to the individual’s actual skills and can augment this with knowledge of the individual’s group affiliation, thus using a protected attribute to increase predictive accuracy. We assess this intervention model’s dynamics—especially fairness-related issues and trade-offs between different fairness goals—over time and compare it to an intervention model that does not use group affiliation as a predictive feature. We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
Edtmayer Hermann, Brandl Daniel, Mach Thomas, Schlager Elke, Gursch Heimo, Lugmair Maximilian, Hochenauer Christoph
2023
Increasing demands on indoor comfort in buildings and urgently needed energy efficiency measures require optimised HVAC systems in buildings. To achieve this, more extensive and accurate input data are required. This is difficult or impossible to accomplish with physical sensors. Virtual sensors, in turn, can provide these data; however, current virtual sensors are either too slow or too inaccurate to do so. The aim of our research was to develop a novel digital-twin workflow providing fast and accurate virtual sensors to solve this problem. To achieve a short calculation time and accurate virtual measurement results, we coupled a fast building energy simulation and an accurate computational fluid dynamics simulation. We used measurement data from a test facility as boundary conditions for the models and managed the coupling workflow with a customised simulation and data management interface. The corresponding simulation results were extracted for the defined virtual sensors and validated with measurement data from the test facility. In summary, the results showed that the total computation time of the coupled simulation was less than 10 min, compared to 20 h for the corresponding CFD models. At the same time, over five consecutive days, the simulation achieved a mean absolute error of 0.35 K for the indoor air temperature and 1.2 % for the relative humidity. This shows that the novel coupled digital-twin workflow for virtual sensors is fast and accurate enough to optimise HVAC control systems in buildings.
Müllner Peter
2023
Recommender systems process abundant user data to generate recommendations that fit each individual user well. This utilization of user data can pose severe threats to user privacy, e.g., the inadvertent leakage of user data to untrusted parties or other users. Moreover, this data can be used to reveal a user’s identity, or to infer very private information such as gender. Instead of the plain application of privacy-enhancing techniques, which could lead to decreased accuracy, we tackle the problem itself, i.e., the utilization of user data. With this, we aim to equip recommender systems with means to provide high-quality recommendations that respect users’ privacy.
Lacic Emanuel, Duricic Tomislav, Fadljevic Leon, Theiler Dieter, Kowald Dominik
2023
Uptrendz: API-Centric Real-Time Recommendations in Multi-Domain Settings
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2023
This research presents an approach that combines stacked Gaussian processes (stacked GP) with target vector Bayesian optimization (BO) to solve multi-objective inverse problems of chained manufacturing processes. In this context, GP surrogate models represent individual manufacturing processes and are stacked to build a unified surrogate model that represents the entire manufacturing process chain. Using stacked GPs, epistemic uncertainty can be propagated through all chained manufacturing processes. To perform target vector BO, acquisition functions make use of a noncentral χ-squared distribution of the squared Euclidean distance between a given target vector and surrogate model output. In BO of chained processes, there are the options to use a single unified surrogate model that represents the entire joint chain, or to use a surrogate model for each individual process and cascade the optimization from the last to the first process. Literature suggests that a joint optimization approach using stacked GPs overestimates uncertainty, whereas a cascaded approach underestimates it. For improved target vector BO results of chained processes, we present an approach that combines methods which under- or overestimate uncertainties in an ensemble for rank aggregation. We present a thorough analysis of the proposed methods and evaluate them on two artificial use cases and on a typical manufacturing process chain: preforming and final pressing of an Inconel 625 superalloy billet.
Mara Martina, Ratz Linda, Krieg Klara, Schedl Markus, Rekabsaz Navid
2023
Biases in algorithmic systems have led to discrimination against historically disadvantaged groups, including reinforcing outdated gender stereotypes. While a substantial body of research addresses biases in algorithms and underlying data, little is known about if and how users themselves bring in bias. We contribute to the latter strand of research by investigating users’ replication of stereotypical gender representations in online search queries. Following Prototype Theory, we define the disproportionate mention of a gender that does not conform to the prototypical representative of a searched domain (e.g., “male nurse”) as an indication of bias. In a pilot study with 224 US participants and an online experiment with 400 UK participants, we find clear evidence of gender biases in formulating search queries. We also report the effects of an educative text on user behaviour and highlight the wish of users to learn about bias-mitigating strategies in their interactions with search engines.
Žlabravec Veronika, Strbad Dejan, Dogan Anita, Lovric Mario, Janči Tibor, Vidaček Filipec Sanja
2022
Evangelidis Thomas, Giassa Ilektra-Chara, Lovric Mario
2022
Identifying hit compounds is a principal step in early-stage drug discovery. While many machine learning (ML) approaches have been proposed, in the absence of binding data, molecular docking is the most widely used option to predict binding modes and score hundreds of thousands of compounds for binding affinity to the target protein. Docking's effectiveness is critically dependent on the protein-ligand (P-L) scoring function (SF), thus re-scoring with more rigorous SFs is a common practice. In this pilot study, we scrutinize the PM6-D3H4X/COSMO semi-empirical quantum mechanical (SQM) method as a docking pose re-scoring tool on 17 diverse receptors and ligand decoy sets, totaling 1.5 million P-L complexes. We investigate the effect of explicitly computed ligand conformational entropy and ligand deformation energy on SQM P-L scoring in a virtual screening (VS) setting, as well as molecular mechanics (MM) versus hybrid SQM/MM structure optimization prior to re-scoring. Our results indicate that there is no obvious benefit from computing ligand conformational entropies or deformation energies and that optimizing only the ligand's geometry on the SQM level is sufficient to achieve the best possible scores. Instead, we leverage machine learning (ML) to implicitly include the missing entropy terms in the SQM score using ligand topology, physicochemical, and P-L interaction descriptors. Our new hybrid scoring function, named SQM-ML, is transparent and explainable, and achieves on average a 9% higher AUC-ROC than PM6-D3H4X/COSMO and 3% higher than Glide SP, but with consistent and predictable performance across all test sets, unlike the former two SFs, whose performance is considerably target-dependent and sometimes resembles that of a random classifier. The code to prepare and train SQM-ML models is available at https://github.com/tevang/sqm-ml.git and we believe it will pave the way for a new generation of hybrid SQM/ML protein-ligand scoring functions.
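The AUC-ROC figures the comparison rests on have an intuitive reading: the probability that a randomly chosen binder is scored above a randomly chosen decoy. A minimal rank-based (Mann-Whitney) sketch, purely illustrative and not part of SQM-ML:

```python
# AUC-ROC via the rank-based (Mann-Whitney U) formulation; illustrative sketch only.

def auc_roc(scores_pos, scores_neg):
    """Probability that a random positive (binder) outscores a random negative (decoy)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# A perfect SF ranks every binder above every decoy (AUC = 1.0);
# a random classifier hovers around 0.5, as the abstract notes for some targets.
print(auc_roc([0.9, 0.8], [0.4, 0.3]))  # → 1.0
```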
Steger Sophie, Rohrhofer Franz Martin, Geiger Bernhard
2022
Despite extensive research, physics-informed neural networks (PINNs) are still difficult to train, especially when the optimization relies heavily on the physics loss term. Convergence problems frequently occur when simulating dynamical systems with high-frequency components, chaotic or turbulent behavior. In this work, we discuss whether the traditional PINN framework is able to predict chaotic motion by conducting experiments on the undamped double pendulum. Our results demonstrate that PINNs do not exhibit any sensitivity to perturbations in the initial condition. Instead, the PINN optimization consistently converges to physically correct solutions that violate the initial condition only marginally, but diverge significantly from the desired solution due to the chaotic nature of the system. In fact, the PINN predictions primarily exhibit low-frequency components with a smaller magnitude of higher-order derivatives, which favors lower physics loss values compared to the desired solution. We thus hypothesize that the PINNs "cheat" by shifting the initial conditions to values that correspond to physically correct solutions that are easier to learn. Initial experiments suggest that domain decomposition combined with an appropriate loss weighting scheme mitigates this effect and allows convergence to the desired solution.
Gursch Heimo, Körner Stefan, Thaler Franz, Waltner Georg, Ganster Harald, Rinnhofer Alfred, Oberwinkler Christian, Meisenbichler Reinhard, Bischof Horst, Kern Roman
2022
Refuse separation and sorting is currently done by recycling plants that are manually optimised for a fixed refuse composition. Since the refuse compositions constantly change, these plants deliver either suboptimal sorting performances or require constant monitoring and adjustments by the plant operators. Image recognition offers the possibility to continuously monitor the refuse composition on the conveyor belts in a sorting facility. When information about the refuse composition is combined with parameters and measurements of the sorting machinery, the sorting performance of a plant can be continuously monitored, problems detected, optimisations suggested and trends predicted. This article describes solutions for multispectral and 3D image capturing of refuse streams and evaluates the performance of image segmentation models. The image segmentation models are trained with synthetic training data to reduce the manual labelling effort thus reducing the costs of the image recognition introduction. Furthermore, an outlook on the combination of image recognition data with parameters and measurements of the sorting machinery in a combined time series analysis is provided.
Xue Yani, Li Miqing, Arabnejad Hamid, Suleimenova, Geiger Bernhard, Jahani Alireza, Groen Derek
2022
In the context of humanitarian support for forcibly displaced persons, camps play an important role in protecting people and ensuring their survival and health. A challenge in this regard is to find optimal locations for establishing a new camp for asylum seekers, unrecognized refugees, or internally displaced persons (IDPs). In this paper we formulate this problem as an instantiation of the well-known facility location problem (FLP) with three objectives to be optimized. In particular, we show that AI techniques and migration simulations can be used to provide decision support on camp placement.
Pammer-Schindler Viktoria, Lindstaedt Stefanie
2022
Digital competences are taken for granted in strategic management, but AI literacy is not. In this article, we discuss what fundamental understanding of artificial intelligence (AI) is important for decision-makers in strategic management, and what context-specific and strategic knowledge is needed beyond it. Digital competences for a large share of occupational groups are widely discussed, and rightly so. At the level of decision-makers in strategic management, however, they fall short; to the necessary extent, they are largely taken for granted: digital information management, the ability to communicate and collaborate digitally, and the abilities to use digital technologies for knowledge acquisition, learning, and the support of creative processes (list of these typical digital competences from [1]). The situation is different when it comes to specialized knowledge about modern computing technologies, such as methods of automatic data analytics and artificial intelligence, the Internet of Things, blockchain methods, etc. (list based on Fig. 3 in [2]). The literature does treat this knowledge as necessary within organizations [2], but usually with the focus that it should be covered by specialists. In addition, and this is the first main thesis of this commentary, we argue that decision-makers in strategic management need foundational knowledge in these technical areas in order to be able to assess these technologies with respect to their impact on their own company and its business environment. This article examines in more detail the foundational knowledge required with respect to artificial intelligence (AI), which we refer to here as "AI literacy".
Rüdisser Hannah, Windisch Andreas, Amerstorfer U. V., Möstl C., Amerstorfer T., Bailey R. L., Reiss M. A.
2022
Interplanetary coronal mass ejections (ICMEs) are one of the main drivers for space weather disturbances. In the past, different approaches have been used to automatically detect events in existing time series resulting from solar wind in situ observations. However, accurate and fast detection still remains a challenge when facing the large amount of data from different instruments. For the automatic detection of ICMEs we propose a pipeline using a method that has recently proven successful in medical image segmentation. Comparing it to an existing method, we find that while achieving similar results, our model outperforms the baseline regarding training time by a factor of approximately 20, thus making it more applicable for other datasets. The method has been tested on in situ data from the Wind spacecraft between 1997 and 2015 with a True Skill Statistic of 0.64. Out of the 640 ICMEs, 466 were detected correctly by our algorithm, producing a total of 254 false positives. Additionally, it produced reasonable results on datasets with fewer features and smaller training sets from Wind, STEREO-A, and STEREO-B with TSSs of 0.56, 0.57, and 0.53, respectively. Our pipeline manages to find the start of an ICME with a mean absolute error (MAE) of around 2 hr and 56 min, and the end time with a MAE of 3 hr and 20 min. The relatively fast training allows straightforward tuning of hyperparameters and could therefore easily be used to detect other structures and phenomena in solar wind data, such as corotating interaction regions.
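The True Skill Statistic reported above is standard in forecast verification: the hit rate minus the false-alarm rate. A small illustrative sketch; note that the true-negative count below is a made-up placeholder, since the abstract does not report one:

```python
# True Skill Statistic (TSS), the detection score used for the ICME pipeline.
# TSS = TP/(TP+FN) - FP/(FP+TN), ranging from -1 to 1 (1 = perfect, 0 = no skill).

def true_skill_statistic(tp, fp, fn, tn):
    """Compute the TSS from a binary confusion matrix."""
    hit_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    return hit_rate - false_alarm_rate

# Counts from the abstract: 466 of 640 ICMEs detected, 254 false positives.
# The true-negative count (tn=2000) is a hypothetical placeholder for illustration.
print(true_skill_statistic(tp=466, fp=254, fn=640 - 466, tn=2000))
```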
Stipanicev Drazenka, Repec Sinisa, Vucic Matej, Lovric Mario, Klobucar Goran
2022
In order to prevent the spread of COVID-19, contingency measures in the form of lockdowns were implemented all over the world, including in Croatia. The aim of this study was to detect if those severe, imposed restrictions of social interactions reflected on the water quality of rivers receiving wastewaters from urban areas. A total of 18 different pharmaceuticals (PhACs) and illicit drugs (IDrgs), as well as their metabolites, were measured for 16 months (January 2020–April 2021) at 12 different locations in the Sava and Drava Rivers, Croatia, using UHPLC coupled to LC-MS. This period encompassed two major COVID-19 lockdowns (March–May 2020 and October 2020–March 2021). Several PhACs more than halved in river water mass flow during the lockdowns. The results of this study confirm that the COVID-19 lockdowns caused lower cumulative concentrations and mass flow of measured PhACs/IDrgs in the Sava and Drava Rivers. This was not influenced by the increased use of drugs for the treatment of COVID-19, like antibiotics and steroidal anti-inflammatory drugs. The decreases in measured PhACs/IDrgs concentrations and mass flows were more pronounced during the first lockdown, which was stricter than the second.
Maitz Katharina, Fessl Angela, Pammer-Schindler Viktoria, Kaiser Rene_DB, Lindstaedt Stefanie
2022
Artificial intelligence (AI) is by now used in many different work settings, including the construction industry. As new technologies change business and work processes, one important aspect is to understand how potentially affected workers perceive and understand the existing and upcoming AI in their work environment. In this work, we present the results of an exploratory case study with 20 construction workers in a small Austrian company about their knowledge of and attitudes toward AI. Our results show that construction workers’ understanding of AI as a concept is rather superficial, diffuse, and vague, often linked to physical and tangible entities such as robots, and often based on inappropriate sources of information, which can lead to misconceptions about AI and AI anxiety. Learning opportunities for promoting (future) construction workers’ AI literacy should be accessible and understandable for learners at various educational levels and encompass aspects such as i) conveying the basics of digitalization, automation, and AI to enable a clear distinction of these concepts, ii) building on the learners’ actual experience realm, i.e., taking into account their focus on physical, tangible, and visible entities, and iii) reducing AI anxiety by elaborating on the limits of AI.
Fessl Angela, Maitz Katharina, Paleczek Lisa, Divitini Monica, Rouhani Majid, Köhler Thomas
2022
At the beginning of the COVID-19 pandemic, a sudden shift from mainly face-to-face teaching and learning to exclusively online teaching and learning took place and posed challenges especially for in-service teachers at all types of schools. But pre-service teachers, i.e. students who are preparing themselves to become future teachers, are also challenged by the new profile of competencies demanded. Suddenly, all teachers had to orient themselves in a completely digital world of teaching in which acquiring digital competences was no longer an option but a real necessity. We are investigating which digital competences are necessary as a prerequisite for pre- and in-service teachers in the current COVID-19 pandemic to ensure high-quality teaching and learning (Schaarschmidt et al., 2021). Based upon the European DigComp 2.1 (Carretero et al., 2017) and DigCompEdu (Redecker, 2017) frameworks and the Austrian Digi.kompP (Virtuelle PH, 2021) framework, we adopted a curriculum from the most recent research, tailored to the specific needs of our European-level target group. That curriculum addresses individual digital media competence (two modules) and media didactic competence (three modules). For each of these modules, we developed competence-based learning goals (Bloom et al., 1956; Krathwohl & Anderson, 2010; Fessl et al., 2021) that serve as a focal point of what the learner should be able to do after his/her specific learning experience. The learning content will be prepared as micro-learning units to be lightweight and flexible, as time constraints are known to be challenging for any professional development. In three sequentially conducted workshops (Sept. 2021, Nov. 2021, Feb. 2022), we discuss the curriculum and the learning goals with different stakeholders (researchers, teachers, teacher-students, education administrators).
Preliminary results of the first two workshops show that our developed curriculum and the digital competences specified are crucial for successful online teaching. In our presentation, we will summarize the results of all three workshops, discuss the theoretical underpinnings of our overall approach, and provide insights on how we plan to convey the digital competences developed to educators using learning strategies such as micro learning and reflective learning.
Fessl Angela, Maitz Katharina, Paleczek Lisa, Köhler Thomas, Irnleitner Selina, Divitini Monica
2022
The COVID-19 pandemic initiated a fundamental change in learning and teaching in (higher) education (HE). On short notice, traditional teaching in HE suddenly had to be transformed into online teaching. This shift into the digital world posed a great challenge to in-service teachers at schools and universities, and to pre-service teachers, as the acquisition of digital competences was no longer an option but a real necessity. The previously rather hidden or even neglected importance of teachers’ digital competences for successful teaching and learning became manifest and clearly visible. In this work, we investigate the digital competences necessary to ensure high-quality teaching and learning in and beyond the current COVID-19 pandemic. Based upon the European DigComp 2.1 (Carretero et al., 2017) and DigCompEdu (Redecker, 2017) frameworks, the Austrian Digi.kompP framework (Virtuelle PH, 2021), and the recommendations given by German education authorities (KMK 2017; KMK 2021; HRK 2022), we developed a curriculum consisting of 5 modules: 2 for individual digital media competence and 3 for media didactic competence. For each module, competence-oriented learning goals and corresponding micro-learning contents were defined to meet the needs of teachers while considering their time constraints. Based on three online workshops, the curriculum and the corresponding learning goals were discussed with university teachers, pre-service teachers, and policymakers. The content of the curriculum was perceived as highly relevant for these target groups; however, some adaptations were required. From the university teachers’ perspective, we got feedback that they were overwhelmed with the situation and urgently needed digital competences. Policymakers suggested that further education regarding digital competences needs to offer a systematic exchange of experiences with peers.
From the perspective of in-service teachers, it was stated that teacher education should focus more on digital competences and tools. In this paper, we present the results of the workshop series that informed the design process of the DIGIVID curriculum for teaching professionals.
Disch Leonie, Fessl Angela, Pammer-Schindler Viktoria
2022
The uptake of open science resources requires knowledge construction on the part of the readers/receivers of scientific content. The design of technologies surrounding open science resources can facilitate such knowledge construction, but this has not yet been investigated. To do so, we first conducted a scoping review of the literature, from which we draw design heuristics for knowledge construction in digital environments. Subsequently, we grouped the underlying technological functionalities into three design categories: i) structuring and supporting collaboration, ii) supporting the learning process, and iii) structuring, visualising and navigating (learning) content. Finally, we mapped the design categories and associated design heuristics to core components of popular open science platforms. This mapping constitutes a design space (design implications), which informs researchers and designers in the HCI community about suitable functionalities for supporting knowledge construction in existing or new digital open science platforms.
Santa Maria Gonzalez Tomas, Vermeulen Walter J.V., Baumgartner Rupert J.
2022
The process of developing sustainable and circular business models is quite complex and thus hinders their wider implementation in the market. Further understanding and guidelines for firms are needed. Design thinking is a promising problem solving approach capable of facilitating the innovation process. However, design thinking does not necessarily include sustainability considerations, and it has not been sufficiently explored for application in business model innovation. Given the additional challenges posed by the need for time-efficiency and a digital environment, we have therefore developed a design thinking-based framework to guide the early development of circular business models in an online and efficient manner. We propose a new process framework called the Circular Sprint. This encompasses seven phases and contains twelve purposefully adapted activities. The framework development follows an Action Design Research approach, iteratively combining four streams of literature, feedback from sixteen experts and six workshops, and involved a total of 107 participants working in fourteen teams. The present paper describes the framework and its activities, together with evaluations of its usefulness and ease-of-use. The research shows that, while challenging, embedding sustainability, circularity and business model innovation within a design thinking process is indeed possible. We offer a flexible framework and a set of context-adaptable activities that can support innovators and practitioners in the complex process of circular business model innovation. These tools can also be used for training and educational purposes. We invite future researchers to build upon and modify our framework and its activities by adapting it to their required scenarios and purposes. A detailed step-by-step user guide is provided in the supplementary material.
Hochstrasser Carina, Herburger Michael, Plasch Michael, Lackner Ulrike, Breitfuß Gert
2022
Short-term disruptions and long-term changes increasingly lead to disturbances in inter-organizational logistics. Resilient structures must therefore be built and monitored through data-driven decisions. Since, in the current business environment, it is not sufficient to generate and process one's own information and datasets, data-sharing concepts such as data circles must be developed. The aim of this contribution is to examine stakeholders' needs and requirements for a data circle in the application domains of logistics and resilience. To this end, a mixed-methods approach was applied, comprising a stakeholder analysis and the development of use cases via qualitative (workshops and expert interviews) and quantitative (online survey) methods.
Mirzababaei Behzad, Pammer-Schindler Viktoria
2022
Large-scale learning scenarios as well as the ongoing pandemic situation underline the importance of educational technology in order to support scalability and spatial as well as temporal flexibility in all kinds of learning and teaching settings. Educational conversational agents build on a long research tradition in intelligent tutoring systems and other adaptive learning technologies, but rely on the more recent interaction paradigm of conversational interaction. In this paper, we describe a tutorial conversational agent, called GDPRAgent, which teaches a lesson on the European General Data Protection Regulation (GDPR). This regulation governs how personal data must be treated in Europe. Instructionally, the agent’s dialogue structure follows a basic GDPR curriculum and uses Bloom’s revised taxonomy of learning objectives in order to teach GDPR topics. This overall design of the dialogue structure allows inserting more specific adaptive tutorial strategies. From a learner perspective, learners experience a completely one-on-one tutorial session in which they receive relevant content (are “being taught”) as well as experience active learning parts such as doing quizzes or summarising content. Our prototype, therefore, illustrates a move away from the dichotomy between content and the activity of teaching/learning in educational technology.
Mirzababaei Behzad, Pammer-Schindler Viktoria
2022
This paper reports a between-subjects experiment (treatment group N = 42, control group N = 53) evaluating the effect of a conversational agent that teaches users to give a complete argument. The agent analyses a given argument for whether it contains a claim, a warrant and evidence, which are understood to be essential elements in a good argument. The agent detects which of these elements is missing, and accordingly scaffolds the argument completion. The experiment includes a treatment task (Task 1) in which participants of the treatment group converse with the agent, and two assessment tasks (Tasks 2 and 3) in which both the treatment and the control group answer an argumentative question. We find that in Task 1, 36 out of 42 conversations with the agent are coherent. This indicates good interaction quality. We further find that in Tasks 2 and 3, the treatment group writes a significantly higher percentage of argumentative sentences (Task 2: t(94) = 1.73, p = 0.042; Task 3: t(94) = 1.70, p = 0.045). This shows that participants of the treatment group used the scaffold, taught by the agent in Task 1, outside the tutoring conversation (namely in the assessment Tasks 2 and 3) and across argumentation domains (Task 3 is in a different domain of argumentation than Tasks 1 and 2). The work complements existing research on adaptive and conversational support for teaching argumentation in essays.
Breitfuß Gert, Disch Leonie, Santa Maria Gonzalez Tomas
2022
The present paper aims to validate commonly used business analysis methods to obtain input for an early phase business model regarding feasibility, desirability, and viability. The research applies a case study approach, exploring the early-phase development of an economically sustainable business model for an open science discovery platform.
Martin Ebel, Santa Maria Gonzalez Tomas, Breitfuß Gert
2022
Business model patterns are a common tool in business model design. We provide a theoretical foundation for their use within the framework of analogical reasoning as an important cognitive skill for business model innovation. Based on 12 innovation workshops with students and practitioners, we discuss scenarios of pattern card utilization and provide insights on its evaluation.
Wolfbauer Irmtraud, Pammer-Schindler Viktoria, Maitz Katharina, Rosé Carolyn P.
2022
We present a script for conversational reflection guidance embedded in reflective practice. Rebo Junior, a non-adaptive conversational agent, was evaluated in a 12-week field study with apprentices. We analysed apprentices' interactions with Rebo Junior in terms of reflectivity, and measured the development of their reflection competence via reflective essays at three points in time during the field study. Reflection competence, a key competency for lifelong professional learning, becomes significantly higher by the third essay, after repeated interactions with Rebo Junior (paired-samples t-test, t(13) = 3.00, p = .010 from Essay 1 to Essay 3). However, we also observed a significant decrease in reflectivity in the Rebo Junior interactions over time (paired-samples t-test between the first and eighth interaction: t(7) = 2.50, p = .041). We attribute this decline to i) the novelty of Rebo Junior wearing off (novelty effect) and ii) the apprentices learning the script and experiencing subsequent frustration due to the script not fading over time. Overall, this work i) informs future design through the observation of consistent decreases in engagement over 8 interactions with static scaffolding, ii) contributes a reflection script applicable for reflection on tasks that resemble future expected work tasks, a typical setting in lifelong professional learning, and iii) indicates increased reflection competence after repeated reflection guided by a conversational agent.
Liu Xinglan, Hussain Hussain, Razouk Houssam, Kern Roman
2022
Graph embedding methods have emerged as effective solutions for knowledge graph completion. However, such methods are typically tested on benchmark datasets such as Freebase, and show limited performance when applied to sparse knowledge graphs with orders of magnitude lower density. To compensate for the lack of structure in a sparse graph, low-dimensional representations of textual information, such as word2vec or BERT embeddings, have been used. This paper proposes a BERT-based method (BERT-ConvE) to exploit transfer learning of BERT in combination with the convolutional network model ConvE. Compared to existing text-aware approaches, we effectively make use of the context dependency of BERT embeddings by optimizing the feature extraction strategies. Experiments on ConceptNet show that the proposed method outperforms strong baselines by 50% on knowledge graph completion tasks. The proposed method is suitable for sparse graphs, as also demonstrated by empirical studies on the ATOMIC and sparsified-FB15k-237 datasets. Its effectiveness and simplicity make it appealing for industrial applications.
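The reported gains refer to standard link-prediction metrics for knowledge graph completion. As a quick illustration (entity names and scores below are invented, not taken from the paper), Hits@k and mean reciprocal rank (MRR) can be computed from per-query candidate scores like this:

```python
def rank_of_target(scores, target):
    """Rank of the correct entity among all scored candidates (1 = best)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked.index(target) + 1

def hits_at_k(ranks, k):
    """Fraction of test queries whose correct entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank over all test queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Invented scores for two test triples (higher = more plausible tail entity)
queries = [
    ({"paris": 0.9, "rome": 0.4, "berlin": 0.2}, "paris"),
    ({"paris": 0.3, "rome": 0.5, "berlin": 0.8}, "rome"),
]
ranks = [rank_of_target(s, t) for s, t in queries]
print(ranks, hits_at_k(ranks, 1), mrr(ranks))  # [1, 2] 0.5 0.75
```

In full evaluations, each test triple is scored against every entity in the graph, often with known true triples filtered out before ranking.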
De Freitas Joao Pedro, Berg Sebastian, Geiger Bernhard, Mücke Manfred
2022
In this paper, we frame homogeneous-feature multi-task learning (MTL) as a hierarchical representation learning problem, with one task-agnostic and multiple task-specific latent representations. Drawing inspiration from the information bottleneck principle and assuming an additive independent noise model between the task-agnostic and task-specific latent representations, we limit the information contained in each task-specific representation. It is shown that our resulting representations yield competitive performance for several MTL benchmarks. Furthermore, for certain setups, we show that the trained parameters of the additive noise model are closely related to the similarity of different tasks. This indicates that our approach yields a task-agnostic representation that is disentangled in the sense that its individual dimensions may be interpretable from a task-specific perspective.
Müllner Peter , Schmerda Stefan, Theiler Dieter, Lindstaedt Stefanie , Kowald Dominik
2022
Data and algorithm sharing is an imperative part of data- and AI-driven economies. The efficient sharing of data and algorithms relies on the active interplay between users, data providers, and algorithm providers. Although recommender systems are known to effectively interconnect users and items in e-commerce settings, there is a lack of research on the applicability of recommender systems for data and algorithm sharing. To fill this gap, we identify six recommendation scenarios for supporting data and algorithm sharing, where four of these scenarios substantially differ from the traditional recommendation scenarios in e-commerce applications. We evaluate these recommendation scenarios using a novel dataset based on interaction data of the OpenML data and algorithm sharing platform, which we also provide for the scientific community. Specifically, we investigate three types of recommendation approaches, namely popularity-, collaboration-, and content-based recommendations. We find that collaboration-based recommendations provide the most accurate recommendations in all scenarios. Plus, the recommendation accuracy strongly depends on the specific scenario, e.g., algorithm recommendations for users are a more difficult problem than algorithm recommendations for datasets. Finally, the content-based approach generates the least popularity-biased recommendations that cover the most datasets and algorithms.
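As a rough sketch of the collaboration-based approach evaluated above (not the authors' exact implementation; users and item identifiers below are hypothetical), a user-based collaborative filter scores items the target user has not seen by similarity-weighted votes from other users:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two interaction profiles (item -> weight)."""
    num = sum(u[i] * v[i] for i in set(u) & set(v))
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(target, interactions, n=3):
    """Score items unseen by `target` via similarity-weighted votes of other users."""
    scores = {}
    for user, items in interactions.items():
        if user == target:
            continue
        sim = cosine(interactions[target], items)
        if sim <= 0.0:
            continue
        for item, weight in items.items():
            if item not in interactions[target]:
                scores[item] = scores.get(item, 0.0) + sim * weight
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Hypothetical usage logs: which users used which datasets (1 = used)
interactions = {
    "u1": {"d1": 1, "d2": 1},
    "u2": {"d1": 1, "d2": 1, "d3": 1},
    "u3": {"d4": 1},
}
print(recommend("u1", interactions))  # ['d3']
```

The same scheme applies to any of the six scenarios by changing what counts as "user" and "item", e.g., recommending algorithms to datasets instead of to users.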
Salhofer Eileen, Liu Xinglan, Kern Roman
2022
State of the art performances for entity extraction tasks are achieved by supervised learning, specifically, by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering the frequently asked questions: (1) how many training samples to annotate? (2) which examples to annotate? We found that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, especially in the biomedical domain. The key features for guiding the selection of high-performing training instances are identified to be pseudo-perplexity and sentence length. The best training dataset constructed using our proposed selection strategy shows an F1 score equivalent to that of a random selection with twice the sample size. The requirement of only a small number of training data implies cheaper implementations and opens the door to a wider range of applications.
Jean-Quartier Claire, Mazón Miguel Rey, Lovric Mario, Stryeck Sarah
2022
Research and development are facilitated by sharing knowledge bases, and the innovation process benefits from collaborative efforts that involve the collective utilization of data. Until now, most companies and organizations have produced and collected various types of data and stored them in data silos that still have to be integrated with one another in order to enable knowledge creation. For this to happen, both public and private actors must adopt a flexible approach to achieve the transition necessary to break data silos and create collaborative data sharing between data producers and users. In this paper, we investigate several factors influencing cooperative data usage and explore the challenges of participating in cross-organizational data ecosystems, based on an interview study among stakeholders from private and public organizations in the context of the project IDE@S, which aims at fostering cooperation in data science in the Austrian federal state of Styria. We highlight technological and organizational requirements regarding data infrastructure, expertise, and practices for collaborative data usage.
Malev Olga, Babic Sanja, Cota Anja Sima, Stipaničev Draženka, Repec Siniša, Drnić Martina, Lovric Mario, Bojanić Krunoslav, Radić Brkanac Sandra, Čož-Rakovac Rozelindra, Klobučar Göran
2022
This study used short-term whole-organism bioassays (WOBs) on fish (Danio rerio) and crustaceans (Gammarus fossarum and Daphnia magna) to assess the negative biological effects of water from the major European River Sava, and compared the obtained results with in vitro toxicity data (ToxCast database) and the Risk Quotient (RQ) methodology. Pollution profiles of five sampling sites along the River Sava were assessed by simultaneous chemical analysis of 562 organic contaminants (OCs), of which 476 were detected. At each sampling site, the pharmaceuticals/illicit drugs category contributed the largest cumulative concentration, followed by industrial chemicals, pesticides and hormones. An exposure-activity ratio (EAR) approach based on ToxCast data highlighted steroidal anti-inflammatory drugs, antibiotics, antiepileptics/neuroleptics, industrial chemicals and hormones as the compounds with the highest biological potential. Summed EAR-based predictions of toxicity correlated well with the toxicity of the sampling sites estimated using WOBs. WOBs did not exhibit increased mortality but showed various sub-lethal biological responses that depended on the pollution intensity of the sampling site as well as on species sensitivity. Exposure of G. fossarum and D. magna to river water induced lower feeding rates and increased GST activity and TBARS levels. Zebrafish (D. rerio) embryos exhibited a significant decrease in heartbeat rate, failure in pigmentation formation, as well as inhibition of ABC transporters. Based on the EAR approach, nuclear receptor activation was indicated as the biological target of greatest concern. A combined approach of short-term WOBs, with a special emphasis on sub-lethal endpoints, and chemical characterization of water samples, compared against in vitro toxicity data from the ToxCast database and RQs, can provide comprehensive insight into the negative effects of pollutants on aquatic organisms.
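The EAR itself is simple arithmetic: the measured environmental concentration divided by the concentration found active in an in vitro assay, summed over the detected chemicals of a site. A toy illustration with invented concentrations (not values from the study):

```python
def ear(measured_conc, active_conc):
    """Exposure-activity ratio: measured environmental concentration over
    the concentration active in the in vitro assay (same units)."""
    return measured_conc / active_conc

# Invented concentrations (ug/L) for one site: (measured, in vitro active)
site_chemicals = {
    "carbamazepine": (0.5, 10.0),
    "diclofenac": (0.2, 2.0),
}
ear_sum = sum(ear(m, a) for m, a in site_chemicals.values())
print(round(ear_sum, 3))  # 0.15
```

An EAR near or above 1 means the environmental concentration reaches levels that trigger activity in the assay; summing EARs per site gives the site-level prioritization used for comparison with the bioassays.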
Razouk Houssam, Kern Roman
2022
Digitalization of causal domain knowledge is crucial, especially since including causal domain knowledge in data analysis processes helps to avoid biased results. To extract such knowledge, Failure Mode Effect Analysis (FMEA) documents represent a valuable data source. Originally, FMEA documents were designed to be exclusively produced and interpreted by human domain experts. As a consequence, these documents often suffer from data consistency issues. This paper argues that, due to the transitive perception of causal relations, discordant and merged information cases are likely to occur. Thus, we propose to improve the consistency of FMEA documents as a step towards more efficient use of causal domain knowledge. In contrast to other work, this paper focuses on the consistency of the causal relations expressed in FMEA documents. To this end, based on an explicit scheme of inconsistency types derived from the causal perspective, novel methods to enhance the data quality in FMEA documents are presented. This data quality improvement will significantly benefit downstream tasks such as root cause analysis and automatic process control.
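One inconsistency type of the kind discussed, a discordant pair where both "A causes B" and "B causes A" are asserted, can be flagged mechanically once the causal relations are extracted. A minimal sketch (the failure-mode names below are invented, not from the paper):

```python
def discordant_pairs(relations):
    """Return cause-effect pairs asserted in both directions (A -> B and
    B -> A), one kind of consistency issue in FMEA-style relation lists."""
    edge_set = set(relations)
    return sorted({tuple(sorted(p)) for p in relations if (p[1], p[0]) in edge_set})

# Invented causal relations extracted from an FMEA document
relations = [
    ("seal wear", "leakage"),
    ("leakage", "seal wear"),      # discordant with the relation above
    ("overheating", "seal wear"),
]
print(discordant_pairs(relations))  # [('leakage', 'seal wear')]
```

Flagged pairs would then be resolved by a domain expert or by the consistency-enhancement methods the paper proposes.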
Lovric Mario, Antunović Mario, Šunić Iva, Vuković Matej, Kecorius Simon, Kröll Mark, Bešlić Ivan, Godec Ranka, Pehnec Gordana, Geiger Bernhard, Grange Stuart K, Šimić Iva
2022
In this paper, the authors investigated changes in mass concentrations of particulate matter (PM) during the Coronavirus Disease 2019 (COVID-19) lockdown. Daily samples of the PM1, PM2.5 and PM10 fractions were measured at an urban background sampling site in Zagreb, Croatia, from 2009 to late 2020. For the purpose of meteorological normalization, the mass concentrations were fed alongside meteorological and temporal data to Random Forest (RF) and LightGBM (LGB) models tuned by Bayesian optimization. The models' predictions were subsequently de-weathered by meteorological normalization using repeated random resampling of all predictive variables except the trend variable. Three pollution periods in 2020 were examined in detail: January and February as the pre-lockdown period, the month of April as the lockdown period, and June and July as the "new normal". An evaluation using normalized mass concentrations of particulate matter and analysis of variance (ANOVA) was conducted. The results showed that no significant differences were observed for PM1, PM2.5 and PM10 in April 2020 compared to the same period in 2018 and 2019. No significant changes were observed for the "new normal" either. The results thus indicate that the reduction in mobility during the COVID-19 lockdown in Zagreb, Croatia, did not significantly affect particulate matter concentrations in the long term.
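The de-weathering step can be sketched as follows: for each time point, all predictors except the trend variable are repeatedly resampled from the whole record and the model predictions are averaged. The "model" below is a stand-in lambda, not a trained RF/LGB model, and the feature layout is invented for illustration:

```python
import random

def normalize(model, rows, trend_idx, n_samples=200, seed=0):
    """Meteorological normalization: for every time point, resample all
    features from the whole record except the trend feature, which is kept
    fixed, then average the model predictions ("de-weathering")."""
    rng = random.Random(seed)
    normalized = []
    for row in rows:
        preds = []
        for _ in range(n_samples):
            x = list(rng.choice(rows))     # resampled weather features...
            x[trend_idx] = row[trend_idx]  # ...but the original trend value
            preds.append(model(x))
        normalized.append(sum(preds) / n_samples)
    return normalized

# Stand-in for a trained model: PM falls with temperature (index 0) and
# rises with a slow trend (index 1); normalization retains mostly the trend.
model = lambda x: 20.0 - 0.5 * x[0] + x[1]
rows = [(temp, 0.1 * t) for t, temp in enumerate([5, 15, 25, 10, 0])]
print(normalize(model, rows, trend_idx=1))
```

Averaging over many resamples cancels the weather-driven variation, so the normalized series reflects the trend component that lockdown effects would show up in.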
Sousa Samuel, Kern Roman
2022
Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), thereby enforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categories to organize them have been introduced yet, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy for classifying the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, this review presents future research directions to guide successive research and development of privacy-preserving NLP models.
Koutroulis Georgios, Mutlu Belgin, Kern Roman
2022
In prognostics and health management (PHM), the task of constructing comprehensive health indicators (HIs) from huge amounts of condition monitoring data plays a crucial role. HIs may influence both the accuracy and reliability of remaining useful life (RUL) prediction, and ultimately the assessment of the system's degradation status. Most of the existing methods assume a priori an oversimplified degradation law of the investigated machinery, which in practice may not appropriately reflect reality. Especially for safety-critical engineered systems with a high level of complexity that operate under time-varying external conditions, degradation labels are not available, and hence, supervised approaches are not applicable. To address the above-mentioned challenges for extrapolating HI values, we propose a novel anticausal-based framework with reduced model complexity, predicting the cause from the causal model's effects. Two heuristic methods are presented for inferring the structural causal models. First, the causal driver is identified from a complexity estimate of the time series, and second, the set of effect-measuring parameters is inferred via Granger causality. Once the causal models are known, offline anticausal learning with only a few healthy cycles ensures strong generalization capabilities that help obtain robust online predictions of HIs. We validate and compare our framework on NASA's N-CMAPSS dataset with real-world operating conditions as recorded on board a commercial jet, which are utilized to further enhance the CMAPSS simulation model. The proposed framework with anticausal learning outperforms existing deep learning architectures by reducing the average root-mean-square error (RMSE) across all investigated units by nearly 65%.
Steger Sophie, Geiger Bernhard, Smieja Marek
2022
We connect the problem of semi-supervised clustering to constrained Markov aggregation, i.e., the task of partitioning the state space of a Markov chain. We achieve this connection by considering every data point in the dataset as an element of the Markov chain's state space, by defining the transition probabilities between states via similarities between corresponding data points, and by incorporating semi-supervision information as hard constraints in a Hartigan-style algorithm. The introduced Constrained Markov Clustering (CoMaC) is an extension of a recent information-theoretic framework for (unsupervised) Markov aggregation to the semi-supervised case. Instantiating CoMaC for certain parameter settings further generalizes two previous information-theoretic objectives for unsupervised clustering. Our results indicate that CoMaC is competitive with the state-of-the-art.
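The Hartigan-style treatment of hard constraints can be sketched on toy 1-D data: must-link groups are reassigned as a unit to the best cluster, so constrained points can never be separated. Note that CoMaC itself minimizes an information-theoretic Markov-aggregation cost; the sketch below substitutes a plain distance-to-mean criterion for readability:

```python
def constrained_hartigan(points, k, must_link, iters=10):
    """Hartigan-style passes over 1-D toy data: each must-link group is
    greedily reassigned, as a unit, to the cluster with the closest mean.
    (CoMaC optimizes an information-theoretic cost instead.)"""
    # Merge unconstrained points into singleton groups so all points move in groups
    groups = must_link + [[i] for i in range(len(points))
                          if not any(i in g for g in must_link)]
    assign = {tuple(g): gi % k for gi, g in enumerate(groups)}
    for _ in range(iters):
        for g in groups:
            means = []
            for c in range(k):
                members = [points[i] for h in groups for i in h
                           if assign[tuple(h)] == c and h != g]
                means.append(sum(members) / len(members) if members else float("inf"))
            gmean = sum(points[i] for i in g) / len(g)
            assign[tuple(g)] = min(range(k), key=lambda c: abs(gmean - means[c]))
    return assign

points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(constrained_hartigan(points, 2, must_link=[[0, 1]]))
```

On this data the two must-linked points end up in the same cluster as the rest of the low-value blob, while the three high-value points form the other cluster.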
Schweimer Christoph, Gfrerer Christine, Lugstein Florian, Pape David, Velimsky Jan, Elsässer Robert, Geiger Bernhard
2022
Online social networks are a dominant medium in everyday life to stay in contact with friends and to share information. In Twitter, users can connect with other users by following them, who in turn can follow back. In recent years, researchers have studied several properties of social networks and designed random graph models to describe them. Many of these approaches either focus on the generation of undirected graphs or on the creation of directed graphs without modeling the dependencies between reciprocal (i.e., two directed edges of opposite direction between two nodes) and directed edges. We propose an approach to generate directed social network graphs that creates reciprocal and directed edges and considers the correlation between the respective degree sequences. Our model relies on crawled directed graphs in Twitter, on which information w.r.t. a topic is exchanged or disseminated. While these graphs exhibit a high clustering coefficient and small average distances between random node pairs (which is typical in real-world networks), their degree sequences seem to follow a χ²-distribution rather than a power law. To achieve high clustering coefficients, we apply an edge rewiring procedure that preserves the node degrees. We compare the crawled and the created graphs, and simulate certain algorithms for information dissemination and epidemic spreading on them. The results show that the created graphs exhibit very similar topological and algorithmic properties as the real-world graphs, providing evidence that they can be used as surrogates in social network analysis. Furthermore, our model is highly scalable, which enables us to create graphs of arbitrary size with almost the same properties as the corresponding real-world networks.
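Degree-preserving rewiring is commonly realized with double-edge swaps: two edges (a, b) and (c, d) become (a, d) and (c, b), which changes the wiring while leaving every node's degree untouched. A minimal sketch for the undirected case (not the authors' exact procedure):

```python
import random

def rewire(edges, swaps=100, seed=1):
    """Degree-preserving double-edge swaps on an undirected edge list:
    (a, b), (c, d) -> (a, d), (c, b), rejecting swaps that would create
    self-loops or duplicate edges."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared node: swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edge_list):
    d = {}
    for a, b in edge_list:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

ring = [(i, (i + 1) % 8) for i in range(8)]
rewired = rewire(ring)
print(degrees(rewired) == degrees(ring))  # True: degrees are preserved
```

In the paper's setting the swaps are steered towards raising the clustering coefficient; the sketch only shows the degree-preservation invariant.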
Hoffer Johannes Georg, Ofner Andreas Benjamin, Rohrhofer Franz Martin, Lovric Mario, Kern Roman, Lindstaedt Stefanie , Geiger Bernhard
2022
Most engineering domains abound with models derived from first principles that have been proven to be effective for decades. These models are not only a valuable source of knowledge, but they also form the basis of simulations. The recent trend of digitization has complemented these models with data in all forms and variants, such as process monitoring time series, measured material characteristics, and stored production parameters. Theory-inspired machine learning combines the available models and data, reaping the benefits of established knowledge and the capabilities of modern, data-driven approaches. Compared to purely physics- or purely data-driven models, the models resulting from theory-inspired machine learning are often more accurate and less complex, extrapolate better, or allow faster model training or inference. In this short survey, we introduce and discuss several prominent approaches to theory-inspired machine learning and show how they were applied in the fields of welding, joining, additive manufacturing, and metal forming.
Reichel Robert, Gursch Heimo, Kröll Mark
2022
The trend in healthcare of moving from paper records to digital forms lays the foundation for the electronic processing of health data. This article describes the technical foundations for the semantic preparation and analysis of textual content in the medical domain. The special characteristics of medical texts make the extraction and aggregation of relevant information more challenging than in other application areas. In addition, there is a need for specialized methods, particularly for the anonymization and pseudonymization of personal data. Nevertheless, the use of computational linguistics methods, combined with advancing digitization, holds enormous potential to support healthcare staff.
Windisch Andreas, Gallien Thomas, Schwarzmueller Christopher
2022
Dyson-Schwinger equations (DSEs) are a non-perturbative way to express n-point functions in quantum field theory. Working in Euclidean space and in Landau gauge, for example, one can study the quark propagator Dyson-Schwinger equation in the real and complex domain, given that a suitable and tractable truncation has been found. When aiming for solving these equations in the complex domain, that is, for complex external momenta, one has to deform the integration contour of the radial component in the complex plane of the loop momentum expressed in hyper-spherical coordinates. This has to be done in order to avoid poles and branch cuts in the integrand of the self-energy loop. Since the nature of Dyson-Schwinger equations is such, that they have to be solved in a self-consistent way, one cannot analyze the analytic properties of the integrand after every iteration step, as this would not be feasible. In these proceedings, we suggest a machine learning pipeline based on deep learning (DL) approaches to computer vision (CV), as well as deep reinforcement learning (DRL), that could solve this problem autonomously by detecting poles and branch cuts in the numerical integrand after every iteration step and by suggesting suitable integration contour deformations that avoid these obstructions. We sketch out a proof of principle for both of these tasks, that is, the pole and branch cut detection, as well as the contour deformation.
Gashi Milot, Gursch Heimo, Hinterbichler Hannes, Pichler Stefan, Lindstaedt Stefanie , Thalmann Stefan
2022
Predictive Maintenance (PdM) is one of the most important applications of advanced data science in Industry 4.0, aiming to facilitate manufacturing processes. To build PdM models, sufficient data, such as condition monitoring and maintenance data of the industrial application, are required. However, collecting maintenance data is complex and challenging, as it requires human involvement and expertise. Due to time constraints, motivating workers to provide comprehensively labeled data is very challenging, and thus maintenance data are mostly incomplete or even completely missing. In addition, many condition monitoring datasets exist, but only very few small labeled maintenance datasets can be found. Hence, our proposed solution can provide additional labels and offer new research possibilities for these datasets. To address this challenge, we introduce MEDEP, a novel maintenance event detection framework based on the Pruned Exact Linear Time (PELT) approach, promising a low false-positive (FP) rate and high accuracy in general. MEDEP can help to automatically detect performed maintenance events from deviations in the condition monitoring data. A heuristic method is proposed as an extension to the PELT approach, consisting of the following two steps: (1) a mean threshold for multivariate time series and (2) a distribution threshold analysis based on the complexity-invariant metric. We validate and compare MEDEP on the Microsoft Azure Predictive Maintenance dataset and on data from a real-world use case in the welding industry. The proposed approach achieved superior performance, with an FP rate of around 10% on average and high sensitivity and accuracy.
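The mean-threshold idea in step (1) can be illustrated with a simplified, single-series stand-in for PELT: flag points where the window mean shifts by more than a threshold, which is where a maintenance reset in the condition signal would show up. The signal and parameters below are invented:

```python
def mean_shift_events(series, window=3, threshold=2.0):
    """Flag indices where the mean of the next `window` samples differs from
    the mean of the previous `window` samples by more than `threshold`
    (a simplified stand-in for PELT changepoint detection)."""
    events = []
    for t in range(window, len(series) - window + 1):
        before = sum(series[t - window:t]) / window
        after = sum(series[t:t + window]) / window
        if abs(after - before) > threshold:
            events.append(t)
    return events

# Synthetic condition signal: slow degradation, maintenance reset at t = 6
signal = [5.0, 5.2, 5.1, 5.3, 5.2, 5.4, 1.0, 1.1, 0.9, 1.0]
print(mean_shift_events(signal))  # [5, 6, 7] -- flags cluster at the reset
```

In practice, consecutive flags would be merged into a single event, and the check would run per sensor of the multivariate series before combining the votes.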
Lacic Emanuel, Kowald Dominik
2022
In this industry talk at ECIR'2022, we illustrate how to build a modern recommender system that can serve recommendations in real-time for a diverse set of application domains. Specifically, we present our system architecture that utilizes popular recommendation algorithms from the literature such as Collaborative Filtering, Content-based Filtering as well as various neural embedding approaches (e.g., Doc2Vec, Autoencoders, etc.). We showcase the applicability of our system architecture using two real-world use-cases, namely providing recommendations for the domains of (i) job marketplaces, and (ii) entrepreneurial start-up founding. We strongly believe that our experiences from both research- and industry-oriented settings should be of interest for practitioners in the field of real-time multi-domain recommender systems.
Lacic Emanuel, Fadljevic Leon, Weissenböck Franz, Lindstaedt Stefanie , Kowald Dominik
2022
Personalized news recommender systems support readers in finding the right and relevant articles in online news platforms. In this paper, we discuss the introduction of personalized, content-based news recommendations on DiePresse, a popular Austrian online news platform, focusing on two specific aspects: (i) user interface type, and (ii) popularity bias mitigation. Therefore, we conducted a two-week online study that started in October 2020, in which we analyzed the impact of recommendations on two user groups, i.e., anonymous and subscribed users, and three user interface types, i.e., on a desktop, mobile and tablet device. With respect to user interface types, we find that the probability of a recommendation to be seen is the highest for desktop devices, while the probability of interacting with recommendations is the highest for mobile devices. With respect to popularity bias mitigation, we find that personalized, content-based news recommendations can lead to a more balanced distribution of news articles' readership popularity in the case of anonymous users. Apart from that, we find that significant events (e.g., the COVID-19 lockdown announcement in Austria and the Vienna terror attack) influence the general consumption behavior of popular articles for both anonymous and subscribed users.
Kowald Dominik, Lacic Emanuel
2022
Multimedia recommender systems suggest media items, e.g., songs, (digital) books and movies, to users by utilizing concepts of traditional recommender systems such as collaborative filtering. In this paper, we investigate a potential issue of such collaborative filtering-based multimedia recommender systems, namely popularity bias, which leads to the underrepresentation of unpopular items in the recommendation lists. Therefore, we study four multimedia datasets, i.e., LastFm, MovieLens, BookCrossing and MyAnimeList, which we each split into three user groups differing in their inclination towards popularity, i.e., LowPop, MedPop and HighPop. Using these user groups, we evaluate four collaborative filtering-based algorithms with respect to popularity bias on the item and the user level. Our findings are three-fold: firstly, we show that users with little interest in popular items tend to have large user profiles and are thus important data sources for multimedia recommender systems. Secondly, we find that popular items are recommended more frequently than unpopular ones. Thirdly, we find that users with little interest in popular items receive significantly worse recommendations than users with medium or high interest in popularity.
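Popularity bias on the user level is often quantified by the average popularity of the items a user is recommended (a GAP-style metric), compared per user group. A minimal sketch with toy interaction logs (item names invented):

```python
def item_popularity(interactions):
    """Popularity of an item: fraction of users who interacted with it."""
    counts = {}
    for items in interactions.values():
        for item in set(items):
            counts[item] = counts.get(item, 0) + 1
    return {item: c / len(interactions) for item, c in counts.items()}

def avg_rec_popularity(recommended, popularity):
    """Average popularity of one user's recommended items (GAP-style)."""
    return sum(popularity.get(i, 0.0) for i in recommended) / len(recommended)

# Toy interaction logs; "a" is a popular item, "b" and "c" are niche
interactions = {"u1": ["a", "b"], "u2": ["a"], "u3": ["a", "c"]}
pop = item_popularity(interactions)
print(pop["a"])                             # 1.0
print(avg_rec_popularity(["a", "c"], pop))  # 0.666...
```

Comparing this average between a user's profile and their recommendation list, aggregated over the LowPop, MedPop and HighPop groups, reveals how strongly each group's recommendations drift towards popular items.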
Ofner Andreas Benjamin, Kefalas Achilles, Posch Stefan, Geiger Bernhard
2022
This article introduces a method for the detection of knock occurrences in an internal combustion engine (ICE) using a 1-D convolutional neural network trained on in-cylinder pressure data. The model architecture is based on expected frequency characteristics of knocking combustion. All cycles were reduced to 60° CA long windows with no further processing applied to the pressure traces. The neural networks were trained exclusively on in-cylinder pressure traces from multiple conditions, with labels provided by human experts. The best-performing model architecture achieves an accuracy of above 92% on all test sets in a tenfold cross-validation when distinguishing between knocking and non-knocking cycles. In a multiclass problem where each cycle was labeled by the number of experts who rated it as knocking, 78% of cycles were labeled perfectly, while 90% of cycles were classified at most one class from ground truth. They thus considerably outperform the broadly applied maximum amplitude of pressure oscillation (MAPO) detection method, as well as references reconstructed from previous works. Our analysis indicates that the neural network learned physically meaningful features connected to engine-characteristic resonances, thus verifying the intended theory-guided data science approach. Deeper performance investigation further shows remarkable generalization ability to unseen operating points. In addition, the model proved to classify knocking cycles in unseen engines with increased accuracy of 89% after adapting to their features via training on a small number of exclusively non-knocking cycles. The algorithm takes below 1 ms to classify individual cycles, effectively making it suitable for real-time engine control.
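The MAPO baseline the network is compared against can be sketched as: high-pass the pressure trace (here by subtracting a moving average) and take the peak absolute oscillation; a cycle counts as knocking above a tuned threshold. Window length, threshold, and the toy traces below are illustrative, not from the paper:

```python
def mapo(pressure, window=5):
    """Maximum amplitude of pressure oscillation: high-pass the in-cylinder
    pressure trace by subtracting a moving average, then take the peak
    absolute residual."""
    half = window // 2
    peak = 0.0
    for t in range(len(pressure)):
        lo, hi = max(0, t - half), min(len(pressure), t + half + 1)
        residual = pressure[t] - sum(pressure[lo:hi]) / (hi - lo)
        peak = max(peak, abs(residual))
    return peak

# Toy traces: a smooth (non-knocking) cycle vs. an oscillating (knocking) one
smooth = [10 + 0.1 * t for t in range(50)]
knock = [10 + 0.1 * t + (3 if t % 2 else -3) for t in range(50)]
print(mapo(smooth) < 1.0 < mapo(knock))  # True
```

A fixed MAPO threshold cannot adapt to operating-point or engine changes, which is the weakness the learned classifier in the article addresses.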
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2022
The avoidance of scrap and the adherence to tolerances is an important goal in manufacturing. This requires a good engineering understanding of the underlying process. To achieve this, real physical experiments can be conducted. However, they are expensive in time and resources, and can slow down production. A promising way to overcome these drawbacks is process exploration through simulation, where the finite element method (FEM) is a well-established and robust simulation method. While FEM simulation can provide high-resolution results, it requires extensive computing resources to do so. In addition, the simulation design often depends on unknown process properties. To circumvent these drawbacks, we present a Gaussian Process surrogate model approach that accounts for real physical manufacturing process uncertainties and acts as a substitute for expensive FEM simulation, resulting in a fast and robust method that adequately depicts reality. We demonstrate that active learning can be easily applied with our surrogate model to improve computational resources. On top of that, we present a novel optimization method that treats aleatoric and epistemic uncertainties separately, allowing for greater flexibility in solving inverse problems. We evaluate our model using a typical manufacturing use case, the preforming of an Inconel 625 superalloy billet on a forging press.
Ross-Hellauer Anthony, Cole Nicki Lisa, Fessl Angela, Klebel Thomas, Pontika Nancy, Reichmann Stefan
2022
Open Science holds the promise to make scientific endeavours more inclusive, participatory, understandable, accessible and re-usable for large audiences. However, making processes open will not per se drive wide reuse or participation unless also accompanied by the capacity (in terms of knowledge, skills, financial resources, technological readiness and motivation) to do so. These capacities vary considerably across regions, institutions and demographics. Those advantaged by such factors will remain potentially privileged, putting Open Science's agenda of inclusivity at risk of propagating conditions of ‘cumulative advantage’. With this paper, we systematically scope existing research addressing the question: ‘What evidence and discourse exists in the literature about the ways in which dynamics and structures of inequality could persist or be exacerbated in the transition to Open Science, across disciplines, regions and demographics?’ Aiming to synthesize findings, identify gaps in the literature and inform future research and policy, our results identify threats to equity associated with all aspects of Open Science, including Open Access, Open and FAIR Data, Open Methods, Open Evaluation, Citizen Science, as well as its interfaces with society, industry and policy. Key threats include: stratifications of publishing due to the exclusionary nature of the author-pays model of Open Access; potential widening of the digital divide due to the infrastructure-dependent, highly situated nature of open data practices; risks of diminishing qualitative methodologies as ‘reproducibility’ becomes synonymous with quality; new risks of bias and exclusion in means of transparent evaluation; and crucial asymmetries in the Open Science relationships with industry and the public, which privilege the former and fail to fully include the latter.
BDVA Task Force, Duricic Tomislav
2022
The session will explore the importance of data-driven AI for the financial sector by comparing the highly innovative and revolutionary world of Fintech companies with Financial Institutions, highlighting the peculiarities of the sector such as the paradigm of ethical AI. The session will cover topics related to Open Innovation Hubs and acceleration programs, to highlight the importance of innovation and the opportunities of Fintechs, mentioning as well the VDIH (Virtualized Digital Innovation Hub), an innovative service developed within the INFINITECH project, a digital finance flagship H2020 project. Moreover, the findings and insights of the Whitepaper of the Task Force “AI and Big Data for the Financial Sector” will be presented, emphasizing market trends, vision, and the innovation impact of novel technologies on the financial sector. The session will end with a keynote speech by a representative from the Fintech District, the largest open ecosystem within the Italian fintech community, delving into the evolution of the fintech sector and sharing future insights and opportunities.
Amjad Rana Ali, Liu Kairen, Geiger Bernhard
2022
In this work, we investigate the use of three information-theoretic quantities--entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler (KL) divergence--to understand and study the behavior of already trained fully connected feedforward neural networks (NNs). We analyze the connection between these information-theoretic quantities and classification performance on the test set by cumulatively ablating neurons in networks trained on MNIST, FashionMNIST, and CIFAR-10. Our results parallel those recently published by Morcos et al., indicating that class selectivity is not a good indicator for classification performance. However, looking at individual layers separately, both mutual information and class selectivity are positively correlated with classification performance, at least for networks with ReLU activation functions. We provide explanations for this phenomenon and conclude that it is ill-advised to compare the proposed information-theoretic quantities across layers. Furthermore, we show that cumulative ablation of neurons with ascending or descending information-theoretic quantities can be used to formulate hypotheses regarding the joint behavior of multiple neurons, such as redundancy and synergy, with comparably low computational cost. We also draw connections to the information bottleneck theory for NNs.
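The cumulative-ablation procedure can be sketched as follows: score each hidden neuron by an information-theoretic quantity (here a simple entropy estimate of its binned activations), then silence neurons one by one in ascending order of the score and track test accuracy. The dataset, network size, and binning below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Recompute the ReLU hidden layer to score each neuron's activations
hidden = np.maximum(X_te @ net.coefs_[0] + net.intercepts_[0], 0.0)

def entropy(a, bins=10):
    """Entropy (nats) of a neuron's binned activation histogram."""
    p, _ = np.histogram(a, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

scores = np.array([entropy(hidden[:, j]) for j in range(hidden.shape[1])])

# Cumulatively ablate neurons in ascending entropy order, tracking accuracy
acc = []
w2 = net.coefs_[1].copy()
for j in np.argsort(scores):
    net.coefs_[1][j, :] = 0.0          # silence neuron j's outgoing weights
    acc.append(net.score(X_te, y_te))
net.coefs_[1] = w2                      # restore the network
```

Ablating in ascending vs. descending order of the score is what allows hypotheses about redundancy and synergy among neurons at low computational cost.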
Mirzababaei Behzad, Pammer-Schindler Viktoria
2021
This article discusses the usefulness of Toulmin’s model of arguments as structuring an assessment of different types of wrongness in an argument. We discuss the usability of the model within a conversational agent that aims to support users to develop a good argument. Within the article, we present a study and the development of classifiers that identify the existence of structural components in a good argument, namely a claim, a warrant (underlying understanding), and evidence. Based on a dataset (three sub-datasets with 100, 1,026, and 211 responses, respectively) in which users argue about the intelligence or non-intelligence of entities, we have developed classifiers for these components: The existence and direction (positive/negative) of claims can be detected with a weighted average F1 score over all classes (positive/negative/unknown) of 0.91. The existence of a warrant (with warrant/without warrant) can be detected with a weighted F1 score over all classes of 0.88. The existence of evidence (with evidence/without evidence) can be detected with a weighted average F1 score of 0.80. We argue that these scores are high enough to be of use within a conditional dialogue structure based on Bloom’s taxonomy of learning; and show by argument an example conditional dialogue structure that allows us to conduct coherent learning conversations. While in our described experiments, we show how Toulmin’s model of arguments can be used to identify structural problems with argumentation, we also discuss how Toulmin’s model of arguments could be used in conjunction with content-wise assessment of the correctness especially of the evidence component to identify more complex types of wrongness in arguments, where argument components are not well aligned. Owing to progress in argument mining and conversational agents, a next challenge could be developing agents that support learning argumentation.
These agents could identify more complex types of wrongness in arguments that result from wrong connections between argumentation components.
Geiger Bernhard
2021
(extended abstract)
Gursch Heimo, Pramhas Martin, Knopper Bernhard, Brandl Daniel, Gratzl Markus, Schlager Elke, Kern Roman
2021
In the COMFORT project (Comfort Orientated and Management Focused Operation of Room condiTions), the comfort of office rooms is investigated with simulations and data-driven methods. While the data-driven methods rely on measurement data, the simulation requires extensive descriptions of the office rooms, which largely overlap with the information captured in the Building Information Model (BIM). Despite great progress in recent years, the integration of BIM and simulation is not yet fully automated. Using the case study of adding a storey to an office building of Thomas Lorenz ZT GmbH, the handover of BIM data to Building Energy Simulation (BES) and Computational Fluid Dynamics (CFD) simulations is examined. For the building under investigation, the entire planning process was carried out based on the BIM. This allowed the submission planning, the tender planning for all trades including quantity take-offs, and execution plans such as foreman, formwork, and reinforcement drawings to be derived from the model, and the building services model to be linked with the architectural and structural planning models at an early stage. Starting from the BIM, the required data could be handed over to the BES in IFC format. However, the software used could not yet perform an automatic handover, so manual post-processing of the rooms was necessary. For the CFD simulation, only selected rooms were considered, because the additional effort for a handover in STEP format is still very large under normal BIM workflows: the free air volume must be modelled separately in the BIM and certain geometric boundary conditions must be fulfilled. Likewise, information on heat sources and furniture must be available at a very high level of planning detail.
The exchange of boundary conditions at the interfaces between air and building envelope still had to be carried out manually. In terms of their validity, the BES and CFD simulation results are to be regarded as identical to those from conventional, manually created simulation models. An automatic transfer of parameter values currently still fails due to the lack of interpretability and assignability in the simulation software. In the future, the establishment of IFC 4 and additional Industry Foundation Class (IFC) parameters should make it easier to store the required data in the model in a structured way. Particular attention should be paid to the integration of room book data into BIM, since this information is of great use not only for simulation. These information integrations are not limited to a one-time transfer, but aim at an integration that automatically propagates changes between BIM, simulation, and adjoining areas.
Wolfbauer Irmtraud
2021
Use Case & Motivation: Styrian SMEs need an online learning platform for their apprentices in mechatronics, metal and electrical engineering. Research opportunities:
* Apprentices as a target group are under-researched
* Designing a computer-mediated learning intervention in the overlap between workplace learning and educational settings
* Contributing to research on reflection guidance technologies
* Developing the first reflection guidance chatbot
Reiter-Haas Markus, Kopeinik Simone, Lex Elisabeth
2021
In this paper, we study the moral framing of political content on Twitter. Specifically, we examine differences in moral framing in two datasets: (i) tweets from US-based politicians annotated with political affiliation and (ii) COVID-19 related tweets in German from followers of the leaders of the five major Austrian political parties. Our research is based on recent work that introduces an unsupervised approach to extract framing bias and intensity in news using a dictionary of moral virtues and vices. In this paper, we use a more extensive dictionary and adapt it to German-language tweets. Overall, in both datasets, we observe a moral framing that is congruent with the public perception of the political parties. In the US dataset, Democrats have a tendency to frame tweets in terms of care, while loyalty is a characteristic frame for Republicans. In the Austrian dataset, we find that the followers of the governing conservative party emphasize care, which is a key message and moral frame in the party’s COVID-19 campaign slogan. Our work complements existing studies on moral framing in social media. Also, our empirical findings provide novel insights into moral-based framing on COVID-19 in Austria.
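A minimal sketch of the dictionary-based framing idea: count how often cue words of each moral foundation occur in a text, normalized by text length. The tiny lexicon below is purely illustrative; real moral-foundations dictionaries are far larger.

```python
import re
from collections import Counter

# Tiny illustrative moral-foundations lexicon (real dictionaries are far larger)
MORAL_DICT = {
    "care":    {"care", "protect", "safe", "compassion"},
    "loyalty": {"loyal", "patriot", "nation", "together"},
}

def moral_frame_scores(text):
    """Relative frequency of each moral foundation's cue words in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return {f: sum(counts[w] for w in words) / n
            for f, words in MORAL_DICT.items()}

tweet = "Stay safe and protect each other, we care about every community."
scores = moral_frame_scores(tweet)
```

Aggregating such per-tweet scores by party or follower group is what allows comparisons like the care-vs-loyalty contrast reported above.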
Inel Oana, Duricic Tomislav, Kaur Harmanpreet, Lex Elisabeth, Tintarev Nava
2021
Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized, misleading, or contain topics on which people have different, contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and raise users’ awareness of potentially deceiving or biased information. With these explanations, we aim to support users in actively deciding and reflecting on the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers so users receive additional information to the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing the explanations for videos on three controversial topics. Besides, we assess the users’ alignment with the video’s message and how strong their belief is about the topic. Our results indicate that respondents’ alignment with the video’s message is critical to evaluate the video’s usefulness. Overall, the explanations were found to be useful and of high quality. While the explanations do not influence the perceived usefulness of the videos compared to only seeing the video, people with an extreme negative alignment with a video’s message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance since users seem to be less receptive to explanations when the video’s message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations grounded in theories on reducing cognitive dissonance in light of raising awareness about online deception.
Duricic Tomislav, Seiser Volker, Lex Elisabeth
2021
We perform a cross-platform analysis in which we study how linking YouTube content on a Reddit conspiracy forum impacts the language used in user comments on YouTube. Our findings show a slight change in user language, in that it becomes more similar to the language used on Reddit.
Duricic Tomislav, Kowald Dominik, Schedl Markus, Lex Elisabeth
2021
Homophily describes the phenomenon that similarity breeds connection, i.e., individuals tend to form ties with other people who are similar to themselves in some aspect(s). The similarity in music taste can undoubtedly influence who we make friends with and shape our social circles. In this paper, we study homophily in an online music platform Last.fm regarding user preferences towards listening to mainstream (M), novel (N), or diverse (D) content. Furthermore, we draw comparisons with homophily based on listening profiles derived from artists users have listened to in the past, i.e., artist profiles. Finally, we explore the utility of users' artist profiles as well as features describing M, N, and D for the task of link prediction. Our study reveals that: (i) users with a friendship connection share similar music taste based on their artist profiles; (ii) on average, a measure of how diverse the music is that two users listen to is a stronger predictor of friendship than measures of their preferences towards mainstream or novel content, i.e., homophily is stronger for D than for M and N; (iii) some user groups such as high-novelty-seekers (explorers) exhibit strong homophily, but lower than average artist profile similarity; (iv) using M, N, and D achieves link prediction accuracy comparable to using artist profiles, but the combination of features yields the best accuracy results, and (v) using combined features does not add value if graph-based features such as common neighbors are available, making M, N, and D features primarily useful in a cold-start user recommendation setting for users with few friendship connections. The insights from this study …
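The link-prediction setup can be sketched on fully synthetic data: each pair of users gets features combining artist-profile cosine similarity with absolute M/N/D differences, and (as an illustrative assumption mirroring finding (ii)) friendships are driven by similarity in the diversity score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic users: artist-profile vectors plus M/N/D preference scores
n_users = 200
profiles = rng.random((n_users, 50))
mnd = rng.random((n_users, 3))  # mainstreaminess, novelty, diversity

def pair_features(u, v):
    """Cosine similarity of artist profiles plus absolute M/N/D differences."""
    cos = profiles[u] @ profiles[v] / (
        np.linalg.norm(profiles[u]) * np.linalg.norm(profiles[v]))
    return np.concatenate(([cos], np.abs(mnd[u] - mnd[v])))

# Illustrative labels: similar diversity scores breed connection (homophily)
pairs = rng.integers(0, n_users, size=(1000, 2))
X = np.array([pair_features(u, v) for u, v in pairs])
y = (np.abs(mnd[pairs[:, 0], 2] - mnd[pairs[:, 1], 2]) < 0.2).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)
```

Inspecting the fitted coefficients then shows which feature (here, by construction, the diversity difference) drives the prediction.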
Egger Jan, Pepe Antonio, Gsaxner Christina, Jin Yuan, Li Jianning, Kern Roman
2021
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field saw an exponential growth in recent years. This growth has resulted in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already yields over 11,000 results in Q3 2020 for the search term ‘deep learning’, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and outline the research impact that they have already had during a short period of time.
The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Pammer-Schindler Viktoria, Prilla Michael
2021
A substantial body of human-computer interaction literature investigates tools that are intended to support reflection, e.g. under the header of quantified self or in computer-mediated learning. These works describe the issues that are reflected on by users in terms of examples, such as reflecting on financial expenditures, lifestyle, professional growth, etc. A coherent concept is missing. In this paper, the reflection object is developed based on activity theory, reflection theory and related design-oriented research. The reflection object is both what is reflected on and what is changed through reflection. It constitutes the link between reflection and other activities in which the reflecting person participates. By combining these two aspects—what is reflected on and what is changed—into a coherent conceptual unit, the concept of the reflection object provides a frame to focus on how to support learning, change and transformation, which is a major challenge when designing technologies for reflection.
Leski Florian, Fruhwirth Michael, Pammer-Schindler Viktoria
2021
The increasing volume of available data and the advances in analytics and artificial intelligence hold the potential for new business models also in offline-established organizations. To successfully implement a data-driven business model, it is crucial to understand the environment and the roles that need to be fulfilled by actors in the business model. This partner perspective is overlooked by current research on data-driven business models. In this paper, we present a structured literature review in which we identified 33 relevant publications. Based on this literature, we developed a framework consisting of eight roles and two attributes that can be assigned to actors as well as three classes of exchanged values between actors. Finally, we evaluated our framework through three cases from one automotive company collected via interviews in which we applied the framework to analyze data-driven business models for which our interviewees are responsible.
Lovric Mario, Duricic Tomislav, Tran Thi Ngoc Han, Hussain Hussain, Lacic Emanuel, Rasmussen Morten A., Kern Roman
2021
Methods for dimensionality reduction are showing significant contributions to knowledge generation in high-dimensional modeling scenarios throughout many disciplines. By achieving a lower dimensional representation (also called embedding), fewer computing resources are needed in downstream machine learning tasks, thus leading to a faster training time, lower complexity, and statistical flexibility. In this work, we investigate the utility of three prominent unsupervised embedding techniques (principal component analysis—PCA, uniform manifold approximation and projection—UMAP, and variational autoencoders—VAEs) for solving classification tasks in the domain of toxicology. To this end, we compare these embedding techniques against a set of molecular fingerprint-based models that do not utilize additional preprocessing of features. Inspired by the success of transfer learning in several fields, we further study the performance of embedders when trained on an external dataset of chemical compounds. To gain a better understanding of their characteristics, we evaluate the embedders with different embedding dimensionalities, and with different sizes of the external dataset. Our findings show that the recently popularized UMAP approach can be utilized alongside known techniques such as PCA and VAE as a pre-compression technique in the toxicology domain. Nevertheless, the generative model of VAE shows an advantage in pre-compressing the data with respect to classification accuracy.
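A minimal sketch of the pre-compression pipeline, shown here with PCA only (UMAP and VAEs require additional libraries); the fingerprints and toxicity labels are synthetic stand-ins for the real datasets:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Stand-in for binary molecular fingerprints and a synthetic toxicity label
X = rng.integers(0, 2, size=(300, 1024)).astype(float)
w = rng.normal(size=1024)
y = (X @ w > np.median(X @ w)).astype(int)

# Pre-compress to a low-dimensional embedding, then classify
X_emb = PCA(n_components=32, random_state=0).fit_transform(X)
acc_emb = cross_val_score(LogisticRegression(max_iter=1000), X_emb, y, cv=5).mean()

# Baseline: classify on the raw fingerprints without pre-compression
acc_raw = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

Swapping the `PCA` step for a UMAP or VAE embedder, and fitting it on an external compound set first, reproduces the transfer-learning comparison described above.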
Hoffer Johannes Georg, Geiger Bernhard, Ofner Patrick, Kern Roman
2021
The technical world of today fundamentally relies on structural analysis in the form of design and structural mechanic simulations. A traditional and robust simulation method is the physics-based Finite Element Method (FEM) simulation. FEM simulations in structural mechanics are known to be very accurate, however, the higher the desired resolution, the more computational effort is required. Surrogate modeling provides a robust approach to address this drawback. Nonetheless, finding the right surrogate model and its hyperparameters for a specific use case is not a straightforward process. In this paper, we discuss and compare several classes of mesh-free surrogate models based on traditional and thriving Machine Learning (ML) and Deep Learning (DL) methods. We show that relatively simple algorithms (such as k-nearest neighbor regression) can be competitive in applications with low geometrical complexity and extrapolation requirements. With respect to tasks exhibiting higher geometric complexity, our results show that recent DL methods at the forefront of the literature (such as physics-informed neural networks) are complicated to train and to parameterize and thus require further research before they can be put to practical use. In contrast, we show that already well-researched DL methods such as the multi-layer perceptron are superior with respect to interpolation use cases and can be easily trained with available tools. With our work, we thus present a basis for the selection and practical implementation of surrogate models.
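As a sketch of the point that simple algorithms can be competitive, a k-nearest-neighbor regressor serves as a mesh-free surrogate for a smooth toy stand-in of an FEM output; the response function and parameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)

# Toy structural response on a low-complexity geometry (illustrative):
# two input parameters (e.g. load position), one scalar FEM-like output
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1])

# Fit the kNN surrogate on 150 "simulations", test on the remaining 50
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:150], y[:150])
mse = float(np.mean((knn.predict(X[150:]) - y[150:]) ** 2))
```

For interpolation within the sampled parameter range this works well; kNN cannot extrapolate beyond it, which is where the DL surrogates discussed above become relevant.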
Iacono Lucas, Veas Eduardo Enrique
2021
AVL RACING and the Knowledge Visualization group of Know-Center GmbH are evaluating the performance of racing drivers using the latest wearable technologies, data analytics, and vehicle dynamics simulation software from AVL. The goal is to measure human factors with biosensors synchronized with vehicle data at a Driver-in-the-Loop (DiL) simulator and the vehicle dynamics simulation software AVL VSM™ RACE.
Iacono Lucas, Veas Eduardo Enrique
2021
Know-Center is developing human-centered intelligent systems that detect cognitive, emotional and health-related states by action, perception, and by means of cognitive and health metrics. This allows models of human behavior and intention to be derived during different activities. The innovative set-up links the human telemetry (HT) system with activity monitors and synchronizes the data. This article details our system, composed of several wearable sensors such as EEG, eye-tracker, ECG, and EMG, and a data-logger, as well as the methodology used to perform our studies.
Pammer-Schindler Viktoria, Rosé Carolyn
2021
Professional and lifelong learning are a necessity for workers. This is true both for re-skilling from disappearing jobs, as well as for staying current within a professional domain. AI-enabled scaffolding and just-in-time and situated learning in the workplace offer a new frontier for future impact of AIED. The hallmark of this community’s work has been i) data-driven design of learning technology and ii) machine-learning enabled personalized interventions. In both cases, data are the foundation of AIED research and data-related ethics are thus central to AIED research. In this paper, we formulate a vision of how AIED research could address data-related ethics issues in informal and situated professional learning. The foundation of our vision is a secondary analysis of five research cases that offer insights related to data-driven adaptive technologies for informal professional learning. We describe the encountered data-related ethics issues. In our interpretation, we have developed three themes: Firstly, in informal and situated professional learning, relevant data about professional learning (to be used as a basis for learning analytics and reflection or as a basis for adaptive systems) is not only about learners. Instead, due to the situatedness of learning, relevant data is also about others (colleagues, customers, clients) and other objects from the learner’s context. Such data may be private, proprietary, or both. Secondly, manual tracking comes with high learner control over data. Thirdly, learning is not necessarily a shared goal in informal professional learning settings. From an ethics perspective, this is particularly problematic as much data that would be relevant for use within learning technologies has not been collected for the purposes of learning. These three themes translate into challenges for AIED research that need to be addressed in order to successfully investigate and develop AIED technology for informal and situated professional learning.
As an outlook of this paper, we connect these challenges to ongoing research directions within AIED – natural language processing, socio-technical design, and scenario-based data collection - that might be leveraged and aimed towards addressing data-related ethics challenges.
Müllner Peter , Lex Elisabeth, Kowald Dominik
2021
In this position paper, we discuss the merits of simulating privacy dynamics in recommender systems. We study this issue at hand from two perspectives: Firstly, we present a conceptual approach to integrate privacy into recommender system simulations, whose key elements are privacy agents. These agents can enhance users' profiles with different privacy preferences, e.g., their inclination to disclose data to the recommender system. Plus, they can protect users' privacy by guarding all actions that could be a threat to privacy. For example, agents can prohibit a user's privacy-threatening actions or apply privacy-enhancing techniques, e.g., Differential Privacy, to make actions less threatening. Secondly, we identify three critical topics for future research in privacy-aware recommender system simulations: (i) How could we model users' privacy preferences and protect users from performing any privacy-threatening actions? (ii) To what extent do privacy agents modify the users' document preferences? (iii) How do privacy preferences and privacy protections impact recommendations and privacy of others? Our conceptual privacy-aware simulation approach makes it possible to investigate the impact of privacy preferences and privacy protection on the micro-level, i.e., a single user, but also on the macro-level, i.e., all recommender system users. With this work, we hope to present perspectives on how privacy-aware simulations could be realized, such that they enable researchers to study the dynamics of privacy within a recommender system.
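A minimal sketch of a privacy agent applying a privacy-enhancing technique: it perturbs a user's rating vector with Laplace noise scaled by a per-user privacy budget epsilon, so cautious users disclose noisier data. The function name, sensitivity, and budgets are illustrative assumptions, not part of the proposed simulation framework.

```python
import numpy as np

rng = np.random.default_rng(7)

def privacy_agent(ratings, epsilon, sensitivity=1.0):
    """Illustrative privacy agent: perturb a user's rating vector with
    Laplace noise calibrated to a per-user privacy budget epsilon.
    Smaller epsilon means stronger protection and noisier disclosure."""
    scale = sensitivity / epsilon
    return ratings + rng.laplace(0.0, scale, size=ratings.shape)

ratings = np.array([4.0, 1.0, 5.0, 3.0])
cautious = privacy_agent(ratings, epsilon=0.1)   # strong protection, noisy
relaxed  = privacy_agent(ratings, epsilon=10.0)  # weak protection, close to raw
```

In a simulation, each user's agent would apply such a transformation before any data reaches the recommender, making the micro- and macro-level effects of differing privacy preferences observable.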
Geiger Bernhard, Kubin Gernot
2021
This Special Issue aims to investigate the properties of the information bottleneck (IB) functional in its new context in deep learning and to propose learning mechanisms inspired by the IB framework. More specifically, we invited authors to submit manuscripts that provide novel insight into the properties of the IB functional, that apply the IB principle for training deep, i.e., multi-layer, machine learning structures such as NNs, and that investigate the learning behavior of NNs using the IB framework. To cover the breadth of the current literature, we also solicited manuscripts that discuss frameworks inspired by the IB principle, but that depart from it in a well-motivated manner.
Gursch Heimo, Ganster Harald, Rinnhofer Alfred, Waltner Georg, Payer Christian, Oberwinkler Christian, Meisenbichler Reinhard, Kern Roman
2021
Refuse sorting is a key technology to increase the recycling rate and reduce the growth of landfills worldwide. The project KI-Waste combines image recognition with time series analysis to monitor and optimise processes in sorting facilities. The image recognition captures the refuse category distribution and particle size of the refuse streams in the sorting facility. The time series analysis focuses on insights derived from machine parameters and sensor values. The combination of results from the image recognition and the time series analysis creates a new holistic view of the complete sorting process and the performance of a sorting facility. This is the basis for comprehensive monitoring, data-driven optimisations, and performance evaluations supporting workers in sorting facilities. Digital solutions allowing the workers to monitor the sorting process remotely are very desirable since the working conditions in sorting facilities are potentially harmful due to dust, bacteria, and fungal spores. Furthermore, the introduction of objective sorting performance measures enables workers to make informed decisions to improve the sorting parameters and react quicker to changes in the refuse composition. This work describes ideas and objectives of the KI-Waste project, summarises techniques and approaches used in KI-Waste, gives preliminary findings, and closes with an outlook on future work.
Smieja Marek, Wolczyk Maciej, Tabor Jacek, Geiger Bernhard
2021
We propose a semi-supervised generative model, SeGMA, which learns a joint probability distribution of data and their classes and is implemented in a typical Wasserstein autoencoder framework. We choose a mixture of Gaussians as a target distribution in latent space, which provides a natural splitting of data into clusters. To connect Gaussian components with correct classes, we use a small amount of labeled data and a Gaussian classifier induced by the target distribution. SeGMA is optimized efficiently due to the use of the Cramer-Wold distance as a maximum mean discrepancy penalty, which yields a closed-form expression for a mixture of spherical Gaussian components and, thus, obviates the need for sampling. While SeGMA preserves all properties of its semi-supervised predecessors and achieves at least as good generative performance on standard benchmark data sets, it presents additional features: 1) interpolation between any pair of points in the latent space produces realistic-looking samples; 2) by combining the interpolation property with disentangling of class and style information, SeGMA is able to perform continuous style transfer from one class to another; and 3) it is possible to change the intensity of class characteristics in a data point by moving the latent representation of the data point away from specific Gaussian components.
Geiger Bernhard
2021
We review the current literature concerned with information plane (IP) analyses of neural network (NN) classifiers. While the underlying information bottleneck theory and the claim that information-theoretic compression is causally linked to generalization are plausible, empirical evidence was found to be both supporting and conflicting. We review this evidence together with a detailed analysis of how the respective information quantities were estimated. Our survey suggests that compression visualized in IPs is not necessarily information-theoretic but is rather often compatible with geometric compression of the latent representations. This insight gives the IP a renewed justification. Aside from this, we shed light on the problem of estimating mutual information in deterministic NNs and its consequences. Specifically, we argue that, even in feedforward NNs, the data processing inequality need not hold for estimates of mutual information. Similarly, while a fitting phase, in which the mutual information between the latent representation and the target increases, is necessary (but not sufficient) for good classification performance, depending on the specifics of mutual information estimation, such a fitting phase need not be visible in the IP.
Rekabsaz Navi, Kopeinik Simone, Schedl Markus
2021
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation of BERT Ranker
Lex Elisabeth, Kowald Dominik, Seitlinger Paul, Tran Tran, Felfernig Alexander, Schedl Markus
2021
Psychology-informed Recommender Systems
Ruiz-Calleja Adolfo, Prieto Luis P., Ley Tobias, Rodrıguez-Triana Marıa Jesus, Dennerlein Sebastian
2021
Despite the ubiquity of learning in workplace and professional settings, the learning analytics (LA) community has paid significant attention to such settings only recently. This may be due to the focus on researching formal learning, as workplace learning is often informal, hard to grasp and not unequivocally defined. This paper summarizes the state of the art of Workplace Learning Analytics (WPLA), extracted from a two-iteration systematic literature review. Our in-depth analysis of 52 existing proposals not only provides a descriptive view of the field, but also reflects on researcher conceptions of learning and their influence on the design, analytics and technology choices made in this area. We also discuss the characteristics of workplace learning that make WPLA proposals different from LA in formal education contexts and the challenges resulting from this. We found that WPLA is gaining momentum, especially in some fields, like healthcare and education. The focus on theory is generally a positive feature in WPLA, but we encourage a stronger focus on assessing the impact of WPLA in realistic settings.
Wolf-Brenner Christof
2021
In his book Superintelligence, Nick Bostrom points to several ways the development of Artificial Intelligence (AI) might fail, turn out to be malignant or even induce an existential catastrophe. He describes ‘Perverse Instantiations’ (PI) as cases in which AI figures out how to satisfy some goal through unintended ways. For instance, AI could attempt to paralyze human facial muscles into constant smiles to achieve the goal of making humans smile. According to Bostrom, cases like this ought to be avoided since they include a violation of the human designers’ intentions. However, AI finding solutions that its designers have not yet thought of, and therefore could also not have intended, is arguably one of the main reasons why we are so eager to use it on a variety of problems. In this paper, I aim to show that the concept of PI is quite vague, mostly due to ambiguities surrounding the term ‘intention’. Ultimately, this text aims to serve as a starting point for a further discussion of the research topic, the development of a research agenda and future improvement of the terminology.
Fessl Angela, Maitz Katharina, Dennerlein Sebastian, Pammer-Schindler Viktoria
2021
Clear formulation and communication of learning goals is an acknowledged best practice in instruction at all levels. Typically, in curricula and course management systems, dedicated places for specifying learning goals at course-level exist. However, even in higher education, learning goals are typically formulated in a very heterogeneous manner. They are often not concrete enough to serve as guidance for students to master a lecture or to foster self-regulated learning. In this paper, we present a systematics for formulating learning goals for university courses, and a web-based widget that visualises these learning goals within a university's learning management system. The systematics is based on the revised version of Bloom's taxonomy of educational objectives by Anderson and Krathwohl. We evaluated both the learning goal systematics and the web-based widget in three lectures at our university. The participating lecturers perceived the systematics as easy to use and as helpful to structure their course and the learning content. Students' perceived benefits lay in getting a quick overview of the lecture and its content as well as clear information regarding the requirements for passing the exam. By analysing the widget's activity log data, we could show that the widget helps students to track their learning progress and supports them in planning and conducting their learning in a self-regulated way. This work highlights how theory-based best practice in teaching can be transferred into a digital learning environment; at the same time, it highlights that a good non-technical systematics for formulating learning goals positively impacts teaching and learning.
Basirat Mina, Geiger Bernhard, Roth Peter
2021
Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
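A minimal sketch of a binning-based mutual information estimate of the kind this line of work builds on (uniform binning is assumed; the function is illustrative, not the exact estimator used in the paper):

```python
import numpy as np

def binned_mutual_information(x, y, n_bins=30):
    """Estimate I(X;Y) in bits by uniformly binning both variables.

    x, y: 1-D arrays of samples (e.g., a hidden activation and the input).
    The estimate depends strongly on n_bins, which is one reason why
    binning-based information plane analyses can look inconsistent
    across studies.
    """
    x_idx = np.digitize(x, np.linspace(x.min(), x.max(), n_bins))
    y_idx = np.digitize(y, np.linspace(y.min(), y.max(), n_bins))
    joint, _, _ = np.histogram2d(x_idx, y_idx, bins=n_bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Note the geometric reading suggested above: with a fixed binning, the estimate mostly reflects how tightly the activations cluster, not a true information-theoretic quantity.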
Schweimer Christoph, Geiger Bernhard, Wang Meizhu, Gogolenko Sergiy, Mahmood Imran, Jahani Alireza, Suleimenova Diana, Groen Derek
2021
Automated construction of location graphs is instrumental but challenging, particularly in logistics optimisation problems and agent-based movement simulations. Hence, we propose an algorithm for automated construction of location graphs, in which vertices correspond to geographic locations of interest and edges to direct travelling routes between them. Our approach involves two steps. In the first step, we use a routing service to compute distances between all pairs of L locations, resulting in a complete graph. In the second step, we prune this graph by removing edges corresponding to indirect routes, identified using the triangle inequality. The computational complexity of this second step is O(L³), which enables the computation of location graphs for all towns and cities on the road network of an entire continent. To illustrate the utility of our algorithm in an application, we constructed location graphs for four regions of different size and road infrastructures and compared them to manually created ground truths. Our algorithm simultaneously achieved precision and recall values around 0.9 for a wide range of the single hyperparameter, suggesting that it is a valid approach to create large location graphs for which a manual creation is infeasible.
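The pruning step described above can be sketched in a few lines (a plain distance matrix and a tolerance `eps` standing in for the single hyperparameter are assumed; names are illustrative):

```python
import itertools

def prune_location_graph(dist, eps=0.1):
    """Prune a complete distance graph via the triangle inequality.

    dist: symmetric matrix (list of lists) with dist[i][j] = route distance.
    An edge (i, j) is dropped when some detour via k is at most (1 + eps)
    times the direct route, i.e., the direct edge is 'indirect'.
    Runs in O(L^3) over L locations, as stated in the paper.
    """
    L = len(dist)
    edges = set()
    for i, j in itertools.combinations(range(L), 2):
        direct = dist[i][j]
        # keep the edge only if every detour is clearly longer
        if all(dist[i][k] + dist[k][j] > (1 + eps) * direct
               for k in range(L) if k not in (i, j)):
            edges.add((i, j))
    return edges
```

For three collinear locations at distances 0, 1, 2 along a road, the long edge (0, 2) is pruned because the route via the middle location is no longer than the direct one.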
Geiger Bernhard, Al-Bashabsheh Ali
2021
We derive two sufficient conditions for a function of a Markov random field (MRF) on a given graph to be an MRF on the same graph. The first condition is information-theoretic and parallels a recent information-theoretic characterization of lumpability of Markov chains. The second condition, which is easier to check, is based on the potential functions of the corresponding Gibbs field. We illustrate our sufficient conditions with several examples and discuss implications for practical applications of MRFs. As a side result, we give a partial characterization of functions of MRFs that are information preserving.
Kowald Dominik, Müllner Peter, Zangerle Eva, Bauer Christine, Schedl Markus, Lex Elisabeth
2021
Support the Underground: Characteristics of Beyond-Mainstream Music Listeners. EPJ Data Science
Schedl Markus, Bauer Christine, Reisinger Wolfgang, Kowald Dominik, Lex Elisabeth
2021
Listener Modeling and Context-Aware Music Recommendation Based on Country Archetypes
Müllner Peter, Kowald Dominik, Lex Elisabeth
2021
In this paper, we explore the reproducibility of MetaMF, a meta matrix factorization framework introduced by Lin et al. MetaMF employs meta learning for federated rating prediction to preserve users' privacy. We reproduce the experiments of Lin et al. on five datasets, i.e., Douban, Hetrec-MovieLens, MovieLens 1M, Ciao, and Jester. Also, we study the impact of meta learning on the accuracy of MetaMF's recommendations. Furthermore, in our work, we acknowledge that users may have different tolerances for revealing information about themselves. Hence, in a second strand of experiments, we investigate the robustness of MetaMF against strict privacy constraints. Our study illustrates that we can reproduce most of Lin et al.'s results. Plus, we provide strong evidence that meta learning is essential for MetaMF's robustness against strict privacy constraints.
Kefalas Achilles, Ofner Andreas Benjamin, Pirker Gerhard, Posch Stefan, Geiger Bernhard, Wimmer Andreas
2021
The phenomenon of knock is an abnormal combustion occurring in spark-ignition (SI) engines and forms a barrier that prevents an increase in thermal efficiency while simultaneously reducing CO2 emissions. Since knocking combustion is highly stochastic, a cyclic analysis of in-cylinder pressure is necessary. In this study we propose an approach for efficient and robust detection and identification of knocking combustion in three different internal combustion engines. The proposed methodology includes a signal processing technique, called continuous wavelet transformation (CWT), which provides a simultaneous analysis of the in-cylinder pressure traces in the time and frequency domains through its coefficients. These coefficients serve as input for a convolutional neural network (CNN), which extracts distinctive features and performs an image recognition task in order to distinguish between non-knock and knock. The results revealed the following: (i) the CWT delivered a stable and effective feature space, with coefficients that represent the unique time-frequency pattern of each individual in-cylinder pressure cycle; (ii) the proposed approach was superior to the state-of-the-art threshold value exceeded (TVE) method with a maximum amplitude pressure oscillation (MAPO) criterion, improving the overall accuracy by 6.15 percentage points (up to 92.62%); and (iii) the CWT + CNN method does not require calibrating threshold values for different engines or operating conditions as long as sufficient and diverse data are used to train the neural network.
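The CWT feature extraction described above can be sketched with a complex Morlet wavelet (the specific wavelet, scales and normalization used in the study are assumptions here; the resulting scalogram is the kind of time-frequency image fed to a CNN):

```python
import numpy as np

def morlet_scalogram(signal, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform magnitude (scalogram) of a 1-D signal.

    Each row is the magnitude of the convolution with a complex Morlet
    wavelet at one scale, so the output is a 2-D time-frequency image.
    """
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        x = t / s
        # complex Morlet wavelet, L2-normalized per scale
        wavelet = np.pi**-0.25 * np.exp(1j * w0 * x - x**2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out
```

A pure sine concentrates its energy in the row whose scale matches the oscillation frequency, which is why such images separate knocking pressure oscillations from normal cycles.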
Krusic Lucija, Schuppler Barbara, Hagmüller Martin, Kopeinik Simone
2021
Due to recent advances in digitalisation and the emergence of new technologies, the STEM job market is growing further. This leads to higher salaries and lower unemployment rates. Despite these advantages, a pressing economic need for qualified STEM personnel and many initiatives for increasing interest in STEM subjects, Austrian technical universities have consistently had issues with recruiting engineering students. Particularly women remain strongly underrepresented in STEM careers. Possible causes of this gender gap can be found in the effects of stereotype threat and the influence of role models, as stereotypical representations affect young people in various phases of their personal and professional development. As part of the project proposal “Gender differences in career choices: Does the language matter?“, we investigated gender biases that potential students of Austrian STEM universities might face, and conducted two pilot studies: i) an analysis of EFL textbooks used in Austrian high schools, and ii) an analysis of viewbooks used as promotional material for Austrian universities. EFL (English as a foreign language) textbooks are often used in teaching. We consider them particularly relevant, since each of these books includes a dedicated section on careers. In the course of the first pilot study, we conducted a content analysis of eight textbooks for gender biases of personas in the context of careers and jobs. While results point to a nearly equal distribution of male and female characters (we found 9% more male characters), they were not equally distributed among the different careers. Female personas were commonly associated with traditionally female careers (“stay-at-home mom”, “housewife”), which can be classified as indoor and domestic, while male personas tended to be associated with more prestigious, outdoor occupations (“doctor”, “police officer”). STEM occupations were predominantly (80%) associated with the male gender.
Thus, the analysis of the Austrian EFL textbooks clearly points to the existence of gender stereotyping and gender bias in the relationship of gender and career choice. In the second pilot study, we analysed the symbolic portrayal of gender diversity in 52 Austrian university viewbooks, one for each bachelor programme at five universities covering fields such as STEM, economy and law. As part of the analysis, we compared the representations of male and female students and professors with the actual student and faculty body. Results show a rather equal numeric gender representation in the non-technical university viewbooks, but not in those of the technical universities analysed. The comparison to the real-life students’ gender distribution revealed instances of underrepresentation of the male student body and overrepresentation of the female student body in technical university viewbooks (e.g., 15.4% underrepresentation of male students and 15.3% overrepresentation of female students in TUGraz viewbooks). We consider this a positive finding, as we believe that a diverse and gender-neutral representation of people in educational and career information materials has the potential to encourage a desired change in prospective students’ perception of STEM subjects and engineering sciences.
Kaiser Rene
2021
Request for quotation (RFQ) is a process that typically requires a company to inspect specification documents shared by a potential customer. In order to create an offer, requirements need to be extracted from the specifications. In a collaborative research project, we investigate methods to support the document-centric knowledge work offer engineers conduct when processing RFQs, and started to develop a software tool including artificial/assistive intelligence features, several of which are based on natural language processing (NLP). Based on our concrete application case, we have identified three aspects towards which intelligent, adaptive user interfaces may contribute: adaptation to specific workflow approaches, adaptation to user-specific annotation behaviour with respect to the automatic provision of suggestions, and support for the user to maintain concentration while conducting an everyday routine task. In a preliminary, conceptual research phase, we seek to discuss these ideas and develop them further.
Lovric Mario, Kern Roman, Fadljevic Leon, Gerdenitsch Johann, Steck Thomas, Peche Ernst
2021
In industrial electro galvanizing lines, the performance of the dimensionally stable anodes (Ti + IrOx) is a crucial factor for product quality. Ageing of the anodes causes a worsened zinc coating distribution on the steel strip and a significant increase in production costs due to a higher resistivity of the anodes. Up to now, the end of the anode lifetime has been detected by visual inspection every several weeks. The voltage of the rectifiers increases much earlier, indicating the deterioration of anode performance. Therefore, monitoring rectifier voltage has the potential for a premature determination of the end of anode lifetime. Anode condition is only one of many parameters affecting the rectifier voltage. In this work, we employed machine learning to predict expected baseline rectifier voltages for a variety of steel strips and operating conditions at an industrial electro galvanizing line. In the plating section, the strip passes twelve “Gravitel” cells and zinc from the electrolyte is deposited on the surface at high current densities. Data, collected on one exemplary rectifier unit equipped with two anodes, have been studied for a period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables describing electrolyte, current and steel strip characteristics. For predictive modelling, we used Random Forest regression. Training was conducted on intervals after the plating cell was equipped with new anodes. Our results show a Normalized Root Mean Square Error of Prediction (NRMSEP) of 1.4% for baseline rectifier voltage during good anode condition. When anode condition was estimated as bad (by manual inspection), we observe a large distinctive deviation with regard to the predicted baseline voltage. The gained information about the observed deviation can be used for early detection and classification of anode ageing to recognize the onset of damage and reduce total operation cost.
Kraus Pavel, Bornemann Manfred, Alwert Kay, Matern Andreas, Reimer Ulrich, Kaiser Rene
2020
Until 2007, knowledge management (KM) had no commonly shared conceptual and definitional foundation. Especially in economically difficult times, KM as a discipline must ensure its own clarity and stringency, since a fragmentation into different schools of thought weakens KM communication, application, and further development. The DACH KM glossary appears in a new form, as a pragmatic synthesis of the glossary from the 2007 Praxishandbuch of the WM-Forum Graz and the 2009 DACH-WM-Glossar, supplemented by additional sources.
Velimsky Jan, Schweimer Christoph, Tran Thi Ngoc Han, Gfrerer Christine
2020
In this paper, we investigate the information sharing patterns via Twitter for the social media networks of two ideologically divergent political parties, the Freedom Party (FPOE) and the NEOS, in the lead-up to and during the 2019 Austrian National Council Elections, and ask: 1) To what extent do the associated networks differ in their structure? 2) Which determinants affect the spreading behaviour of messages in the two networks, and which factors explain these differences? 3) What type of political news and information did verified users (e.g., news media or politicians) share ahead of the vote, and what role do these users play in the dissemination of messages in the respective networks? Analysing approximately 200,000 tweets, the study relies on qualitative and quantitative text analysis including sentiment analysis, on supervised classification of relevant attributes for the message spread combined with neural network models retrieving the retweet probabilities for source tweets, and on network analysis. In addition to notable differences between the two parties in network structure and Twitter usage, we find that verified users, as well as URLs, other media elements (videos or photos) and hashtags play an important role in the spreading of messages. We also reveal that negative sentiments have a higher retweetability compared to other sentiments. Interestingly, gender seems to matter in the network related to the FPOE, where male users get more retweets than female users.
Geiger Bernhard, Kubin Gernot
2020
Guest editorial for a special issue.
Gursch Heimo, Schlager Elke, Feichtinger Gerald, Brandl Daniel
2020
The comfort humans perceive in rooms depends on many influencing factors and is currently only poorly recorded and maintained. This is due to circumstances like the subjective nature of perceived comfort and a lack of sensors or data processing infrastructure. The project COMFORT (Comfort Orientated and Management Focused Operation of Room condiTions) researches the modelling of the perceived thermal comfort of humans in office rooms. This begins with extensive and long-term measurements taken in a laboratory test chamber and in real-world office rooms. Data is collected from the installed building services engineering systems, from high-accuracy reference measurement equipment, and from weather services describing the outside conditions. All data is stored in a specially developed central Data Management System (DMS), creating the basis for all research and studies in project COMFORT. The collected data is the key enabler for the creation of soft sensors describing comfort-relevant indices like predicted mean vote (PMV), predicted percentage of dissatisfied (PPD) and operative temperature (OT). Two different approaches are pursued that complement and extend each other in the realisation of soft sensors. Firstly, a purely data-driven modelling approach generates models for soft sensors by learning the relations between explanatory and target variables in the collected data. Secondly, simulation-based soft sensors are derived from Building Energy Simulation (BES) and Computational Fluid Dynamic (CFD) simulations. The first result of the data-driven analysis is a solar Radiation Modelling (RM) component, capable of splitting global radiation into its direct horizontal and diffuse components. This is needed since only global radiation data is available for the investigated locations, but the global radiation needs to be divided into direct and diffuse radiation due to the huge differences in their thermal impact on buildings.
The current BES and CFD simulations provide soft sensors for comfort-relevant indices as their results, which will be complemented by data-driven soft sensors in the remainder of the project.
Dumouchel Suzane, Blotiere Emilie, Breitfuß Gert, Chen Yin, Di Donato Francesca, Eskevich Maria, Forbes Paula, Georgiadis Haris, Gingold Arnaud, Gorgaini Elisa, Morainville Yoann, de Paoli Stefano, Petitfils Clara, Pohle Stefanie, Toth-Czifra Erzebeth
2020
Social sciences and humanities (SSH) research is divided across a wide array of disciplines, sub-disciplines and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. The TRIPLE project addresses these issues by developing an innovative discovery platform for SSH data, researchers’ projects and profiles. Having started in October 2019, the project has already produced three main achievements that are presented in this paper: 1) the definition of the main features of the GOTRIPLE platform; 2) its interoperability; 3) its multilingual, multicultural and interdisciplinary vocation. These results have been achieved thanks to different methodologies such as a co-design process, market analysis and benchmarking, monitoring and co-building. These preliminary results highlight the need to respect the diversity of practices and communities through coordination and harmonisation.
Ciura Krzesimir, Fedorowicz Joanna, Zuvela Petar, Lovric Mario, Kapica Hanna, Baranowski Pawel, Sawicki Wieslaw, Wong Ming Wah, Sączewski Jaroslaw
2020
Currently, rapid evaluation of the physicochemical parameters of drug candidates, such as lipophilicity, is in high demand owing to its ability to approximate the processes of absorption, distribution, metabolism, and elimination. Although the lipophilicity of drug candidates is determined using the shake-flask method (n-octanol/water system) or reversed phase liquid chromatography (RP-LC), more biosimilar alternatives to classical lipophilicity measurement are currently available. One of the alternatives is immobilized artificial membrane (IAM) chromatography. The present study is a continuation of our research focused on the physicochemical characterization of biologically active derivatives of isoxazolo[3,4-b]pyridine-3(1H)-ones. The main goal of this study was to assess the affinity of isoxazolones to phospholipids using IAM chromatography and compare it with the lipophilicity parameters established by reversed phase chromatography. Quantitative structure–retention relationship (QSRR) modeling of IAM retention using differential evolution coupled with partial least squares (DE-PLS) regression was performed. The results indicate that in the studied group of structurally related isoxazolone derivatives, discrepancies occur between the retention under IAM and RP-LC conditions. Although some correlation between these two chromatographic methods can be found, lipophilicity does not fully explain the affinities of the investigated molecules to phospholipids. QSRR analysis also shows common factors that contribute to retention under IAM and RP-LC conditions. In this context, the significant influences of WHIM and GETAWAY descriptors in all the obtained models should be highlighted.
Lovric Mario, Meister Richard, Steck Thomas, Fadljevic Leon, Gerdenitsch Johann, Schuster Stefan, Schiefermüller Lukas, Lindstaedt Stefanie , Kern Roman
2020
In industrial electro galvanizing lines, aged anodes deteriorate the zinc coating distribution over the strip width, leading to an increase in electricity and zinc cost. We introduce a data-driven approach to the predictive maintenance of anodes to replace the cost- and labor-intensive manual inspection, which is still common for this task. The approach is based on parasitic resistance as an indicator of anode condition, which might be aged or mis-installed. The parasitic resistance is indirectly observable via the voltage difference between the measured and baseline (theoretical) voltage for a healthy anode. Here we calculate the baseline voltage by means of two approaches: (1) a physical model based on electrical and electrochemical laws, and (2) advanced machine learning techniques including boosting and bagging regression. The data was collected on one exemplary rectifier unit equipped with two anodes being studied for a total period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables used in the models, observing electrical current, electrolyte, and steel strip characteristics. For predictive modelling, we used Random Forest, Partial Least Squares and AdaBoost regression. The model training was conducted on intervals where the anodes were in good condition and validated on other segments, which served as a proof of concept that bad anode conditions can be identified using the parasitic resistance predicted by our models. Our results show an RMSE of 0.24 V for baseline rectifier voltage with a mean ± standard deviation of 11.32 ± 2.53 V for the best model on the validation set. The best-performing model is a hybrid version of a Random Forest which incorporates meta-variables computed from the physical model. We found that a large predicted parasitic resistance coincides well with the results of the manual inspection.
The results of this work will be implemented in online monitoring of anode conditions to reduce operational cost at a production site.
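The baseline-voltage residual idea above can be sketched on synthetic data (scikit-learn's RandomForestRegressor is assumed; the feature names, coefficients and 3-sigma deviation threshold are illustrative, not the ones used in the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in for process data: current density, electrolyte
# temperature and strip speed drive the rectifier voltage.
X = rng.uniform([5.0, 50.0, 1.0], [15.0, 70.0, 3.0], size=(2000, 3))
voltage = (0.8 * X[:, 0] - 0.05 * X[:, 1] + 1.5 * X[:, 2]
           + rng.normal(0, 0.1, 2000))

# Train the baseline model on 'healthy anode' intervals only.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], voltage[:1500])

# At inference time, a large positive residual (measured minus predicted
# baseline voltage) signals extra parasitic resistance, i.e., anode ageing.
residual = voltage[1500:] - model.predict(X[1500:])
degraded = residual > 3.0 * residual.std()
```

The design choice mirrors the papers: train only on known-good intervals so that deviations from the learned baseline, rather than absolute voltage, flag anode degradation.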
Obermeier Melanie Maria, Wicaksono Wisnu Adi, Taffner Julian, Bergna Alessandro, Poehlein Anja, Cernava Tomislav, Lindstaedt Stefanie, Lovric Mario, Müller Bogota Christina Andrea, Berg Gabriele
2020
The expanding antibiotic resistance crisis calls for a more in-depth understanding of the importance of antimicrobial resistance genes (ARGs) in pristine environments. We therefore studied the microbiome associated with Sphagnum moss forming the main vegetation in undomesticated, evolutionarily old bog ecosystems. In our complementary analysis of culture collections, metagenomic data and a fosmid library from different geographic sites in Europe, we identified a low-abundance but highly diverse pool of resistance determinants, which targets an unexpectedly broad range of 29 antibiotics including natural and synthetic compounds. This derives both from the extraordinarily high abundance of efflux pumps (up to 96%) and from the unexpectedly versatile set of ARGs underlying all major resistance mechanisms. Multi-resistance was frequently observed among bacterial isolates, e.g. in Serratia, Rouxiella, Pandoraea, Paraburkholderia and Pseudomonas. In a search for novel ARGs, we identified the new class A β-lactamase Mm3. The native Sphagnum resistome, comprising a highly diversified and partially novel set of ARGs, contributes to the bog ecosystem's plasticity. Our results reinforce the ecological link between natural and clinically relevant resistomes and thereby shed light on this link from the aspect of pristine plants. Moreover, they underline that diverse resistomes are an intrinsic characteristic of plant-associated microbial communities, as they naturally harbour many resistances including genes with potential clinical relevance.
Rauter Romana, Lerch Anita, Lederer-Hutsteiner Thomas, Klinger Sabine, Mayr Andrea, Gutounig Robert, Pammer-Schindler Viktoria
2020
Barreiros Carla, Silva Nelson, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2020
Kern Roman, Al-Ubaidi Tarek, Sabol Vedran, Krebs Sarah, Khodachenko Maxim, Scherf Manuel
2020
Scientific progress in the area of machine learning, in particular advances in deep learning, has led to an increase in interest in eScience and related fields. While such methods achieve great results, an in-depth understanding of these new technologies and concepts is still often lacking, and domain knowledge and subject matter expertise play an important role. With regard to space science, there is a vast variety of application areas, in particular concerning the analysis of observational data. This chapter aims at introducing a number of promising approaches to analyze time series data via query by example, i.e., any signal can be provided to the system, which then responds with a ranked list of datasets containing similar signals. Building on top of this ability, the system can then be trained using annotations provided by expert users, with the goal of detecting similar features and hence providing a semiautomated analysis and classification. A prototype built to work on MESSENGER data, based on existing background implementations by the Know-Center in cooperation with the Space Research Institute in Graz, is presented. Further, several representations of time series data that have proven to be required for analysis tasks, as well as techniques for preprocessing, frequent pattern mining, outlier detection, and classification of segmented and unsegmented data, are discussed. Screenshots of the developed prototype, detailing various techniques for the presentation of signals, complete the discussion.
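The query-by-example idea can be sketched as a z-normalized sliding-window search (a simple Euclidean matcher is assumed here; the prototype's actual similarity measure is not specified in the text):

```python
import numpy as np

def znorm(x):
    """Z-normalize so matching is invariant to offset and amplitude."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def query_by_example(series, query, top_k=3):
    """Rank all sliding windows of `series` by distance to `query`.

    Returns (start_index, distance) pairs for the top_k closest windows,
    i.e., the places where a signal similar to the example occurs.
    """
    m = len(query)
    q = znorm(np.asarray(query, dtype=float))
    dists = [(i, float(np.linalg.norm(znorm(series[i:i + m]) - q)))
             for i in range(len(series) - m + 1)]
    return sorted(dists, key=lambda p: p[1])[:top_k]
```

Embedding a scaled copy of the query signal inside a noisy series and searching for it recovers the planted position as the top hit, which is the behaviour the chapter's expert-annotation workflow builds on.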
Dennerlein Sebastian, Wolf-Brenner Christof, Gutounig Robert, Schweiger Stefan, Pammer-Schindler Viktoria
2020
Artificial intelligence (AI) has become a subject of public debate. AI-based advice supports us at school, in everyday shopping, in holiday planning and in media consumption, but it also deliberately manipulates our decisions or distorts our perception of reality through filter-bubble phenomena. One of the most recent controversies in Austria concerned the use of modern algorithmics by the Austrian Public Employment Service (AMS). The so-called "AMS algorithm" is intended to support counsellors in deciding on labour-market support measures. When AI intervenes in human action to such a considerable extent, it requires careful assessment with regard to ethical principles. This is necessary to avoid unethical consequences. It is usually demanded that AI and algorithms should be fair, meaning that they should not discriminate, and transparent, meaning that they should provide insight into how they work.
Fessl Angela, Pammer-Schindler Viktoria, Pata Kai, Mõttus Mati, Janus Jörgen, Ley Tobias
2020
This paper presents cooperative design as a method to address the needs of SMEs to gain sufficient knowledge about new technologies in order for them to decide about adoption for knowledge management. We developed and refined a cooperative design method iteratively over nine use cases. In each use case, the goal was to match the SME’s knowledge management needs with offerings of new (to the SMEs) technologies. Where traditionally, innovation adoption and diffusion literature assumes new knowledge to be transferred from knowledgeable stakeholders to less knowledgeable stakeholders, our method is built on cooperative design. In this, the relevant knowledge is constructed by the SMEs who wish to decide upon the adoption of novel technologies through the cooperative design process. The presented method consists of an analysis stage based on activity theory and a design stage based on paper prototyping and design workshops. In all nine cases, our method led to a good understanding a) of the domain by researchers – validated by the creation of meaningful first-version paper prototypes – and b) of new technologies – validated by meaningful input to design and plausible assessment of the technologies’ benefit for the respective SME. Practitioners and researchers alike are invited to use the tools documented here to cooperatively match the domain needs of practitioners with the offerings of new technologies. The value of our work lies in providing a concrete implementation of the cooperative design paradigm that is based on an established theory (activity theory) for work analysis and established tools of cooperative design (paper prototypes and design workshops as media of communication); and a discussion based on nine heterogeneous use cases.
Geiger Bernhard, Fischer Ian
2020
In this short note, we relate the variational bounds proposed in Alemi et al. (2017) and Fischer (2020) for the information bottleneck (IB) and the conditional entropy bottleneck (CEB) functional, respectively. Although the two functionals were shown to be equivalent, it was empirically observed that optimizing bounds on the CEB functional achieves better generalization performance and adversarial robustness than optimizing those on the IB functional. This work tries to shed light on this issue by showing that, in the most general setting, no ordering can be established between these variational bounds, while such an ordering can be enforced by restricting the feasible sets over which the optimizations take place. The absence of such an ordering in the general setup suggests that the variational bound on the CEB functional is either more amenable to optimization or a relevant cost function for optimization in its own right, i.e., without justification from the IB or CEB functionals.
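For orientation, and up to the parameterization conventions of the cited works, the two functionals and the equivalence referred to above can be written as follows; here $Z$ is the learned representation and $Z - X - Y$ is assumed to form a Markov chain:

```latex
\mathcal{L}_{\mathrm{IB}} = I(X;Z) - \beta\, I(Y;Z),
\qquad
\mathcal{L}_{\mathrm{CEB}} = I(X;Z \mid Y) - \gamma\, I(Y;Z).
```

Since the Markov chain implies $I(X;Z \mid Y) = I(X;Z) - I(Y;Z)$, the CEB functional with parameter $\gamma$ equals the IB functional with $\beta = 1 + \gamma$; it is the variational bounds on these equivalent functionals that the note compares.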
Tschinkel Gerwald
2020
One classic issue associated with being a researcher nowadays is the multitude and magnitude of search results for a given topic. Recommender systems can help to address this problem by directing users to the resources most relevant to their specific research focus. However, sets of automatically generated recommendations are likely to contain irrelevant resources, making user interfaces that provide effective filtering mechanisms necessary. This problem is exacerbated when users resume a previously interrupted research task, or when different users attempt to tackle one extensive list of results, as confusion about which resources should be consulted can be overwhelming. The presented recommendation dashboard uses micro-visualisations to display the state of multiple filters in a data-type-specific manner. This paper describes the design and geometry of micro-visualisations and presents results from an evaluation of their readability and memorability in the context of exploring recommendation results. Based on that, this paper also proposes applying micro-visualisations to extend the use of a desktop-based dashboard to the needs of small-screen, mobile multi-touch devices, such as smartphones. A small-scale heuristic evaluation was conducted using a first prototype implementation.
Žuvela Petar, Lovric Mario, Yousefian-Jazi Ali, Liu J. Jay
2020
Numerous industrial applications of machine learning feature critical issues that need to be addressed. This work proposes a framework to deal with these issues, such as competing objectives and class imbalance, in designing a machine vision system for the in-line detection of surface defects on glass substrates of thin-film transistor liquid crystal displays (TFT-LCDs). The developed inspection system comprises (i) feature engineering: extraction of only the defect-relevant features from images using two-dimensional wavelet decomposition and (ii) training ensemble classifiers (proof of concept with a C5.0 ensemble, random forests (RF), and adaptive boosting (AdaBoost)). The focus is on cost sensitivity, increased generalization, and robustness to handle class imbalance and address multiple competing manufacturing objectives. Comprehensive performance evaluation was conducted in terms of accuracy, sensitivity, specificity, and the Matthews correlation coefficient (MCC) by calculating their 12,000 bootstrapped estimates. Results revealed significant differences (p < 0.05) between the three developed diagnostic algorithms. RF (accuracy of 83.37%, sensitivity of 60.62%, specificity of 89.72%, and MCC of 0.51) outperformed both AdaBoost (accuracy of 81.14%, sensitivity of 69.23%, specificity of 84.48%, and MCC of 0.50) and the C5.0 ensemble (accuracy of 78.35%, sensitivity of 65.35%, specificity of 82.03%, and MCC of 0.44) in all the metrics except sensitivity. AdaBoost exhibited stronger performance in detecting defective TFT-LCD glass substrates. These promising results demonstrated that the proposed ensemble approach is a viable alternative to manual inspections when applied to an industrial case study with issues such as competing objectives and class imbalance.
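To illustrate the feature-engineering step, one level of two-dimensional wavelet decomposition splits an image into an approximation subband and three detail subbands, whose energies can serve as defect-relevant features. The sketch below uses the Haar wavelet for simplicity; the paper does not state which wavelet basis was used:

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar decomposition: approximation (LL) and detail (LH, HL, HH) subbands."""
    img = np.asarray(img, dtype=float)
    lo_r = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise lowpass (pairwise averages)
    hi_r = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise highpass (pairwise differences)
    LL = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
    LH = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
    HL = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
    HH = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def detail_energies(img):
    """Defect-relevant features: energy in each detail subband (flat regions contribute nothing)."""
    _, LH, HL, HH = haar2d(img)
    return [float(np.sum(b ** 2)) for b in (LH, HL, HH)]
```

A defect-free, uniform substrate region yields near-zero detail energies, while scratches or particles raise the energy of the corresponding subband; these features would then be fed to the ensemble classifiers.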
Malev Olga, Lovric Mario, Stipaničev Draženka, Repec Siniša, Martinović-Weigelt Dalma, Zanella Davor, Đuretec Valnea Sindiči, Barišić Josip, Li Mei, Klobučar Göran
2020
Chemical analysis of plasma samples of wild fish from the Sava River (Croatia) revealed the presence of 90 different pharmaceuticals/illicit drugs and their metabolites (PhACs/IDrgs). The concentrations of these PhACs/IDrgs in plasma were 10 to 1,000 times higher than their concentrations in river water. Antibiotics, allergy/cold medications and analgesics were the categories with the highest plasma concentrations. Fifty PhACs/IDrgs were identified as chemicals of concern based on the fish plasma model (FPM) effect ratios (ER) and their potential to activate evolutionarily conserved biological targets. Chemicals of concern were also prioritized by calculating exposure-activity ratios (EARs), where plasma concentrations of chemicals were compared to their bioactivities in the comprehensive ToxCast suite of in vitro assays. Overall, the applied prioritization methods indicated stimulants (nicotine, cotinine) and allergy/cold medications (prednisolone, dexamethasone) as having the highest potential biological impact on fish. The FPM pointed to psychoactive substances (hallucinogens/stimulants and opioids) and psychotropic substances in the cannabinoids category (i.e. CBD and THC). EARs confirmed the above and singled out additional chemicals of concern: the anticholesteremic simvastatin and the antiepileptic haloperidol. The present study demonstrates how the combination of chemical analyses and bio-effects-based risk predictions with multiple criteria can help identify priority contaminants in freshwaters. The results reveal a widespread exposure of fish to complex mixtures of PhACs/IDrgs, which may target common molecular targets. While many of the prioritized chemicals occurred at low concentrations, their adverse effects on aquatic communities, due to continuous chronic exposure and additive effects, should not be neglected.
Duricic Tomislav, Hussain Hussain, Lacic Emanuel, Kowald Dominik, Lex Elisabeth, Helic Denis
2020
In this work, we study the utility of graph embeddings to generate latent user representations for trust-based collaborative filtering. In a cold-start setting, on three publicly available datasets, we evaluate approaches from four method families: (i) factorization-based, (ii) random walk-based, (iii) deep learning-based, and (iv) the Large-scale Information Network Embedding (LINE) approach. We find that across the four families, random walk-based approaches consistently achieve the best accuracy. Moreover, they result in highly novel and diverse recommendations. Furthermore, our results show that the use of graph embeddings in trust-based collaborative filtering significantly improves user coverage.
Havaš Auguštin Dubravka, Šarac Jelena, Lovric Mario, Živković Jelena, Malev Olga, Fuchs Nives, Novokmet Natalija, Turkalj Mirjana, Missoni Saša
2020
Maternal nutrition and lifestyle in pregnancy are important modifiable factors for both maternal and offspring health. Although the Mediterranean diet has beneficial effects on health, recent studies have shown low adherence in Europe. This study aimed to assess Mediterranean diet adherence in 266 pregnant women from Dalmatia, Croatia, and to investigate their lifestyle habits and regional differences. Adherence to the Mediterranean diet was assessed through two Mediterranean diet scores. Differences in maternal characteristics (diet, education, income, parity, smoking, pre-pregnancy body mass index (BMI), physical activity, contraception) with regard to location and dietary habits were analyzed using the non-parametric Mann–Whitney U test. A machine learning approach was used to reveal other potential non-linear relationships. The results showed that adherence to the Mediterranean diet was low to moderate among the pregnant women in this study, with no significant mainland–island differences. The highest adherence was observed among wealthier women with generally healthier lifestyle choices. The most significant mainland–island differences were observed for lifestyle and socioeconomic factors (income, education, physical activity). The machine learning approach confirmed the findings of the conventional statistical method. We conclude that adverse socioeconomic and lifestyle conditions were more pronounced in the island population, which, together with the observed non-Mediterranean dietary pattern, calls for more effective intervention strategies.
Reiter-Haas Markus, Wittenbrink David, Lacic Emanuel
2020
Finding the right job is a difficult task for anyone, as it usually depends on many factors like salary, job description, or geographical location. Students with almost no prior experience, especially, have a hard time on the job market, which is very competitive in nature. Additionally, students often suffer from a lack of orientation, as they do not know what kind of job is suitable for their education. At Talto, we realized this and have built a platform to help Austrian university students with finding their career paths as well as providing them with content that is relevant to their career possibilities. This is mainly achieved by guiding the students toward different types of entities that are related to their career, i.e., job postings, company profiles, and career-related articles. In this talk, we share our experiences with solving the recommendation problem for university students. One trait of the student-focused job domain is that the behaviour of students differs depending on their study progression. At the beginning of their studies, they need study-specific career information and part-time jobs to earn additional money. When they are nearing graduation, they instead require information about their potential future employers and entry-level full-time jobs. Moreover, we can observe seasonal patterns in user activity, in addition to the need to handle both logged-in and anonymous session users at the same time. To cope with the requirements of the job domain, we built hybrid models based on a microservice architecture that utilizes popular algorithms from the literature such as Collaborative Filtering, Content-based Filtering, and various neural embedding approaches (e.g., Doc2Vec, Autoencoders, etc.). We further adapted our architecture to calculate relevant recommendations in real-time (i.e., after a recommendation is requested), as individual user sessions in Talto are usually short-lived and context-dependent.
We also found that the online performance of the utilized approach depends on the location context [1]. Hence, the current location of a user on the mobile or web application impacts the expected recommendations. One optimization criterion on the Talto career platform is to provide relevant cross-entity recommendations as well as to explain why those were shown. Recently, we started to tackle this by learning embeddings of entities that lie in the same embedding space [2]. Specifically, we pre-train word embeddings and link different entities by shared concepts, which we use for training the network embeddings. This embeds both the concepts and the entities into a common vector space, where the common vector space is a result of considering the textual content as well as the network information (i.e., links to concepts). This way, different entity types (e.g., job postings, company profiles, and articles) are directly comparable and are suited for a real-time recommendation setting. Interestingly enough, with such an approach we also end up with individual words sharing the same embedding space. This, in turn, can be leveraged to enhance the textual search functionality of a platform, which is most commonly based just on a TF-IDF model. Furthermore, we found that such embeddings allow us to tackle the problem of explainability in an algorithm-agnostic way. Since the Talto platform utilizes various recommendation algorithms and continuously conducts A/B tests, an algorithm-agnostic explainability model is best suited to provide the students with meaningful explanations. As such, we will also go into the details of how we can adapt our explanation model to not rely on the utilized recommendation algorithm.
Lacic Emanuel, Reiter-Haas Markus, Kowald Dominik, Dareddy Manoj Reddy, Cho Junghoo, Lex Elisabeth
2020
In this work, we address the problem of providing job recommendations in an online session setting, in which we do not have full user histories. We propose a recommendation approach, which uses different autoencoder architectures to encode sessions from the job domain. The inferred latent session representations are then used in a k-nearest neighbor manner to recommend jobs within a session. We evaluate our approach on three datasets: (1) a proprietary dataset we gathered from the Austrian student job portal Studo Jobs, (2) a dataset released by XING after the RecSys 2017 Challenge and (3) anonymized job applications released by CareerBuilder in 2012. Our results show that autoencoders provide relevant job recommendations as well as maintain a high coverage and, at the same time, can outperform state-of-the-art session-based recommendation techniques in terms of system-based and session-based novelty.
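As a minimal sketch of the approach, the snippet below replaces the paper's (deep) autoencoder architectures with a linear autoencoder, i.e. a PCA projection, and then recommends jobs from the k nearest encoded sessions; all names are illustrative:

```python
import numpy as np

def encode_sessions(session_item_matrix, dim=2):
    """Linear autoencoder: project binary session-item vectors onto the top principal components."""
    x = np.asarray(session_item_matrix, dtype=float)
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    codes = (x - mean) @ vt[:dim].T
    return codes, vt[:dim], mean

def recommend(session_vec, codes, vt, mean, session_item_matrix, k=2):
    """Score unseen jobs by aggregating the k nearest encoded sessions (kNN in latent space)."""
    x = np.asarray(session_item_matrix, dtype=float)
    code = (np.asarray(session_vec, dtype=float) - mean) @ vt.T
    neighbours = np.argsort(np.linalg.norm(codes - code, axis=1))[:k]
    scores = x[neighbours].sum(axis=0)
    scores[np.asarray(session_vec) > 0] = -np.inf   # do not re-recommend jobs already seen
    return int(np.argmax(scores))
```

The paper's nonlinear autoencoders play the role of `encode_sessions` here; the kNN scoring over latent session representations is the same in spirit.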
Dennerlein Sebastian, Wolf-Brenner Christof, Gutounig Robert, Schweiger Stefan, Pammer-Schindler Viktoria
2020
In society and politics, there is a rising interest in considering ethical principles in technological innovation, especially at the intersection of education and technology. We propose a first iteration of a theory-derived framework to analyze ethical issues in technology-enhanced learning (TEL) software development. The framework understands ethical issues as expressions of the overall socio-technical system, rooted in the interactions of human actors with technology, so-called socio-technical interactions (STIs). For guiding ethical reflection, the framework helps to explicate this human involvement and to elicit discussions of ethical principles on these STIs. Prompts in the form of reflection questions can be inferred to reflect on the technology functionality from relevant human perspectives, and in relation to a list of fundamental ethical principles. We illustrate the framework and discuss its implications for TEL.
Sedrakyan Gayane, Dennerlein Sebastian, Pammer-Schindler Viktoria, Lindstaedt Stefanie
2020
Our earlier research attempts to close the gap between learning-behavior-analytics-based dashboard feedback and learning theories by grounding the idea of dashboard feedback in learning science concepts such as feedback, learning goals, and the (socio-/meta-)cognitive mechanisms underlying learning processes. This work extends the earlier research by proposing mechanisms for making those concepts and relationships measurable. The outcome is a complementary framework that allows identifying feedback needs and the timing for their provision in a generic context that can be applied to a certain subject in a given LMS. The research provides general guidelines for educators in designing educational dashboards, as well as a starting research platform in the direction of systematically matching learning sciences concepts with data and analytics concepts.
Klimashevskaia Anastasia, Geiger Bernhard, Hagmüller Martin, Helic Denis, Fischer Frank
2020
(extended abstract)
Hobisch Elisabeth, Scholger Martina, Fuchs Alexandra, Geiger Bernhard, Koncar Philipp, Saric Sanja
2020
(extended abstract)
Schrunner Stefan, Geiger Bernhard, Zernig Anja, Kern Roman
2020
Classification has been tackled by a large number of algorithms, predominantly following a supervised learning setting. Surprisingly little research has been devoted to the problem setting where a dataset is only partially labeled, including even instances of entirely unlabeled classes. Algorithmic solutions that are suited for such problems are especially important in practical scenarios, where the labelling of data is prohibitively expensive, or the understanding of the data is lacking, including cases where only a subset of the classes is known. We present a generative method to address the problem of semi-supervised classification with unknown classes, following a Bayesian perspective. In detail, we apply a two-step procedure based on Bayesian classifiers and exploit information from a small set of labeled data in combination with a larger set of unlabeled training data, allowing the labeled dataset to omit samples from some of the classes present. This represents a common practical application setup, where the labeled training set is not exhaustive. We show in a series of experiments that our approach outperforms state-of-the-art methods tackling similar semi-supervised learning problems. Since our approach yields a generative model, which aids the understanding of the data, it is particularly suited for practical applications.
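A heavily simplified illustration of the two-step idea (not the authors' actual model): step one fits a class-conditional Gaussian to each labeled class; step two flags unlabeled samples that are unlikely under every known class as candidates for an unknown class:

```python
import numpy as np

def fit_gaussians(x, y):
    """Step 1: class-conditional Gaussians (diagonal covariance) from the labeled data."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y)
    params = {}
    for c in sorted(set(y.tolist())):
        xc = x[y == c]
        params[c] = (xc.mean(axis=0), xc.var(axis=0) + 1e-6)  # variance floor for stability
    return params

def log_density(point, mean, var):
    """Log density of a diagonal Gaussian at a point."""
    return float(-0.5 * np.sum((point - mean) ** 2 / var + np.log(2 * np.pi * var)))

def flag_unknown(params, x_unlabeled, threshold):
    """Step 2: samples unlikely under all known classes are unknown-class candidates."""
    flags = []
    for p in np.asarray(x_unlabeled, dtype=float):
        best = max(log_density(p, m, v) for m, v in params.values())
        flags.append(best < threshold)
    return flags
```

The flagged samples could then seed a new class component and the model be refit, which loosely mirrors the exploitation of unlabeled data described in the abstract.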
Amjad Rana Ali, Geiger Bernhard
2020
In this theory paper, we investigate training deep neural networks (DNNs) for classification via minimizing the information bottleneck (IB) functional. We show that the resulting optimization problem suffers from two severe issues: First, for deterministic DNNs, either the IB functional is infinite for almost all values of network parameters, making the optimization problem ill-posed, or it is piecewise constant, hence not admitting gradient-based optimization methods. Second, the invariance of the IB functional under bijections prevents it from capturing properties of the learned representation that are desirable for classification, such as robustness and simplicity. We argue that these issues are partly resolved for stochastic DNNs, DNNs that include a (hard or soft) decision rule, or by replacing the IB functional with related, but better-behaved cost functions. We conclude that recent successes reported about training DNNs using the IB framework must be attributed to such solutions. As a side effect, our results indicate limitations of the IB framework for the analysis of DNNs. We also note that, rather than trying to repair the inherent problems of the IB functional, a better approach may be to design regularizers on the latent representation that enforce the desired properties directly.
Gogolenko Sergiy, Groen Derek, Suleimenova Diana, Mahmood Imran, Lawenda Marcin, Nieto De Santos Javier, Hanley John, Vukovic Milana, Kröll Mark, Geiger Bernhard, Elsaesser Robert, Hoppe Dennis
2020
Accurate digital twinning of the global challenges (GC) leads to computationally expensive coupled simulations. These simulations bring together not only different models, but also various sources of massive static and streaming data sets. In this paper, we explore ways to bridge the gap between traditional high performance computing (HPC) and data-centric computation in order to provide efficient technological solutions for accurate policy-making in the domain of GC. GC simulations in HPC environments give rise to a number of technical challenges related to coupling. Being intended to reflect current and upcoming situations for policy-making, GC simulations extensively use recent streaming data coming from external data sources, which requires changing traditional HPC systems operation. Another common challenge stems from the necessity to couple simulations and exchange data across data centers in GC scenarios. By introducing a generalized GC simulation workflow, this paper shows the commonality of the technical challenges for various GC and reflects on the approaches to tackle these technical challenges in the HiDALGO project.
Amjad Rana Ali, Bloechl Clemens, Geiger Bernhard
2020
We propose an information-theoretic Markov aggregation framework that is motivated by two objectives: 1) the Markov chain observed through the aggregation mapping should be Markov, and 2) the aggregated chain should retain the temporal dependence structure of the original chain. We analyze our parameterized cost function and show that it contains previous cost functions as special cases, which we critically assess. We furthermore propose a simple optimization heuristic for deterministic aggregations and characterize the optimization landscape for different parameter values.
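The first objective corresponds to the classical notion of strong lumpability: a partition yields a Markovian aggregated chain iff any two states in the same block have identical total transition probabilities into every block. This condition can be checked directly (illustrative sketch, not the paper's cost function):

```python
import numpy as np

def aggregate(P, partition):
    """For each original state, the total transition mass into each block of the partition."""
    P = np.asarray(P, dtype=float)
    return np.stack([P[:, block].sum(axis=1) for block in partition], axis=1)

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Strong lumpability: rows within a block must have identical aggregated transitions."""
    agg = aggregate(P, partition)
    return all(
        np.allclose(agg[block[0]], agg[i], atol=tol)
        for block in partition for i in block[1:]
    )
```

The information-theoretic cost functions studied in the paper can be read as soft relaxations of this hard condition, trading off Markovity of the aggregated chain against preservation of temporal dependence.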
Breitfuß Gert, Fruhwirth Michael, Wolf-Brenner Christof, Riedl Angelika, Ginthör Robert, Pimas Oliver
2020
In the future, every successful company must have a clear idea of what data means to it. The necessary transformation into a data-driven company places high demands on companies and challenges management, the organization and individual employees. In order to generate concrete added value from data, the collaboration of different disciplines, e.g. data scientists, domain experts and business people, is necessary. So far, few tools are available that facilitate the creativity and co-creation process among teams with different backgrounds. The goal of this paper is to design and develop a hands-on and easy-to-use card-based tool for the generation of data service ideas that supports the required interdisciplinary cooperation. Using a Design Science Research approach, we analysed 122 data service ideas and developed an innovation tool consisting of 38 cards. The first evaluation results show that the developed Data Service Cards are perceived as both helpful and easy to use.
Fruhwirth Michael, Breitfuß Gert, Pammer-Schindler Viktoria
2020
The availability of data sources and advances in analytics and artificial intelligence offer organizations the opportunity to develop new data-driven products, services and business models. However, this process is challenging for traditional organizations, as it requires knowledge and collaboration across several disciplines, such as data science, domain expertise, and the business perspective. Furthermore, it is challenging to craft a meaningful value proposition based on data, and existing research provides little guidance. To overcome those challenges, we conducted a Design Science Research project to derive requirements from the literature and a case study, develop a collaborative visual tool and evaluate it through several workshops with traditional organizations. This paper presents the Data Product Canvas, a tool connecting data sources with the user challenges and wishes through several intermediate steps. Thus, this paper contributes to the scientific body of knowledge on developing data-driven business models, products and services.
Koncar Philipp, Fuchs Alexandra, Hobisch Elisabeth, Geiger Bernhard, Scholger Martina, Helic Denis
2020
Spectator periodicals contributed to spreading the ideas of the Age of Enlightenment, a turning point in human history and the foundation of our modern societies. In this work, we study the spirit and atmosphere captured in spectator periodicals concerning important social issues of the 18th century by analyzing the text sentiment of those periodicals. Specifically, based on a manually annotated corpus of over 3,700 issues published in five different languages over a period of more than one hundred years, we conduct a three-fold sentiment analysis: First, we analyze the development of sentiment over time as well as the influence of topics and narrative forms on sentiment. Second, we construct sentiment networks to assess the polarity of perceptions between different entities, including periodicals, places and people. Third, we construct and analyze sentiment word networks to determine topological differences between words with positive and negative polarity, allowing us to draw conclusions on how sentiment was expressed in spectator periodicals. Our results depict a mildly positive tone in spectator periodicals, underlining the positive attitude towards important topics of the Age of Enlightenment, but also signaling stylistic devices to disguise critique in order to avoid censorship. We also observe strong regional variation in sentiment, indicating cultural and historic differences between countries. For example, while Italy perceived other European countries as positive role models, French periodicals were frequently more critical towards other European countries. Finally, our topological analysis depicts a weak overrepresentation of positive sentiment words, corroborating our findings about a generally mildly positive tone in spectator periodicals. We believe that our work, based on the combination of the sentiment analysis of spectator periodicals and the extensive knowledge available from literary studies, sheds interesting new light on these publications.
Furthermore, we demonstrate the inclusion of sentiment analysis as another useful method in the digital humanist’s distant reading toolbox.
Fruhwirth Michael, Ropposch Christiana, Pammer-Schindler Viktoria
2020
Purpose: This paper synthesizes existing research on tools and methods that support data-driven business model innovation and maps out relevant directions for future research. Design/methodology/approach: We have carried out a structured literature review and collected and analysed 33 publications, a modest number owing to the comparatively emergent nature of the field. Findings: Current literature on supporting data-driven business model innovation differs in the types of contribution (taxonomies, patterns, visual tools, methods, IT tools and processes), the types of thinking supported (divergent and convergent) and the elements of the business models that are addressed by the research (value creation, value capturing and value proposition). Research implications: Our review highlights the following as relevant directions for future research. Firstly, most research focusses on supporting divergent thinking, i.e. ideation. However, convergent thinking, i.e. evaluating, prioritizing, and deciding, is also necessary. Secondly, the complete procedure of developing data-driven business models, as well as the development of chains of tools related to this, has been under-investigated. Thirdly, scarcely any IT tools specifically support the development of data-driven business models. These avenues also highlight the necessity to integrate research on the specifics of data in business model innovation, on innovation management, information systems and business analytics. Originality/value: This paper is the first to synthesize the literature on how to identify and develop data-driven business models.
Dumouchel Suzanne, Blotiere Emilie, Barbot Laure, Breitfuß Gert, Chen Yin, Di Donato Francesca, Forbes Paula, Petifils Clara, Pohle Stefanie
2020
SSH research is divided across a wide array of disciplines, sub-disciplines, and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. Use and reuse of SSH research is suboptimal, and interdisciplinary collaboration possibilities are often missed, partially because of missing standards and referential keys between disciplines. Moreover, the reuse of data may paradoxically complicate relevant sorting and the establishment of trust. As a result, societal, economic and academic impacts are limited. Conceptually, there is a wealth of transdisciplinary collaborations, but in practice there is a need to help SSH researchers and research institutions to connect and support them, to prepare the research data for these overarching approaches and to make them findable and usable. The TRIPLE (Targeting Researchers through Innovative Practices and Linked Exploration) project is a practical answer to the above issues, as it aims at designing and developing the European discovery platform dedicated to SSH resources. Funded under the European Commission programme INFRAEOSC-02-2019 "Prototyping new innovative services", and thanks to a consortium of 18 partners, TRIPLE will develop a fully multilingual and multicultural solution for the discovery and reuse of SSH resources. The project started in October 2019 for a duration of 42 months, with European funding of 5.6 million €.
Dennerlein Sebastian, Tomberg Vladimir, Treasure-Jones Tamsin, Theiler Dieter, Lindstaedt Stefanie, Ley Tobias
2020
Purpose: Introducing technology at work presents a special challenge as learning is tightly integrated with workplace practices. Current design-based research (DBR) methods are focused on formal learning contexts and are often questioned for failing to yield traceable research insights. This paper aims to propose a method that extends DBR by understanding tools as sociocultural artefacts, co-designing affordances and systematically studying their adoption in practice. Design/methodology/approach: The iterative practice-centred method allows the co-design of cognitive tools in DBR, makes assumptions and design decisions traceable and builds convergent evidence by consistently analysing how affordances are appropriated. This is demonstrated in the context of health-care professionals' informal learning, and how they make sense of their experiences. The authors report an 18-month DBR case study of using various prototypes and testing the designs with practitioners through various data collection means. Findings: By considering the cognitive level in the analysis of appropriation, the authors came to an understanding of how professionals cope with pressure in the health-care domain (domain insight); a prototype with concrete design decisions (design insight); and an understanding of how memory and sensemaking processes interact when cognitive tools are used to elaborate representations of informal learning needs (theory insight). Research limitations/implications: The method is validated in one long-term and in-depth case study. While this was necessary to gain an understanding of stakeholder concerns, build trust and apply methods over several iterations, it also potentially limits the generality of the findings. Originality/value: Besides generating traceable research insights, the proposed DBR method allows designing technology-enhanced learning support for working domains and practices. The method is applicable in other domains and in formal learning.
Kowald Dominik, Lex Elisabeth, Schedl Markus
2020
In this paper, we introduce a psychology-inspired approach to model and predict the music genre preferences of different groups of users by utilizing human memory processes. These processes describe how humans access information units in their memory by considering the factors of (i) past usage frequency, (ii) past usage recency, and (iii) the current context. Using a publicly available dataset of more than a billion music listening records shared on the music streaming platform Last.fm, we find that our approach provides significantly better prediction accuracy results than various baseline algorithms for all evaluated user groups, i.e., (i) low-mainstream music listeners, (ii) medium-mainstream music listeners, and (iii) high-mainstream music listeners. Furthermore, our approach is based on a simple psychological model, which contributes to the transparency and explainability of the calculated predictions.
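The memory model behind this approach is not spelled out in the abstract, but activation equations of this kind are typically variants of ACT-R's base-level learning (BLL). A minimal sketch, assuming a standard power-law decay with exponent d and hypothetical listening timestamps:

```python
import math

def bll_score(timestamps, now, d=0.5):
    """Base-level activation: past usage frequency (number of listens)
    and recency (power-law decay with exponent d) jointly determine
    how accessible a music genre is in memory."""
    return math.log(sum((now - t) ** -d for t in timestamps))

# A genre listened to often and recently scores higher than one
# listened to rarely and long ago.
recent = bll_score([90, 95, 99], now=100)  # three recent listens
stale = bll_score([1, 5], now=100)         # two old listens
assert recent > stale
```

Genres would then be ranked per user by this score; the context factor (iii) would require an additional spreading-activation term not shown here.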
Kowald Dominik, Schedl Markus, Lex Elisabeth
2020
Research has shown that recommender systems are typically biased towards popular items, which leads to less popular items being underrepresented in recommendations. The recent work of Abdollahpouri et al. in the context of movie recommendations has shown that this popularity bias leads to unfair treatment of both long-tail items as well as users with little interest in popular items. In this paper, we reproduce the analyses of Abdollahpouri et al. in the context of music recommendation. Specifically, we investigate three user groups from the Last.fm music platform that are categorized based on how much their listening preferences deviate from the most popular music among all Last.fm users in the dataset: (i) low-mainstream users, (ii) medium-mainstream users, and (iii) high-mainstream users. In line with Abdollahpouri et al., we find that state-of-the-art recommendation algorithms favor popular items also in the music domain. However, their proposed Group Average Popularity metric yields different results for Last.fm than for the movie domain, presumably due to the larger number of available items (i.e., music artists) in the Last.fm dataset we use. Finally, we compare the accuracy results of the recommendation algorithms for the three user groups and find that the low-mainstreaminess group receives the worst recommendations by a significant margin.
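The Group Average Popularity metric mentioned above can be sketched in a few lines, following its usual definition; the profile data and popularity values below are made up for illustration:

```python
def gap(group_profiles, popularity):
    """Group Average Popularity (GAP): the average, over all users in a
    group, of the mean popularity of the items in each user's profile."""
    per_user = [
        sum(popularity[item] for item in profile) / len(profile)
        for profile in group_profiles
    ]
    return sum(per_user) / len(per_user)

# Toy data: popularity = fraction of all users who listened to an artist.
pop = {"artist_a": 0.9, "artist_b": 0.5, "artist_c": 0.1}
low_ms = gap([["artist_c"], ["artist_b", "artist_c"]], pop)   # niche tastes
high_ms = gap([["artist_a"], ["artist_a", "artist_b"]], pop)  # mainstream tastes
assert low_ms < high_ms
```

Comparing the GAP of a group's profiles against the GAP of its recommendation lists is what reveals how strongly an algorithm amplifies popularity.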
Dennerlein Sebastian, Pammer-Schindler Viktoria, Ebner Markus, Getzinger Günter, Ebner Martin
2020
Sustainably digitalizing higher education requires a human-centred approach. To address actual problems in teaching and learning and to increase acceptance, Technology Enhanced Learning (TEL) solutions must be co-designed with the affected researchers, teachers, students and administrative staff. We present research-in-progress on a sandpit-informed innovation process with a f2f marketplace of TEL research, problem mapping and team formation alongside a competitive call phase, followed by a cooperative phase in which funded interdisciplinary pilot teams co-design and implement TEL innovations. Pilot teams are supported by a University Innovation Canvas to document and reflect on their TEL innovation from multiple viewpoints.
Fuchs Alexandra, Geiger Bernhard, Hobisch Elisabeth, Koncar Philipp, More Jacqueline, Saric Sanja, Scholger Martina
2020
Feichtinger Gerald, Gursch Heimo, Schlager Elke, Brandl Daniel, Gratzl Markus
2020
Bhat Karthik Subramanya, Bachhiesl Udo, Feichtinger Gerald, Stigler Heinz
2020
India, as a ‘developing’ country, is in the middle of a unique situation of handling its energy transition towards carbon-free energy along with its continuous economic development. With respect to the agreed COP 21 and SDG 2030 targets, India has drafted several energy strategies revolving around clean renewable energy. With multiple roadblocks for the development of large hydro power capacities within the country, the long-term renewable goals of India focus heavily on renewable energy technologies like solar Photo-Voltaic (PV) and wind capacities. However, with a much slower rate of development in transmission infrastructure and the given situations of the regional energy systems in the Indian subcontinent, these significant changes in India could result in severe technical and economic consequences for the complete interconnected region. The presented investigations in this paper have been conducted using ATLANTIS_India, a unique techno-economic simulation model developed at the Institute of Electricity Economics and Energy Innovation/Graz University of Technology, designed for the electricity system in the Indian subcontinent region. The model covers the electricity systems of India, Bangladesh, Bhutan, Nepal, and Sri Lanka, and is used to analyse a scenario where around 118 GW of solar PV and wind capacity expansion is planned in India until the target year 2050. This paper presents the simulation approach as well as the simulated results and conclusions. The simulation results show the positive and negative techno-economic impacts of the discussed strategy on the overall electricity system, while suggesting possible solutions.
Fadljevic Leon, Maitz Katharina, Kowald Dominik, Pammer-Schindler Viktoria, Gasteiger-Klicpera Barbara
2020
This paper describes the analysis of temporal behavior of 11--15 year old students in a heavily instructionally designed adaptive e-learning environment. The e-learning system is designed to support students' acquisition of health literacy. The system adapts text difficulty depending on students' reading competence, grouping students into four competence levels. Content for the four levels of reading competence was created by clinical psychologists, pedagogues and medical students. The e-learning system consists of an initial reading competence assessment, texts about health issues, and learning tasks related to these texts. The research question we investigate in this work is whether temporal behavior is a differentiator between students despite the system's adaptation to students' reading competence, and despite students having comparatively little freedom of action within the system. Further, we also investigated the correlation of temporal behavior with performance. Unsupervised clustering clearly separates students into slow and fast students with respect to the time they take to complete tasks. Furthermore, topic completion time is linearly correlated with performance in the tasks. This means that we interpret working slowly in this case as diligence, which leads to more correct answers, even though the level of text difficulty matches students' reading competence. This result also points to the design opportunity to integrate advice on overarching learning strategies, such as working diligently instead of rushing through, into students' overall learning activity. This can be done either by teachers, or via additional adaptive learning guidance within the system.
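The abstract does not name the clustering algorithm used; as an illustration, a tiny two-means loop over (hypothetical) task-completion times already separates fast from slow students, assuming both groups are non-empty:

```python
def two_means_1d(times, iters=20):
    """Cluster one-dimensional completion times (e.g. seconds per task)
    into two groups by alternating assignment and mean updates."""
    lo, hi = min(times), max(times)
    for _ in range(iters):
        fast = [t for t in times if abs(t - lo) <= abs(t - hi)]
        slow = [t for t in times if abs(t - lo) > abs(t - hi)]
        lo = sum(fast) / len(fast)  # mean completion time of fast group
        hi = sum(slow) / len(slow)  # mean completion time of slow group
    return fast, slow

fast, slow = two_means_1d([12, 15, 14, 48, 52, 50, 13, 47])
assert max(fast) < min(slow)  # clean split into fast and slow students
```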
Lex Elisabeth, Kowald Dominik, Schedl Markus
2020
In this paper, we address the problem of modeling and predicting the music genre preferences of users. We introduce a novel user modeling approach, BLLu, which takes into account the popularity of music genres as well as temporal drifts of user listening behavior. To model these two factors, BLLu adopts a psychological model that describes how humans access information in their memory. We evaluate our approach on a standard dataset of Last.fm listening histories, which contains fine-grained music genre information. To investigate performance for different types of users, we assign each user a mainstreaminess value that corresponds to the distance between the user’s music genre preferences and the music genre preferences of the (Last.fm) mainstream. We adopt BLLu to model the listening habits and to predict the music genre preferences of three user groups: listeners of (i) niche, low-mainstream music, (ii) mainstream music, and (iii) medium-mainstream music that lies in-between. Our results show that BLLu provides the highest accuracy for predicting music genre preferences, compared to five baselines: (i) group-based modeling, (ii) user-based collaborative filtering, (iii) item-based collaborative filtering, (iv) frequency-based modeling, and (v) recency-based modeling. Moreover, we achieve the most substantial accuracy improvements for the low-mainstream group. We believe that our findings provide valuable insights into the design of music recommender systems.
Thalmann Stefan, Fessl Angela, Pammer-Schindler Viktoria
2020
Digitization is currently one of the major factors changing society and the business world. Most research has focused on the technical issues of this change, but employees, and especially the way they learn, also change dramatically. In this paper, we are interested in exploring the perspectives of decision makers in large manufacturing companies on current challenges in organizing learning and knowledge distribution in digitized manufacturing environments. Moreover, we investigated the change process and challenges of implementing new knowledge and learning processes. To this purpose, we have conducted 24 interviews with senior representatives of large manufacturing companies from Austria, Germany, Italy, Liechtenstein and Switzerland. Our exploratory study shows that decision makers perceive significant changes in the work practice of manufacturing due to digitization and currently plan changes in organizational training and knowledge distribution processes in response. Due to the lack of best practices, companies focus very much on technological advancements. The delivery of knowledge just-in-time directly into work practice is a favored approach. Overall, digital learning services are growing, and new requirements regarding compliance, quality management and organisational culture arise.
Fruhwirth Michael, Rachinger Michael, Prlja Emina
2020
The modern economy relies heavily on data as a resource for advancement and growth. Data marketplaces have gained an increasing amount of attention, since they provide possibilities to exchange, trade and access data across organizations. Due to the rapid development of the field, the research on business models of data marketplaces is fragmented. We aimed to address this issue in this article by identifying the dimensions and characteristics of data marketplaces from a business model perspective. Following a rigorous process for taxonomy building, we propose a business model taxonomy for data marketplaces. Using evidence collected from a final sample of twenty data marketplaces, we analyze the frequency of specific characteristics of data marketplaces. In addition, we identify four data marketplace business model archetypes. The findings reveal the impact of the structure of data marketplaces as well as the relevance of anonymity and encryption for identified data marketplace archetypes.
Lovric Mario, Šimić Iva, Godec Ranka, Kröll Mark, Beslic Ivan
2020
Narrow city streets surrounded by tall buildings are favorable to inducing a general “canyon” effect, in which pollutants strongly accumulate in a relatively small area because of weak or inexistent ventilation. In this study, levels of nitrogen dioxide (NO2), elemental carbon (EC) and organic carbon (OC) mass concentrations in PM10 particles were determined to compare between seasons and different years. Daily samples were collected at one such street canyon location in the center of Zagreb in 2011, 2012 and 2013. By applying machine learning methods, we showed seasonal and yearly variations of mass concentrations for carbon species in PM10 and NO2, as well as their covariations and relationships. Furthermore, we compared the predictive capabilities of five regressors (Lasso, Random Forest, AdaBoost, Support Vector Machine and Partial Least Squares), with Lasso regression being the overall best performing algorithm. By showing the feature importance for each model, we revealed the true predictors per target. These measurements and the application of machine learning to pollutant data were done for the first time at a street canyon site in the city of Zagreb, Croatia.
Kaiser Rene_DB, Thalmann Stefan, Pammer-Schindler Viktoria, Fessl Angela
2020
Organisations participate in collaborative projects that include competitors for a number of strategic reasons, even while knowing that this requires them to consider both knowledge sharing and knowledge protection throughout the collaboration. In this paper, we investigated which knowledge protection practices representatives of organizations employ in a collaborative research and innovation project that can be characterized as a co-opetitive setting. We conducted a series of 30 interviews and report the following seven practices in structured form: restrictive partner selection in operative project tasks, communication through a gatekeeper, limiting access to a central platform, hiding details of machine data dumps, not letting data leave a factory for analysis, a generic model enabling usage parameters to be hidden, and applying legal measures. When connecting each practice to the a priori literature, we find that three practices focused on collaborative data analytics tasks had not been covered so far.
Arslanovic Jasmina, Löw Ajana, Lovric Mario, Kern Roman
2020
Previous studies have suggested that artistic (synchronized) swimming athletes might show eating disorder symptoms. However, systematic research on eating disorders in artistic swimming is limited, and knowledge about the nature and antecedents of the development of eating disorders in this specific population of athletes is still scarce. Hence, the aim of our research was to investigate eating disorder symptoms in artistic swimming athletes using the EAT-26 instrument, and to examine the relation of the incidence and severity of these symptoms to body mass index and body image dissatisfaction. Furthermore, we wanted to compare artistic swimmers with athletes of a non-leanness (but also aquatic) sport; therefore, we also included a group of female water polo athletes of the same age. The sample consisted of 36 artistic swimmers and 34 female water polo players (both aged 13-16). To test for the presence of eating disorder symptoms, the EAT-26 was used. The Mann-Whitney U test (MWU) was used to test for differences in EAT-26 scores. The EAT-26 total score and the Dieting subscale (one of the three subscales) showed significant differences between the two groups. The median value for the EAT-26 total score was higher in the artistic swimmers' group (C = 11) than in the water polo players' group (C = 8). A decision tree classifier was used to discriminate the artistic swimmers and female water polo players based on features from the EAT-26 and calculated features. The most discriminative features were the BMI, the Dieting subscale and the habit of post-meal vomiting. Our results suggest that artistic swimmers, at their typical competing age, show a higher risk of developing eating disorders than female water polo players and that they are also prone to dieting weight-control behaviors to achieve a desired weight. Furthermore, the results indicate that purgative behaviors, such as binge eating or self-induced vomiting, might not be a common weight-control behavior among these athletes.
The results corroborate the findings that the sport environment in leanness sports might contribute to the development of eating disorders. The results are also in line with evidence that leanness-sport athletes are more at risk of developing restrictive than purgative eating behaviors, as the latter usually do not contribute to body weight reduction. As sport environment factors in artistic swimming include judging criteria that emphasize a specific body shape and performance, it is important to raise awareness of the mental health risks that such an environment might encourage.
Chiancone Alessandro, Cuder Gerald, Geiger Bernhard, Harzl Annemarie, Tanzer Thomas, Kern Roman
2019
This paper presents a hybrid model for the prediction of magnetostriction in power transformers by leveraging the strengths of a data-driven approach and a physics-based model. Specifically, a non-linear physics-based model for magnetostriction as a function of the magnetic field is employed, the parameters of which are estimated as linear combinations of electrical coil measurements and coil dimensions. The model is validated in a practical scenario with coil data from two different suppliers, showing that the proposed approach captures the different magnetostrictive properties of the two suppliers and provides an estimation of magnetostriction in agreement with the measurement system in place. It is argued that the combination of a non-linear physics-based model with few parameters and a linear data-driven model to estimate these parameters is attractive both in terms of model accuracy and because it allows training the data-driven part with comparably small datasets.
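The concrete physics model and its parameters are not public; the sketch below only mirrors the idea of a nonlinear physics curve whose parameters come from a linear, data-driven layer over coil measurements (all weights and feature values are invented):

```python
import math

def predict_magnetostriction(H, coil_features, W, b):
    """Hybrid prediction: a linear layer maps coil measurements and
    dimensions to the parameters (a, k) of a hypothetical saturating
    curve lambda(H) = a * tanh(k * H)."""
    a = sum(w * f for w, f in zip(W[0], coil_features)) + b[0]
    k = sum(w * f for w, f in zip(W[1], coil_features)) + b[1]
    return a * math.tanh(k * H)

# Two made-up suppliers with different coil properties give different
# magnetostriction estimates for the same magnetic field H.
W, b = [[0.5, 0.2], [0.1, 0.3]], [0.1, 0.05]
supplier_1 = predict_magnetostriction(1.0, [1.0, 0.8], W, b)
supplier_2 = predict_magnetostriction(1.0, [1.3, 0.6], W, b)
assert supplier_1 != supplier_2
```

The appeal noted in the abstract is that only the linear weights W and b need fitting, which is feasible with comparably small datasets.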
Stanisavljevic Darko, Cemernek David, Gursch Heimo, Urak Günter, Lechner Gernot
2019
Additive manufacturing is becoming an increasingly important technology for production, mainly driven by the ability to realise extremely complex structures using multiple materials but without assembly or excessive waste. Nevertheless, like any high-precision technology, additive manufacturing is sensitive to interferences during the manufacturing process. These interferences – like vibrations – might lead to deviations in product quality, becoming manifest for instance in a reduced lifetime of a product or application issues. This study targets the issue of detecting such interferences during a manufacturing process in an exemplary experimental setup. Collection of data using current sensor technology directly on a 3D-printer enables a quantitative detection of interferences. The evaluation provides insights into the effectiveness of the realised application-oriented setup, the effort required for equipping a manufacturing system with sensors, and the effort for acquisition and processing of the data. These insights are of practical utility for organisations dealing with additive manufacturing: the chosen approach for detecting interferences shows promising results, reaching interference detection rates of up to 100% depending on the applied data processing configuration.
Santos Tiago, Schrunner Stefan, Geiger Bernhard, Pfeiler Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2019
Semiconductor manufacturing is a highly innovative branch of industry, where a high degree of automation has already been achieved. For example, devices tested to be outside of their specifications in electrical wafer test are automatically scrapped. In this paper, we go one step further and analyze test data of devices still within the limits of the specification, by exploiting the information contained in the analog wafermaps. To that end, we propose two feature extraction approaches with the aim to detect patterns in the wafer test dataset. Such patterns might indicate the onset of critical deviations in the production process. The studied approaches are: 1) classical image processing and restoration techniques in combination with sophisticated feature engineering and 2) a data-driven deep generative model. The two approaches are evaluated on both a synthetic and a real-world dataset. The synthetic dataset has been modeled based on real-world patterns and characteristics. We found both approaches to provide similar overall evaluation metrics. Our in-depth analysis helps to choose one approach over the other depending on data availability as a major aspect, as well as on available computing power and required interpretability of the results.
Lacic Emanuel, Reiter-Haas Markus, Duricic Tomislav, Slawicek Valentin, Lex Elisabeth
2019
In this work, we present the findings of an online study, where we explore the impact of utilizing embeddings to recommend job postings under real-time constraints. On the Austrian job platform Studo Jobs, we evaluate two popular recommendation scenarios: (i) providing similar jobs and, (ii) personalizing the job postings that are shown on the homepage. Our results show that for recommending similar jobs, we achieve the best online performance in terms of Click-Through Rate when we employ embeddings based on the most recent interaction. To personalize the job postings shown on a user's homepage, however, combining embeddings based on the frequency and recency with which a user interacts with job postings results in the best online performance.
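A possible reading of the frequency/recency combination is a decayed sum of interaction embeddings; the decay rate and the toy vectors below are assumptions for illustration, not the study's actual setup:

```python
import math

def combined_user_embedding(interactions, item_vecs, alpha=0.8):
    """Build a user profile from job-posting embeddings: each
    interaction adds the posting's vector, weighted by an exponential
    recency decay (age 0 = most recent); repeated interactions with
    the same posting accumulate, capturing frequency."""
    dim = len(next(iter(item_vecs.values())))
    profile = [0.0] * dim
    for item, age in interactions:
        weight = alpha ** age
        for j, v in enumerate(item_vecs[item]):
            profile[j] += weight * v
    norm = math.sqrt(sum(v * v for v in profile)) or 1.0
    return [v / norm for v in profile]  # unit-length profile vector

vecs = {"job_a": [1.0, 0.0], "job_b": [0.0, 1.0]}
profile = combined_user_embedding([("job_a", 0), ("job_b", 1)], vecs)
assert profile[0] > profile[1]  # profile leans towards the recent job_a
```

Recommendations would then rank postings by cosine similarity between their embeddings and this profile vector.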
Duricic Tomislav, Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2019
User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process such as explicit trust scores that are assigned by users to others or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of regular equivalence applied to a trust network to generate a similarity matrix that is used for selecting k-nearest neighbors used for item recommendation. Two vertices in a network are regularly equivalent if their neighbors are themselves equivalent and by using the iterative approach of calculating regular equivalence, we can study the impact of strong and weak ties on item recommendation. We evaluate our approach on cold start users on a dataset crawled from Epinions and find that by using weak ties in addition to strong ties, we can improve the performance of a trust-based recommender in terms of recommendation accuracy.
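The iterative computation of regular equivalence described above can be sketched directly from its defining recurrence; alpha and the fixed iteration count are illustrative choices, not the paper's exact setup:

```python
def regular_equivalence(A, alpha=0.5, iters=10):
    """Node similarity by regular equivalence on adjacency matrix A:
    sigma <- alpha * (A @ sigma @ A^T) + I, i.e. two nodes become
    similar when their neighbours are similar."""
    n = len(A)
    sigma = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        # tmp = A @ sigma
        tmp = [[sum(A[i][k] * sigma[k][j] for k in range(n))
                for j in range(n)] for i in range(n)]
        # sigma = alpha * tmp @ A^T + I
        sigma = [[alpha * sum(tmp[i][k] * A[j][k] for k in range(n)) + float(i == j)
                  for j in range(n)] for i in range(n)]
    return sigma

# Users 0 and 1 both trust user 2 but share no direct edge; regular
# equivalence still rates them as similar (a weak-tie effect).
A = [[0, 0, 1], [0, 0, 1], [0, 0, 0]]
sigma = regular_equivalence(A)
assert sigma[0][1] > sigma[0][2]
```

Rows of sigma can then serve as the similarity matrix from which the k nearest neighbors for item recommendation are picked.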
Lassnig Markus, Stabauer Petra, Breitfuß Gert, Müller Julian
2019
Numerous research results in the field of business model innovation have shown that over 90 percent of all business models of the last 50 years emerged from a recombination of existing concepts. In principle, this also holds for digital business model innovations. Given the breadth of potential digital business model innovations, the authors wanted to know which model patterns have which significance in business practice. Therefore, digital transformation with new business models was examined in an empirical study based on qualitative interviews with 68 companies. Seven suitable business model patterns were identified, classified with respect to their disruption potential from evolutionary to revolutionary, and the degree of realization in the companies was analysed. The strongly condensed conclusion is that the topic of business model innovation through Industry 4.0 and digital transformation has arrived at the companies. However, there are very different speeds of implementation and degrees of novelty of the business model ideas. The step-by-step further development of business models (evolutionary) is preferred by most companies, as the fundamental nature of the value proposition remains intact. In contrast, there are also companies that are already making radical changes affecting the entire business logic (revolutionary business model innovations). Accordingly, this article presents a clustering of business model innovators, from Hesitators through Followers and Optimizers to Leaders in business model innovation.
Wolfbauer Irmtraud
2019
Presentation of PhD. Use case: an online learning platform for apprentices. Research opportunities: the target group is under-researched. (1) Computer usage & ICT self-efficacy; (2) communities of practice, identities as learners; reflection guidance technologies: (3) Rebo, the reflection guidance chatbot.
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Vagliano Iacopo, Fessl Angela, Günther Franziska, Köhler Thomas, Mezaris Vasileios, Saleh Ahmed, Scherp Ansgar, Simic Ilija
2019
The MOVING platform enables its users to improve their information literacy by training how to exploit data and text mining methods in their daily research tasks. In this paper, we show how it can support researchers in various tasks, and we introduce its main features, such as text and video retrieval and processing, advanced visualizations, and the technologies to assist the learning process.
Fessl Angela, Apaolaza Aitor, Gledson Ann, Pammer-Schindler Viktoria, Vigo Markel
2019
Searching on the web is a key activity for working and learning purposes. In this work, we aimed to motivate users to reflect on their search behaviour, and to experiment with different search functionalities. We implemented a widget that logs user interactions within a search platform, mirrors back search behaviours to users, and prompts users to reflect about it. We carried out two studies to evaluate the impact of such widget on search behaviour: in Study 1 (N = 76), participants received screenshots of the widget including reflection prompts while in Study 2 (N = 15), a maximum of 10 search tasks were conducted by participants over a period of two weeks on a search platform that contained the widget. Study 1 shows that reflection prompts induce meaningful insights about search behaviour. Study 2 suggests that, when using a novel search platform for the first time, those participants who had the widget prioritised search behaviours over time. The incorporation of the widget into the search platform after users had become familiar with it, however, was not observed to impact search behaviour. While the potential to support un-learning of routines could not be shown, the two studies suggest the widget’s usability, perceived usefulness, potential to induce reflection and potential to impact search behaviour.
Kopeinik Simone, Seitlinger Paul, Lex Elisabeth
2019
Kopeinik Simone, Lex Elisabeth, Kowald Dominik, Albert Dietrich, Seitlinger Paul
2019
When people engage in Social Networking Sites, they influence one another through their contributions. Prior research suggests that the interplay between individual differences and environmental variables, such as a person’s openness to conflicting information, can give rise to either public spheres or echo chambers. In this work, we aim to unravel critical processes of this interplay in the context of learning. In particular, we observe high school students’ information behavior (search and evaluation of Web resources) to better understand a potential coupling between confirmatory search and polarization and, in further consequence, improve learning analytics and information services for individual and collective search in learning scenarios. In an empirical study, 91 high school students performed an information search in a social bookmarking environment. Gathered log data was used to compute indices of confirmatory search and polarization as well as to analyze the impact of social stimulation. We find confirmatory search and polarization to correlate positively and social stimulation to mitigate, i.e., reduce the two variables’ relationship. From these findings, we derive practical implications for future work that aims to refine our formalism to compute confirmatory search and polarization indices and to apply it for depolarizing information services.
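How the two indices were computed is not given in the abstract; checking that they correlate positively amounts to a correlation over per-student index values, sketched here with made-up numbers:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two index lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

confirmatory = [0.2, 0.5, 0.7, 0.9]  # hypothetical per-student indices
polarization = [0.1, 0.4, 0.6, 0.8]
assert pearson(confirmatory, polarization) > 0  # positive relationship
```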
Fruhwirth Michael, Pammer-Schindler Viktoria, Thalmann Stefan
2019
Data plays a central role in many of today's business models. With the help of advanced analytics, knowledge about real-world phenomena can be discovered from data. This may lead to unintended knowledge spillover through a data-driven offering. To properly consider this risk in the design of data-driven business models, suitable decision support is needed. Prior research on approaches that support such decision-making is scarce. We frame designing business models as a set of decision problems with the lens of Behavioral Decision Theory and describe a Design Science Research project conducted in the context of an automotive company. We develop an artefact that supports identifying knowledge risks, concomitant with design decisions, during the design of data-driven business models and verify knowledge risks as a relevant problem. In further research, we explore the problem in-depth and further design and evaluate the artefact within the same company as well as in other companies.
Silva Nelson, Madureira Luis
2019
Uncovering hidden suppliers and their complex relationships across the entire Supply Chain is quite complex. Unexpected disruptions, e.g. earthquakes, volcanoes, bankruptcies or nuclear disasters, have a huge impact on major Supply Chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely impact the delivery of a product. Therefore, it is crucial to constantly monitor for problems with both direct and indirect suppliers.
Schlager Elke, Gursch Heimo, Feichtinger Gerald
2019
Poster presenting the finally implemented "Data Management System" at the Know-Center for the COMFORT project.
Feichtinger Gerald, Gursch Heimo
2019
Poster - general project presentation
Monsberger Michael, Koppelhuber Daniela, Sabol Vedran, Gursch Heimo, Spataru Adrian, Prentner Oliver
2019
A lot of research is currently focused on studying user behavior indirectly by analyzing sensor data. However, only little attention has been given to the systematic acquisition of immediate user feedback to study user behavior in buildings. In this paper, we present a novel user feedback system which allows building users to provide feedback on the perceived sense of personal comfort in a room. To this end, a dedicated easy-to-use mobile app has been developed; it is complemented by a supporting infrastructure, including a web page for an at-a-glance overview. The obtained user feedback is compared with sensor data to assess whether building services (e.g., heating, ventilation and air-conditioning systems) are operated in accordance with user requirements. This serves as a basis to develop algorithms capable of optimizing building operation by providing recommendations to facility management staff or by automatic adjustment of operating points of building services. In this paper, we present the basic concept of the novel feedback system for building users and first results from an initial test phase. The results show that building users utilize the developed app to provide both positive and negative feedback on room conditions. They also show that it is possible to identify rooms with non-ideal operating conditions and that reasonable measures to improve building operation can be derived from the gathered information. The results highlight the potential of the proposed system.
Fuchs Alexandra, Geiger Bernhard, Hobisch Elisabeth, Koncar Philipp, Saric Sanja, Scholger Martina
2019
with contributions from Denis Helic and Jacqueline More
Lindstaedt Stefanie , Geiger Bernhard, Pirker Gerhard
2019
Big Data and data-driven modeling are receiving more and more attention in various research disciplines, where they are often considered as universal remedies. Despite their remarkable records of success, in certain cases a purely data-driven approach has proven to be suboptimal or even insufficient. This extended abstract briefly defines the terms Big Data and data-driven modeling and characterizes scenarios in which a strong focus on data has proven to be promising. Furthermore, it explains what progress can be made by fusing concepts from data science and machine learning with current physics-based concepts to form hybrid models, and how these can be applied successfully in the field of engine pre-simulation and engine control.
di Sciascio Maria Cecilia, Strohmaier David, Errecalde Marcelo Luis, Veas Eduardo Enrique
2019
Digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success but also a hindrance to good quality. Although Wikipedia has established guidelines for the “perfect article,” authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever-growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. Instead, our contribution is an interactive tool that combines automatic classification methods and human interaction in a toolkit, whereby experts can experiment with new quality metrics and share them with authors that need to identify weaknesses to improve a particular article. A design study shows that experts are able to effectively create complex quality metrics in a visual analytics environment. In turn, a user study evidences that regular users can identify flaws, as well as high-quality content based on the inspection of automatic quality scores.
di Sciascio Maria Cecilia, Brusilovsky Peter, Trattner Christoph, Veas Eduardo Enrique
2019
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial and error, and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control has been developed to support the exploratory search process. In this work, we present our attempt to increase the power of exploratory search interfaces by using ideas of social search—for instance, leveraging information left by past users of information systems. Social search technologies are highly popular today, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This article presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. This work presents the full series of evaluations conducted to, first, assess the value of the social models in contexts independent of the user interface, in terms of objective and perceived accuracy. Then, in a study with the full-fledged system, we investigated system accuracy and subjective aspects with a structural model, revealing that when users actively interacted with all of its control features, the hybrid system outperformed a baseline content-based-only tool and users were more satisfied.
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Such catchphrases as “Industry 4.0” and “smart manufacturing” reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaboration across existing department and process boundaries. Converting the process input, internal and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop-floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management, which is formalised through various approaches, e.g., Six-Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be improved continuously.
Geiger Bernhard, Koch Tobias
2019
In 1959, Rényi proposed the information dimension and the d-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the quantizer step size 1/m in the limit as m → ∞. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function R(D) of the stochastic process divided by -log(D) in the limit as D ↓ 0. It is further shown that among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate, and the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.
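The two limiting quantities described in the abstract can be written out explicitly; as a sketch in the abstract's own notation (with H' denoting the entropy rate of the quantized process):

```latex
% Information dimension rate: entropy rate of the process quantized
% with step size 1/m, normalized by log m
d(\{X_t\}) = \lim_{m \to \infty} \frac{H'\left(\lfloor m X_t \rfloor\right)}{\log m}

% Rate-distortion dimension: twice the rate-distortion function R(D),
% normalized by -\log(D) in the small-distortion limit
\dim_R(\{X_t\}) = \lim_{D \downarrow 0} \frac{2\,R(D)}{-\log(D)}
```

The paper's first main result is that these two limits coincide.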
Kaiser Rene_DB
2019
Video content and technology is an integral part of our private and professional lives. We consume news and entertainment content, and besides communication and learning there are many more significant application areas. One area, however, where video content and technology are not (yet) utilized and exploited to a large extent is production environments in factories of producing industries such as the semiconductor and electronic components and systems (ECS) industries. This article outlines some of the opportunities and challenges towards better exploitation of video content and technology in such contexts. An understanding of the current situation is the basis for future socio-technical interventions where video technology may be integrated in work processes within factories.
Schweimer Christoph, Geiger Bernhard, Suleimenova Diana, Groen Derek, Gfrerer Christine, Pape David, Elsaesser Robert, Kocsis Albert Tihamér, Liszkai B., Horváth Zoltan
2019
Jorge Guerra Torres, Carlos Catania, Veas Eduardo Enrique
2019
Modern Network Intrusion Detection systems depend on models trained with up-to-date labeled data. Yet, the process of labeling a network traffic dataset is especially expensive, since expert knowledge is required to perform the annotations. Visual analytics applications exist that claim to considerably reduce the labeling effort, but the expert still needs to ponder several factors before issuing a label. And, most often, the effect of bad labels (noise) on the final model is not evaluated. The present article introduces a novel active learning strategy that learns to predict labels in (pseudo) real-time as the user performs the annotation. The system, called RiskID, presents several innovations: i) a set of statistical methods summarizes the information, which is illustrated in a visual analytics application, ii) which interfaces with the active learning strategy for building a random forest model as the user issues annotations; iii) the (pseudo) real-time predictions of the model are fed back visually to scaffold the traffic annotation task. Finally, iv) an evaluation framework is introduced that represents a complete methodology for evaluating active learning solutions, including resilience against noise.
Jorge Guerra Torres, Veas Eduardo Enrique, Carlos Catania
2019
Labeling a real network dataset is especially expensive in computer security, as an expert has to ponder several factors before assigning each label. This paper describes an interactive intelligent system to support the task of identifying hostile behavior in network logs. The RiskID application uses visualizations to graphically encode features of network connections and promote visual comparison. In the background, two algorithms are used to actively organize connections and predict potential labels: a recommendation algorithm and a semi-supervised learning strategy. These algorithms, together with interactive adaptions to the user interface, constitute a behavior recommendation. A study is carried out to analyze how the algorithms for recommendation and prediction influence the workflow of labeling a dataset. The results of a study with 16 participants indicate that the behavior recommendation significantly improves the quality of labels. Analyzing interaction patterns, we identify a more intuitive workflow used when behavior recommendation is available.
Luzhnica Granit, Veas Eduardo Enrique
2019
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words, because they develop strategies that enable them to read faster. This paper describes a study investigating recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of exposure to words. Our study shows that, with exposure to words, participants significantly improved on recognition of exposed words. The findings suggest that such a word exposure technique could be used during the training of novice users in order to boost the word recognition of a particular dictionary of words.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
Barreiros Carla, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2019
We present a visual interface for communicating the internal state of a coffee machine via a tree metaphor. Nature-inspired representations have a positive impact on human well-being. We also hypothesize that representing the coffee machine as a tree stimulates emotional connection to it, which leads to better maintenance performance. The first study assessed the understandability of the tree representation, comparing it with icon-based and chart-based representations. An online survey with 25 participants indicated no significant mean error difference between representations. A two-week field study assessed the maintenance performance of 12 participants, comparing the tree representation with the icon-based representation. Based on 240 interactions with the coffee machine, we concluded that participants understood the machine states significantly better in the tree representation. Their comments and behavior indicated that the tree representation encouraged an emotional engagement with the machine. Moreover, the participants performed significantly more optional maintenance tasks with the tree representation.
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie , Kern Roman, Lex Elisabeth
2019
Kowald Dominik, Lacic Emanuel, Theiler Dieter, Traub Matthias, Kuffer Lucky, Lindstaedt Stefanie , Lex Elisabeth
2019
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Lex Elisabeth, Kowald Dominik
2019
Toller Maximilian, Santos Tiago, Kern Roman
2019
Season length estimation is the task of identifying the number of observations in the dominant repeating pattern of seasonal time series data. As such, it is a common pre-processing task crucial for various downstream applications. Inferring season length from a real-world time series is often challenging due to phenomena such as slightly varying period lengths and noise. These issues may, in turn, lead practitioners to dedicate considerable effort to preprocessing of time series data, since existing approaches either require dedicated parameter-tuning or their performance is heavily domain-dependent. Hence, to address these challenges, we propose SAZED: spectral and average autocorrelation zero distance density. SAZED is a versatile ensemble of multiple, specialized time series season length estimation approaches. The combination of various base methods selected with respect to domain-agnostic criteria and a novel seasonality isolation technique allows broad applicability to real-world time series of varied properties. Further, SAZED is theoretically grounded and parameter-free, with a computational complexity of O(n log n), which makes it applicable in practice. In our experiments, SAZED was statistically significantly better than every other method on at least one dataset. The datasets we used for the evaluation consist of time series data from various real-world domains, sterile synthetic test cases and synthetic data that were designed to be seasonal and yet have no finite statistical moments of any order.
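SAZED itself is an ensemble of spectral and autocorrelation-based estimators; as a rough illustration of the underlying idea (not the published method), a single autocorrelation-based estimate of season length can be sketched as follows, with all names hypothetical:

```python
import numpy as np

def estimate_season_length(x):
    """Estimate the dominant season length of a 1-D series via the first
    local maximum of its autocorrelation function. A simplified baseline
    sketch, not the full SAZED ensemble."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # full autocorrelation, keep non-negative lags only
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]  # normalize so that acf[0] == 1
    # first positive local maximum after lag 0
    for lag in range(2, len(acf) - 1):
        if acf[lag] > acf[lag - 1] and acf[lag] >= acf[lag + 1] and acf[lag] > 0:
            return lag
    return None

# a clean sine with period 12 should yield a season length of 12
t = np.arange(240)
season = estimate_season_length(np.sin(2 * np.pi * t / 12))
```

On noisy or trend-contaminated data, a single estimator like this breaks down easily, which is precisely the motivation for combining several specialized estimators in an ensemble.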
Toller Maximilian, Geiger Bernhard, Kern Roman
2019
Distance-based classification is among the most competitive classification methods for time series data. The most critical component of distance-based classification is the selected distance function. Past research has proposed various different distance metrics or measures dedicated to particular aspects of real-world time series data, yet there is an important aspect that has not been considered so far: robustness against arbitrary data contamination. In this work, we propose a novel distance metric that is robust against arbitrarily “bad” contamination and has a worst-case computational complexity of O(n log n). We formally argue why our proposed metric is robust, and demonstrate in an empirical evaluation that the metric yields competitive classification accuracy when applied in k-Nearest Neighbor time series classification.
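The abstract does not spell out the proposed metric, but the evaluation setting it names, plugging a distance function into k-Nearest Neighbor time series classification, can be sketched. The median-based distance below is a hypothetical robust stand-in, not the paper's metric:

```python
import numpy as np

def median_distance(a, b):
    # Median of absolute pointwise differences: a simple robust stand-in
    # (NOT the paper's proposed metric) that ignores up to half of
    # arbitrarily contaminated points.
    return float(np.median(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def knn_classify(train_X, train_y, query, k=1, metric=median_distance):
    # Plain k-Nearest-Neighbor time series classification with a
    # pluggable distance function, as in the evaluation setting above.
    dists = [metric(query, x) for x in train_X]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique([train_y[i] for i in nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

With a median-based distance, a query series with a single huge outlier sample is still matched to the class whose series it otherwise resembles, whereas a Euclidean distance would be dominated by the outlier.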
Breitfuß Gert, Berger Martin, Doerrzapf Linda
2019
The Austrian Federal Ministry for Transport, Innovation and Technology created an initiative to fund the setup and operation of Living Labs to provide a vital innovation ecosystem for mobility and transport. Five Urban Mobility Labs (UML) located in four urban areas have been selected for funding (duration 4 years) and started operation in 2017. In order to cover the risk of a high dependency on public funding (which is mostly limited in time), the lab management teams face the challenge of developing a viable and future-proof UML Business Model. The overall research goal of this paper is to get empirical insights on how a UML Business Model evolves in a long-term perspective and which success factors play a role. To answer the research question, a method mix of desk research and qualitative methods has been selected. In order to get an insight into the UML Business Model, two rounds of 10 semi-structured interviews (two responsible persons per UML) are planned. The first round of interviews took place between July 2018 and January 2019. The second round of interviews is planned for 2020. Between the two rounds of the survey, a Business Model workshop is planned to share and create ideas for future Business Model developments. Based on the gained research insights, a comprehensive list of success factors and hands-on recommendations will be derived. This should help UML organizations in developing a viable Business Model in order to support sustainable innovations in transport and mobility.
Geiger Bernhard
2019
joint work with Tobias Koch, Universidad Carlos III de Madrid
Silva Nelson, Blascheck Tanja, Jianu Radu, Rodrigues Nils, Weiskopf Daniel, Raubal Martin, Schreck Tobias
2019
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
Kaiser Rene_DB
2019
This paper gives a comprehensive overview of the Virtual Director concept. A Virtual Director is a software component automating the key decision making tasks of a TV broadcast director. It decides how to mix and present the available content streams on a particular playout device, most essentially deciding which camera view to show and when to switch to another. A Virtual Director makes it possible to take decisions respecting individual user preferences and playout device characteristics. In order to take meaningful decisions, a Virtual Director must be continuously informed by real-time sensors which emit information about what is happening in the scene. From such (low-level) 'cues', the Virtual Director infers higher-level events, actions, facts and states which in turn trigger the real-time processes deciding on the presentation of the content. The behaviour of a Virtual Director, the 'production grammar', defines how decisions are taken, generally encompassing two main aspects: selecting what is most relevant, and deciding how to show it, applying cinematographic principles.
Thalmann Stefan, Gursch Heimo, Suschnigg Josef, Gashi Milot, Ennsbrunner Helmut, Fuchs Anna Katharina, Schreck Tobias, Mutlu Belgin, Mangler Jürgen, Huemer Christian, Lindstaedt Stefanie
2019
Current trends in manufacturing lead to more intelligent products, produced in global supply chains in shorter cycles, taking more and more complex requirements into account. To manage this increasing complexity, cognitive decision support systems, building on data analytic approaches and focusing on the product life cycle stages, seem a promising approach. With two high-tech companies (world market leaders in their domains) from Austria, we are approaching this challenge and jointly develop cognitive decision support systems for three real-world industrial use cases. Within this position paper, we introduce our understanding of cognitive decision support and we introduce three industrial use cases, focusing on the requirements for cognitive decision support. Finally, we describe our preliminary solution approach for each use case and our next steps.
Stepputat Kendra, Kienreich Wolfgang, Dick Christopher S.
2019
With this article, we present the ongoing research project “Tango Danceability of Music in European Perspective” and the transdisciplinary research design it is built upon. Three main aspects of tango argentino are in focus—the music, the dance, and the people—in order to understand what is considered danceable in tango music. The study of all three parts involves computer-aided analysis approaches, and the results are examined within ethnochoreological and ethnomusicological frameworks. Two approaches are illustrated in detail to show initial results of the research model. Network analysis based on the collection of online tango event data and quantitative evaluation of data gathered by an online survey showed significant results, corroborating the hypothesis of gatekeeping effects in the shaping of musical preferences. The experiment design includes incorporation of motion capture technology into dance research. We demonstrate certain advantages of transdisciplinary approaches in the study of Intangible Cultural Heritage, in contrast to conventional studies based on methods from just one academic discipline.
Pammer-Schindler Viktoria
2019
This is a commentary of mine, created in the context of an open review process, selected for publication alongside the accepted original paper in a juried process, and published alongside the paper at the given DOI.
Xie Benjamin, Harpstead Erik, DiSalvo Betsy, Slovak Petr, Kharuffa Ahmed, Lee Michael J., Pammer-Schindler Viktoria, Ogan Amy, Williams Joseph Jay
2019
Winter Kevin, Kern Roman
2019
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish, respectively.
Maritsch Martin, Diana Suleimenova, Geiger Bernhard, Derek Groen
2019
Geiger Bernhard, Schrunner Stefan, Kern Roman
2019
Schrunner and Geiger have contributed equally to this work.
Adolfo Ruiz Calleja, Dennerlein Sebastian, Kowald Dominik, Theiler Dieter, Lex Elisabeth, Tobias Ley
2019
In this paper, we propose the Social Semantic Server (SSS) as a service-based infrastructure for workplace and professional Learning Analytics (LA). The design and development of the SSS has evolved over 8 years, starting with an analysis of workplace learning inspired by knowledge creation theories and its application in different contexts. The SSS collects data from workplace learning tools, integrates it into a common data model based on a semantically-enriched Artifact-Actor Network and offers it back for LA applications to exploit the data. Further, the SSS design promotes its flexibility in order to be adapted to different workplace learning situations. This paper contributes by systematizing the derivation of requirements for the SSS according to the knowledge creation theories, and the support offered across a number of different learning tools and LA applications integrated to it. It also shows evidence for the usefulness of the SSS extracted from four authentic workplace learning situations involving 57 participants. The evaluation results indicate that the SSS satisfactorily supports decision making in diverse workplace learning situations and allow us to reflect on the importance of the knowledge creation theories for such analysis.
Renner Bettina, Wesiak Gudrun, Pammer-Schindler Viktoria, Prilla Michael, Müller Lars, Morosini Dalia, Mora Simone, Faltin Nils, Cress Ulrike
2019
Fessl Angela, Simic Ilija, Barthold Sabine, Pammer-Schindler Viktoria
2019
Information literacy, the access to knowledge and the use of it, is becoming a precondition for individuals to actively take part in social, economic, cultural and political life. Information literacy must be considered a fundamental competency like the ability to read, write and calculate. Therefore, we are working on automatic learning guidance with respect to three modules of the information literacy curriculum developed by the EU (DigComp 2.1 Framework). In prior work, we have laid out the essential research questions from a technical side. In this work, we follow up by specifying the concept for micro learning and micro learning content units. This means that the overall intervention we design is concretized as follows: the widget is initialized by assessing the learner's competence with the help of a knowledge test. This is the basis for recommending suitable micro learning content, adapted to the identified competence level. After the learner has read and worked through the content, the widget asks the learner a reflective question. The goal of the reflective question is to deepen the learning. In this paper we present the concept of the widget and its integration in a search platform.
Fruhwirth Michael, Breitfuß Gert, Müller Christiana
2019
The use of data in companies to analyze and answer a wide variety of questions is daily business. However, there is far more potential in data beyond process optimization and business intelligence applications. This article provides an overview of the most important aspects of transforming data into value, i.e., of developing data-driven business models. It examines the characteristics of data-driven business models and the competencies they require. Four case studies of Austrian companies offer insights into practice, and finally current challenges and developments are discussed.
Luzhnica Granit, Veas Eduardo Enrique
2019
Luzhnica Granit, Veas Eduardo Enrique
2019
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Breitfuß Gert, Fruhwirth Michael, Pammer-Schindler Viktoria, Stern Hermann, Dennerlein Sebastian
2019
Increasing digitization is generating more and more data in all areas of business. Modern analytical methods open up these large amounts of data for business value creation. Expected business value ranges from process optimization, such as reduction of maintenance work, and strategic decision support to business model innovation. In the development of a data-driven business model, it is useful to conceptualise elements of data-driven business models in order to differentiate and compare between examples of a data-driven business model and to think of opportunities for using data to innovate an existing or design a new business model. The goal of this paper is to identify a conceptual tool that supports data-driven business model innovation in a similar manner: We applied three existing classification schemes to differentiate between data-driven business models based on 30 examples of data-driven business model innovations. Subsequently, we present the strengths and weaknesses of every scheme to identify possible blind spots for gaining business value out of data-driven activities. Following this discussion, we outline a new classification scheme. The newly developed scheme combines all positive aspects from the three analysed classification models and resolves the identified weaknesses.
Clemens Bloechl, Rana Ali Amjad, Geiger Bernhard
2019
We present an information-theoretic cost function for co-clustering, i.e., for simultaneous clustering of two sets based on similarities between their elements. By constructing a simple random walk on the corresponding bipartite graph, our cost function is derived from a recently proposed generalized framework for information-theoretic Markov chain aggregation. The goal of our cost function is to minimize relevant information loss, hence it connects to the information bottleneck formalism. Moreover, via the connection to Markov aggregation, our cost function is not ad hoc, but inherits its justification from the operational qualities associated with the corresponding Markov aggregation problem. We furthermore show that, for appropriate parameter settings, our cost function is identical to well-known approaches from the literature, such as “Information-Theoretic Co-Clustering” by Dhillon et al. Hence, understanding the influence of this parameter admits a deeper understanding of the relationship between previously proposed information-theoretic cost functions. We highlight some strengths and weaknesses of the cost function for different parameters. We also illustrate the performance of our cost function, optimized with a simple sequential heuristic, on several synthetic and real-world data sets, including the Newsgroup20 and the MovieLens100k data sets.
Lovric Mario, Molero Perez Jose Manuel, Kern Roman
2019
The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge of Python programming and an understanding of resilient distributed datasets (RDD). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity and calculating molecular descriptors. The source code for the PySpark-RDKit implementation is provided. The use cases showed that Spark provides reasonable scalability depending on the use case and can be a suitable choice for datasets too big to be processed with current low-end workstations.
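Of the three use cases, fingerprint similarity is commonly computed as the Tanimoto coefficient. A minimal plain-Python sketch of that comparison step (RDKit and Spark themselves are omitted here; in practice RDKit would generate the fingerprints and Spark would distribute the pairwise comparisons) might look like:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as collections
    of on-bit positions: |A ∩ B| / |A ∪ B|. This is only the comparison
    step; generating the fingerprints is RDKit's job."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # two empty fingerprints are conventionally identical
    common = len(a & b)
    return common / (len(a) + len(b) - common)
```

In a PySpark job, a function like this would typically be applied inside an RDD or DataFrame transformation, mapping each candidate molecule's fingerprint against a broadcast query fingerprint.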
Robert Gutounig, Romana Rauter, Susanne Sackl-Sharif , Sabine Klinger, Dennerlein Sebastian
2018
Digitalization carries different expectations that can differ considerably between the organizational perspective and the employees' perspective. What can clearly be observed, in any case, is the increasing penetration of work processes by digital tools. Numerous health-straining factors are also known by now, resulting, for instance, from the acceleration and intensification of work. Against this background, an explorative study in the health services sector investigated which new challenges employees and organizations face due to the increasing use of digital media. The interviews and the survey show that carrying out the work would no longer be conceivable without digital support, especially regarding the documentation of data, but increasingly also regarding the work on the patients themselves. Ambivalences in the employees' perception are found throughout, e.g., easier access to data vs. a regime of control by the employer. Further identified fields for research on the effects and potentials of digital media use include, among others, digital literacy and participatory approaches to technology development.
Mutlu Belgin, Simic Ilija, Cicchinelli Analia, Sabol Vedran, Veas Eduardo Enrique
2018
Learning dashboards (LD) are commonly applied for monitoring and visual analysis of learning activities. The main purpose of LDs is to increase awareness, to support self-assessment and reflection and, when used in collaborative learning platforms (CLP), to improve the collaboration among learners. Collaborative learning platforms serve as tools to bring learners together, who share the same interests and ideas and are willing to work and learn together – a process which, ideally, leads to effective knowledge building. However, there are collaboration and communication factors which affect the effectiveness of knowledge creation – human, social and motivational factors, design issues, technical conditions, and others. In this paper we introduce a learning dashboard – the Visualizer – that serves the purpose of (statistically) analyzing and exploring the behaviour of communities and users. Visualizer allows a learner to become aware of other learners with similar characteristics and also to draw comparisons with individuals having similar learning goals. It also helps a teacher become aware of how individuals working in the groups (learning communities) interact with one another and across groups.
Fessl Angela, Wesiak Gudrun, Pammer-Schindler Viktoria
2018
Managing knowledge in periods of digital change requires not only changes in learning processes but also in knowledge transfer. For this knowledge transfer, we see reflective learning as an important strategy to keep the vast body of theoretical knowledge fresh and up-to-date, and to transfer theoretical knowledge to practical experience. In this work, we present a study situated in a qualification program for stroke nurses in Germany. In the seven-week study, 21 stroke nurses used a quiz on medical knowledge as an additional learning instrument. The quiz contained typical quiz questions (“content questions”) as well as reflective questions that aimed at stimulating nurses to reflect on the practical relevance of the learned knowledge. We particularly looked at how reflective questions can support the transfer of theoretical knowledge into practice. The results show that by playful learning and presenting reflective questions at the right time, participants reflected and related theoretical knowledge to practical experience.
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for this drawback, this paper investigates what kinds of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are more often used and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
Geiger Bernhard
2018
This short note presents results about the symmetric Jensen-Shannon divergence between two discrete mixture distributions p1 and p2. Specifically, for i=1,2, pi is the mixture of a common distribution q and a distribution p̃i with mixture proportion λi. In general, p̃1 ≠ p̃2 and λ1 ≠ λ2. We provide experimental and theoretical insight into the behavior of the symmetric Jensen-Shannon divergence between p1 and p2 as the mixture proportions or the divergence between p̃1 and p̃2 change. We also provide insight into scenarios where the supports of the distributions p̃1, p̃2, and q do not coincide.
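As a concrete illustration of the setting studied (our own example, not taken from the note itself), the mixtures and their symmetric Jensen-Shannon divergence can be computed directly from the definitions; the distribution values below are hypothetical.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits, with the convention 0*log(0) = 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # symmetric Jensen-Shannon divergence: average KL to the midpoint distribution
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mixture(q, p_tilde, lam):
    # p_i = (1 - lam_i) * q + lam_i * p_tilde_i
    return [(1 - lam) * qj + lam * pj for qj, pj in zip(q, p_tilde)]

# hypothetical example: common q, two component distributions, equal proportions
q = [0.5, 0.5]
p1 = mixture(q, [1.0, 0.0], 0.5)  # [0.75, 0.25]
p2 = mixture(q, [0.0, 1.0], 0.5)  # [0.25, 0.75]
d = jsd(p1, p2)  # grows as p̃1 and p̃2 diverge or the λi increase
```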
Ross-Hellauer Anthony, Schmidt Birgit, Kramer Bianca
2018
As open access (OA) to publications continues to gather momentum, we should continuously question whether it is moving in the right direction. A novel intervention in this space is the creation of OA publishing platforms commissioned by funding organizations. Examples include those of the Wellcome Trust and the Gates Foundation, as well as recently announced initiatives from public funders like the European Commission and the Irish Health Research Board. As the number of such platforms increases, it becomes urgently necessary to assess in which ways, for better or worse, this emergent phenomenon complements or disrupts the scholarly communications landscape. This article examines ethical, organizational, and economic strengths and weaknesses of such platforms, as well as usage and uptake to date, to scope the opportunities and threats presented by funder OA platforms in the ongoing transition to OA. The article is broadly supportive of the aims and current implementations of such platforms, finding them a novel intervention which stands to help increase OA uptake, control costs of OA, lower administrative burden on researchers, and demonstrate funders’ commitment to fostering open practices. However, the article identifies key areas of concern about the potential for unintended consequences, including the appearance of conflicts of interest, difficulties of scale, potential lock-in, and issues of the branding of research. The article ends with key recommendations for future consideration which include a focus on open scholarly infrastructure.
Geiger Bernhard
2018
This entry for the 2018 MDPI English Writing Prize has been published as a chapter of "The Global Benefits of Open Research", edited by Martyn Rittman.
Fernández Alonso, Miguel Yuste, Kern Roman
2018
Collection of environmental datasets recorded with Tinkerforge sensors and used in the development of a bachelor thesis on the topic of frequent pattern mining. The data was collected in several locations in the city of Graz, Austria, as well as an additional dataset recorded in Santander, Spain.
Fessl Angela, Kowald Dominik, Susana López Sola, Ana Moreno, Ricardo Alonso Maturana, Thalmann Stefan
2018
Learning analytics deals with tools and methods for analyzing and detecting patterns in order to support learners while learning in formal as well as informal learning settings. In this work, we present the results of two focus groups in which the effects of a learning resource recommender system and a dashboard based on analytics for everyday learning were discussed from two perspectives: (1) knowledge workers as self-regulated everyday learners (i.e., informal learning) and (2) teachers who serve as instructors for learners (i.e., formal learning). Our findings show that the advantages of analytics for everyday learning are three-fold: (1) it can enhance the motivation to learn, (2) it can make learning easier and broadens the scope of learning, and (3) it helps to organize and to systematize everyday learning.
Pammer-Schindler Viktoria, Fessl Angela, Wertner Alfred
2018
Becoming a data-savvy professional requires skills and competences in information literacy, communication and collaboration, and content creation in digital environments. In this paper, we present a concept for automatic learning guidance in relation to an information literacy curriculum. The learning guidance concept has three components: Firstly, an open learner model in terms of an information literacy curriculum is created. Based on the data collected in the learner model, learning analytics is used in combination with a corresponding visualization to present the current learning status of the learner. Secondly, reflection prompts in the form of sentence starters or reflective questions adaptive to the learner model aim to guide learning. Thirdly, learning resources are suggested that are structured along learning goals to motivate learners to progress. The main contribution of this paper is to discuss what we see as the main research challenges with respect to existing literature on open learner modeling, learning analytics, recommender systems for learning, and learning guidance.
Iacopo Vagliano, Franziska Günther, Mathias Heinz, Aitor Apaolaza, Irina Bienia, Breitfuß Gert, Till Blume, Chrysa Collyda, Fessl Angela, Sebastian Gottfried, Hasitschka Peter, Jasmin Kellermann, Thomas Köhler, Annalouise Maas, Vasileios Mezaris, Ahmed Saleh, Andrzej Skulimowski, Thalmann Stefan, Markel Vigo, Wertner Alfred, Michael Wiese, Ansgar Scherp
2018
In the Big Data era, people can access vast amounts of information, but often lack the time, strategies and tools to efficiently extract the necessary knowledge from it. Research and innovation staff needs to effectively obtain an overview of publications, patents, funding opportunities, etc., to derive an innovation strategy. The MOVING platform enables its users to improve their information literacy by training how to exploit data mining methods in their daily research tasks. Through a novel integrated working and training environment, the platform supports the education of data-savvy information professionals and enables them to deal with the challenges of Big Data and open innovation.
Luzhnica Granit, Veas Eduardo Enrique, Caitlyn Seim
2018
This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.
Luzhnica Granit, Veas Eduardo Enrique
2018
Sensory substitution has been a research subject for decades, and yet its applicability outside of research remains very limited. This has created scepticism among researchers as to whether full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach which reduces the captured information drastically. We present concepts and implementations of two mobile applications which capture the user's environment, describe it in the form of text and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2018
In the context of the Internet of Things (IoT), every device has sensing and computing capabilities to enhance many aspects of human life. There are more and more IoT devices in our homes and at our workplaces, and they still depend on human expertise and intervention for tasks such as maintenance and (re)configuration. Using biophilic design and calm computing principles, we developed a nature-inspired representation, BioIoT, to communicate sensor information. This visual language contributes to the users' well-being and performance while being as easy to understand as traditional data representations. Our work is based on the assumption that if machines are perceived to be more like living beings, users will take better care of them, which ideally translates into better device maintenance. In addition, the users' overall well-being can be improved by bringing nature into their lives. In this work, we present two use case scenarios under which the BioIoT concept can be applied and demonstrate its potential benefits in households and at workplaces.
Lex Elisabeth, Wagner Mario, Kowald Dominik
2018
In this work, we propose a content-based recommendation approach to increase exposure to opposing beliefs and opinions. Our aim is to help provide users with more diverse viewpoints on issues which are discussed in partisan groups from different perspectives. Since, due to the backfire effect, people's original beliefs tend to strengthen when challenged with counter-evidence, we need to expose users to opposing viewpoints at the right time. The preliminary work presented here describes our first step in this direction. As an illustrative showcase, we take the political debate on Twitter around the presidency of Donald Trump.
Kowald Dominik, Lex Elisabeth
2018
The micro-blogging platform Twitter allows its nearly 320 million monthly active users to build a network of follower connections to other Twitter users (i.e., followees) in order to subscribe to content posted by these users. With this feature, Twitter has become one of the most popular social networks on the Web and was also the first platform that offered the concept of hashtags. Hashtags are freely-chosen keywords, which start with the hash character, to annotate, categorize and contextualize Twitter posts (i.e., tweets). Although hashtags are widely accepted and used by the Twitter community, the heavy reuse of hashtags that are popular in the personal Twitter networks (i.e., own hashtags and hashtags used by followees) can lead to filter bubble effects and thus, to situations in which only content associated with these hashtags is presented to the user. These filter bubble effects are also highly associated with the concept of confirmation bias, which is the tendency to favor and reuse information that confirms personal preferences. One example would be a Twitter user who is interested in political tweets of US president Donald Trump. Depending on the hashtags used, the user could either be stuck in a pro-Trump (e.g., #MAGA) or contra-Trump (e.g., #fakepresident) filter bubble. Therefore, the goal of this paper is to study confirmation bias and filter bubble effects in hashtag usage on Twitter by treating the reuse of hashtags as a phenomenon that fosters confirmation bias.
Gursch Heimo, Silva Nelson, Reiterer Bernhard , Paletta Lucas , Bernauer Patrick, Fuchs Martin, Veas Eduardo Enrique, Kern Roman
2018
The project Flexible Intralogistics for Future Factories (FlexIFF) investigates human-robot collaboration in intralogistics teams in the manufacturing industry, which form a cyber-physical system consisting of human workers, mobile manipulators, manufacturing machinery, and manufacturing information systems. The workers use Virtual Reality (VR) and Augmented Reality (AR) devices to interact with the robots and machinery. The right information at the right time is key for making this collaboration successful. Hence, task scheduling for mobile manipulators and human workers must be closely linked with the enterprise's information systems, offering all actors on the shop floor a common view of the current manufacturing status. FlexIFF will provide useful, well-tested, and sophisticated solutions for cyber-physical systems in intralogistics, with humans and robots making the most of their strengths, working collaboratively and helping each other.
Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2018
In this paper, we present work-in-progress on applying user pre-filtering to speed up and enhance recommendations based on Collaborative Filtering. We propose to pre-filter users in order to extract a smaller set of candidate neighbors, who exhibit a high number of overlapping entities, and to compute the final user similarities based on this set. To realize this, we exploit features of the high-performance search engine Apache Solr and integrate them into a scalable recommender system. We have evaluated our approach on a dataset gathered from Foursquare, and our evaluation results suggest that our proposed user pre-filtering step can help to achieve both a better runtime performance as well as an increase in overall recommendation accuracy.
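The paper realizes this pre-filtering on top of Apache Solr; the plain-Python sketch below only illustrates the underlying idea, with Jaccard similarity standing in for whatever similarity metric is actually used (function and parameter names are ours).

```python
def prefilter_candidates(target_items, user_items, min_overlap=2, top_c=100):
    """Step 1: keep only users whose item sets overlap the target user's by at
    least min_overlap entities; rank by overlap size and keep top_c candidates."""
    overlaps = {u: len(target_items & items)
                for u, items in user_items.items()
                if len(target_items & items) >= min_overlap}
    return sorted(overlaps, key=overlaps.get, reverse=True)[:top_c]

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def nearest_neighbors(target_items, user_items, k=10, **kwargs):
    """Step 2: compute exact similarities only on the pre-filtered candidate set."""
    candidates = prefilter_candidates(target_items, user_items, **kwargs)
    sims = {u: jaccard(target_items, user_items[u]) for u in candidates}
    return sorted(sims, key=sims.get, reverse=True)[:k]

# hypothetical check-in histories: only users "a" and "c" survive pre-filtering
users = {"a": {1, 2, 3, 4}, "b": {1, 9}, "c": {2, 3}, "d": {7, 8}}
neighbors = nearest_neighbors({1, 2, 3}, users, k=2)
```

The runtime benefit comes from computing the (more expensive) exact similarity only over the small candidate set rather than over all users.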
Kowald Dominik, Lacic Emanuel, Theiler Dieter, Lex Elisabeth
2018
In this paper, we present preliminary results of AFEL-REC, a recommender system for social learning environments. AFEL-REC is built upon a scalable software architecture to provide recommendations of learning resources in near real-time. Furthermore, AFEL-REC can cope with any kind of data that is present in social learning environments such as resource metadata, user interactions or social tags. We provide a preliminary evaluation of three recommendation use cases implemented in AFEL-REC and we find that utilizing social data in the form of tags is helpful for not only improving recommendation accuracy but also coverage. This paper should be valuable for both researchers and practitioners interested in providing resource recommendations in social learning environments.
Cuder Gerald, Baumgartner Christian
2018
Cancer is one of the most rapidly rising diseases in our modern society and is defined by an uncontrolled growth of tissue. This growth is caused by mutations at the cellular level. In this thesis, a data-mining workflow was developed to find these responsible genes among thousands of irrelevant ones in three microarray datasets of different cancer types by applying machine learning methods such as classification and gene selection. In this work, four state-of-the-art selection algorithms are compared with a more sophisticated method, termed Stacked-Feature Ranking (SFR), further increasing the discriminatory ability in gene selection.
Dennerlein Sebastian, Kowald Dominik, Lex Elisabeth, Ley Tobias, Pammer-Schindler Viktoria
2018
Co-creation methods for interactive computer systems design are by now widely accepted as part of the methodological repertoire in any software development process. As the community is becoming more and more aware of the fact that software is driven by complex, artificially intelligent algorithms, the question arises what "co-creation of algorithms", in the sense of users explicitly shaping the parameters of algorithms during co-creation, could mean, and how it would work. Algorithms are not tangible like features in a tool, and their desired effects are harder to explain or understand. Therefore, we propose an iterative simulation-based co-design approach that allows co-creating algorithms together with domain professionals by making their assumptions and effects observable. The proposal is a methodological idea for discussion within the EC-TEL community, yet to be applied in research practice.
Duricic Tomislav, Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2018
User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. This approach is based on finding the most relevant k users from whose rating history we can extract items to recommend. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process such as explicit trust scores that are assigned by users to others or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of a measure from network science, i.e. regular equivalence, applied to a trust network to generate a similarity matrix that is used to select the k-nearest neighbors for recommending items. We evaluate our approach on Epinions and we find that we can outperform related methods for tackling cold-start users in terms of recommendation accuracy.
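A common formalization of regular-equivalence similarity is the Katz-like recursion S ← α·A·S·Aᵀ + I over the (trust) adjacency matrix A: two nodes are similar if their neighbors are themselves similar. The sketch below is our own minimal illustration, not the paper's implementation, and α must be small enough for the iteration to converge.

```python
def regular_equivalence(A, alpha=0.5, iters=30):
    """Iteratively solve S = alpha * A S A^T + I for a binary adjacency
    matrix A (a list of rows); returns the similarity matrix S."""
    n = len(A)
    eye = lambda i, j: 1.0 if i == j else 0.0
    S = [[eye(i, j) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        # T = A S
        T = [[sum(A[i][k] * S[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # S = alpha * T A^T + I
        S = [[alpha * sum(T[i][k] * A[j][k] for k in range(n)) + eye(i, j)
              for j in range(n)] for i in range(n)]
    return S

# two mutually trusting users: the diagonal converges to 2, off-diagonal to 0
S = regular_equivalence([[0, 1], [1, 0]])
```

For recommendations, each user's row of S would then be sorted to pick the k most regularly-equivalent neighbors.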
Cicchinelli Analia, Veas Eduardo Enrique, Pardo Abelardo, Pammer-Schindler Viktoria, Fessl Angela, Barreiros Carla, Lindstaedt Stefanie
2018
This paper aims to identify self-regulation strategies from students' interactions with the learning management system (LMS). We used learning analytics techniques to identify metacognitive and cognitive strategies in the data. We define three research questions that guide our studies, analyzing i) self-assessments of motivation and self-regulation strategies using standard methods to draw a baseline, ii) interactions with the LMS to find traces of self-regulation in observable indicators, and iii) self-regulation behaviours over the course duration. The results show that the observable indicators can better explain self-regulatory behaviour and its influence on performance than preliminary subjective assessments.
Silva Nelson, Schreck Tobias, Veas Eduardo Enrique, Sabol Vedran, Eggeling Eva, Fellner Dieter W.
2018
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye-tracker. Mouse selections are also considered. The system provides an overlay visualization with recommended patterns, and an eye-history graph, that supports the users in the data exploration process. We conducted an experiment with 5 tasks where 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features, and discusses the precision/recall of our model in comparison to final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.
Fessl Angela, Wertner Alfred, Pammer-Schindler Viktoria
2018
In this demonstration paper, we describe a prototype that visualizes the usage of different search interfaces on a single search platform, with the goal of motivating users to explore alternative search interfaces. The underlying rationale is that by now the one-line input to search engines is so standard that we can assume users' search behavior to be operationalized. This means that users may be reluctant to explore alternatives even though these may be better suited to their context of use / search task.
di Sciascio Maria Cecilia, Brusilovsky Peter, Veas Eduardo Enrique
2018
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial-and-error and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control has been developed to support the exploratory search process. In this work we present our attempt to increase the power of exploratory search interfaces by using ideas of social search, i.e., leveraging information left by past users of information systems. Social search technologies are highly popular nowadays, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This paper presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) to express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. In an online study investigating system accuracy and subjective aspects with a structural model, we found that, when users actively interacted with all its control features, the hybrid system outperformed a baseline content-based-only tool and users were more satisfied.
Pammer-Schindler Viktoria, Thalmann Stefan, Fessl Angela, Füssel Julia
2018
Traditionally, professional learning for senior professionals is organized around face-to-face trainings. Virtual trainings seem to offer an opportunity to reduce costs related to travel and travel time. In this paper we present a comparative case study that investigates the differences between traditional face-to-face trainings in physical reality and virtual trainings via WebEx. Our goal is to identify how the way of communication impacts interaction between trainees, between trainees and trainers, and how it impacts interruptions. We present qualitative results from observations and interviews of three cases in different setups (traditional classroom, web-based with all participants co-located, web-based with all participants at different locations), with overall 25 training participants and three trainers. The study is set within one of the Big Four global auditing companies, with advanced senior auditors as the learning cohort.
Kaiser Rene_DB
2018
Production companies typically have not utilized video content and video technology in factory environments to a significant extent in the past. However, the current Industry 4.0 movement inspires companies to reconsider production processes and job qualifications for their shop floor workforce. Infrastructure and machines get connected to central manufacturing execution systems in digitization and datafication efforts. In the realm of this fourth industrial revolution, companies are encouraged to revisit their strategy regarding video-based applications as well. This paper discusses the current situation and selected aspects of opportunities and challenges of video technology that might enable added value in such environments.
Kaiser Rene_DB
2018
This paper aims to contribute to the discussion on 360° video storytelling. It describes the 'Virtual Director' concept, an enabling technology that was developed to personalize video presentation in applications where multiple live streams are available at the same time. Users are supported in dynamically changing viewpoints, as the Virtual Director essentially automates the tasks of a human director. As research prototypes on a proof-of-concept maturity level, this approach has been evaluated for personalized live event broadcast, group video communication and distributed theatre performances. While on the capture side a 180° high-resolution panoramic video feed was used in one of these application scenarios, so far only traditional 2D video screens have been investigated for playout. The research question this paper aims to contribute to is how technology in general, and an adaptation of the Virtual Director concept in particular, could assist users in their needs when consuming 360° content, both live and recorded. In contexts where users do not want to enjoy the freedom to look in any direction, or when content creators want them to look in a certain direction, how could the interaction with and intervention of a Virtual Director be applied from a storytelling point of view?
Kowald Dominik
2018
Social tagging systems enable users to collaboratively assign freely chosen keywords (i.e., tags) to resources (e.g., Web links). In order to support users in finding descriptive tags, tag recommendation algorithms have been proposed. One issue of current state-of-the-art tag recommendation algorithms is that they are often designed in a purely data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. A prominent example is the activation equation of the cognitive architecture ACT-R, which formalizes activation processes in human memory to determine if a specific memory unit (e.g., a word or tag) will be needed in a specific context. It is the aim of this thesis to investigate if a cognitive-inspired approach, which models activation processes in human memory, can improve tag recommendations. For this, the relation between activation processes in human memory and usage practices of tags is studied, which reveals that (i) past usage frequency, (ii) recency, and (iii) semantic context cues are important factors when people reuse tags. Based on this, a cognitive-inspired tag recommendation approach termed BLL_AC+MP_r is developed based on the activation equation of ACT-R. An extensive evaluation using six real-world folksonomy datasets shows that BLL_AC+MP_r outperforms current state-of-the-art tag recommendation algorithms with respect to various evaluation metrics. Finally, BLL_AC+MP_r is utilized for hashtag recommendations in Twitter to demonstrate its generalizability in related areas of tag-based recommender systems. The findings of this thesis demonstrate that activation processes in human memory can be utilized to improve not only social tag recommendations but also hashtag recommendations. This opens up a number of possible research strands for future work, such as the design of cognitive-inspired resource recommender systems.
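The base-level part of ACT-R's activation equation, on which such approaches build, scores a tag by the frequency and recency of its past uses with power-law decay: B = ln(Σ_j (now − t_j)^(−d)). A minimal sketch of that term (our own illustration, with the commonly used decay d = 0.5):

```python
import math

def bll_activation(use_times, now, d=0.5):
    """Base-level activation of a tag: log of the power-law-decayed sum over
    all past usage timestamps. Higher = used more often and more recently."""
    return math.log(sum((now - t) ** (-d) for t in use_times))

# a recently reused tag outranks an equally frequent but older one
recent = bll_activation([8.0, 9.0], now=10.0)
old = bll_activation([1.0, 2.0], now=10.0)  # recent > old
```

A recommender would rank a user's past tags by this score (combined with semantic-context and popularity components in the full approach).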
Ross-Hellauer Anthony, Kowald Dominik, Lex Elisabeth
2018
Fruhwirth Michael, Breitfuß Gert, Pammer-Schindler Viktoria
2018
The increasing amount of generated data and advances in technology and data analytics are enablers and drivers for new business models with data as a key resource. Currently, established organisations struggle with identifying the value and benefits of data and lack the know-how to develop new products and services based on data. There is very little research that is narrowly focused on data-driven business model innovation in established organisations. The aim of this research is to investigate existing activities within Austrian enterprises with regard to exploring data-driven business models and the challenges encountered in this endeavour. The outcome of this research-in-progress paper is a set of categories of challenges, related to organisation, business and technology, that established organisations in Austria face during data-driven business model innovation.
Cuder Gerald, Breitfuß Gert, Kern Roman
2018
Electric vehicles have enjoyed substantial growth in recent years. One essential part of ensuring their success in the future is a well-developed and easy-to-use charging infrastructure. Since charging stations generate a lot of (big) data, gaining useful information out of this data can help to push the transition to E-Mobility. In a joint research project, the Know-Center, together with has.to.be GmbH, applied data analytics methods and visualization technologies to the provided data sets. One objective of the research project is to provide a consumption forecast based on the historical consumption data. Based on this information, the operators of charging stations are able to optimize the energy supply. Additionally, the infrastructure data were analysed with regard to "predictive maintenance", aiming to optimize the availability of the charging stations. Furthermore, advanced prediction algorithms were applied to provide services to the end user regarding the availability of charging stations.
Andrusyak Bohdan, Kugi Thomas, Kern Roman
2018
The stock and foreign exchange markets are the two fundamental financial markets in the world and play a crucial role in international business. This paper examines the possibility of predicting the foreign exchange market via machine learning techniques, taking the stock market into account. We compare prediction models based on algorithms from the fields of shallow and deep learning. Our models of foreign exchange markets based on information from the stock market have been shown to be able to predict the future of foreign exchange markets with an accuracy of over 60%. This can be seen as an indicator of a strong link between the two markets. Our insights offer a chance of a better understanding guiding the future of market predictions. We found the accuracy depends on the time frame of the forecast and the algorithms used, where deep learning tends to perform better for farther-reaching forecasts.
Lacic Emanuel, Traub Matthias, Duricic Tomislav, Haslauer Eva, Lex Elisabeth
2018
A challenge for importers in the automobile industry is adjusting to rapidly changing market demands. In this work, we describe a practical study of car import planning based on the monthly car registrations in Austria. We model the task as a data-driven forecasting problem and implement four different prediction approaches: one utilizes a seasonal ARIMA model, another is based on an LSTM-RNN, and both are compared to a linear and a seasonal baseline. In our experiments, we evaluate 33 different brands by predicting the number of registrations for the next month and for the year to come.
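The seasonal ARIMA and LSTM-RNN models require dedicated libraries; as a self-contained illustration we show only the kind of seasonal baseline such models are compared against: a seasonal-naive forecast that predicts each month from the same month one year earlier (function name, interface, and data are ours).

```python
def seasonal_naive(history, season=12, horizon=1):
    """Forecast the next `horizon` values as the observations exactly one
    season (e.g. 12 months of registrations) earlier."""
    assert len(history) >= season and horizon <= season
    return [history[len(history) - season + h] for h in range(horizon)]

# two years of hypothetical monthly registration counts for one brand
monthly = [120, 95, 130, 110, 105, 98, 90, 92, 115, 125, 140, 160,
           118, 99, 128, 112, 101, 97, 93, 95, 119, 130, 145, 170]
next_month = seasonal_naive(monthly)  # [118]: same calendar month last year
```

Any learned model (ARIMA, LSTM) is only useful to the importer if it beats this trivially cheap baseline.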
Lassnig Markus, Stabauer Petra, Breitfuß Gert, Mauthner Katrin
2018
Extensive research on business model innovation has shown that over 90% of all business models of the past 50 years emerged from a recombination of existing concepts. In principle, this also holds for digital business model innovations. Given the breadth of potential digital business model innovations, the authors wanted to know which model patterns carry which significance in economic practice. The digital transformation through new business models was therefore examined in an empirical study based on qualitative interviews with 68 companies. Seven suitable business model patterns were identified, classified by their disruption potential from evolutionary to revolutionary, and analyzed with respect to their degree of realization in the companies. The highly condensed conclusion is that the topic of business model innovation driven by Industrie 4.0 and digital transformation has arrived at the companies. However, there are very different speeds in implementation and in the degree of novelty of the business model ideas. The incremental further development of business models (evolutionary) is preferred by most companies, since the fundamental nature of the value proposition remains intact. In contrast, there are also companies that are already making radical changes affecting the entire business logic. Accordingly, this article proposes a clustering of business model innovators, from Hesitator through Follower and Optimizer to Leader in business model innovation.
Wertner Alfred, Stern Hermann, Pammer-Schindler Viktoria, Weghofer Franz
2018
Voice control is a potentially very powerful tool and, in theory (basic speech input), should have been usable for the past 20 years. In industrial settings, however, it has so far failed primarily due to immature hardware or even the need for an active data connection outside the company. At Magna Steyr in Graz, order picking has so far been carried out with the help of scanners. This process could be supported very effectively by end-to-end voice control, provided it were simple, reliable, and compliant to implement, and continued to treat the human as the central actor (keyword: human in the loop). We therefore selected existing speech recognition systems for mobile platforms together with suitable off-the-shelf hardware (smartphones and headsets) and implemented a prototype as an Android application ("Talk2Me"). The goal was to assess the applicability of voice-controlled mobile applications in an industrial environment. Using the open-source speech recognition kit CMU Sphinx in combination with dictionaries tailored to the vocabulary of the modelled processes, we achieved a very good recognition rate without having to train the language model individually for each employee. Talk2Me demonstrates innovatively how proven, inexpensive, and readily available technology (smartphones with speech recognition as input and speech synthesis as output) can find its way into our everyday work.
d'Aquin Mathieu , Kowald Dominik, Fessl Angela, Thalmann Stefan, Lex Elisabeth
2018
The goal of AFEL is to develop, pilot and evaluate methods and applications which advance informal/collective learning as it surfaces implicitly in online social environments. The project follows a multi-disciplinary, industry-driven approach to the analysis and understanding of learner data in order to personalize, accelerate and improve informal learning processes. Learning Analytics and Educational Data Mining traditionally relate to the analysis and exploration of data coming from learning environments, especially to understand learners' behaviours. However, studies have long demonstrated that learning activities also happen outside of formal educational platforms. This includes informal and collective learning usually associated, as a side effect, with other (social) environments and activities. Relying on real data from a commercially available platform, the aim of AFEL is to provide and validate the technological grounding and tools for exploiting learning analytics on such learning activities. This will be achieved in relation to cognitive models of learning and collaboration, which are necessary for the understanding of loosely defined learning processes in online social environments. Applying the skills available in the consortium to a concrete set of live, industrial online social environments, AFEL will tackle the main challenges of informal learning analytics through 1) developing the tools and techniques necessary to capture information about learning activities from (not necessarily educational) online social environments; 2) creating methods for the analysis of such informal learning data, based on combining feature engineering and visual analytics with cognitive models of learning and collaboration; 3) demonstrating the potential of the approach in improving the understanding of informal learning and the way it is better supported; and 4) evaluating all of the former in real-world, large-scale applications and platforms.
Kowald Dominik, Seitlinger Paul , Ley Tobias , Lex Elisabeth
2018
In this paper, we present the results of an online study with the aim to shed light on the impact that semantic context cues have on the user acceptance of tag recommendations. To this end, we conducted a work-integrated social bookmarking scenario with 17 university employees in order to compare the user acceptance of a context-aware tag recommendation algorithm called 3Layers with that of a simple popularity-based baseline. In this scenario, we validated the hypothesis that semantic context cues have a higher impact on the user acceptance of tag recommendations in a collaborative tagging setting than in an individual tagging setting. With this paper, we contribute to the sparse line of research presenting online recommendation studies.
Koncar Philipp
2018
This synthetically generated dataset can be used to evaluate outlier detection algorithms. It has 10 attributes and 1,000 observations, of which 100 are labeled as outliers. Two-dimensional combinations of attributes form differently shaped clusters:
Attribute 0 & Attribute 1: two circular clusters
Attribute 2 & Attribute 3: two banana-shaped clusters
Attribute 4 & Attribute 5: three point clouds
Attribute 6 & Attribute 7: two point clouds with variances
Attribute 8 & Attribute 9: three anisotropic clusters
The "outlier" column states whether an observation is an outlier or not. Additionally, the .zip file contains 10 stratified randomized train/test splits (70% train, 30% test).
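A dataset with a similar layout can be sketched in a few lines of NumPy. The generator below is a hypothetical illustration only (cluster centres, radii, noise levels, and the uniform outlier background are all assumptions) and covers just two of the five attribute pairs:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_out = 900, 100  # inliers and outliers, mirroring the 1000-row layout

def circle(radius, center, size):
    """Points on a noisy circle: a stand-in for the circular clusters."""
    theta = rng.uniform(0, 2 * np.pi, size)
    pts = np.c_[np.cos(theta), np.sin(theta)] * radius + center
    return pts + rng.normal(scale=0.05, size=pts.shape)

# Attributes 0 & 1: two circular clusters
a01 = np.vstack([circle(1.0, (0, 0), n // 2), circle(1.0, (4, 0), n - n // 2)])

# Attributes 4 & 5: three isotropic Gaussian point clouds
centers = np.array([(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)])
a45 = np.vstack([rng.normal(loc=c, scale=0.5, size=(n // 3, 2)) for c in centers])

# Uniform background noise plays the role of the labelled outliers
inliers = np.hstack([a01, a45])                 # 900 x 4 slice of the design
outliers = rng.uniform(-3, 8, size=(n_out, 4))
X = np.vstack([inliers, outliers])
y = np.r_[np.zeros(n), np.ones(n_out)]          # 1 marks an outlier
```

An outlier detector evaluated on `X` can then be scored against the ground-truth labels in `y`, as with the published dataset.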
Lovric Mario
2018
The objects are numbered. The Y-variable contains boiling points; the other features are structural features of molecules. In the outlier column, outliers are assigned a value of 1. The data is derived from a published chemical dataset on boiling point measurements [1] and from public data [2]. Features were generated by means of the RDKit Python library [3]. The dataset was infused with known outliers (~5%) based on significant structural differences, i.e., polar and non-polar molecules.
[1] Cherqaoui D., Villemin D. Use of a Neural Network to determine the Boiling Point of Alkanes. J CHEM SOC FARADAY TRANS. 1994;90(1):97–102.
[2] https://pubchem.ncbi.nlm.nih.gov/
[3] RDKit: Open-source cheminformatics; http://www.rdkit.org
Lovric Mario, Stipaničev Draženka , Repec Siniša , Malev Olga , Klobučar Göran
2018
Lacic Emanuel, Kowald Dominik, Reiter-Haas Markus, Slawicek Valentin, Lex Elisabeth
2018
In this work, we address the problem of recommending jobs to university students. For this, we explore the impact of using item embeddings for a content-based job recommendation system. Furthermore, we utilize a model from human memory theory to integrate the factors of frequency and recency of job posting interactions for combining item embeddings. We evaluate our job recommendation system on a dataset of the Austrian student job portal Studo using prediction accuracy, diversity as well as adapted novelty, which is introduced in this work. We find that utilizing frequency and recency of interactions with job postings for combining item embeddings results in a robust model with respect to accuracy and diversity, but also provides the best adapted novelty results.
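The idea of weighting item embeddings by frequency and recency can be sketched with the base-level learning (BLL) equation from human memory theory: each past interaction contributes a power-law-decayed term, so items used often and recently receive higher activation. The function names and the softmax normalisation below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def bll_weight(timestamps, now, d=0.5):
    """Base-level activation: frequency and recency combined.

    Each past interaction at time t contributes (now - t)^(-d); summing
    over all interactions rewards items used often and used lately.
    d is the power-law decay, commonly set to 0.5.
    """
    ts = np.asarray(timestamps, dtype=float)
    return np.log(((now - ts) ** -d).sum())

def user_profile(item_embeddings, interactions, now, d=0.5):
    """Activation-weighted average of item embeddings.

    interactions maps item id -> list of interaction timestamps;
    these names are hypothetical, not the paper's API.
    """
    items = list(interactions)
    w = np.array([bll_weight(interactions[i], now, d) for i in items])
    w = np.exp(w) / np.exp(w).sum()        # softmax-normalise activations
    E = np.stack([item_embeddings[i] for i in items])
    return w @ E
```

A recently viewed job posting thus pulls the profile vector (and hence the recommendations) more strongly towards itself than an older one.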
Hasani-Mavriqi Ilire, Kowald Dominik, Helic Denis, Lex Elisabeth
2018
In this paper, we study the process of opinion dynamics and consensus building in online collaboration systems, in which users interact with each other following their common interests and their social profiles. Specifically, we are interested in how user similarity and social status in the community, as well as the interplay of those two factors, influence the process of consensus dynamics. For our study, we simulate the diffusion of opinions in collaboration systems using the well-known Naming Game model, which we extend by incorporating an interaction mechanism based on user similarity and user social status. We conduct our experiments on collaborative datasets extracted from the Web. Our findings reveal that when users are guided by their similarity to other users, the process of consensus building in online collaboration systems is delayed. A suitable increase of the influence of user social status on their actions can in turn facilitate this process. In summary, our results suggest that achieving an optimal consensus building process in collaboration systems requires an appropriate balance between those two factors.
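A minimal textbook version of the Naming Game can be sketched as follows; the uniform random pairing below is the standard variant, and the similarity- and status-based interaction mechanism the paper adds would replace that pairing step:

```python
import random

def naming_game(n_agents=50, steps=20000, seed=1):
    """Minimal Naming Game: agents negotiate a shared name for one object.

    Each step picks a random speaker and hearer. The speaker utters a
    name from its inventory (inventing one if empty); if the hearer
    knows the name, both collapse their inventories to it (success),
    otherwise the hearer adds it. Repeated successes drive the
    population towards consensus on a single name.
    """
    random.seed(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for _ in range(steps):
        s, h = random.sample(range(n_agents), 2)
        if not inventories[s]:
            inventories[s].add(next_name)  # invent a brand-new name
            next_name += 1
        name = random.choice(sorted(inventories[s]))
        if name in inventories[h]:
            inventories[s] = {name}        # success: both agree on it
            inventories[h] = {name}
        else:
            inventories[h].add(name)       # failure: hearer learns it
    return inventories
```

Biasing the `random.sample` step towards similar or high-status partners is precisely the kind of extension whose effect on convergence time the paper measures.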
Luzhnica Granit, Veas Eduardo Enrique
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for this drawback, this paper investigates what kinds of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are more often used and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
Lovric Mario, Krebs Sarah, Cemernek David, Kern Roman
2018
The use of big data technologies has a deep impact on today's research (Tetko et al., 2016) and industry (Li et al., n.d.), but also on public health (Khoury and Ioannidis, 2014) and economy (Einav and Levin, 2014). These technologies are particularly important for manufacturing sites, where complex processes are coupled with large amounts of data, for example in the chemical and steel industries. This data originates from sensors, processes, and quality testing. Typical applications of these technologies are related to predictive maintenance and the optimisation of production processes. The media have made the term "big data" a hot buzzword without going too deep into the topic. We noted a lack in users' understanding of the technologies and techniques behind it, making the application of such technologies challenging. In practice the data is often unstructured (Gandomi and Haider, 2015) and a lot of resources are devoted to cleaning and preparation, but also to understanding causalities and relevance among features. The latter requires domain knowledge, making big data projects challenging not only from a technical perspective, but also from a communication perspective. Therefore, there is a need to rethink the big data concept among researchers and manufacturing experts, including topics like data quality, knowledge exchange and the technology required. The scope of this presentation is to present the main pitfalls in applying big data technologies among users from industry, explain scaling principles in big data projects, and demonstrate common challenges in an industrial big data project.
Lovric Mario
2018
The amount of data available today is increasing significantly, and big data has become a strong buzzword in research. The chemistry student therefore has to be well prepared for the upcoming age in which they not only rule the laboratories but also act as modelers and data scientists. This tutorial covers the very basics of molecular modeling and data handling with the use of Python and Jupyter Notebook. It is the first in a series aiming to cover the relevant topics in machine learning, QSAR and molecular modeling, as well as the basics of Python programming.
Santos Tiago, Kern Roman
2018
Semiconductor manufacturing processes critically depend on hundreds of highly complex process steps, which may cause critical deviations in the end-product. Hence, a better understanding of wafer test data patterns, which represent stress tests conducted on devices in semiconductor material slices, may lead to an improved production process. However, the shapes and types of these wafer patterns, as well as their relation to single process steps, are unknown. In a first step to address these issues, we tailor and apply a variational auto-encoder (VAE) to wafer pattern images. We find the VAE's generator allows for explorative wafer pattern analysis, and its encoder provides an effective dimensionality reduction algorithm, which, in a clustering application, performs better than several baselines such as t-SNE and yields interpretable clusters of wafer patterns.
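Two generic ingredients of any VAE can be illustrated compactly: the reparameterisation trick, which keeps sampling differentiable, and the KL term of the objective. The NumPy sketch below is a generic illustration under standard VAE assumptions, not the tailored architecture of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Reparameterisation trick at the heart of a VAE encoder.

    Instead of sampling z ~ N(mu, sigma^2) directly, sample
    eps ~ N(0, I) and compute z = mu + sigma * eps, keeping the
    sampling path differentiable with respect to mu and log_var.
    The low-dimensional mu is what serves as the embedding used
    for clustering wafer patterns.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) term of the VAE objective, per sample."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
```

In training, `kl_divergence` is added to a reconstruction loss over the wafer images, pulling the latent codes towards a standard normal prior.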
Urak Günter, Ziak Hermann, Kern Roman
2018
The task of federated search is to combine results from multiple knowledge bases into a single, aggregated result list, where the items typically range from textual documents to images. These knowledge bases are also called sources, and the process of choosing the actual subset of sources for a given query is called source selection. A scenario where these sources do not provide information about their content in a standardized way is called an uncooperative setting. In our work we focus on knowledge bases providing long tail content, i.e., rather specialized sources offering a low number of relevant documents. These sources are often neglected in favor of more popular knowledge sources, both by today's Web users as well as by most of the existing source selection techniques. We propose a system for source selection which i) could be utilized to automatically detect long tail knowledge bases and ii) generates aggregated search results that tend to incorporate results from these long tail sources. Starting from the current state of the art, we developed components that allow adjusting the amount of contribution from long tail sources. Our evaluation is conducted on the TREC 2014 Federated Web Search dataset. As this dataset also favors the most popular sources, systems that include many long tail knowledge bases will yield low performance measures. Here, we propose a system where just a few relevant long tail sources are integrated into the list of more popular knowledge bases. Additionally, we evaluated the implications of an uncooperative setting, where only minimal information about the sources is available to the federated search system. Here a severe drop in performance is observed once the share of long tail sources is higher than 40%. Our work is intended to steer the development of federated search systems that aim at increasing the diversity and coverage of the aggregated search result.
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2018
The goal of our work is inspired by the task of associating segments of text with their real authors. In this work, we focus on analyzing the way humans judge different writing styles. This analysis can help to better understand this process and thus to simulate/mimic such behavior accordingly. Unlike the majority of the work done in this field (i.e., authorship attribution, plagiarism detection, etc.), which uses content features, we focus only on the stylometric, i.e. content-agnostic, characteristics of authors. Therefore, we conducted two pilot studies to determine if humans can identify authorship among documents with high content similarity. The first was a quantitative experiment involving crowd-sourcing, while the second was a qualitative one executed by the authors of this paper. Both studies confirmed that this task is quite challenging. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis on the results of the studies. In the first experiment, we compared the decisions against content features and stylometric features, while in the second, the evaluators described the process and the features on which their judgment was based. The findings of our detailed analysis could (i) help to improve algorithms such as automatic authorship attribution as well as plagiarism detection, (ii) assist forensic experts or linguists in creating profiles of writers, (iii) support intelligence applications in analyzing aggressive and threatening messages and (iv) help editor conformity by adhering to, for instance, a journal-specific writing style.
Babić Sanja, Barišić Josip, Stipaničev Draženka, Repec Siniša, Lovric Mario, Malev Olga, Čož-Rakovac Rozalindra, Klobučar GIV
2018
Quantitative chemical analyses of 428 organic contaminants (OCs) confirmed the presence of 313 OCs in the sediment extracts from the river Sava, Croatia. Pharmaceuticals were present in higher concentrations than pesticides, confirming their increasing threat to freshwater ecosystems. Toxicity evaluation of the sediment extracts from four locations (Jesenice, Rugvica, Galdovo and Lukavec) using the zebrafish embryotoxicity test (ZET), accompanied by semi-quantitative histopathological analyses, exhibited good correlation with the cumulative number and concentrations of OCs at the investigated sites (10,048.6, 15,222.8, 1,247.6, and 9,130.5 ng/g, respectively) and proved its role as a good indicator of the toxic potential of complex contaminant mixtures. Toxicity prediction of sediment extracts and sediment was assessed using the Toxic Unit (TU) approach and PBT (persistence, bioaccumulation and toxicity) ranking. In addition, prior-knowledge-informed chemical-gene interaction models were generated, and graph mining approaches were used to identify the OCs and genes most likely to be influential in these mixtures. The predicted toxicity of sediment extracts (TUext) for the sampled locations was similar to the results obtained by ZET and the associated histopathology, identifying the Rugvica sediment as the most toxic, followed by Jesenice, Lukavec and Galdovo. Sediment TU (TUsed) favoured OCs with a low octanol-water partition coefficient, like the herbicide glyphosate and the antibiotics ciprofloxacin and sulfamethazine, thus indicating locations containing higher concentrations of these OCs (Galdovo and Rugvica) as most toxic. Results suggest that comprehensive in silico sediment toxicity predictions should give equal attention to organic contaminants with either very low or very high log Kow.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2018
This paper describes a novel visual metaphor to communicate sensor information of a connected device. The Internet of Things aims to extend every device with sensing and computing capabilities. A byproduct is that even domestic machines become increasingly complex, tedious to understand and maintain. This paper presents a prototype instrumenting a coffee machine with sensors. The machine streams the sensor data, which is picked up by an augmented reality application serving a nature metaphor. The nature metaphor, BioAR, represents the status derived from the coffee machine sensors in the features of a 3D virtual tree. The tree is meant to pass for a living proxy of the machine it represents. The metaphor, shown either with AR or a simple holographic display, reacts to the user manipulation of the machine and its workings. A first user study validates that the representation is correctly understood, and that it inspires affect for the machine. A second user study validates that the metaphor scales to a large number of machines.
Breitfuß Gert, Berger Martin, Doerrzapf Linda
2018
The initiative „Urban Mobility Labs" (UML), driven by the Austrian Ministry of Transport, Innovation and Technology, was started to support the setup of innovative and experimental environments for research, testing, implementation and transfer of mobility solutions. This is to happen by incorporating the scientific community, citizens and stakeholders in politics and administration as well as other groups. The emerging structural frame shall enhance the efficiency and effectiveness of the innovation process. In this paper, insights and an in-depth analysis of the approaches and experiences gained in the eight UML exploratory projects are outlined. These projects were analyzed, systematized and enriched with further considerations. Furthermore, their knowledge growth as user-centered innovation environments was documented during the exploratory phase.
Bassa Akim, Kröll Mark, Kern Roman
2018
Open Information Extraction (OIE) is the task of extracting relations from text without the need for domain-specific training data. Currently, most of the research on OIE is devoted to the English language, but little or no research has been conducted on other languages, including German. We tackled this problem and present GerIE, an OIE parser for the German language. We started by surveying the available literature on OIE with a focus on concepts which may also apply to the German language. Our system is built upon the output of a dependency parser, on which a number of hand-crafted rules are executed. For the evaluation we created two dedicated datasets, one derived from news articles and one based on texts from an encyclopedia. Our system achieves F-measures of up to 0.89 for sentences that have been correctly preprocessed.
Neuhold Robert, Gursch Heimo, Cik Michael
2018
Data collection on motorways for traffic management operations is traditionally based on local measurement points and camera monitoring systems. This work looks into social media as an additional data source for the Austrian motorway operator ASFINAG. A data-driven system called Driver's Dashboard was developed to collect incident descriptions from social media sources (Facebook, RSS feeds), to filter relevant messages, and to fuse them with local traffic data. All collected texts were analysed for concepts describing road situations, linking the texts from the web and social media with traffic messages and traffic data. Due to the Austrian characteristics in social media use and road transportation, very few messages are available compared to other studies: 3,586 messages were collected within a five-week period, 7.1% of which were automatically annotated as traffic-relevant by the system. An evaluation of these traffic-relevant messages showed that 22% of them were actually relevant for the motorway operator. Further, the traffic-relevant messages for the motorway operator were analysed in more detail to identify correlations between message text and traffic data characteristics. A correlation of message text and traffic data was found in nine of eleven messages by comparing the speed profiles and traffic state data with the message text.
Rauter, R., Zimek, M.
2017
New business opportunities in the digital economy arise when datasets describing a problem, data services solving that problem, the required expertise, and infrastructure come together. For most real-world problems, finding the right data sources, services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform to bring datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender system included in DMA analyses all offerings to derive suggestions for collaboration between them, such as which dataset could best be processed by which data service. The suggestions should help the customers on DMA to identify new collaborations reaching beyond traditional industry boundaries and to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system to match different offers and set up data value chains solving problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial and research domain, overcoming traditional domain boundaries.
Lukas Sabine, Pammer-Schindler Viktoria, Almer Alexander, Schnabel Thomas
2017
Köfler Armin, Pammer-Schindler Viktoria, Almer Alexander, Schnabel Thomas
2017
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
In this pilot study, we tried to capture humans' behavior when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by single authors. We then presented a source and four target snippets to the evaluators and asked them to rank the target snippets from most to least similar in terms of writing style. The dataset is composed of 66 experiments, manually checked to ensure they contain no clear hint for the evaluators during ranking. For each experiment, we have evaluations from three different evaluators. We present each experiment in a single line of the CSV file: first the metadata of the source article (Journal, Title, Authorship, Snippet), then the metadata for the four target snippets (Journal, Title, Authorship, Snippet, Written by the same author, Published in the same journal) and the ranking given by each evaluator. This task was performed on the crowdsourcing platform CrowdFlower. The headers of the CSV are self-explanatory. In the TXT file, you can find a human-readable version of the experiment. For more information about the extraction of the data, please consider reading our paper: "Extending Scientific Literature Search by Including the Author's Writing Style" @BIR: http://www.gesis.org/en/services/events/events-archive/conferences/ecir-workshops/ecir-workshop-2017
Kowald Dominik
2017
Social tagging systems enable users to collaboratively assign freely chosen keywords (i.e., tags) to resources (e.g., Web links). In order to support users in finding descriptive tags, tag recommendation algorithms have been proposed. One issue of current state-of-the-art tag recommendation algorithms is that they are often designed in a purely data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. A prominent example is the activation equation of the cognitive architecture ACT-R, which formalizes activation processes in human memory to determine if a specific memory unit (e.g., a word or tag) will be needed in a specific context. It is the aim of this thesis to investigate if a cognitive-inspired approach, which models activation processes in human memory, can improve tag recommendations. For this, the relation between activation processes in human memory and usage practices of tags is studied, which reveals that (i) past usage frequency, (ii) recency, and (iii) semantic context cues are important factors when people reuse tags. Based on this, a cognitive-inspired tag recommendation approach termed BLL_AC+MP_r is developed based on the activation equation of ACT-R. An extensive evaluation using six real-world folksonomy datasets shows that BLL_AC+MP_r outperforms current state-of-the-art tag recommendation algorithms with respect to various evaluation metrics. Finally, BLL_AC+MP_r is utilized for hashtag recommendations in Twitter to demonstrate its generalizability in related areas of tag-based recommender systems. The findings of this thesis demonstrate that activation processes in human memory can be utilized to improve not only social tag recommendations but also hashtag recommendations. This opens up a number of possible research strands for future work, such as the design of cognitive-inspired resource recommender systems.
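The activation equation referred to here combines, in standard ACT-R notation, a base-level term capturing frequency and recency of past use with an associative term for the current (semantic) context:

```latex
% Base-level activation of memory unit i after n uses at ages t_j,
% with power-law decay d (commonly d = 0.5):
B_i = \ln\!\left( \sum_{j=1}^{n} t_j^{-d} \right)
% Total activation adds associative strengths S_{ji} from context
% elements j, weighted by attentional weights W_j:
A_i = B_i + \sum_{j} W_j \, S_{ji}
```

Intuitively, a tag used often (large n) and recently (small t_j) gets a high base level, and the associative term boosts tags that fit the semantic context of the current resource.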
Breitfuß Gert, Kaiser Rene_DB, Kern Roman, Kowald Dominik, Lex Elisabeth, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2017
Proceedings of the Workshop Papers of i-Know 2017, co-located with International Conference on Knowledge Technologies and Data-Driven Business 2017 (i-Know 2017), Graz, Austria, October 11-12, 2017.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce an interactive user-driven tool that aims at supporting users in understanding, refining, and reorganizing documents on the fly as information needs evolve. Decisions regarding visual and interactive design aspects are tightly grounded on a conceptual model for exploratory search. In other words, the different views in the user interface address stages of awareness, exploration, and explanation unfolding along the discovery process, supported by a set of text-mining methods. A formal evaluation showed that gathering items relevant to a particular topic of interest with our tool incurs a lower cognitive load compared to a traditional ranked list. A second study reports on usage patterns and usability of the various interaction techniques within a free, unsupervised setting.
d'Aquin Mathieu, Adamou Alessandro, Dietze Stefan, Fetahu Besnik, Gadiraju Ujwal, Hasani-Mavriqi Ilire, Holz Peter, Kümmerle Joachim, Kowald Dominik, Lex Elisabeth, Lopez Sola Susana, Mataran Ricardo, Sabol Vedran, Troullinou Pinelopi, Veas Eduardo Enrique
2017
More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project. In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects that are required to enable this scenario, as well as the current state of the research in each aspect within the AFEL project. We conclude with a discussion of the ongoing plans from the project to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.
Ross-Hellauer Anthony, Deppe A., Schmidt B.
2017
Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken. This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR. The results of this cross-disciplinary survey, which received 3,062 full responses, show the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%). Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor. There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse. Overall satisfaction with the peer review system used by scholarly journals seems to strongly vary across disciplines. Taken together, these findings are very encouraging for OPR’s prospects for moving mainstream but indicate that due care must be taken to avoid a “one-size fits all” solution and to tailor such systems to differing (especially disciplinary) contexts. OPR is an evolving phenomenon and hence future studies are to be encouraged, especially to further explore differences between disciplines and monitor the evolution of attitudes.
Seifert Christin, Bailer Werner, Orgel Thomas, Gantner Louis, Kern Roman, Ziak Hermann, Petit Albin, Schlötterer Jörg, Zwicklbauer Stefan, Granitzer Michael
2017
The digitization initiatives in the past decades have led to a tremendous increase in digitized objects in the cultural heritage domain. Although digitally available, these objects are often not easily accessible for interested users because of the distributed allocation of the content in different repositories and the variety in data structure and standards. When users search for cultural content, they first need to identify the specific repository and then need to know how to search within this platform (e.g., usage of specific vocabulary). The goal of the EEXCESS project is to design and implement an infrastructure that enables ubiquitous access to digital cultural heritage content. Cultural content should be made available in the channels that users habitually visit and be tailored to their current context without the need to manually search multiple portals or content repositories. To realize this goal, open-source software components and services have been developed that can either be used as an integrated infrastructure or as modular components suitable to be integrated in other products and services. The EEXCESS modules and components comprise (i) Web-based context detection, (ii) information retrieval-based, federated content aggregation, (iii) metadata definition and mapping, and (iv) a component responsible for privacy preservation. Various applications have been realized based on these components that bring cultural content to the user in content consumption and content creation scenarios. For example, content consumption is realized by a browser extension generating automatic search queries from the current page context and the focus paragraph and presenting related results aggregated from different data providers. A Google Docs add-on allows retrieval of relevant content aggregated from multiple data providers while collaboratively writing a document. These relevant resources can then be included in the current document as a citation, an image, or a link (with preview) without disrupting the current writing task with an explicit search in the various content providers' portals.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever we gather or organize knowledge, the task of searching inevitably takes precedence. As exploration unfolds, it becomes cumbersome to reorganize resources along new interests, as any new search brings new results. Despite huge advances in retrieval and recommender systems from the algorithmic point of view, many real-world interfaces have remained largely unchanged: results appear in an infinite list ordered by relevance with respect to the current query. We introduce uRank, a user-driven visual tool for exploration and discovery of textual document recommendations. It includes a view summarizing the content of the recommendation set, combined with interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. We provide a formal experiment showing that uRank users can browse the document collection and efficiently gather items relevant to particular topics of interest with significantly lower cognitive load compared to traditional list-based representations.
Müller-Putz G. R., Ofner P., Schwarz Andreas, Pereira J., Luzhnica Granit, di Sciascio Maria Cecilia, Veas Eduardo Enrique, Stein Sebastian, Williamson John, Murray-Smith Roderick, Escolano C., Montesano L., Hessing B., Schneiders M., Rupp R.
2017
The aim of the MoreGrasp project is to develop a non-invasive, multimodal user interface including a brain-computer interface (BCI) for intuitive control of a grasp neuroprosthesis to support individuals with high spinal cord injury (SCI) in everyday activities. We describe the current state of the project, including the EEG system, preliminary results of natural movement decoding in people with SCI, the new electrode concept for the grasp neuroprosthesis, the shared control architecture behind the system and the implementation of a user-centered design.
Mohr Peter, Mandl David, Tatzgern Markus, Veas Eduardo Enrique, Schmalstieg Dieter, Kalkofen Denis
2017
A video tutorial effectively conveys complex motions, but may be hard to follow precisely because of its restriction to a predetermined viewpoint. Augmented reality (AR) tutorials have been demonstrated to be more effective. We bring the advantages of both together by interactively retargeting conventional, two-dimensional videos into three-dimensional AR tutorials. Unlike previous work, we do not simply overlay video, but synthesize 3D-registered motion from the video. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the experience. This approach applies to many styles of video tutorials. In this work, we concentrate on a class of tutorials which alter the surface of an object.
Guerra Jorge, Catania Carlos, Veas Eduardo Enrique
2017
This paper presents a graphical interface to identify hostile behavior in network logs. The problem of identifying and labeling hostile behavior is well known in the network security community. There is a lack of labeled datasets, which makes it difficult to deploy automated methods or to test the performance of manual ones. We describe the process of searching and identifying hostile behavior with a graphical tool derived from an open source Intrusion Prevention System, which graphically encodes features of network connections from a log file. A design study with two network security experts illustrates the workflow of searching for patterns descriptive of unwanted behavior and labeling occurrences therewith.
Veas Eduardo Enrique
2017
In our goal to personalize the discovery of scientific information, we built systems using visual analytics principles for exploration of textual documents [1]. The concept was extended to explore information quality of user generated content [2]. Our interfaces build upon a cognitive model, where awareness is a key step of exploration [3]. In education-related circles, a frequent concern is that people increasingly need to know how to search, and that knowing how to search leads to finding information efficiently. The ever-growing information overabundance right at our fingertips needs a natural skill to develop and refine search queries to get better search results, or does it? Exploratory search is an investigative behavior we adopt to build knowledge by iteratively selecting interesting features that lead to associations between representative items in the information space [4, 5]. Formulating queries was proven more complicated for humans than recognizing information visually [6]. Visual analytics takes the form of an open-ended dialog between the user and the underlying analytics algorithms operating on the data [7]. This talk describes studies on exploration and discovery with visual analytics interfaces that emphasize transparency and control features to trigger awareness. We will discuss the interface design and the studies of visual exploration behavior.
di Sciascio Maria Cecilia, Mayr Lukas, Veas Eduardo Enrique
2017
Knowledge work such as summarizing related research in preparation for writing, typically requires the extraction of useful information from scientific literature. Nowadays the primary source of information for researchers comes from electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet, the vast amount of resources makes retrieving only the most relevant results a difficult task. As a consequence, researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system, which combines a rich, interactive Web-based user interface and different visualization approaches. This system enables researchers to identify key phrases matching current information needs and spot potentially relevant literature within hierarchical document collections. The chosen context was the collection and summarization of related work in preparation for scientific writing, thus the system supports features such as bibliography and citation management, document metadata extraction and a text editor. This paper introduces the design rationale and components of the PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Ross-Hellauer Anthony
2017
Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with numerous overlapping and contradictory definitions. While for some the term refers to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods. Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions. These definitions are systematically analysed to build a coherent typology of the various innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking. Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature reviewed. Conclusions: I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.
Kowald Dominik, Lex Elisabeth
2017
In this paper, we study the imbalance between current state-of-the-art tag recommendation algorithms and the folksonomy structures of real-world social tagging systems. While algorithms such as FolkRank are designed for dense folksonomy structures, most social tagging systems exhibit a sparse nature. To overcome this imbalance, we show that cognitive-inspired algorithms, which model the tag vocabulary of a user in a cognitive-plausible way, can be helpful. Our present approach does this via implementing the activation equation of the cognitive architecture ACT-R, which determines the usefulness of units in human memory (e.g., tags). In this sense, our long-term research goal is to design hybrid recommendation approaches, which combine the advantages of both worlds in order to adapt to the current setting (i.e., sparse vs. dense ones).
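The ACT-R activation equation referenced in this abstract has a standard base-level form in the cognitive-modeling literature; the following is a minimal sketch of that base-level component applied to tags (the function name and simplified form are illustrative, not the authors' implementation):

```python
import math

def base_level_activation(usage_times, d=0.5):
    """Base-level activation from ACT-R: B_i = ln(sum_j t_j^(-d)).

    usage_times: ages of each past use of a memory unit (e.g. a tag);
    d: the decay parameter, conventionally 0.5. A simplified sketch of
    the equation the abstract refers to, not the paper's own code.
    """
    return math.log(sum(t ** (-d) for t in usage_times))

# A tag used recently and often is more activated (more likely to be
# reused) than a tag used once, long ago.
recent = base_level_activation([1.0, 5.0, 20.0])
stale = base_level_activation([500.0])
assert recent > stale
```

Frequency and recency of past tag uses thus jointly determine how useful the tag is predicted to be now, which is the property the recommender exploits.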
Luzhnica Granit, Veas Eduardo Enrique
2017
This paper investigates sensitivity based prioritisation in the construction of tactile patterns. Our evidence is obtained by three studies using a wearable haptic display with vibrotactile motors (tactors). Haptic displays intended to transmit symbols often suffer the tradeoff between throughput and accuracy. For a symbol encoded with more than one tactor simultaneous onsets (spatial encoding) yields the highest throughput at the expense of the accuracy. Sequential onset increases accuracy at the expense of throughput. In the desire to overcome these issues, we investigate aspects of prioritisation based on sensitivity applied to the encoding of haptics patterns. First, we investigate an encoding method using mixed intensities, where different body locations are simultaneously stimulated with different vibration intensities. We investigate whether prioritising the intensity based on sensitivity improves identification accuracy when compared to simple spatial encoding. Second, we investigate whether prioritising onset based on sensitivity affects the identification of overlapped spatiotemporal patterns. A user study shows that this method significantly increases the accuracy. Furthermore, in a third study, we identify three locations on the hand that lead to an accurate recall. Thereby, we design the layout of a haptic display equipped with eight tactors, capable of encoding 36 symbols with only one or two locations per symbol.
Luzhnica Granit, Veas Eduardo Enrique, Stein Sebastian, Pammer-Schindler Viktoria, Williamson John, Murray-Smith Roderick
2017
Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic and power model. We show that these generic models lead to limited decoding precision, and propose a method for model personalization adjusting to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), circular layouts performed particularly well, and personalisation through sensitivity adjustment increased decoding precision.
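Phantom sensations of the kind mentioned above are typically created by splitting one target intensity between two adjacent tactors. A minimal sketch of two common formulations from the phantom-sensation literature (the function name is ours, and these generic forms are assumptions, not the paper's personalized models):

```python
import math

def phantom_intensities(beta, a_v, model="linear"):
    """Split a desired phantom intensity a_v between two adjacent
    tactors for a virtual position beta in [0, 1].

    Two generic formulations (assumed from the literature):
      linear: a1 = (1 - beta) * a_v,     a2 = beta * a_v
      power:  a1 = sqrt(1 - beta) * a_v, a2 = sqrt(beta) * a_v
    """
    if model == "linear":
        return (1 - beta) * a_v, beta * a_v
    if model == "power":
        return math.sqrt(1 - beta) * a_v, math.sqrt(beta) * a_v
    raise ValueError(f"unknown model: {model}")

# Midway between the tactors both models split evenly, but the power
# model keeps the combined (energy-like) magnitude closer to a_v.
print(phantom_intensities(0.5, 1.0, "linear"))  # (0.5, 0.5)
```

The paper's contribution, per the abstract, is replacing such one-size-fits-all curves with per-user, per-location sensitivity adjustments.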
Dragoni Mauro, Federici Marco, Rexha Andi
2017
One of the most important opinion mining research directions falls in the extraction of polarities referring to specific entities (aspects) contained in the analyzed texts. The detection of such aspects may be very critical, especially when documents come from unknown domains. Indeed, while in some contexts it is possible to train domain-specific models for improving the effectiveness of aspect extraction algorithms, in others the most suitable solution is to apply unsupervised techniques by making such algorithms domain-independent. Moreover, an emerging need is to exploit the results of aspect-based analysis for triggering actions based on these data. This led to the necessity of providing solutions supporting both an effective analysis of user-generated content and an efficient and intuitive way of visualizing collected data. In this work, we implemented an opinion monitoring service comprising (i) a set of unsupervised strategies for aspect-based opinion mining together with (ii) a monitoring tool supporting users in visualizing analyzed data. The aspect extraction strategies are based on the use of semantic resources for performing the extraction of aspects from texts. The effectiveness of the platform has been tested on benchmarks provided by the SemEval campaign and compared with the results obtained by domain-adapted techniques.
Kern Roman, Falk Stefan, Rexha Andi
2017
This paper describes our participation in SemEval-2017 Task 10, named ScienceIE (Machine Reading for Scientist). We competed in Subtasks 1 and 2, which consist respectively in identifying all the key phrases in scientific publications and labeling them with one of three categories: Task, Process, and Material. These scientific publications are selected from the Computer Science, Material Sciences, and Physics domains. We followed a supervised approach for both subtasks by using a sequential classifier (CRF - Conditional Random Fields). For generating our solution we used a web-based application implemented in the EU-funded research project named CODE. Our system achieved an F1 score of 0.39 for Subtask 1 and 0.28 for Subtask 2.
Rexha Andi, Kern Roman, Ziak Hermann, Dragoni Mauro
2017
Retrieval of domain-specific documents became attractive for the Semantic Web community due to the possibility of integrating classic Information Retrieval (IR) techniques with semantic knowledge. Unfortunately, the gap between the construction of a full semantic search engine and the possibility of exploiting a repository of ontologies covering all possible domains is far from being filled. Recent solutions focused on the aggregation of different domain-specific repositories managed by third parties. In this paper, we present a semantic federated search engine developed in the context of the EEXCESS EU project. Through the developed platform, users are able to perform federated queries over repositories in a transparent way, i.e. without knowing how their original queries are transformed before being actually submitted. The platform implements a facility for plugging in new repositories and for creating, with the support of general-purpose knowledge bases, knowledge graphs describing the content of each connected repository. Such knowledge graphs are then exploited for enriching queries performed by users.
Schrunner Stefan, Bluder Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2017
In the semiconductor industry it is of paramount importance to check whether a manufactured device fulfills all quality specifications and is therefore suitable for being sold to the customer. The occurrence of specific spatial patterns within the so-called wafer test data, i.e. analog electric measurements, might point to production issues. However, the shape of these critical patterns is unknown. In this paper, different kinds of process patterns are extracted from wafer test data by an image processing approach using Markov Random Field models for image restoration. The goal is to develop an automated procedure to identify visible patterns in wafer test data to improve pattern matching. This step is a necessary precondition for a subsequent root-cause analysis of these patterns. The developed pattern extraction algorithm yields a more accurate discrimination between distinct patterns, resulting in an improved pattern comparison compared to the original dataset. In a next step, pattern classification will be applied to improve production process control.
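MRF-based image restoration, as named in this abstract, is often demonstrated with Iterated Conditional Modes on a binary Ising-style model; the sketch below is that generic textbook variant, not the paper's wafer-specific model (all parameter names are illustrative):

```python
def icm_restore(noisy, beta=1.0, eta=2.0, iters=5):
    """Restore a binary (+1/-1) image with an Ising-style Markov Random
    Field via Iterated Conditional Modes (ICM).

    The local energy favours (i) agreement with the observed noisy
    pixel (weight eta) and (ii) agreement with the 4-neighbourhood
    (weight beta); each pixel greedily takes the lower-energy label.
    """
    h, w = len(noisy), len(noisy[0])
    x = [row[:] for row in noisy]  # initialise with the observation
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nb = sum(x[a][b]
                         for a, b in ((i - 1, j), (i + 1, j),
                                      (i, j - 1), (i, j + 1))
                         if 0 <= a < h and 0 <= b < w)
                x[i][j] = 1 if eta * noisy[i][j] + beta * nb >= 0 else -1
    return x

# A single flipped pixel in a uniform patch is smoothed away, because
# neighbourhood agreement outweighs the noisy observation.
restored = icm_restore([[1, 1, 1], [1, -1, 1], [1, 1, 1]])
assert restored == [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

On wafer test data the same idea separates coherent spatial patterns from measurement noise before pattern comparison.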
Lindstaedt Stefanie , Czech Paul, Fessl Angela
2017
A Lifecycle Approach to Knowledge Excellence various industries and use cases. Through their cognitive computing-based approach, which combines the strength of man and the machine, they are setting standards within both the local and the international research community. With their expertise in the field of knowledge management they are describing the basic approaches in this chapter.
Tschinkel Gerwald, Sabol Vedran
2017
When using classical search engines, researchers are often confronted with a number of results far beyond what they can realistically manage to read; when this happens, recommender systems can help, by pointing users to the most valuable sources of information. In the course of a long-term research project, research into one area can extend over several days, weeks, or even months. Interruptions are unavoidable, and, when multiple team members have to discuss the status of a project, it is important to be able to communicate the current research status easily and accurately. Multiple type-specific interactive views can help users identify the results most relevant to their focus of interest. Our recommendation dashboard uses micro-filter visualizations intended to improve the experience of working with multiple active filters, allowing researchers to maintain an overview of their progress. Within this paper, we evaluate whether micro-visualizations help to increase the memorability and readability of active filters in comparison to textual filters. Five tasks, quantitative and qualitative questions, and a separate view on the different visualization types enabled us to gain insights into how micro-visualizations behave; these insights are discussed throughout the paper.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph
2017
In today's digital age, with an increasing number of websites, social/learning platforms, and different computer-mediated communication systems, finding valuable information is a challenging and tedious task, regardless of a person's discipline. However, visualizations have been shown to be effective in dealing with huge datasets: because they are grounded in visual cognition, people understand them and can naturally perform visual operations such as clustering, filtering and comparing quantities. But creating appropriate visual representations of data is also challenging: it requires domain knowledge, understanding of the data, and knowledge about task and user preferences. To tackle this issue, we have developed a recommender system that (i) generates visualizations based on a set of visual cognition rules/guidelines, and (ii) filters a subset considering user preferences. A user places interest in several aspects of a visualization: the task or problem it helps to solve, the operations it permits, or the features of the dataset it represents. This paper concentrates on characterizing user preferences, in particular: (i) the sources of information used to describe the visualizations (the content descriptors), and (ii) the methods to produce the most suitable recommendations thereby. We consider three sources corresponding to different aspects of interest: a title that describes the chart, a question that can be answered with the chart (and the answer), and a collection of tags describing features of the chart. We investigate user-provided input based on these sources, collected in a crowd-sourced study. Firstly, information-theoretic measures are applied to each source to determine the efficiency of the input in describing user preferences and visualization contents (user and item models). Secondly, the practicability of each input is evaluated with a content-based recommender system. The overall methodology and results contribute methods for the design and analysis of visual recommender systems. The findings in this paper highlight the inputs which can (i) effectively encode the content of the visualizations and the user's visual preferences/interests, and (ii) are more valuable for recommending personalized visualizations.
Seitlinger Paul, Ley Tobias, Kowald Dominik, Theiler Dieter, Hasani-Mavriqi Ilire, Dennerlein Sebastian, Lex Elisabeth, Albert D.
2017
Creative group work can be supported by collaborative search and annotation of Web resources. In this setting, it is important to help individuals both stay fluent in generating ideas of what to search next (i.e., maintain ideational fluency) and stay consistent in annotating resources (i.e., maintain organization). Based on a model of human memory, we hypothesize that sharing search results with other users, such as through bookmarks and social tags, prompts search processes in memory, which increase ideational fluency, but decrease the consistency of annotations, e.g., the reuse of tags for topically similar resources. To balance this tradeoff, we suggest the tag recommender SoMe, which is designed to simulate search of memory from user-specific tag-topic associations. An experimental field study (N = 18) in a workplace context finds evidence of the expected tradeoff and an advantage of SoMe over a conventional recommender in the collaborative setting. We conclude that sharing search results supports group creativity by increasing the ideational fluency, and that SoMe helps balancing the evidenced fluency-consistency tradeoff.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2017
In our research, we explore representing the state of production machines using a new nature metaphor, called BioIoT. The underlying rationale is to represent relevant information in an agreeable manner and to increase machines' appeal to operators. In this paper we describe a study with twelve participants in which sensory information of a coffee machine is encoded in a virtual tree. All participants considered the interaction with the BioIoT pleasant, and most reported feeling more inclined to perform machine maintenance, to take “care” of the machine, than with a classic state representation. The study highlights personalization, intelligibility vs. representational power, limits of the metaphor, and immersive visualization as directions for follow-up research.
Kaiser Rene_DB, Meixner Britta, Jäger Joscha
2017
Enabling interactive access to multimedia content and evaluating content-consumption behaviors and experiences involve several different research areas, which are covered at many different conferences. For four years, the Workshop on Interactive Content Consumption (WSICC) series offered a forum for combining interdisciplinary, comprehensive views, inspiring new discussions related to interactive multimedia. Here, the authors reflect on the outcome of the series.
Meixner Britta, Kaiser Rene_DB, Jäger Joscha, Ooi Wei Tsang, Kosch Harald
2017
(journal special issue)
Cemernek David, Gursch Heimo, Kern Roman
2017
The catchphrase “Industry 4.0” is widely regarded as a methodology for succeeding in modern manufacturing. This paper provides an overview of the history, technologies and concepts of Industry 4.0. One of the biggest challenges to implementing the Industry 4.0 paradigms in manufacturing are the heterogeneity of system landscapes and integrating data from various sources, such as different suppliers and different data formats. These issues have been addressed in the semiconductor industry since the early 1980s and some solutions have become well-established standards. Hence, the semiconductor industry can provide guidelines for a transition towards Industry 4.0 in other manufacturing domains. In this work, the methodologies of Industry 4.0, cyber-physical systems and Big data processes are discussed. Based on a thorough literature review and experiences from the semiconductor industry, we offer implementation recommendations for Industry 4.0 using the manufacturing process of an electronics manufacturer as an example.
Shao Lin, Silva Nelson, Schreck Tobias, Eggeling Eva
2017
The Scatter Plot Matrix (SPLOM) is a well-known technique for visual analysis of high-dimensional data. However, one problem of large SPLOMs is that typically not all views are potentially relevant to a given analysis task or user. The matrix itself may contain structured patterns across the dimensions, which could interfere with the investigation for unexplored views. We introduce a new concept and prototype implementation for an interactive recommender system supporting the exploration of large SPLOMs based on indirectly obtained user feedback from user eye tracking. Our system records the patterns that are currently under exploration based on gaze times, recommending areas of the SPLOM containing potentially new, unseen patterns for successive exploration. We use an image-based dissimilarity measure to recommend patterns that are visually dissimilar to previously seen ones, to guide the exploration in large SPLOMs. The dynamic exploration process is visualized by an analysis provenance heatmap, which captures the duration on explored and recommended SPLOM areas. We demonstrate our exploration process by a user experiment, showing the indirectly controlled recommender system achieves higher pattern recall as compared to fully interactive navigation using mouse operations.
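The image-based dissimilarity measure above is not specified in the abstract; one simple stand-in for comparing rendered SPLOM cells is an L1 distance between grey-level histograms (all names and the choice of measure here are illustrative assumptions):

```python
def histogram(img, bins=8):
    """Normalised grey-level histogram of an image given as rows of
    pixel values in [0, 1); bins sum to 1."""
    counts = [0] * bins
    n = 0
    for row in img:
        for v in row:
            counts[min(int(v * bins), bins - 1)] += 1
            n += 1
    return [c / n for c in counts]

def dissimilarity(img_a, img_b, bins=8):
    """L1 distance between grey-level histograms: 0 for identical
    distributions, up to 2 for fully disjoint ones. A generic stand-in
    for the paper's (unspecified) image-based dissimilarity measure."""
    ha, hb = histogram(img_a, bins), histogram(img_b, bins)
    return sum(abs(a - b) for a, b in zip(ha, hb))

# Identical patches score 0; an all-dark vs. all-bright patch scores 2.
assert dissimilarity([[0.1, 0.1]], [[0.1, 0.1]]) == 0.0
```

A recommender of the kind described would then rank unexplored SPLOM cells by their dissimilarity to the set of cells already fixated.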
Gursch Heimo, Cemernek David, Kern Roman
2017
In manufacturing environments today, automated machinery works alongside human workers. In many cases computers and humans oversee different aspects of the same manufacturing steps, sub-processes, and processes. This paper identifies and describes four feedback loops in manufacturing and organises them in terms of their time horizon and degree of automation versus human involvement. The data flow in the feedback loops is further characterised by features commonly associated with Big Data. Velocity, volume, variety, and veracity are used to establish, describe and compare differences in the data flows.
Hasitschka Peter, Sabol Vedran, Thalmann Stefan
2017
Industry 4.0 describes the digitization and the interlinking of companies working together in a supply chain [1]. Thereby, the digitization and the interlinking do not only affect the machines and IT infrastructure; the employees are affected as well [3]. The employees have to acquire more and more complex knowledge within a shorter period of time. To cope with this challenge, learning needs to be integrated into daily work practices, while the learning communities should map the organizational production networks [2]. Such learning networks support knowledge exchange and joint problem solving together with all involved parties [4]. However, in such communities not all involved actors are known, and hence support to find the right learning material and peers is needed. Nowadays, many different learning environments are used in industry. Their complexity makes it hard to understand whether a system provides an optimal learning environment. The large number of learning resources, learners and their activities makes it hard to identify potential problems inside a learning environment. Since the human visual system provides enormous power for discovering patterns from data displayed using a suitable visual representation [5], visualizing such a learning environment could provide deeper insights into its structure and the activities of the learners. Our goal is to provide a visual framework supporting the analysis of communities that arise in a learning environment. Such analysis may lead to the discovery of information that helps to improve the learning environment and the users' learning success.
Geiger Manfred, Waizenegger Lena, Treasure-Jones Tamsin, Sarigianni Christina, Maier Ronald, Thalmann Stefan, Remus Ulrich
2017
Research on information system (IS) adoption and resistance has accumulated substantial theoretical and managerial knowledge. Surprisingly, the paradox that end users support and at the same time resist use of an IS has received relatively little attention. The investigation of this puzzle, however, is important to complement our understanding of resistant behaviours and consequently to strengthen the explanatory power of extant theoretical constructs on IS resistance. We investigate an IS project within the healthcare ...
Thalmann Stefan, Thiele Janna, Manhart Markus, Virnes Marjo
2017
This study explored the application scenarios of a mobile app called Ach So! for workplace learning of construction work apprentices. The mobile application was used for piloting new technology-enhanced learning practices in vocational apprenticeship training at construction sites in Finland and in a training center in Germany. Semi-structured focus group interviews were conducted after the pilot test periods. The interview data served as the data source for the concept-driven framework analysis that employed theoretical ...
Thalmann Stefan, Larrazábal Jorge, Pammer-Schindler Viktoria, Kreuzthaler Armin, Fessl Angela
2017
In times of globalization, the workforce also needs to be able to go global. This holds true especially for technical experts holding an exclusive expertise. Together with a global manufacturing company, we addressed the challenge of being able to send staff into foreign countries for managing technical projects in the foreign language. We developed a language learning concept that combines a language learning platform with conventional individual but virtually conducted coaching sessions. In our use case, we developed this ...
Thalmann Stefan, Pammer-Schindler Viktoria
2017
Current studies show, on the one hand, that humans continue to play a central role in industry. On the other hand, it is also clear that the number of employees directly involved in production will decrease. The change will be such that humans will handle fewer uniform processes, but will instead have to adapt to rapidly changing work tasks and control individualized manufacturing processes. However, the reduction in staff also entails a reduction in redundancy. As a result, more responsibility is transferred to the individual, so wrong decisions have greater consequences and thus also imply a higher risk. The success of an Industry 4.0 campaign will therefore depend essentially on the adaptability of the employees.
Pammer-Schindler Viktoria, Fessl Angela, Weghofer Franz, Thalmann Stefan
2017
The digitalization of industry is currently viewed largely from a technological perspective. However, this changed work environment also poses manifold challenges for people, mainly concerning the learning of the knowledge required.
Stabauer Petra, Breitfuß Gert, Lassnig Markus
2017
Nowadays digitalization is on everyone's mind and affecting all areas of life. The rapid development of information technology and the increasing pervasiveness of digitalization represent new challenges to the business world. The emergence of the so-called fourth industrial revolution and the Internet of Things (IoT) confronts existing firms with changes in numerous aspects of doing business. Information and communication technologies are not only changing production processes through increasing automation; digitalization can also affect products and services themselves. This could lead to major changes in a company's value chain and, as a consequence, affects the company's business model. In the age of digitalization, it is no longer sufficient to change single aspects of a firm's business strategy; the business model itself needs to be the subject of innovation. This paper presents how digitalization affects the business models of well-established companies in Austria. The results are demonstrated by means of two best practice case studies. The case studies were identified within an empirical research study funded by the Austrian Ministry for Transport, Innovation and Technology (BMVIT). The selected best practice cases present how digitalization affects a firm's business model and demonstrate the transformation of the value creation process while simultaneously contributing to sustainable development.
de Reuver Mark, Tarkus Astrid, Haaker Timber, Breitfuß Gert, Roelfsema Melissa, Kosman Ruud, Heikkilä Marikka
2017
In this paper, we present two design cycles for an online platform with ICT-enabled tooling that supports business model innovation by SMEs. The platform connects the needs of the SMEs regarding BMI with tools that can help to solve those needs and questions. The needs are derived from our earlier case study work (Heikkilä et al. 2016), showing typical BMI patterns of the SMEs needs - labelled as ‘I want to’s - about what an entrepreneur wants to achieve with business model innovation. The platform provides sets of integrated tools that can answer the typical ‘I want to’ questions that SMEs have with innovating their business models.
Pammer-Schindler Viktoria, Fessl Angela, Wiese Michael, Thalmann Stefan
2017
Financial auditors routinely search internal as well as public knowledge bases as part of the auditing process. Efficient search strategies are crucial for knowledge workers in general and for auditors in particular. Modern search technology evolves quickly, and features beyond keyword search, such as faceted search or visual overviews of knowledge bases like graph visualisations, emerge. It is therefore desirable for auditors to learn about new innovations and to explore and experiment with such technologies. In this paper, we present a reflection intervention concept that intends to nudge auditors to reflect on their search behaviour and to trigger informal learning in terms of trying out new or less frequently used search features. The reflection intervention concept has been tested in a focus group with six auditors using a mockup. Foremost, the discussion centred on the timing of reflection interventions and on how to raise motivation to achieve a change in search behaviour.
Pammer-Schindler Viktoria, Fessl Angela, Wesiak Gudrun, Feyertag Sandra, Rivera-Pelayo Verónica
2017
This paper presents a concept for in-app reflection guidance and its evaluation in four work-related field trials. By synthesizing across four field trials, we can show that computer-based reflection guidance can function in the workplace, in the sense of being accepted as technology, being perceived as useful and leading to reflective learning. This is encouraging for all endeavours aiming to transfer existing knowledge on reflection supportive technology from educational settings to the workplace. However, reflective learning in our studies was mostly visible to limited depth in textual entries made in the applications themselves; and proactive reflection guidance technology like prompts were often found to be disruptive. We offer these two issues as highly relevant questions for future research.
Pammer-Schindler Viktoria, Rivera-Pelayo Verónica, Fessl Angela, Müller Lars
2017
The benefits of self-tracking have been thoroughly investigated in private areas of life, like health or sustainable living, but less attention has been given to the impact and benefits of self-tracking in work-related settings. Through two field studies, we introduced and evaluated a mood self-tracking application in two call centers to investigate the role of mood self-tracking at work, as well as its impact on individuals and teams. Our studies indicate that mood self-tracking is accepted and can improve performance if the application is well integrated into the work processes and matches the management style. The results show that (i) capturing moods and explicitly relating them to work tasks facilitated reflection, (ii) mood self-tracking increased emotional awareness and this improved cohesion within teams, and (iii) proactive reactions by managers to trends and changes in team members’ mood were key for acceptance of reflection and correlated with measured improvements in work performance. These findings help to better understand the role and potential of self-tracking in work settings and further provide insights that guide future researchers and practitioners to design and introduce these tools in a workplace setting.
Ginthör Robert, Lamb Reinhold, Koinegg Johann
2017
Data are the raw material and the basis for many companies and for their future economic success in industry. This Radar issue follows up on the previously published Radar issues "Dienstleistungsinnovationen" and "Digitalisierte Maschinen und Anlagen" and examines the technical possibilities and future developments of data-driven business in the context of the Green Tech Industries. Driven by advancing digitalization, the supply of structured and unstructured data is growing rapidly across the different sectors of the economy. In this context, both internal and external data of different origins must be centrally captured, validated, combined, and analyzed in order to generate new insights and applications for a data-driven business.
Stern Hermann, Dennerlein Sebastian, Pammer-Schindler Viktoria, Ginthör Robert, Breitfuß Gert
2017
To specify the current understanding of business models in the realm of Big Data, we used a qualitative approach analysing 25 Big Data projects spread over the domains of Retail, Energy, Production, and Life Sciences, and various company types (SME, group, start-up, etc.). All projects have been conducted in the last two years at Austria’s competence center for Data-driven Business and Big Data Analytics, the Know-Center.
Reiter-Haas Markus, Slawicek Valentin, Lacic Emanuel
2017
Topps David, Dennerlein Sebastian, Treasure-Jones Tamsin
2017
There is increasing interest in Barcamps and Unconferences as an educational approach during traditional medical education conferences. Our group has now accumulated extensive experience in these formats over a number of years in different educational venues. We present a summary of observations and lessons learned about what works and what doesn't.
Ruiz-Calleja Adolfo, Prieto Luis Pablo, Jesús Rodríguez Triana María , Dennerlein Sebastian, Ley Tobias
2017
Despite the ubiquity of learning in the everyday life of most workplaces, the learning analytics community has only recently paid attention to such settings. One probable reason for this oversight is the fact that learning in the workplace is often informal, hard to grasp and not univocally defined. This paper summarizes the state of the art of Workplace Learning Analytics (WPLA), extracted from a systematic literature review of five academic databases as well as other known sources in the WPLA community. Our analysis of existing proposals discusses in particular the role of different conceptions of learning and their influence on the LA proposals' design and technology choices. We end the paper by discussing opportunities for future work in this emergent field.
Wilsdon James , Bar-Ilan Judit, Frodemann Robert, Lex Elisabeth, Peters Isabella , Wouters Paul
2017
Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2017
Recommender systems are acknowledged as an essential instrument to support users in finding relevant information. However, the adaptation of recommender systems to multiple domain-specific requirements and data models still remains an open challenge. In the present paper, we contribute to this sparse line of research with guidance on how to design a customizable recommender system that accounts for multiple domains with heterogeneous data. Using concrete showcase examples, we demonstrate how to set up a multi-domain system on the item and system level, and we report evaluation results for the domains of (i) LastFM, (ii) FourSquare, and (iii) MovieLens. We believe that our findings and guidelines can support developers and researchers of recommender systems to easily adapt and deploy a recommender system in distributed environments, as well as to develop and evaluate algorithms suited for multi-domain settings.
Kowald Dominik, Kopeinik Simone , Lex Elisabeth
2017
Recommender systems have become important tools to support users in identifying relevant content in an overloaded information space. To ease the development of recommender systems, a number of recommender frameworks have been proposed that serve a wide range of application domains. Our TagRec framework is one of the few examples of an open-source framework tailored towards developing and evaluating tag-based recommender systems. In this paper, we present the current, updated state of TagRec, and we summarize and reflect on four use cases that have been implemented with TagRec: (i) tag recommendations, (ii) resource recommendations, (iii) recommendation evaluation, and (iv) hashtag recommendations. To date, TagRec served the development and/or evaluation process of tag-based recommender systems in two large scale European research projects, which have been described in 17 research papers. Thus, we believe that this work is of interest for both researchers and practitioners of tag-based recommender systems.
Görögh Edit, Vignoli Michela, Gauch Stephan, Blümel Clemens, Kraker Peter, Hasani-Mavriqi Ilire, Luzi Daniela , Walker Mappet, Toli Eleni, Sifacaki Electra
2017
The growing dissatisfaction with the traditional scholarly communication process and publishing practices as well as increasing usage and acceptance of ICT and Web 2.0 technologies in research have resulted in the proliferation of alternative review, publishing and bibliometric methods. The EU-funded project OpenUP addresses key aspects and challenges of the currently transforming science landscape and aspires to come up with a cohesive framework for the review-disseminate-assess phases of the research life cycle that is fit to support and promote open science. The objective of this paper is to present first results and conclusions of the landscape scan and analysis of alternative peer review, altmetrics and innovative dissemination methods done during the first project year.
Kraker Peter, Enkhbayar Asuraa, Schramm Maxi, Kittel Christopher, Chamberlain Scott, Skaug Mike , Brembs Björn
2017
Görögh Edit, Toli Eleni, Kraker Peter
2017
Kopeinik Simone, Lex Elisabeth, Seitlinger Paul, Ley Tobias, Albert Dietrich
2017
In online social learning environments, tagging has demonstrated its potential to facilitate search, to improve recommendations and to foster reflection and learning. Studies have shown that shared understanding needs to be established in the group as a prerequisite for learning. We hypothesise that this can be fostered through tag recommendation strategies that contribute to semantic stabilization. In this study, we investigate the application of two tag recommenders that are inspired by models of human memory: (i) the base-level learning equation BLL and (ii) Minerva. BLL models the frequency and recency of tag use while Minerva is based on frequency of tag use and semantic context. We test the impact of both tag recommenders on semantic stabilization in an online study with 56 students completing a group-based inquiry learning project in school. We find that displaying tags from other group members contributes significantly to semantic stabilization in the group, as compared to a strategy where tags from the students' individual vocabularies are used. Testing for the accuracy of the different recommenders revealed that algorithms using frequency counts such as BLL performed better when individual tags were recommended. When group tags were recommended, the Minerva algorithm performed better. We conclude that tag recommenders, exposing learners to each other's tag choices by simulating search processes on learners' semantic memory structures, show potential to support semantic stabilization and thus, inquiry-based learning in groups.
Kowald Dominik, Pujari Suhbash Chandra, Lex Elisabeth
2017
Hashtags have become a powerful tool in social platforms such as Twitter to categorize and search for content, and to spread short messages across members of the social network. In this paper, we study temporal hashtag usage practices in Twitter with the aim of designing a cognitive-inspired hashtag recommendation algorithm we call BLLI,S. Our main idea is to incorporate the effect of time on (i) individual hashtag reuse (i.e., reusing own hashtags), and (ii) social hashtag reuse (i.e., reusing hashtags, which have been previously used by a followee) into a predictive model. For this, we turn to the Base-Level Learning (BLL) equation from the cognitive architecture ACT-R, which accounts for the time-dependent decay of item exposure in human memory. We validate BLLI,S using two crawled Twitter datasets in two evaluation scenarios. Firstly, only temporal usage patterns of past hashtag assignments are utilized, and secondly, these patterns are combined with a content-based analysis of the current tweet. In both evaluation scenarios, we find not only that temporal effects play an important role for both individual and social hashtag reuse but also that our BLLI,S approach provides significantly better prediction accuracy and ranking results than current state-of-the-art hashtag recommendation methods.
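The BLL equation from ACT-R that this abstract builds on is documented in the cognitive-modeling literature as B_i = ln(Σ_j t_j^{-d}), where the t_j are the times elapsed since past uses of item i and d is a power-law decay parameter (0.5 is the commonly cited default). A minimal sketch of how such time-decayed reuse scoring could rank a user's own hashtags; the function name, data, and timestamps are illustrative, not taken from the paper:

```python
import math
import time

def bll_activation(use_timestamps, now=None, d=0.5):
    """Base-Level Learning (BLL) equation from ACT-R:
    B_i = ln( sum_j (now - t_j)^(-d) )
    t_j: past usage timestamps of an item (e.g. a hashtag),
    d:   power-law decay parameter (0.5 is the usual default)."""
    now = now if now is not None else time.time()
    activation = sum((now - t) ** (-d) for t in use_timestamps if t < now)
    return math.log(activation) if activation > 0 else float("-inf")

# Rank a user's own hashtags by time-decayed reuse likelihood
# (toy data: timestamps in arbitrary units, "now" fixed at 1000.0)
history = {
    "bigdata": [100.0, 800.0, 950.0],  # used often, some uses recent
    "tbt":     [10.0],                 # used once, long ago
}
ranked = sorted(history, key=lambda h: bll_activation(history[h], now=1000.0),
                reverse=True)
# recent, frequent hashtags rank first
```

Because of the power-law decay, a single recent use can outweigh several old ones, which is the recency effect the abstract refers to.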
Traub Matthias, Gursch Heimo, Lex Elisabeth, Kern Roman
2017
New business opportunities in the digital economy are established when datasets describing a problem, data services solving the said problem, the required expertise and infrastructure come together. For most real-world problems, finding the right data sources, data services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform to bring datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender systems included in DMA analyse all offerings to derive suggestions for collaboration between them, such as which dataset could be best processed by which data service. The suggestions should help the customers of DMA to identify new collaborations reaching beyond traditional industry boundaries and to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system to set up data value chains, matching different offers to create a data value chain solving problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial and research domain, overcoming traditional domain boundaries.
Trattner Christoph, Elsweiler David
2017
Food recommenders have the potential to positively influence the eating habits of users. To achieve this, however, we need to understand how healthy recommendations are and the factors which influence this. Focusing on two approaches from the literature (single item and daily meal plan recommendation) and utilizing a large Internet sourced dataset from Allrecipes.com, we show how algorithmic solutions relate to the healthiness of the underlying recipe collection. First, we analyze the healthiness of Allrecipes.com recipes using nutritional standards from the World Health Organisation and the United Kingdom Food Standards Agency. Second, we investigate user interaction patterns and how these relate to the healthiness of recipes. Third, we experiment with both recommendation approaches. Our results indicate that overall the recipes in the collection are quite unhealthy, but this varies across categories on the website. Users in general tend to interact most often with the least healthy recipes. Recommender algorithms tend to score popular items highly and thus on average promote unhealthy items. This can be tempered, however, with simple post-filtering approaches, which we show by experiment are better suited to some algorithms than others. Similarly, we show that the generation of meal plans can dramatically increase the number of healthy options open to users. One of the main findings is, nevertheless, that the utility of both approaches is strongly restricted by the recipe collection. Based on our findings we draw conclusions on how researchers should attempt to make food recommendation systems promote healthy nutrition.
Ziak Hermann, Kern Roman
2017
The combination of different knowledge bases in the field of information retrieval is called federated or aggregated search. It has several benefits over single source retrieval but poses some challenges as well. This work focuses on the challenge of result aggregation, especially in a setting where the final result list should include a certain degree of diversity and serendipity. Both concepts have been shown to have an impact on how users perceive an information retrieval system. In particular, we want to assess if common procedures for result list aggregation can be utilized to introduce diversity and serendipity. Furthermore, we study whether blocking or interleaving for result aggregation yields better results. In a cross vertical aggregated search the so-called verticals could be news, multimedia content or text. Block ranking is one approach to combine such heterogeneous results. It relies on the idea that these verticals are combined into a single result list as blocks of several adjacent items. An alternative approach for this is interleaving. Here the verticals are blended into one result list on an item by item basis, i.e. adjacent items in the result list may come from different verticals. To generate the diverse and serendipitous results we relied on a query reformulation technique which we showed to be beneficial for generating diversified results in previous work. To conduct this evaluation we created a dedicated dataset. This dataset served as a basis for three different evaluation settings on a crowd sourcing platform, with over 300 participants. Our results show that query based diversification can be adapted to generate serendipitous results in a similar manner. Further, we discovered that both approaches, interleaving and block ranking, appear to be beneficial to introduce diversity and serendipity. Though it seems that queries either benefit from one approach or the other but not from both.
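The two aggregation strategies contrasted above can be sketched in a few lines. This is a minimal illustration of the general idea, not the evaluated system; the vertical names and items are made up:

```python
def block_ranking(verticals):
    """Block ranking: append each vertical's ranked results as one
    contiguous block, so adjacent items come from the same vertical."""
    merged = []
    for results in verticals:
        merged.extend(results)
    return merged

def interleave(verticals):
    """Interleaving: round-robin one item at a time across verticals,
    so adjacent items may come from different verticals."""
    merged = []
    iters = [iter(v) for v in verticals]
    while iters:
        for it in list(iters):  # copy, since we remove exhausted iterators
            try:
                merged.append(next(it))
            except StopIteration:
                iters.remove(it)
    return merged

# Toy verticals (e.g. news and multimedia results for the same query)
news = ["n1", "n2"]
video = ["v1", "v2", "v3"]
blocked = block_ranking([news, video])      # ['n1','n2','v1','v2','v3']
interleaved = interleave([news, video])     # ['n1','v1','n2','v2','v3']
```

Real systems additionally decide block order and slot positions from per-vertical relevance scores; the sketch only shows the structural difference between the two merge shapes.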
Toller Maximilian, Kern Roman
2017
The in-depth analysis of time series has gained a lot of research interest in recent years, with the identification of periodic patterns being one important aspect. Many of the methods for identifying periodic patterns require the time series' season length as an input parameter. There exist only a few algorithms for automatic season length approximation. Many of these rely on simplifications such as data discretization. This paper presents an algorithm for season length detection that is designed to be sufficiently reliable to be used in practical applications. The algorithm estimates a time series' season length by interpolating, filtering and detrending the data. This is followed by analyzing the distances between zeros in the directly corresponding autocorrelation function. Our algorithm was tested against a comparable algorithm and outperformed it by passing 122 out of 165 tests, while the existing algorithm passed 83 tests. The robustness of our method can be jointly attributed to both the algorithmic approach and also to design decisions taken at the implementational level.
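The core idea, measuring distances between zeros of the autocorrelation function of the detrended series, can be illustrated as follows. This is a rough sketch of that general approach, not the authors' exact algorithm (which also interpolates and filters the data):

```python
import numpy as np

def estimate_season_length(series):
    """Illustrative autocorrelation-based season length estimate:
    1. linearly detrend the series,
    2. compute and normalize the autocorrelation function (ACF),
    3. measure spacings between the ACF's zero crossings; for a
       periodic signal, zeros are about half a period apart."""
    x = np.asarray(series, dtype=float)
    idx = np.arange(len(x))
    x = x - np.polyval(np.polyfit(idx, x, 1), idx)  # remove linear trend
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]  # normalize so acf[0] == 1
    sign = np.signbit(acf).astype(np.int8)
    zero_crossings = np.where(np.diff(sign) != 0)[0]
    if len(zero_crossings) < 2:
        return None  # no periodicity detectable
    half_period = np.median(np.diff(zero_crossings))
    return int(round(2 * half_period))

# A noisy sine with period 50 should yield an estimate near 50
t = np.arange(500)
noisy = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.RandomState(0).randn(500)
season = estimate_season_length(noisy)
```

Using the median spacing makes the estimate robust against occasional spurious crossings caused by noise at large lags.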
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
Our work is motivated by the idea to extend the retrieval of related scientific literature to cases where the relatedness also incorporates the writing style of individual scientific authors. We therefore conducted a pilot study to answer the question whether humans can identify authorship once the topological clues have been removed. As a first result, we found that this task is challenging, even for humans. We also found some agreement between the annotators. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis. Here, we compared the decisions against a number of topological and stylometric features. The outcome of our work should help to improve automatic authorship identification algorithms and to shape potential follow-up studies.
Santos Tiago, Walk Simon, Helic Denis
2017
Modeling activity in online collaboration websites, such as StackExchange Question and Answering portals, is becoming increasingly important, as the success of these websites critically depends on the content contributed by its users. In this paper, we represent user activity as time series and perform an initial analysis of these time series to obtain a better understanding of the underlying mechanisms that govern their creation. In particular, we are interested in identifying latent nonlinear behavior in online user activity as opposed to a simpler linear operating mode. To that end, we apply a set of statistical tests for nonlinearity as a means to characterize activity time series derived from 16 different online collaboration websites. We validate our approach by comparing activity forecast performance from linear and nonlinear models, and study the underlying dynamical systems we derive with nonlinear time series analysis. Our results show that nonlinear characterizations of activity time series help to (i) improve our understanding of activity dynamics in online collaboration websites, and (ii) increase the accuracy of forecasting experiments.
Strohmaier David, di Sciascio Maria Cecilia, Errecalde Marcelo, Veas Eduardo Enrique
2017
Innovations in digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success, but also a hindrance to good quality: contributions can be of poor quality because everyone, even anonymous users, can participate. Though Wikipedia has defined guidelines as to what makes the perfect article, authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. However, little has been done to support quality assessment of user-generated content through interactive tools that allow for combining automatic methods and human intelligence. We developed WikiLyzer, a toolkit comprising three Web-based interactive graphic tools designed to assist (i) knowledge discovery experts in creating and testing metrics for quality measurement, (ii) users searching for good articles, and (iii) users that need to identify weaknesses to improve a particular article. A case study suggests that experts are able to create complex quality metrics with our tool, and a user study reports on its usefulness to identify high-quality content.
Ayris Paul, Berthou Jean-Yves, Bruce Rachel, Lindstaedt Stefanie , Monreale Anna, Mons Barend, Murayama Yasuhiro, Södegard Caj, Tochtermann Klaus, Wilkinson Ross
2016
The European Open Science Cloud (EOSC) aims to accelerate and support the current transition to more effective Open Science and Open Innovation in the Digital Single Market. It should enable trusted access to services, systems and the re-use of shared scientific data across disciplinary, social and geographical borders. This report approaches the EOSC as a federated environment for scientific data sharing and re-use, based on existing and emerging elements in the Member States, with light-weight international guidance and governance, and a large degree of freedom regarding practical implementation.
Lindstaedt Stefanie , Ley Tobias, Klamma Ralf, Wild Fridolin
2016
Recognizing the need for addressing the rather fragmented character of research in this field, we have held a workshop on learning analytics for workplace and professional learning at the Learning Analytics and Knowledge (LAK) Conference. The workshop has taken a broad perspective, encompassing approaches from a number of previous traditions, such as adaptive learning, professional online communities, workplace learning and performance analytics. Being co-located with the LAK conference has provided an ideal venue for addressing common challenges and for benefiting from the strong research on learning analytics in other sectors that LAK has established. Learning Analytics for Workplace and Professional Learning is now on the research agenda of several ongoing EU projects, and therefore a number of follow-up activities are planned for strengthening integration in this emerging field.
Rexha Andi, Kern Roman, Dragoni Mauro , Kröll Mark
2016
With different social media and commercial platforms, users express their opinions about products in textual form. Automatically extracting the polarity (i.e. whether the opinion is positive or negative) of a user can be useful for both actors: the online platform, which can incorporate the feedback to improve its product, as well as the client, who might get recommendations according to his or her preferences. Different approaches for tackling the problem have been suggested, mainly using syntactic features. The "Challenge on Semantic Sentiment Analysis" aims to go beyond word-level analysis by using semantic information. In this paper we propose a novel approach by employing the semantic information of the grammatical unit called preposition. We try to derive the target of the review from the summary information, which serves as an input to identify the proposition in it. Our implementation relies on the hypothesis that the proposition expressing the target of the summary usually contains the main polarity information.
Atzmüller Martin, Alvin Chin, Trattner Christoph
2016
Dennerlein Sebastian, Treasure-Jones Tamsin, Lex Elisabeth, Ley Tobias
2016
Background: Teamworking, within and acrosshealthcare organisations, is essential to deliverexcellent integrated care. Drawing upon an alternationof collaborative and cooperative phases, we exploredthis teamworking and respective technologicalsupport within UK Primary Care. Participants usedBits&Pieces (B&P), a sensemaking tool for tracedexperiences that allows sharing results and mutuallyelaborating them: i.e. cooperating and/orcollaborating.Summary of Work: We conducted a two month-longcase study involving six healthcare professionals. InB&P, they reviewed organizational processes, whichrequired the involvement of different professions ineither collaborative and/or cooperative manner. Weused system-usage data, interviews and qualitativeanalysis to understand the interplay of teamworkingpracticeand technology.Summary of Results: Within our analysis we mainlyidentified cooperation phases. In a f2f-meeting,professionals collaboratively identified subtasks andassigned individuals leading collaboration on them.However, these subtasks were undertaken asindividual sensemaking efforts and finally combined(i.e. cooperation). We found few examples ofreciprocal interpretation processes (i.e. collaboration):e.g. discussing problems during sensemaking ormonitoring other’s sensemaking-outcomes to makesuggestions.Discussion: These patterns suggest that collaborationin healthcare often helps to construct a minimalshared understanding (SU) of subtasks to engage incooperation, where individuals trust in other’scompetencies and autonomous completion. However,we also found that professionals with positivecollaboration history and deepened SU were willing toundertake subtasks collaboratively. 
It seems that acquiring such deepened SU of concepts and methods leads to benefits that motivate professionals to collaborate more. Conclusion: Healthcare is a challenging environment requiring interprofessional work across organisations. For effective teamwork, a deepened SU is crucial and both cooperation and collaboration are required. However, we found a tendency of staff to rely mainly on cooperation when working in teams and not fully explore the benefits of collaboration. Take Home Messages: To maximise the benefits of interprofessional working, tools for teamworking should support both cooperation and collaboration processes and scaffold the move between them.
Thalmann Stefan, Manhart Markus
2016
Organizations join networks to acquire external knowledge. This is especially important for SMEs, since they often lack resources and depend on external knowledge to achieve and sustain competitive advantage. However, finding the right balance between measures facilitating knowledge sharing and measures protecting knowledge is a challenge. Whilst sharing is the raison d’être of networks, neglecting knowledge protection can also be detrimental to the network, e.g., lead to one-sided skimming of knowledge. We identified four practices SMEs currently apply to balance the protection of competitive knowledge and knowledge sharing in the network: (a) share in subgroups with high trust, (b) share partial aspects of the knowledge base, (c) share with people with low proximities, and (d) share common knowledge and protect the crucial knowledge. We further found that the application of these practices depends on the maturity of the knowledge. Further, we discuss how the practices relate to organizational protection capabilities and how the network can provide IT to support the development of these capabilities.
Thalmann Stefan, Ilvonen Ilona, Manhart Markus , Sillaber Christian
2016
New ways of combining digital and physical innovations, as well as intensified inter-organizational collaborations, create new challenges to the protection of organizational knowledge. Existing research on knowledge protection is at an early stage and scattered among various research domains. This research-in-progress paper presents a plan for a structured literature review on knowledge protection, integrating the perspectives of the six base domains of knowledge, strategic, risk, intellectual property rights, innovation, and information technology security management. We define knowledge protection as a set of capabilities comprising and enforcing technical, organizational, and legal mechanisms to protect tacit and explicit knowledge necessary to generate or adopt innovations.
Cik Michael, Hebenstreit Cornelia, Horn Christopher, Schulze Gunnar, Traub Matthias, Schweighofer Erich, Hötzendorf Walter, Fellendorf Martin
2016
Guaranteeing safety during mega events has always played a role for organizers, their security guards and the action force. This work was realized to enhance safety at mega events and demonstrations without the necessity of fixed installations. Therefore, a low-cost monitoring system supporting the organization and safety personnel was developed, using cell phone data and social media data in combination with safety concepts to monitor safety during the event in real time. To provide the achieved results in real time to the event and safety personnel, an application for a Tablet-PC was established. Two representative events served as case studies to test and evaluate the results and to check the response and executability of the app on site. Because data privacy is increasingly important, legal experts were closely involved and provided legal support.
Ziak Hermann, Kern Roman
2016
This work documents our approach at the Social Book Search Lab 2016, where we took part in the suggestion track. The main goal of the track was to create book recommendations for readers based only on their stated request within a forum. The forum entry contained further contextual information, like the user's catalogue of already read books and the list of example books mentioned in the user's request. The presented approach is mainly based on the metadata included in the book catalogue provided by the organizers of the task. With the help of a dedicated search index we extracted several potential book recommendations, which were re-ranked by the use of an SVD-based approach. Although our results did not meet our expectations, we consider them a first iteration towards a competitive solution.
Luzhnica Granit, Öjeling Christoffer, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and a personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Silva Nelson, Shao Lin, Schreck Tobias, Eggeling Eva, Fellner Dieter W.
2016
We present a new open-source prototype framework to explore and visualize eye-tracking experiment data. Firstly, standard eye trackers are used to record raw eye gaze data-points in user experiments. Secondly, the analyst can configure gaze analysis parameters, such as the definition of areas of interest, multiple thresholds or the labeling of special areas, and we upload the data to a search server. Thirdly, a faceted web interface for exploring and visualizing the users’ eye gaze on a large number of areas of interest is available. Our framework integrates several common visualizations and it also includes new combined representations like an eye analysis overview and a clustered matrix that shows the attention time strength between multiple areas of interest. The framework can be readily used for the exploration of eye-tracking experiment data. We make available the source code of our prototype framework for eye-tracking data analysis.
Silva Nelson, Caldera Christian, Krispel Ulrich, Eggeling Eva, Sunk Alexander, Reisinger Gerhard, Sihn Wilfried, Fellner Dieter W.
2016
Value stream mapping is a lean management method for analyzing and optimizing a series of events for production or services. Even today, the first step in value stream analysis – the acquisition of the current state map – is still carried out using pen & paper by physically visiting the production line. We capture a digital representation of what manufacturing processes look like in reality. The manufacturing processes can be represented and efficiently analyzed for future production planning as a future state map by using a meta description together with a dependency graph. With VASCO we present a tool which contributes to all parts of value stream analysis – from data acquisition, over analysis, planning and comparison, up to the simulation of alternative future state maps. We call this a holistic approach to value stream mapping, including detailed analysis of lead time, productivity, space, distance, material disposal, energy and carbon dioxide equivalents – depending on changes in the calculated direct product costs.
Silva Nelson, Shao Lin, Schreck Tobias, Eggeling Eva, Fellner Dieter W.
2016
Effective visual exploration of large data sets is an important problem. A standard technique for mapping large data sets is to use hierarchical data representations (trees, or dendrograms) that users may navigate. If the data sets get large, so do the hierarchies, and effective methods for the navigation are required. Traditionally, users navigate visual representations using desktop interaction modalities, including mouse interaction. Motivated by the recent availability of low-cost eye-tracker systems, we investigate application possibilities to use eye-tracking for controlling the visual-interactive data exploration process. We implemented a proof-of-concept system for visual exploration of hierarchic data, exemplified by scatter plot diagrams which are to be explored for grouping and similarity relationships. The exploration includes usage of degree-of-interest based distortion controlled by user attention read from eye-movement behavior. We present the basic elements of our system, and give an illustrative use case discussion, outlining the application possibilities. We also identify interesting future developments based on the given data views and captured eye-tracking information.
Berndt Rene, Silva Nelson, Edtmayr Thomas, Sunk Alexander, Krispel Ulrich, Caldera Christian, Eggeling Eva, Fellner Dieter W., Sihn Wilfried
2016
Value stream mapping is a lean management method for analyzing and optimizing a series of events for production or services. Even today, the first step in value stream analysis - the acquisition of the current state - is still carried out using pen & paper by physically visiting the production place. We capture a digital representation of what manufacturing processes look like in reality. The manufacturing processes can be represented and efficiently analyzed for future production planning by using a meta description together with a dependency graph. With our Value Stream Creator and explOrer (VASCO) we present a tool which contributes to all parts of value stream analysis - from data acquisition, over planning and comparison with previous realities, up to the simulation of possible future states.
Gursch Heimo, Körner Stefan, Krasser Hannes, Kern Roman
2016
Painting a modern car involves applying many coats during a highly complex and automated process. The individual coats not only serve a decorative purpose but are also crucial for protection from damage due to environmental influences, such as rust. For an optimal paint job, many parameters have to be optimised simultaneously. A forecasting model was created, which predicts the paint flaw probability for a given set of process parameters, to help the production managers modify the process parameters to achieve an optimal result. The mathematical model was based on historical process and quality observations. Production managers who are not familiar with the mathematical concept of the model can use it via an intuitive Web-based Graphical User Interface (Web-GUI). The Web-GUI offers production managers the ability to test process parameters and forecast the expected quality. The model can be used for optimising the process parameters in terms of quality and costs.
Gursch Heimo, Kern Roman
2016
Many different sensing, recording and transmitting platforms are offered on today’s market for Internet of Things (IoT) applications. But taking and transmitting measurements is just one part of a complete system. Long-term storage and processing of recorded sensor values are also vital for IoT applications. Big Data technologies provide a rich variety of processing capabilities to analyse the recorded measurements. In this paper an architecture for recording, searching, and analysing sensor measurements is proposed. This architecture combines existing IoT and Big Data technologies to bridge the gap between recording, transmission, and persistency of raw sensor data on one side, and the analysis of data on Hadoop clusters on the other side. The proposed framework emphasises scalability and persistence of measurements as well as easy access to the data from a variety of different data analytics tools. To achieve this, a distributed architecture is designed offering three different views on the recorded sensor readouts. The proposed architecture is not targeted at one specific use-case, but is able to provide a platform for a large number of different services.
Hasani-Mavriqi Ilire, Geigl Florian, Pujari Suhbash Chandra, Lex Elisabeth, Helic Denis
2016
In this paper, we study the process of opinion dynamics and consensus building in online collaboration systems, in which users interact with each other following their common interests and their social profiles. Specifically, we are interested in how user similarity and social status in the community, as well as the interplay of those two factors, influence the process of consensus dynamics. For our study, we simulate the diffusion of opinions in collaboration systems using the well-known Naming Game model, which we extend by incorporating an interaction mechanism based on user similarity and user social status. We conduct our experiments on collaborative datasets extracted from the Web. Our findings reveal that when users are guided by their similarity to other users, the process of consensus building in online collaboration systems is delayed. A suitable increase of the influence of user social status on their actions can in turn facilitate this process. In summary, our results suggest that achieving an optimal consensus building process in collaboration systems requires an appropriate balance between those two factors.
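The basic Naming Game dynamics underlying such a simulation can be illustrated with a minimal sketch (this shows only the standard model; the similarity- and status-based interaction mechanism the paper adds is not included, and all parameter values here are illustrative):

```python
import random

def naming_game(n_agents=20, max_steps=20000, seed=42):
    """Minimal Naming Game: agents negotiate a shared name for one object."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]  # each agent's known names
    next_name = 0
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:            # speaker invents a new name if needed
            vocab[speaker].add(next_name)
            next_name += 1
        name = rng.choice(sorted(vocab[speaker]))
        if name in vocab[hearer]:         # success: both collapse to this name
            vocab[speaker] = {name}
            vocab[hearer] = {name}
        else:                             # failure: hearer learns the name
            vocab[hearer].add(name)
        # consensus: every agent holds exactly the same single name
        if all(len(v) == 1 and v == vocab[0] for v in vocab):
            return True, step
    return False, max_steps
```

Replacing the uniform speaker/hearer choice with a probability weighted by user similarity and social status would be the natural place to plug in the extension studied in the paper.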
Czech Paul
2016
Needs, opportunities and challenges
Goldgruber Eva, Gutounig Robert, Schweiger Stefan, Dennerlein Sebastian
2016
Gutounig Robert, Goldgruber Eva, Dennerlein Sebastian, Schweiger Stefan
2016
Dennerlein Sebastian, Gutounig Robert, Goldgruber Eva , Schweiger Stefan
2016
There are many web-based tools like social networks, collaborative writing, or messaging tools that connect organizations in accordance with web 2.0 principles. Slack is such a web 2.0 instant messaging tool. According to its developer, it integrates the entire communication, file-sharing, real-time messaging, digital archiving and search in one place. Usage in line with these functionalities would reflect expected appropriation, while other usage would account for unexpected appropriation. We explored which factors of web 2.0 tools determine actual usage and how they affect knowledge management (KM). Therefore, we investigated the relation between the three influencing factors, the proposed tool utility from the developer side, the intended usage of key implementers, and the context of application, and the actual usage in terms of knowledge activities (generating, acquiring, organizing, transferring and saving knowledge). We conducted episodic interviews with key implementers in five different organizational contexts to understand how messaging tools affect KM by analyzing the appropriation of features. Slack was implemented with the intention to enable exchange between project teams, connect distributed project members, initiate a community of learners and establish a communication platform. Independent of the context, all key implementers agreed on knowledge transfer, organization and saving in accordance with Slack’s proposed utility. Moreover, results revealed that a usage intention of internal management does not lead to the acquisition of external knowledge, and a usage intention of networking does not lead to the generation of new knowledge. These results suggest that it is not the context of application, but the intended usage that mainly affects the tool's efficacy with respect to KM: i.e. intention seems to affect tool selection first, explaining commonalities with respect to knowledge activities (expected appropriation), and, subsequently, intention also affects unexpected appropriation beyond the developer's tool utility.
A messaging tool is, hence, not only a messaging tool, but it is ‘what you make of it!’
Kowald Dominik, Lex Elisabeth, Kopeinik Simone
2016
In recent years, a number of recommendation algorithms have been proposed to help learners find suitable learning resources online. Next to user-centered evaluations, offline datasets have been used to investigate new recommendation algorithms or variations of collaborative filtering approaches. However, a more extensive study comparing a variety of recommendation strategies on multiple TEL datasets is missing. In this work, we contribute a data-driven study of recommendation strategies in TEL to shed light on their suitability for TEL datasets. To that end, we evaluate six state-of-the-art recommendation algorithms for tag and resource recommendations on six empirical datasets: a dataset from European Schoolnet's TravelWell, a dataset from the MACE portal, which features access to meta-data-enriched learning resources from the field of architecture, two datasets from the social bookmarking systems BibSonomy and CiteULike, a MOOC dataset from the KDD challenge 2015, and Aposdle, a small-scale workplace learning dataset. We highlight strengths and shortcomings of the discussed recommendation algorithms and their applicability to the TEL datasets. Our results demonstrate that the performance of the algorithms strongly depends on the properties and characteristics of the particular dataset. However, we also find a strong correlation between the average number of users per resource and the algorithm performance. A tag recommender evaluation experiment reveals that a hybrid combination of a cognitive-inspired and a popularity-based approach consistently performs best on all TEL datasets we utilized in our study.
Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman
2016
To bring bibliometrics and information retrieval closer together, we propose to add the concept of author attribution into the pre-processing of scientific publications. Presently, common bibliographic metrics often attribute the entire article to all of the authors, affecting author-specific retrieval processes. We envision a more fine-grained analysis of scientific authorship by attributing particular segments to authors. To realize this vision, we propose a new feature representation of scientific publications that captures the distribution of stylometric features. In a classification setting, we then seek to predict the number of authors of a scientific article. We evaluate our approach on a data set of ~6100 PubMed articles and achieve best results by applying random forests, i.e., 0.76 precision and 0.76 recall averaged over all classes.
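The idea of mapping a text to a distribution of stylometric features can be sketched as follows (the concrete features below, sentence-length statistics, function-word rate and average word length, are illustrative choices and not the paper's exact feature set):

```python
import re
from statistics import mean, stdev

# A tiny illustrative function-word list; real stylometry uses longer lists.
FUNCTION_WORDS = {"the", "of", "and", "to", "in", "a", "is", "that"}

def stylometric_features(text):
    """Map a text to a small stylometric feature vector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sent_lens = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
    return {
        "mean_sentence_len": mean(sent_lens),
        "std_sentence_len": stdev(sent_lens) if len(sent_lens) > 1 else 0.0,
        "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / len(words),
        "avg_word_len": mean(len(w) for w in words),
    }
```

Such vectors, computed per text segment, could then feed a standard classifier (the paper reports best results with random forests) to predict the author count.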
Fessl Angela, Pammer-Schindler Viktoria, Blunk Oliver, Prilla Michael
2016
Reflective learning has been established as a process that deepens learning in both educational and work-related settings. We present a literature review on various approaches and tools (e.g., prompts, journals, visuals) providing guidance for facilitating reflective learning. Research considered in this review coincides with a common understanding of reflective learning, has applied and evaluated a tool supporting reflection, and presents corresponding results. The literature was analysed with respect to the timing of reflection, reflection participants, type of reflection guidance, and results achieved regarding reflection. From this analysis, we were able to derive insights, guidelines and recommendations for the design of reflection guidance functionality in computing systems: (i) ensure that learners understand the purpose of reflective learning, (ii) combine reflective learning tools with reflective questions either in the form of prompts or with peer-to-peer or group discussions, (iii) for work-related settings, consider the time with regard to when and how to motivate to reflect.
Rexha Andi, Kröll Mark, Kern Roman
2016
Monitoring (social) media represents one means for companies to gain access to knowledge about, for instance, competitors, products as well as markets. As a consequence, social media monitoring tools have been gaining attention to handle the amounts of data nowadays generated in social media. These tools also include summarisation services. However, most summarisation algorithms tend to focus on (i) first and last sentences respectively or (ii) sentences containing keywords. In this work we approach the task of summarisation by extracting 4W (who, when, where, what) information from (social) media texts. Presenting 4W information allows for a more compact content representation than traditional summaries. In addition, we depart from mere named entity recognition (NER) techniques to answer these four question types by including non-rigid designators, i.e. expressions which do not refer to the same thing in all possible worlds such as “at the main square” or “leaders of political parties”. To do that, we employ dependency parsing to identify grammatical characteristics for each question type. Every sentence is then represented as a 4W block. We perform two different preliminary studies: selecting sentences that better summarise texts, achieving an F1-measure of 0.343, as well as a 4W block extraction for which we achieve F1-measures of 0.932, 0.900, 0.803 and 0.861 for the “who”, “when”, “where” and “what” categories respectively. In a next step the 4W blocks are ranked by relevance. The top three ranked blocks, for example, then constitute a summary of the entire textual passage. The relevance metric can be customised to the user’s needs, for instance, ranked by up-to-dateness where the sentences’ tense is taken into account. In a user study we evaluate different ranking strategies including (i) up-to-dateness, (ii) text sentence rank, (iii) selecting the first and last sentences or (iv) coverage of named entities, i.e. based on the number of named entities in the sentence.
Our 4W summarisation method presents a valuable addition to a company's (social) media monitoring toolkit, thus supporting decision-making processes.
Pimas Oliver, Rexha Andi, Kröll Mark, Kern Roman
2016
The PAN 2016 author profiling task is a supervised classification problem on cross-genre documents (tweets, blog and social media posts). Our system makes use of concreteness, sentiment and syntactic information present in the documents. We train a random forest model to identify the gender and age of a document's author. We report the evaluation results received in the shared task.
Trattner Christoph, Kuśmierczyk Tomasz, Rokicki Markus, Herder Eelco
2016
Historically, there have always been differences in how men and women cook or eat. The reasons for this gender divide have mostly disappeared in Western culture, but still there is qualitative and anecdotal evidence that men prefer heftier food, that women take care of everyday cooking, and that men cook to impress. In this paper, we show that these differences can also be observed quantitatively in a large dataset of almost 200 thousand members of an online recipe community. Further, we show that, using a set of 88 features, the gender of the cooks can be predicted with a fairly good accuracy of 75%, with preference for particular dishes, the use of spices and the use of kitchen utensils being the strongest predictors. Finally, we show the positive impact of our results on online food recipe recommender systems that take gender information into account.
Kern Roman, Klampfl Stefan, Rexha Andi
2016
This report describes our contribution to the 2nd Computational Linguistics Scientific Document Summarization Shared Task (CL-SciSumm 2016), which asked to identify the relevant text span in a reference paper that corresponds to a citation in another document that cites this paper. We developed three different approaches based on summarisation and classification techniques. First, we applied a modified version of an unsupervised summarisation technique, TextSentenceRank, to the reference document, which incorporates the similarity of sentences to the citation on a textual level. Second, we employed classification to select from candidates previously extracted through the original TextSentenceRank algorithm. Third, we used unsupervised summarisation of the relevant sub-part of the document that was previously selected in a supervised manner.
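The unsupervised sentence-ranking step can be illustrated with a generic TextRank-style computation over a sentence-similarity graph (a sketch under the assumption of simple Jaccard word-overlap similarity; the actual TextSentenceRank algorithm additionally incorporates similarity to the citation text):

```python
import re

def _tokens(s):
    return set(re.findall(r"[a-z']+", s.lower()))

def rank_sentences(sentences, damping=0.85, iters=50):
    """Order sentence indices by power iteration on a word-overlap graph."""
    n = len(sentences)
    toks = [_tokens(s) for s in sentences]
    # symmetric edge weights: Jaccard overlap between sentence vocabularies
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            union = toks[i] | toks[j]
            sim[i][j] = sim[j][i] = len(toks[i] & toks[j]) / len(union) if union else 0.0
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if j == i or sim[j][i] == 0.0:
                    continue
                out = sum(sim[j])  # total outgoing weight of node j
                if out > 0:
                    rank += sim[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return sorted(range(n), key=lambda i: -scores[i])
```

Sentences well connected to the rest of the document accumulate score, while off-topic sentences fall to the bottom of the ranking.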
Trattner Christoph, Kuśmierczyk Tomasz, Nørvåg Kjetil
2016
Gursch Heimo, Ziak Hermann, Kröll Mark, Kern Roman
2016
Modern knowledge workers need to interact with a large number of different knowledge sources with restricted or public access. Knowledge workers are thus burdened with the need to familiarise themselves with and query each source separately. The EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project aims at developing a recommender system providing relevant and novel content to its users. Based on the user’s work context, the EEXCESS system can either automatically recommend useful content, or support users by providing a single user interface for a variety of knowledge sources. In the design process of the EEXCESS system, recommendation quality, scalability and security were the three most important criteria. This paper investigates the scalability aspect, achieved by the federated design of the EEXCESS recommender system. This means that content in the different sources is not replicated but managed in each source individually. Recommendations are generated based on the context describing the knowledge worker’s information need. Each source offers result candidates, which are merged and re-ranked into a single result list. This merging is done in a vector representation space to achieve high recommendation quality. To ensure security, user credentials can be set individually by each user for each source. Hence, access to the sources can be granted and revoked for each user and source individually. The scalable architecture of the EEXCESS system handles up to 100 requests querying up to 10 sources in parallel without notable performance deterioration. The re-ranking and merging of results have a smaller influence on the system's responsiveness than the average source response rates. The EEXCESS recommender system offers a common entry point for knowledge workers to a variety of different sources with only marginally slower response times than the individual sources on their own.
Hence, familiarisation with individual sources and their query language is not necessary.
Rexha Andi, Dragoni Mauro, Kern Roman, Kröll Mark
2016
Ontology matching in a multilingual environment consists of finding alignments between ontologies modeled by using more than one language. Such a research topic combines traditional ontology matching algorithms with the use of multilingual resources, services, and capabilities for easing multilingual matching. In this paper, we present a multilingual ontology matching approach based on Information Retrieval (IR) techniques: ontologies are indexed through an inverted index algorithm and candidate matches are found by querying such indexes. We also exploit the hierarchical structure of the ontologies by adopting the PageRank algorithm for our system. The approaches have been evaluated using a set of domain-specific ontologies belonging to the agricultural and medical domain. We compare our results with existing systems following an evaluation strategy closely resembling a recommendation scenario. The version of our system using PageRank showed an increase in performance in our evaluations.
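The inverted-index candidate retrieval at the core of this approach can be sketched as follows (a generic illustration with made-up entity ids and labels; the real system indexes full ontology entities, handles multiple languages, and adds PageRank over the hierarchy):

```python
from collections import defaultdict

def build_index(labels):
    """Inverted index: map each token to the set of entity ids containing it."""
    index = defaultdict(set)
    for eid, label in labels.items():
        for tok in label.lower().split():
            index[tok].add(eid)
    return index

def candidates(index, query_label):
    """Score candidate entities by the number of tokens shared with the query."""
    scores = defaultdict(int)
    for tok in query_label.lower().split():
        for eid in index.get(tok, ()):
            scores[eid] += 1
    return sorted(scores, key=lambda e: -scores[e])
```

Querying the index with the labels of one ontology against the index of the other yields ranked match candidates, which a second stage (here, PageRank over the ontology structure) can then re-score.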
Traub Matthias, Lacic Emanuel, Kowald Dominik, Kahr Martin, Lex Elisabeth
2016
In this paper, we present work-in-progress on a recommender system designed to help people in need find the best suited social care institution for their personal issues. A key requirement in such a domain is to assure and to guarantee the person's privacy and anonymity in order to reduce inhibitions and to establish trust. We present how we aim to tackle this barely studied domain using a hybrid content-based recommendation approach. Our approach leverages three data sources containing textual content, namely (i) metadata from social care institutions, (ii) institution specific FAQs, and (iii) questions that a specific institution has already resolved. Additionally, our approach considers the time context of user questions as well as negative user feedback to previously provided recommendations. Finally, we demonstrate an application scenario of our recommender system in the form of a real-world Web system deployed in Austria.
Lacic Emanuel
2016
Recommender systems are acknowledged as an essential instrument to support users in finding relevant information. However, adapting to different domain-specific data models is a challenge, which many recommender frameworks neglect. Moreover, the advent of the big data era has posed the need for high scalability and real-time processing of frequent data updates, and thus, has brought new challenges for the recommender systems’ research community. In this work, we show how different item, social and location data features can be utilized and supported to provide real-time recommendations. We further show how to process data updates online and capture a user’s real-time interest without recalculating recommendations. The presented recommendation framework provides a scalable and customizable architecture suited for providing real-time recommendations to multiple domains. We further investigate the impact of an increasing request load and show how the runtime can be decreased by scaling the framework.
Stanisavljevic Darko, Hasani-Mavriqi Ilire, Lex Elisabeth, Strohmaier M., Helic Denis
2016
In this paper we assess the semantic stability of Wikipedia by investigating the dynamics of Wikipedia articles’ revisions over time. In a semantically stable system, articles are infrequently edited, whereas in unstable systems, article content changes more frequently. In other words, in a stable system, the Wikipedia community has reached consensus on the majority of articles. In our work, we measure semantic stability using the Rank Biased Overlap method. To that end, we preprocess Wikipedia dumps to obtain a sequence of plain-text article revisions, whereas each revision is represented as a TF-IDF vector. To measure the similarity between subsequent article revisions, we calculate Rank Biased Overlap on subsequent term vectors. We evaluate our approach on 10 Wikipedia language editions including the five largest language editions as well as five randomly selected small language editions. Our experimental results reveal that even in policy-driven collaboration networks such as Wikipedia, semantic stability can be achieved. However, there are differences in the velocity of the semantic stability process between small and large Wikipedia editions. Small editions exhibit faster and higher semantic stability than large ones. In particular, in large Wikipedia editions, a higher number of successive revisions is needed in order to reach a certain semantic stability level, whereas, in small Wikipedia editions, the number of needed successive revisions is much lower for the same level of semantic stability.
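Rank Biased Overlap over two ranked lists can be computed with the standard truncated prefix-overlap formulation (a minimal sketch on plain ranked lists; in the paper the lists are the TF-IDF-ranked terms of subsequent article revisions):

```python
def rbo(list_a, list_b, p=0.9):
    """Truncated Rank Biased Overlap of two ranked lists.

    Sums (1 - p) * p^(d-1) * |A_d ∩ B_d| / d over depths d, where A_d and
    B_d are the top-d prefixes; p weights how top-heavy the comparison is.
    """
    k = max(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        if d <= len(list_a):
            seen_a.add(list_a[d - 1])
        if d <= len(list_b):
            seen_b.add(list_b[d - 1])
        score += (p ** (d - 1)) * len(seen_a & seen_b) / d
    return (1 - p) * score
```

Identical long lists score close to 1, disjoint lists score 0, so a sequence of RBO values over successive revisions approaching 1 indicates that an article's term ranking, and hence its content, has stabilized.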
Kopeinik Simone, Kowald Dominik, Hasani-Mavriqi Ilire, Lex Elisabeth
2016
Classic resource recommenders like Collaborative Filtering treat users as just another entity, thereby neglecting non-linear user-resource dynamics that shape attention and interpretation. SUSTAIN, as an unsupervised human category learning model, captures these dynamics. It aims to mimic a learner’s categorization behavior. In this paper, we use three social bookmarking datasets gathered from BibSonomy, CiteULike and Delicious to investigate SUSTAIN as a user modeling approach to re-rank and enrich Collaborative Filtering following a hybrid recommender strategy. Evaluations against baseline algorithms in terms of recommender accuracy and computational complexity reveal encouraging results. Our approach substantially improves Collaborative Filtering and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant. In a further step, we explore SUSTAIN’s dynamics in our specific learning task and show that both memorization of a user’s history and clustering contribute to the algorithm’s performance. Finally, we observe that the users’ attentional foci determined by SUSTAIN correlate with the users’ level of curiosity, identified by the SPEAR algorithm. Overall, the results of our study show that SUSTAIN can be used to efficiently model attention-interpretation dynamics of users and can help improve Collaborative Filtering for resource recommendations.
Kraker Peter, Kittel Christopher, Enkhbayar Asuraa
2016
The goal of Open Knowledge Maps is to create a visual interface to the world’s scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
Mutlu Belgin, Sabol Vedran, Gursch Heimo, Kern Roman
2016
Graphical interfaces and interactive visualisations are typical mediators between human users and data analytics systems. HCI researchers and developers have to be able to understand both human needs and back-end data analytics. Participants of our tutorial will learn how visualisation and interface design can be combined with data analytics to provide better visualisations. In the first of three parts, the participants will learn about visualisations and how to appropriately select them. In the second part, restrictions and opportunities associated with different data analytics systems will be discussed. In the final part, the participants will have the opportunity to develop visualisations and interface designs under given scenarios of data and system settings.
Gursch Heimo, Wuttei Andreas, Gangloff Theresa
2016
Highly optimised assembly lines are commonly used in various manufacturing domains, such as electronics, microchips, vehicles, electric appliances, etc. In the last decades, manufacturers have installed software systems to control and optimise their shop floor processes. Machine Learning can enhance those systems by providing new insights derived from the previously captured data. This paper provides an overview of Machine Learning fields and an introduction to manufacturing management systems. These are followed by a discussion of research projects in the field of applying Machine Learning solutions for condition monitoring, process control, scheduling, and predictive maintenance.
Santos Tiago, Kern Roman
2016
This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity in the application of deep neural networks to the field of computer vision, research has focused on developing deep learning architectures for time series classification as well. The literature in the field of deep learning for time series classification has shown promising results. Early time series classification aims to classify a time series with as few temporal observations as possible, while keeping the loss of classification accuracy at a minimum. Prominent early classification frameworks reviewed by this paper include, but are not limited to, ECTS, RelClass and ECDIRE. These works have shown that early time series classification may be feasible and performant, but they also show room for improvement.
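The 1-Nearest-Neighbor baseline with dynamic time warping reviewed above can be sketched as follows; the toy series and labels are made up for illustration.

```python
# Minimal 1-Nearest-Neighbor time series classifier with dynamic time
# warping (DTW) distance, the standard baseline. Toy data only.

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance with squared point cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def knn1_classify(train, query):
    """train: list of (series, label); returns label of the DTW-closest series."""
    return min(train, key=lambda pair: dtw(pair[0], query))[1]

train = [([0, 1, 2, 3], "rising"), ([3, 2, 1, 0], "falling")]
print(knn1_classify(train, [0, 0, 1, 2, 3]))  # a time-warped rising series
```

Because DTW aligns series of different lengths, the stretched query still matches its class exactly, which is why this simple baseline remains hard to beat.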
Kern Roman, Ziak Hermann
2016
Context-driven query extraction for content-based recommender systems faces the challenge of dealing with queries of multiple topics. In contrast to manually entered queries, this is a more frequent problem for automatically generated queries, for instance when the information need is inferred indirectly via the user's current context. Especially for federated search systems, where connected knowledge sources might react vastly differently to such queries, an algorithmic way of dealing with such queries is of high importance. One such method is to split mixed queries into their individual subtopics. To gain insight into how a multi-topic query can be split into its subtopics, we conducted an evaluation where we compared a naive approach against two more complex approaches based on word embedding techniques: one created using Word2Vec and one created using GloVe. To evaluate these approaches, we used the Webis-QSeC-10 query set, consisting of about 5,000 multi-term queries. Queries of this set were concatenated and passed through the algorithms with the goal of splitting those queries again. The naive approach splits the queries into several groups according to the number of joined queries, assuming the topics are of equal query term count. In the case of the Word2Vec- and GloVe-based approaches, we relied on already pre-trained datasets: the Google News model, and a model trained on a Wikipedia dump and the English Gigaword newswire text archive. The query term vectors resulting from these datasets were grouped into subtopics using k-means clustering. We show that a clustering approach based on word vectors achieves better results, in particular when the query is not in topical order. Furthermore, we could demonstrate the importance of the underlying dataset.
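The clustering-based splitting approach can be sketched as follows. Since loading pretrained Word2Vec or GloVe vectors is out of scope here, the tiny hand-made 2-d vectors and the plain k-means implementation below are illustrative stand-ins.

```python
# Sketch: split a mixed query into subtopics by clustering word vectors
# with k-means. The toy 2-d "embeddings" stand in for Word2Vec/GloVe.

import math

VECS = {  # toy vectors: food terms near (1, 0), tech terms near (0, 1)
    "pizza": (1.0, 0.1), "pasta": (0.9, 0.0),
    "laptop": (0.1, 1.0), "keyboard": (0.0, 0.9),
}

def kmeans(points, k, iters=10):
    centers = points[:k]  # deterministic init, good enough for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def split_query(terms, k=2):
    """Assign each query term to the nearest cluster center."""
    centers = kmeans([VECS[t] for t in terms], k)
    subtopics = [[] for _ in range(k)]
    for t in terms:
        i = min(range(k), key=lambda c: math.dist(VECS[t], centers[c]))
        subtopics[i].append(t)
    return subtopics

# A concatenated two-topic query, deliberately not in topical order:
print(split_query(["pizza", "laptop", "pasta", "keyboard"]))
```

Note that, unlike the naive equal-size split, the clustering recovers the subtopics even when the terms are interleaved.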
Trattner Christoph, Elsweiler David, Howard Simon
2016
One government response to increasing incidence of lifestyle related illnesses, such as obesity, has been to encourage people to cook for themselves. The healthiness of home cooking will, nevertheless, depend on what people cook and how they cook it. In this article one common source of cooking inspiration - Internet-sourced recipes - is investigated in depth. The energy and macronutrient content of 5237 main meal recipes from the food website Allrecipes.com are compared with those of 100 main meal recipes from five bestselling cookery books from popular celebrity chefs and 100 ready meals from the three leading UK supermarkets. The comparison is made using nutritional guidelines published by the World Health Organisation and the UK Food Standards Agency. The main conclusions drawn from our analyses are that Internet recipes sourced from Allrecipes.com are less healthy than TV-chef recipes and ready meals from leading UK supermarkets. Only 6 out of 5237 Internet recipes fully complied with the WHO recommendations. Internet recipes were more likely to meet the WHO guidelines for protein than both other classes of meal (10.88% v 7% (TV), p<0.01; 10.86% v 9% (ready), p<0.01). However, the Internet recipes were less likely to meet the criteria for fat (14.28% v 24% (TV) v 37% (ready); p<0.01), saturated fat (25.05% v 33% (TV) v 34% (ready); p<0.01) and fibre (compared to ready meals 16.50% v 56%; p<0.01). More Internet recipes met the criteria for sodium density than ready meals (19.63% v 4%; p<0.01), but fewer than the TV-chef meals (19.32% v 36%; p<0.01). For sugar, no differences between Internet recipes and TV-chef recipes were observed (81.1% v 81% (TV); p=0.86), although Internet recipes were less likely to meet the sugar criteria than ready meals (81.1% v 83 % (ready); p<0.01). Repeating the analyses for each year of available data shows that the results are very stable over time.
Kraker Peter, Dennerlein Sebastian, Dörler, D, Ferus, A, Gutounig Robert, Heigl, F., Kaier, C., Rieck Katharina, Šimukovic, E., Vignoli Michela
2016
Between April 2015 and June 2016, members of the Open Access Network Austria (OANA) working group “Open Access and Scholarly Communication” met in Vienna to discuss a fundamental reform of the scholarly communication system. By scholarly communication we mean the processes of producing, reviewing, organising, disseminating and preserving scholarly knowledge. Scholarly communication does not only concern researchers, but also society at large, especially students, educators, policy makers, public administrators, funders, librarians, journalists, practitioners, publishers, public and private organisations, and interested citizens.
Tschinkel Gerwald, Hasitschka Peter, Sabol Vedran, Hafner R
2016
Faceted search is a well-known and broadly implemented paradigm for filtering information with various types of structured information. In this paper we introduce a multiple-view faceted interface, consisting of one main visualisation for exploring the data and multiple miniaturised visualisations showing the filters. The Recommendation Dashboard tool provides several interactive visualisations for analysing recommender results along various faceted dimensions specific to cultural heritage and scientific content. As our aim is to reduce the user load and optimise the use of screen area, we permit only one main visualisation to be visible at a time, and introduce the concept of micro-visualisations – small, simplified views conveying only the necessary information – to provide a natural, easy-to-understand representation of the active filter set.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and a personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
The movement towards cyberphysical systems and Industry 4.0 promises to imbue each and every stage of production with a myriad of sensors. The open question is how people are to comprehend and interact with data originating from industrial machinery. We propose a metaphor that compares machines with natural beings that appeal to people by representing machine states with patterns occurring in nature. Our approach uses augmented reality (AR) to represent machine states as trees of different shapes and colors (BioAR). We performed a study on pre-attentive processing of visual features in AR to determine if our BioAR metaphor conveys fast changes unambiguously and accurately. Our results indicate that the visual features in our BioAR metaphor are processed pre-attentively. In contrast to previous research, for the BioAR metaphor, variations in form induced fewer errors than variations in hue in the target detection task.
Luzhnica Granit, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper investigates the communication of natural language messages using a wearable haptic display. Our research spans both the design of the haptic display and the methods for communication that use it. First, three wearable configurations are proposed based on haptic perception fundamentals. To encode symbols, we devise an overlapping spatiotemporal stimulation (OST) method that distributes stimuli spatially and temporally with a minimal gap. An empirical study shows that, compared with spatial stimulation, OST is preferred in terms of recall. Second, we propose an encoding for the entire English alphabet and a training method for letters, words and phrases. A second study investigates communication accuracy. It puts four participants through five sessions, for an overall training time of approximately 5 hours per participant. Results reveal that after one hour of training, participants were able to discern 16 letters, and identify two- and three-letter words. They could discern the full English alphabet (26 letters, 92% accuracy) after approximately three hours of training, and after five hours participants were able to interpret words transmitted at an average duration of 0.6 s per word.
Luzhnica Granit, Pammer-Schindler Viktoria, Fessl Angela, Mutlu Belgin, Veas Eduardo Enrique
2016
Especially in lifelong or professional learning, the picture of a continuous learning analytics process emerges. In this process, heterogeneous and changing data source applications provide data relevant to learning, at the same time as the questions learners ask of the data change. This reality challenges designers of analytics tools, as it requires analytics tools to deal with data and analytics tasks that are unknown at application design time. In this paper, we describe a generic visualization tool that addresses these challenges by enabling the visualization of any activity log data. Furthermore, we evaluate how well participants can answer questions about underlying data given such generic versus custom visualizations. Study participants performed better in 5 out of 10 tasks with the generic visualization tool, worse in 1 out of 10 tasks, and without significant difference when compared to the visualizations within the data-source applications in the remaining 4 of 10 tasks. The experiment clearly showcases that overall, generic, standalone visualization tools have the potential to support analytical tasks sufficiently well.
Trattner Christoph, Schäfer Hanna, Said Alan, Ludwig Bernd, Elsweiler David
2016
Busy lifestyles, abundant options, lack of knowledge ... there are many reasons why people make poor decisions relating to their health. Yet these poor decisions are leading to epidemics, which represent some of the greatest challenges we face as a society today. Noncommunicable Diseases (NCDs), which include cardiovascular diseases, cancer, chronic respiratory diseases and diabetes, account for ∼60% of total deaths worldwide. These diseases share the same four behavioural risk factors: tobacco use, unhealthy diet, physical inactivity and harmful consumption of alcohol and can be prevented and sometimes even reversed with simple lifestyle changes. Eating more healthily, exercising more appropriately, sleeping and relaxing more, as well as simply being more aware of one’s state of health are all things that would lead to improved health. Yet knowing exactly what to change and how, implementing changes and maintaining changes over long time periods are all things people find challenging. These are also problems, for which we believe recommender systems can provide assistance by offering specific, tailored suggestions for behavioural change. In recent years recommender systems for health has become a popular topic within the RecSys community and a selection of empirical contributions and demo systems have been published. Efforts to date, however have been sporadic and lack coordination. We lack shared infrastructure such as datasets, appropriate cross-disciplinary knowledge, even agreed upon goals. It is our aim to use this workshop as a vehicle to:
Atzmüller Martin, Chin Alvin, Trattner Christoph
2016
For the 7th International Workshop on Modeling Social Media, we aim to attract researchers from all over the world working in the field of behavioral analytics using web and social media data. Behavioral analytics is an important topic, e.g., concerning web applications as well as extensions in mobile and ubiquitous applications, for understanding user behavior. We would also like to invite researchers in the data and web mining community to lend their expertise to help increase our understanding of the web and social media.
Eberhard Lukas, Trattner Christoph
2016
Social information such as stated interests or geographic check-ins in social networks has recently been shown to be useful in many recommender tasks. Although many successful examples exist, not much attention has been put on exploring the extent to which social impact is useful for the task of recommending sellers to buyers in virtual marketplaces. To contribute to this sparse field of research, we collected data of a marketplace and a social network in the virtual world of Second Life and introduced several social features and similarity metrics that we used as input for a user-based k-nearest-neighbor collaborative filtering method. As our results reveal, most of the types of social information and features which we used are useful to tackle the problem we defined. Social information such as joined groups or stated interests is more useful, while other information, such as places users have been checking in, does not help much for recommending sellers to buyers. Furthermore, we find that some of the features significantly vary in their predictive power over time, while others show more stable behavior. This research is relevant for researchers interested in recommender systems and online marketplace research as well as for engineers interested in feature engineering.
Trattner Christoph, Oberegger Alexander, Eberhard Lukas, Parra Denis, Marinho Leandro
2016
POI (point of interest) recommender systems for location-based social network services, such as Foursquare or Yelp, have gained tremendous popularity in the past few years. Much work has been dedicated to improving recommendation services in such systems by integrating different features that are assumed to have an impact on people's preferences for POIs, such as time and geolocation. Yet, little attention has been paid to the impact of weather on the users' final decision to visit a recommended POI. In this paper we contribute to this area of research by presenting the first results of a study that aims to predict the POIs that users will visit based on weather data. To this end, we extend the state-of-the-art Rank-GeoFM POI recommender algorithm with additional weather-related features, such as temperature, cloud cover, humidity and precipitation intensity. We show that using weather data not only significantly increases the recommendation accuracy in comparison to the original algorithm, but also outperforms its time-based variant. Furthermore, we present the magnitude of impact of each feature on the recommendation quality, showing the need to study the weather context in more detail in the light of POI recommendation systems.
Kusmierczyk Tomasz, Trattner Christoph, Nørvåg Kjetil
2016
Studying online food patterns has recently become an active field of research. While there is a growing body of studies that investigate how online food is consumed, little effort has been devoted yet to understanding how online food recipes are being created. To address this gap, we present in this paper the results of a large-scale study that aims at understanding how historical, social and temporal factors impact the online food creation process. Several experiments reveal the extent to which various factors are useful in predicting future recipe production.
Trattner Christoph, Kowald Dominik, Seitlinger Paul, Ley Tobias
2016
Several successful tag recommendation mechanisms have been developed, including algorithms built upon Collaborative Filtering, Tensor Factorization, graph-based and simple "most popular tags" approaches. From an economic perspective, the latter approach has been convincing since calculating frequencies is computationally efficient and effective with respect to different recommender evaluation metrics. In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory in order to extend these conventional "most popular tags" approaches. Based on a theory of human memory, the approach estimates a tag's reuse probability as a function of usage frequency and recency in the user's past (base-level activation) as well as of the current semantic context (associative component). Using four real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike, Delicious and Flickr, we show how refining frequency-based estimates by considering recency and semantic context outperforms conventional "most popular tags" approaches and another existing and very effective, but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as Collaborative Filtering, FolkRank and Pairwise Interaction Tensor Factorization with respect to recommender accuracy and runtime. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. Moreover, we demonstrate how effective principles of recommender systems can be designed and implemented if human memory processes are taken into account.
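The base-level activation mentioned above (the frequency-and-recency component) can be sketched as follows, assuming the standard ACT-R base-level learning form B_i = ln(Σ_j t_j^(-d)); the timestamps and the decay d = 0.5 are illustrative values, not the paper's fitted parameters.

```python
# Sketch of ACT-R base-level activation for estimating a tag's reuse
# probability: B_i = ln( sum_j t_j^(-d) ), where t_j is the time elapsed
# since each past use of tag i and d is a decay constant.

import math

def base_level_activation(use_times, now, d=0.5):
    """Frequent AND recent tags receive a higher activation score."""
    return math.log(sum((now - t) ** (-d) for t in use_times))

now = 100.0
recent_tag = [95.0, 98.0, 99.0]  # used three times, recently
stale_tag = [5.0, 10.0, 15.0]    # used equally often, but long ago

# Equal frequency, but recency makes the difference:
assert base_level_activation(recent_tag, now) > base_level_activation(stale_tag, now)
```

Ranking a user's past tags by this score yields the frequency-and-recency-aware variant of the "most popular tags" baseline described in the abstract.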
Fessl Angela, Wesiak Gudrun, Pammer-Schindler Viktoria
2016
Reflective learning is an important strategy to keep the vast body of theoretical knowledge fresh, stay up-to-date with new knowledge, and to relate theoretical knowledge to practical experience. In this work, we present a study situated in a qualification program for stroke nurses in Germany. In the seven-week study, 21 stroke nurses used a quiz on medical knowledge as an additional learning instrument. The quiz contained typical quiz questions ("content questions") as well as reflective questions that aimed at stimulating nurses to reflect on the practical relevance of the learned knowledge. We particularly looked at how reflective questions can support the transfer of theoretical knowledge to practice. The results show that by playful learning and presenting reflective questions at the right time, participants were motivated to reflect, deepened their knowledge and related theoretical knowledge to practical experience. Subsequently, they were able to better understand patient treatments and increased their self-confidence.
Simon Jörg Peter, Schmidt Peter, Pammer-Schindler Viktoria
2016
Synchronisation algorithms are central to collaborative editing software. As collaboration is increasingly mediated by mobile devices, the energy efficiency of such algorithms is of interest to a wide community of application developers. In this paper we explore the differential synchronisation (diffsync) algorithm with respect to energy consumption on mobile devices. Discussions within this paper are based on real usage data of PDF annotations via the Mendeley iOS app, which requires realtime synchronisation. We identify three areas for optimising diffsync: (a) empty cycles, in which no changes need to be processed, (b) tail energy, by adapting cycle intervals, and (c) computational complexity. Following these considerations, we propose a push-based diffsync strategy in which synchronisation cycles are triggered when a device connects to the network or when a device is notified of changes.
Dennerlein Sebastian, Lex Elisabeth, Ruiz-Calleja Adolfo, Ley Elisabeth
2016
This paper reports the design and development of a visual Dashboard, called the SSS Dashboard, which visualizes data from informal workplace learning processes from different viewpoints. The SSS Dashboard retrieves its data from the Social Semantic Server (SSS), an infrastructure that integrates data from several workplace learning applications into a semantically-enriched Artifact-Actor Network. A first evaluation with end users in a course for professional teachers gave promising results. Both a trainer and a learner could understand the learning process from different perspectives using the SSS Dashboard. The results obtained will pave the way for the development of future Learning Analytics applications that exploit the data collected by the SSS.
Malarkodi C. S., Lex Elisabeth, Sobha Lalitha Devi
2016
Agricultural data have a major role in the planning and success of rural development activities. Agriculturalists, planners, policy makers, government officials, farmers and researchers require relevant information to trigger decision making processes. This paper presents our approach towards extracting named entities from real-world agricultural data from different areas of agriculture using Conditional Random Fields (CRFs). Specifically, we have created a Named Entity tagset consisting of 19 fine-grained tags. To the best of our knowledge, there is no specific tagset and annotated corpus available for the agricultural domain. We have performed several experiments using different combinations of features and obtained encouraging results. Most of the issues observed in an error analysis have been addressed by post-processing heuristic rules, which resulted in a significant improvement of our system's accuracy.
Luzhnica Granit, Simon Jörg Peter, Lex Elisabeth, Pammer-Schindler Viktoria
2016
This paper explores the recognition of hand gestures based on a data glove equipped with motion, bending and pressure sensors. We selected 31 natural and interaction-oriented hand gestures that can be adopted for general-purpose control of and communication with computing systems. The data glove is custom-built, and contains 13 bend sensors, 7 motion sensors, 5 pressure sensors and a magnetometer. We present the data collection experiment, as well as the design, selection and evaluation of a classification algorithm. As we use a sliding window approach to data processing, our algorithm is suitable for stream data processing. Algorithm selection and feature engineering resulted in a combination of linear discriminant analysis and logistic regression with which we achieve an accuracy of over 98.5% in a continuous data stream scenario. When removing the computationally expensive FFT-based features, we still achieve an accuracy of 98.2%.
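The sliding-window processing step can be sketched as follows; the window size, step and the mean/standard-deviation features are illustrative choices, not the paper's actual feature set.

```python
# Sketch of sliding-window feature extraction over a sensor stream, the
# step that precedes classification in a glove-based gesture pipeline.

import math

def sliding_windows(stream, size, step):
    """Yield overlapping windows over a sensor stream."""
    for start in range(0, len(stream) - size + 1, step):
        yield stream[start:start + size]

def window_features(window):
    """Per-window mean and standard deviation of one sensor channel."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return (mean, math.sqrt(var))

bend_sensor = [0.1, 0.1, 0.8, 0.9, 0.9, 0.2]  # made-up bend readings
feats = [window_features(w) for w in sliding_windows(bend_sensor, size=4, step=2)]
print(feats)  # one (mean, std) pair per window
```

Each feature vector would then be fed to the classifier (here, LDA plus logistic regression) so that predictions can be emitted continuously as new samples arrive.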
Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2016
Air travel is one of the most frequently used means of transportation in our everyday life. Thus, it is not surprising that an increasing number of travelers share their experiences with airlines and airports in the form of online reviews on the Web. In this work, we strive to explain and uncover the features of airline reviews that contribute most to traveler satisfaction. To that end, we examine reviews crawled from the Skytrax air travel review portal. Skytrax provides four review categories to review airports, lounges, airlines and seats. Each review category consists of several five-star ratings as well as free-text review content. In this paper, we conducted a comprehensive feature study and find that not only five-star rating information such as airport queuing time and lounge comfort highly correlates with traveler satisfaction, but also textual features in the form of the inferred review text sentiment. Based on our findings, we created classifiers to predict traveler satisfaction using the best performing rating features. Our results reveal that given our methodology, traveler satisfaction can be predicted with high accuracy. Additionally, we find that training a model on the sentiment of the review text provides a competitive alternative when no five-star rating information is available. We believe that our work is of interest for researchers in the area of modeling and predicting user satisfaction based on available review data on the Web.
Dennerlein Sebastian, Ley Tobias, Lex Elisabeth, Seitlinger Paul
2016
In the digital realm, meaning making is reflected in the reciprocal manipulation of mediating artefacts. We understand uptake, i.e. interaction with and understanding of others’ artefact interpretations, as central mechanism and investigate its impact on individual and social learning at work. Results of our social tagging field study indicate that increased uptake of others’ tags is related to a higher shared understanding of collaborators as well as narrower and more elaborative exploration in individual information search. We attribute the social and individual impact to accommodative processes in the high uptake condition.
Kraker Peter, Peters Isabella, Lex Elisabeth, Gumpenberger Christian , Gorraiz Juan
2016
In this study, we explore the citedness of research data, its distribution over time and its relation to the availability of a digital object identifier (DOI) in the Thomson Reuters database Data Citation Index (DCI). We investigate if cited research data "impacts" the (social) web, reflected by altmetrics scores, and if there is any relationship between the number of citations and the sum of altmetrics scores from various social media platforms. Three tools are used to collect altmetrics scores, namely PlumX, ImpactStory, and Altmetric.com, and the corresponding results are compared. We found that out of the three altmetrics tools, PlumX has the best coverage. Our experiments revealed that research data remain mostly uncited (about 85%), although there has been an increase in citing data sets published since 2008. The percentage of the number of cited research data with a DOI in DCI has decreased in the last years. Only nine repositories are responsible for research data with DOIs and two or more citations. The number of cited research data with altmetrics "footprints" is even lower (4–9%) but shows a higher coverage of research data from the last decade. In our study, we also found no correlation between the number of citations and the total number of altmetrics scores. Yet, certain data types (i.e. survey, aggregate data, and sequence data) are more often cited and also receive higher altmetrics scores. Additionally, we performed citation and altmetric analyses of all research data published between 2011 and 2013 in four different disciplines covered by the DCI. In general, these results correspond very well with the ones obtained for research data cited at least twice and also show low numbers in citations and in altmetrics. Finally, we observed that there are disciplinary differences in the availability and extent of altmetrics scores.
Santos Patricia, Dennerlein Sebastian, Theiler Dieter, Cook John, Treasure-Jones Tamsin, Holley Debbie, Kerr Micky , Atwell Graham, Kowald Dominik, Lex Elisabeth
2016
Social learning networks enable the sharing, transfer and enhancement of knowledge in the workplace, which builds the ground to exchange informal learning practices. In this work, three healthcare networks are studied in order to understand how to enable the building, maintaining and activation of new contacts at work and the exchange of knowledge between them. By paying close attention to the needs of the practitioners, we aimed to understand how personal and social learning could be supported by technological services exploiting social networks and the respective traces reflected in the semantics. This paper presents a case study reporting on the results of two co-design sessions and elicits requirements showing the importance of scaffolding strategies in personal and shared learning networks, as well as the significance of these strategies for aggregating trust among peers when sharing resources and for decision support when exchanging questions and answers. The outcome is a set of design criteria to be used for further technical development of a social tool. We conclude with the lessons learned and future work.
Kowald Dominik, Lex Elisabeth
2016
In this paper, we study factors that influence tag reuse behavior in social tagging systems. Our work is guided by the activation equation of the cognitive model ACT-R, which states that the usefulness of information in human memory depends on the three factors usage frequency, recency and semantic context. It is our aim to shed light on the influence of these factors on tag reuse. In our experiments, we utilize six datasets from the social tagging systems Flickr, CiteULike, BibSonomy, Delicious, LastFM and MovieLens, covering a range of various tagging settings. Our results confirm that frequency, recency and semantic context positively influence the reuse probability of tags. However, the extent to which each factor individually influences tag reuse strongly depends on the type of folksonomy present in a social tagging system. Our work can serve as a guideline for researchers and developers of tag-based recommender systems when designing algorithms for social tagging environments.
Klampfl Stefan, Kern Roman
2016
Semantic enrichment of scientific publications has an increasing impact on scholarly communication. This document describes our contribution to the Semantic Publishing Challenge 2016, which aims at investigating novel approaches for improving scholarly publishing through semantic technologies. We participated in Task 2 of this challenge, which requires the extraction of information from the content of a paper given as PDF. The extracted information allows answering queries about the paper’s internal organisation and the context in which it was written. We build upon our contribution to the previous edition of the challenge, where we categorised meta-data, such as authors and affiliations, and extracted funding information. Here we use unsupervised machine learning techniques to extend the analysis of the logical structure of the document and to identify section titles and captions of figures and tables. Furthermore, we employ clustering techniques to create the hierarchical table of contents of the article. Our system is modular in nature and allows a separate training of different stages on different training sets.
Urak Günter, Ziak Hermann, Kern Roman
2016
The core approach to distributed knowledge bases is federated search. Two of the main challenges for federated search are source representation and source selection. Different solutions to these problems have been proposed in the literature. Within this work we present a novel approach to query-based sampling that relies on knowledge bases. We show the basic correctness of our approach and find that the ambiguity of the probing terms has only a minor impact on the representation of the collection. Finally, we show that our method can be used to distinguish between niche and encyclopedic knowledge bases.
Horn Christopher, Gursch Heimo, Kern Roman, Cik Michael
2016
Models describing human travel patterns are indispensable to plan and operate road, rail and public transportation networks. For most kinds of analyses in the field of transportation planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination zones in the network. The preparation of OD matrices is traditionally a time-consuming and cumbersome task. The presented system, QZTool, reduces the necessary effort as it is capable of generating OD matrices automatically. These matrices are produced starting from floating phone data (FPD) as raw input. This raw input is processed by a Hadoop-based big data system. A graphical user interface allows for an easy usage and hides the complexity from the operator. For evaluation, we compare an FPD-based OD matrix to an OD matrix created by a traffic demand model. Results show that both matrices agree to a high degree, indicating that FPD-based OD matrices can be used to create new, or to validate or amend existing, OD matrices.
Falk Stefan, Rexha Andi, Kern Roman
2016
This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge requires finding domain-specific target expressions on sentence level that refer to reviewed entities. The detection of target words is achieved by using word vectors and their grammatical dependency relationships to classify each word in a sentence into target or non-target. A heuristic-based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% for this task.
Yusuke Fukazawa, Kröll Mark, Strohmaier M., Ota Jun
2016
Task-models concretize general requests to support users in real-world scenarios. In this paper, we present an IR based algorithm (IRTML) to automate the construction of hierarchically structured task-models. In contrast to other approaches, our algorithm is capable of assigning general tasks closer to the top and specific tasks closer to the bottom. Connections between tasks are established by extending Turney’s PMI-IR measure. To evaluate our algorithm, we manually created a ground truth in the health-care domain consisting of 14 domains. We compared the IRTML algorithm to three state-of-the-art algorithms to generate hierarchical structures, i.e. BiSection K-means, Formal Concept Analysis and Bottom-Up Clustering. Our results show that IRTML achieves a 25.9% taxonomic overlap with the ground truth, a 32.0% improvement over the compared algorithms.
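Turney's PMI-IR measure, which the IRTML algorithm extends, scores a term pair by how much more often the two terms co-occur than chance would predict, with probabilities estimated from search-engine hit counts. A minimal sketch of the plain (unextended) measure, with all hit counts invented for illustration:

```python
import math

def pmi_ir(hits_both, hits_a, hits_b, total):
    """Turney-style PMI-IR from (search-engine) hit counts:
    PMI(a, b) = log( p(a, b) / (p(a) * p(b)) )."""
    p_ab = hits_both / total
    p_a = hits_a / total
    p_b = hits_b / total
    return math.log(p_ab / (p_a * p_b))

# Toy counts: two terms that co-occur far more often than chance
# yield a positive PMI, suggesting a task/subtask connection.
assert pmi_ir(hits_both=200, hits_a=1_000, hits_b=2_000, total=100_000) > 0
```

In a task-model setting, such scores could rank candidate parent-child connections between tasks; the specific extension used by IRTML is not detailed in the abstract.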
Dragoni Mauro, Rexha Andi, Kröll Mark, Kern Roman
2016
Twitter is one of the most popular micro-blogging services on the web. The service allows sharing, interaction and collaboration via short, informal and often unstructured messages called tweets. Polarity classification of tweets refers to the task of assigning a positive or a negative sentiment to an entire tweet. Quite similar is predicting the polarity of a specific target phrase, for instance @Microsoft or #Linux, which is contained in the tweet. In this paper we present a Word2Vec approach to automatically predict the polarity of a target phrase in a tweet. In our classification setting, we thus do not have any polarity information but use only semantic information provided by a Word2Vec model trained on Twitter messages. To evaluate our feature representation approach, we apply well-established classification algorithms such as the Support Vector Machine and Naive Bayes. For the evaluation we used the SemEval 2016 Task #4 dataset. Our approach achieves F1-measures of up to ~90% for the positive class and ~54% for the negative class without using polarity information about single words.
Pimas Oliver, Klampfl Stefan, Kohl Thomas, Kern Roman, Kröll Mark
2016
Patents and patent applications are important parts of a company’s intellectual property. Thus, companies put a lot of effort in designing and maintaining an internal structure for organizing their own patent portfolios, but also in keeping track of competitors’ patent portfolios. Yet, official classification schemas offered by patent offices (i) are often too coarse and (ii) are not mappable, for instance, to a company’s functions, applications, or divisions. In this work, we present a first step towards generating tailored classifications. To automate the generation process, we apply key term extraction and topic modelling algorithms to 2,131 publications of German patent applications. To infer categories, we apply topic modelling to the patent collection. We evaluate the mapping of the topics found via the Latent Dirichlet Allocation method to the classes present in the patent collection as assigned by the domain expert.
Steinbauer Florian, Kröll Mark
2016
Social media monitoring has become an important means for business analytics and trend detection, for instance, analyzing the sentiment towards a certain product or decision. While a lot of work has been dedicated to analyzing sentiment for English texts, much less effort has been put into providing accurate sentiment classification for the German language. In this paper, we analyze three established classifiers for the German language with respect to Facebook posts. We then present our own hierarchical approach to classify sentiment and evaluate it using a data set of ~640 Facebook posts from corporate as well as governmental Facebook pages. We compare our approach to three sentiment classifiers for German, i.e. AlchemyAPI, Semantria and SentiStrength. With an accuracy of 70%, our approach performs better than the other classifiers. In an application scenario, we demonstrate our classifier’s ability to monitor changes in sentiment with respect to the refugee crisis.
Ziak Hermann, Rexha Andi, Kern Roman
2016
This paper describes our system for the mining task of the Social Book Search Lab in 2016. The track consisted of two tasks: the classification of book request postings and the linking of book identifiers with references mentioned within the text. For the classification task we used text mining features like n-grams and vocabulary size, but also included advanced features like the average number of spelling errors found within the text. Two datasets were provided by the organizers for this task, which were evaluated separately. The second task, the linking of book titles to a work identifier, was addressed by an approach based on lookup tables. For the first dataset of the classification task, our approach was ranked third, following two baseline approaches of the organizers, with an accuracy of 91 percent. For the second dataset we achieved second place with an accuracy of 82 percent. Our approach secured the first place with an F-score of 33.50 for the second task.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2016
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce uRank and investigate interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. uRank includes views summarizing the contents of a recommendation set and interactive methods conveying the role of users' interests through a recommendation ranking. A formal evaluation showed that gathering items relevant to a particular topic of interest with uRank incurs lower cognitive load compared to a traditional ranked list. A second study, consisting of an ecological validation, reports on usage patterns and usability of the various interaction techniques within a free, more natural setting.
Wertner Alfred, Pammer-Schindler Viktoria, Czech Paul
2015
Fall detection is a classical use case for mobile phone sensing. Nonetheless, no open dataset exists that could be used to train, test and compare fall detection algorithms. We present a dataset for mobile phone sensing-based fall detection. The dataset contains both accelerometer and gyroscope data. Data were labelled with four types of falls (e.g., “stumbling”) and ten types of non-fall activities (e.g., “sit down”). The dataset was collected with martial artists who simulated falls. We used five different state-of-the-art Android smartphone models worn on the hip in a small bag. Due to the dataset’s properties of using multiple devices and being labelled with multiple fall and non-fall categories, we argue that it is suitable to serve as a benchmark dataset.
Kraker Peter, Schlögl Christian, Jack Kris, Lindstaedt Stefanie
2015
Given the enormous amount of scientific knowledge that is produced each and every day, the need for better ways of gaining–and keeping–an overview of research fields is becoming more and more apparent. In a recent paper published in the Journal of Informetrics [1], we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating such overviews. First, we investigated the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we used co-readership patterns to map the field of educational technology. The resulting knowledge domain visualization, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent.
2015
2015
Various recommender frameworks have been proposed, but still there is a lack of work that addresses important aspects like: immediately considering streaming data within the recommendation process; scalability of the recommender system; real-time recommendation based on different context dependent data. To bridge these gaps, we contribute with a novel recommender framework and show how different context dependent data sources can be supported within a real-world scenario.
2015
In this paper, we present work-in-progress on a recommender system based on Collaborative Filtering that exploits location information gathered by indoor positioning systems. This approach allows us to provide recommendations for "extreme" cold-start users with absolutely no item interaction data available, where methods based on Matrix Factorization would not work. We simulate and evaluate our proposed system using data from the location-based FourSquare system and show that we can provide substantially better recommender accuracy results than a simple MostPopular baseline that is typically used when no interaction data is available.
2015
Informal learning at the workplace includes a multitude of processes. Respective activities can be categorized into multiple perspectives on informal learning, such as reflection, sensemaking, help seeking and maturing of collective knowledge. Each perspective raises requirements with respect to the technical support, which is why an integrated solution relying on social, adaptive and semantic technologies is needed. In this paper, we present the Social Semantic Server, an extensible, open-source application server that equips client-side tools with services to support and scale informal learning at the workplace. More specifically, the Social Semantic Server semantically enriches social data that is created at the workplace in the context of user-to-user or user-artifact interactions. This enriched data can then in turn be exploited in informal learning scenarios to, e.g., foster help seeking by recommending collaborators, resources, or experts. Following the design-based research paradigm, the Social Semantic Server has been implemented based on design principles, which were derived from theories such as Distributed Cognition and Meaning Making. We illustrate the applicability and efficacy of the Social Semantic Server in the light of three real-world applications that have been developed using its social semantic services. Furthermore, we report preliminary results of two user studies that have been carried out recently.
2015
Underspecified search queries can be addressed via result list diversification approaches, which are often computationally complex and require longer response times. In this paper, we explore an alternative, and more efficient, way to diversify the result list based on query expansion. To that end, we used a knowledge base pseudo-relevance feedback algorithm. We compared our algorithm to IA-Select, a state-of-the-art diversification method, using its intent-aware version of the NDCG (Normalized Discounted Cumulative Gain) metric. The results indicate that our approach can guarantee a similar extent of diversification as IA-Select. In addition, we showed that the supported query language of the underlying search engines plays an important role in diversification based on query expansion. Therefore, query expansion may be an alternative when result diversification is not feasible, for example in federated search systems where latency and the quantity of handled search results are critical issues.
2015
The objective of the EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project is to develop a system that can automatically recommend helpful and novel content to knowledge workers. The EEXCESS system can be integrated into existing software user interfaces as plugins which will extract topics and suggest the relevant material automatically. This recommendation process simplifies the information gathering of knowledge workers. Recommendations can also be triggered manually via web frontends. EEXCESS hides the potentially large number of knowledge sources by semi- or fully automatically providing content suggestions. Hence, users only have to be able to use the EEXCESS system and not all sources individually. For each user, relevant sources can be set or auto-selected individually. EEXCESS offers open interfaces, making it easy to connect additional sources and user program plugins.
2015
The analysis of temporal relationships in large amounts of graph data has gained significance in recent years. Information providers such as journalists seek to bring order into their daily work when dealing with temporally distributed events and the network of entities, such as persons, organisations or locations, which are related to these events. In this paper we introduce a time-oriented graph visualisation approach which maps temporal information to visual properties such as size, transparency and position and, combined with advanced graph navigation features, facilitates the identification and exploration of temporal relationships. To evaluate our visualisation, we compiled a dataset of ~120,000 news articles from international press agencies including Reuters, CNN, Spiegel and Aljazeera. Results from an early pilot study show the potential of our visualisation approach and its usefulness for analysing temporal relationships in large data sets.
2015
2015
2015
The amount of information available on the internet and within enterprises has reached an incredible dimension. Efficiently finding and understanding information and thereby saving resources remains one of the major challenges in our daily work. Powerful text analysis methods, a scalable faceted retrieval engine and a well-designed interactive user interface are required to address the problem. Besides providing means for drilling-down to the relevant piece of information, a part of the challenge arises from the need of analysing and visualising data to discover relationships and correlations, gain an overview of data distributions and unveil trends. Visual interfaces leverage the enormous bandwidth of the human visual system to support pattern discovery in large amounts of data. Our Knowminer search builds upon the well-known faceted search approach which is extended with interactive visualisations allowing users to analyse different aspects of the result set. Additionally, our system provides functionality for organising interesting search results into portfolios, and also supports social features for rating and boosting search results and for sharing and annotating portfolios.
2015
The overwhelming majority of scientific publications are authored by multiple persons; yet, bibliographic metrics are only assigned to individual articles as single entities. In this paper, we aim at a more fine-grained analysis of scientific authorship. We therefore adapt a text segmentation algorithm to identify potential author changes within the main text of a scientific article, which we obtain by using existing PDF extraction techniques. To capture stylistic changes in the text, we employ a number of stylometric features. We evaluate our approach on a small subset of PubMed articles consisting of an approximately equal number of research articles written by a varying number of authors. Our results indicate that the more authors an article has the more potential author changes are identified. These results can be considered as an initial step towards a more detailed analysis of scientific authorship, thereby extending the repertoire of bibliometrics.
2015
With the emergence of Web 2.0, tag recommenders have become important tools, which aim to support users in finding descriptive tags for their bookmarked resources. Although current algorithms provide good results in terms of tag prediction accuracy, they are often designed in a data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. This thesis aims at modeling these cognitive dynamics in social tagging in order to improve tag recommendations and to better understand the underlying processes. As a first attempt in this direction, we have implemented an interplay between individual micro-level (e.g., categorizing resources or temporal dynamics) and collective macro-level (e.g., imitating other users' tags) processes in the form of a novel tag recommender algorithm. The preliminary results for datasets gathered from BibSonomy, CiteULike and Delicious show that our proposed approach can outperform current state-of-the-art algorithms, such as Collaborative Filtering, FolkRank or Pairwise Interaction Tensor Factorization. We conclude that recommender systems can be improved by incorporating related principles of human cognition.
2015
In this paper, we propose an approach to deriving public transportation timetables of a region (i.e. country) based on (i) large-scale, non-GPS cell phone data and (ii) a dataset containing geographic information of public transportation stations. The presented algorithm is designed to work with movement data, which are scarce and have a low spatial accuracy but exist in vast amounts (large-scale). Since only aggregated statistics are used, our algorithm copes well with anonymized data. Our evaluation shows that 89% of the departure times of popular train connections are correctly recalled with an allowed deviation of 5 minutes. The timetable can be used as a feature for transportation mode detection to separate public from private transport when no public timetable is available.
2015
2015
People willingly provide more and more information about themselves on social media platforms. This personal information about users’ emotions (sentiment) or goals (intent) is particularly valuable, for instance, for monitoring tools. So far, sentiment and intent analysis were conducted separately. Yet, both aspects can complement each other thereby informing processes such as explanation and reasoning. In this paper, we investigate the relation between intent and sentiment in weblogs. We therefore extract ~90,000 human goal instances from the ICWSM 2009 Spinn3r dataset and assign respective sentiments. Our results indicate that associating intent with sentiment represents a valuable addition to research areas such as text analytics and text understanding.
Simon Jörg Peter, Schmidt Peter, Pammer-Schindler Viktoria
2015
Synchronisation algorithms are central components of collaborative editing software. The energy efficiency of such algorithms becomes of interest to a wide community of mobile application developers. In this paper we explore the differential synchronisation (diffsync) algorithm with respect to energy consumption on mobile devices. We identify three areas for optimisation: (a) empty cycles, where diffsync is executed although no changes need to be processed, (b) tail energy, by adapting cycle intervals, and (c) computational complexity. We propose a push-based diffsync strategy in which synchronisation cycles are triggered when a device connects to the network or when a device is notified of changes. Discussions within this paper are based on real usage data of PDF annotations via the Mendeley iOS app.
2015
Recent research has unveiled the importance of online social networks for improving the quality of recommender systems and encouraged the research community to investigate better ways of exploiting the social information for recommendations. To contribute to this sparse field of research, in this paper we exploit users’ interactions along three data sources (marketplace, social network and location-based) to assess their performance in a barely studied domain: recommending products and domains of interest (i.e., product categories) to people in an online marketplace environment. To that end we defined sets of content- and network-based user similarity features for each data source and studied them in isolation, using a user-based Collaborative Filtering (CF) approach, and in combination, via a hybrid recommender algorithm, to assess which one provides the best recommendation performance. Interestingly, in our experiments conducted on a rich dataset collected from SecondLife, a popular online virtual world, we found that recommenders relying on user similarity features obtained from the social network data clearly yielded the best results in terms of accuracy in case of predicting products, whereas the features obtained from the marketplace and location-based data sources also obtained very good results in case of predicting categories. This finding indicates that all three types of data sources are important and should be taken into account depending on the level of specialization of the recommendation task.
2015
We assume that recommender systems are more successful when they are based on a thorough understanding of how people process information. In the current paper we test this assumption in the context of social tagging systems. Cognitive research on how people assign tags has shown that they draw on two interconnected levels of knowledge in their memory: on a conceptual level of semantic fields or LDA topics, and on a lexical level that turns patterns on the semantic level into words. Another strand of tagging research reveals a strong impact of time-dependent forgetting on users' tag choices, such that recently used tags have a higher probability of being reused than "older" tags. In this paper, we align both strands by implementing a computational theory of human memory that integrates the two-level conception and the process of forgetting in the form of a tag recommender. Furthermore, we test the approach in three large-scale social tagging datasets that are drawn from BibSonomy, CiteULike and Flickr. As expected, our results reveal a selective effect of time: forgetting is much more pronounced on the lexical level of tags. Second, an extensive evaluation based on this observation shows that a tag recommender interconnecting the semantic and lexical level based on a theory of human categorization and integrating time-dependent forgetting on the lexical level results in high-accuracy predictions and outperforms other well-established algorithms, such as Collaborative Filtering, Pairwise Interaction Tensor Factorization, FolkRank and two alternative time-dependent approaches. We conclude that tag recommenders will benefit from going beyond the manifest level of word co-occurrences, and from including forgetting processes on the lexical level.
2015
In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. Based on a theory of human memory, the approach estimates a tag's probability of being applied by a particular user as a function of usage frequency and recency of the tag in the user's past. This probability is further refined by considering the influence of the current semantic context of the user's tagging situation. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how refining frequency-based estimates by considering usage recency and contextual influence outperforms conventional "most popular tags" approaches and another existing and very effective, but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We demonstrate how effective principles of recommender systems can be designed and implemented if human memory processes are taken into account.
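The described combination of a frequency-and-recency score with a resource-specific frequency analysis can be sketched roughly as follows. This is an illustration, not the paper's exact formulation: the decay parameter, the mixing weight `beta` and the min-max normalisation are all assumptions.

```python
import math
from collections import Counter

def bll_score(timestamps, now, d=0.5):
    # Frequency and recency in one term: sum of power-law-decayed uses.
    return math.log(sum((now - t) ** -d for t in timestamps))

def recommend(user_tag_times, resource_tags, now, beta=0.5, top_k=3):
    """Rank tags by a normalised mix of the user's time-weighted tag
    history and the tags most frequently assigned to the resource."""
    user = {t: bll_score(ts, now) for t, ts in user_tag_times.items()}
    res = dict(Counter(resource_tags))

    def norm(scores):  # min-max normalise one component before mixing
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    u, r = norm(user), norm(res)
    tags = set(u) | set(r)
    mixed = {t: beta * u.get(t, 0.0) + (1 - beta) * r.get(t, 0.0) for t in tags}
    return sorted(mixed, key=mixed.get, reverse=True)[:top_k]

# A tag that is both recent for the user and frequent on the resource
# should rank first.
top = recommend({"python": [9.0, 9.5], "java": [1.0]},
                ["python", "python", "ml"], now=10.0)
assert top[0] == "python"
```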
Lindstaedt Stefanie , Ley Tobias, Sack Harald
2015
Kravcik Milos, Mikroyannidis Alexander, Pammer-Schindler Viktoria, Prilla Michael , Ullmann T.D.
2015
Pammer-Schindler Viktoria, Bratic Marina, Feyertag Sandra, Faltin Nils
2015
We report two 6-week studies, each with 10 participants, on improving time management. In each study a different intervention was administered, in parallel to otherwise regular work: In the self-tracking setting, participants used only an activity logging tool to track their time use and a reflective practice, namely daily review of time use, to improve time management. In the coaching setting, participants did the same, but additionally received weekly bilateral coaching. In both settings, participants reported learning about time management. This is encouraging, as such self-directed learning is clearly cheaper than coaching. Only participants in the coaching setting, however, improved their self-assessment with respect to predefined time management best practices.
Scherer Reinhold, Schwarz Andreas , Müller-Putz G. R. , Pammer-Schindler Viktoria, Lloria Garcia Mariano
2015
Mutual brain-machine co-adaptation is the most common approach used to gain control over spontaneous electroencephalogram (EEG) based brain-computer interfaces (BCIs). Co-adaptation means the concurrent or alternating use of machine learning and the brain’s reinforcement learning mechanisms. Results from the literature, however, suggest that current implementations of this approach do not lead to desired results (“BCI inefficiency”). In this paper, we propose an alternative strategy that implements some recommendations from educational psychology and instructional design. We present a jigsaw puzzle game for Android devices developed to train the BCI skill in individuals with cerebral palsy (CP). Preliminary results of a supporting study in four CP users suggest high user acceptance. Three out of the four users achieved better than chance accuracy in arranging pieces to form the puzzle.
Wozelka Ralph, Kröll Mark, Sabol Vedran
2015
The analysis of temporal relationships in large amounts of graph data has gained significance in recent years. Information providers such as journalists seek to bring order into their daily work when dealing with temporally distributed events and the network of entities, such as persons, organisations or locations, which are related to these events. In this paper we introduce a time-oriented graph visualisation approach which maps temporal information to visual properties such as size, transparency and position and, combined with advanced graph navigation features, facilitates the identification and exploration of temporal relationships. To evaluate our visualisation, we compiled a dataset of ~120,000 news articles from international press agencies including Reuters, CNN, Spiegel and Aljazeera. Results from an early pilot study show the potential of our visualisation approach and its usefulness for analysing temporal relationships in large data sets.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2015
uRank is a Web-based tool combining lightweight text analytics and visual methods for topic-wise exploration of document sets. It includes a view summarizing the content of the document set in meaningful terms, a dynamic document ranking view and a detailed view for further inspection of individual documents. Its major strength lies in how it supports users in reorganizing documents on-the-fly as their information interests change. We present a preliminary evaluation showing that uRank helps to reduce cognitive load compared to a traditional list-based representation.
Tatzgern Markus, Grasset Raphael, Veas Eduardo Enrique, Schmalstieg Dieter
2015
Augmented reality (AR) enables users to retrieve additional information about real world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely using a virtual copy metaphor. We focus on the design of techniques that allow the exploration of large real world objects. We evaluated our interfaces in a series of studies in controlled conditions and compared them to a 3D map interface, which is a more common method for accessing location-based information. Based on our findings, we put forward design recommendations that should be considered by future generations of location-based AR browsers, 3D tourist guides or situated urban planning.
Silva Nelson, Eggeling Eva, Schreck Tobias, Fellner Dieter W.
2015
Gursch Heimo, Ziak Hermann, Kern Roman
2015
The objective of the EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project is to develop a system that can automatically recommend helpful and novel content to knowledge workers. The EEXCESS system can be integrated into existing software user interfaces as plugins which will extract topics and suggest the relevant material automatically. This recommendation process simplifies the information gathering of knowledge workers. Recommendations can also be triggered manually via web frontends. EEXCESS hides the potentially large number of knowledge sources by semi- or fully automatically providing content suggestions. Hence, users only have to be able to use the EEXCESS system, not each source individually. For each user, relevant sources can be set or auto-selected individually. EEXCESS offers open interfaces, making it easy to connect additional sources and user program plugins.
Dennerlein Sebastian, Rella Matthias, Tomberg Vladimir, Theiler Dieter, Treasure-Jones Tamsin, Kerr Micky, Ley Tobias, Al-Smadi Mohammad, Trattner Christoph
2015
Sensemaking at the workplace and in educational contexts has been extensively studied for decades. Interestingly, making sense out of the own wealth of learning experiences at the workplace has been widely ignored. To tackle this issue, we have implemented a novel sensemaking interface for healthcare professionals to support learning at the workplace. The proposed prototype supports remembering of informal experiences from episodic memory followed by sensemaking in semantic memory. Results from an initial study conducted as part of an iterative co-design process reveal the prototype is being perceived as useful and supportive for informal sensemaking by study participants from the healthcare domain. Furthermore, we find first evidence that re-evaluation of collected information is a potentially necessary process that needs further exploration to fully understand and support sensemaking of informal learning experiences.
Parra Denis, Gomez M., Hutardo D., Wen X., Lin Yu-Ru, Trattner Christoph
2015
Twitter is often referred to as a backchannel for conferences. While the main conference takes place in a physical setting, on-site and off-site attendees socialize, introduce new ideas or broadcast information by microblogging on Twitter. In this paper we analyze scholars’ Twitter usage in 16 Computer Science conferences over a timespan of five years. Our primary finding is that over the years there are differences with respect to the uses of Twitter, with an increase of informational activity (retweets and URLs), and a decrease of conversational usage (replies and mentions), which also impacts the network structure – meaning the amount of connected components – of the informational and conversational networks. We also applied topic modeling over the tweets’ content and found that when clustering conferences according to their topics the resulting dendrogram clearly reveals the similarities and differences of the actual research interests of those events. Furthermore, we also analyzed the sentiment of tweets and found persistent differences among conferences. It also shows that some communities consistently express messages with higher levels of emotions while others do it in a more neutral manner. Finally, we investigated some features that can help predict future user participation in the online Twitter conference activity. By casting the problem as a classification task, we created a model that identifies factors that contribute to the continuing user participation. Our results have implications for research communities to implement strategies for continuous and active participation among members. Moreover, our work reveals the potential for the use of information shared on Twitter in order to facilitate communication and cooperation among research communities, by providing visibility to new resources or researchers from relevant but often little known research communities.
Trattner Christoph, Parra Denis, Brusilovsky Peter, Marinho Leandro
2015
The use of contexts (side information associated with information tasks) has been one of the most important dimensions for the improvement of Information Retrieval tasks, helping to clarify the information needs of the users, which usually start from a few keywords in a text box. Particularly, the social context has been leveraged in search and personalization since the inception of the Social Web, but even today we find new scenarios of information filtering, search, recommendation and personalization where the use of social signals can produce a steep improvement. In addition, the action of searching has become a social process on the Web, making traditional assumptions of relevance obsolete and requiring new paradigms for matching the most useful resources that solve information needs. This scenario has motivated us to organize the Social Personalization and Search (SPS) workshop, a forum aimed at sharing and discussing research that leverages social data for improving classic personalization models for information access and at revisiting search from an individual phenomenon to a collaborative process.
Ruiz-Calleja Adolfo, Dennerlein Sebastian, Tomberg Vladimir , Pata Kai, Ley Tobias, Theiler Dieter, Lex Elisabeth
2015
This paper presents the potential of a social semantic infrastructure that implements an Actor Artifact Network (AAN) with the final goal of supporting learning analytics at the workplace. Two applications were built on top of this infrastructure and make use of the emerging relations of such an AAN. A preliminary evaluation shows that an AAN can be created out of the usage of both applications, thus opening the possibility to implement learning analytics at the workplace.
Ruiz-Calleja Adolfo, Dennerlein Sebastian, Tomberg Vladimir , Ley Tobias , Theiler Dieter, Lex Elisabeth
2015
This paper presents our experiences using a social semantic infrastructure that implements a semantically-enriched Actor Artifact Network (AAN) to support informal learning at the workplace. Our previous research led us to define the Model of Scaling Informal Learning, to identify several common practices when learning happens at the workplace, and to propose a social semantic infrastructure able to support them. This paper shows this support by means of two illustrative examples where practitioners employed several applications integrated into the infrastructure. Thus, this paper clarifies how workplace learning processes can be supported with such infrastructure according to the aforementioned model. The initial analysis of these experiences gives promising results since it shows how the infrastructure mediates in the sharing of contextualized learning artifacts and how it builds up an AAN that makes explicit the relationships between actors and artifacts when learning at the workplace.
Cook John, Ley Tobias, Maier Ronald, Mor Yishay, Santos Patricia, Lex Elisabeth, Dennerlein Sebastian, Trattner Christoph, Holley Debbie
2015
In this paper we define the notion of the Hybrid Social Learning Network (HSLN). We propose mechanisms for interlinking and enhancing both the practice of professional learning and theories on informal learning. Our approach shows how we employ empirical and design work and a participatory pattern workshop to move from (kernel) theories via Design Principles and prototypes to social machines articulating the notion of an HSLN. We illustrate this approach with the example of Help Seeking for healthcare professionals.
Fessl Angela, Feyertag Sandra, Pammer-Schindler Viktoria
2015
This paper presents a case study on co-designing digital technologies for knowledge management and data-driven business for an SME. The goal of the case study was to analyse the status quo of technology usage and to develop design suggestions in form of mock-ups tailored to the company’s needs. We used both requirements engineering and interactive system design methods such as interviews, workshops, and mock-ups for work analysis and system design. The case study illustrates step-by-step the processes of knowledge extraction and combination (analysis) and innovation creation (design). These processes resulted in non-functional mock-ups, which are planned to be implemented within the SME.
Traub Matthias, Kowald Dominik, Lacic Emanuel, Lex Elisabeth, Schoen Pepjin, Supp Gernot
2015
In this paper, we present a scalable hotel recommender system for TripRebel, a new online booking portal. On the basis of the open-source enterprise search platform Apache Solr, we developed a system architecture with Web-based services to interact with indexed data at large scale as well as to provide hotel recommendations using various state-of-the-art recommender algorithms. We demonstrate the efficiency of our system directly using the live TripRebel portal where, in its current state, hotel alternatives for a given hotel are calculated based on data gathered from the Expedia Affiliate Network (EAN).
Pujari Subhash Chandra, Hadgu Asmelah Teka, Lex Elisabeth, Jäschke Robert
2015
In this work, we study social and academic network activities of researchers from Computer Science. Using a recently proposed framework, we map the researchers to their Twitter accounts and link them to their publications. This enables us to create two types of networks: first, networks that reflect social activities on Twitter, namely the researchers’ follow, retweet and mention networks and second, networks that reflect academic activities, that is the co-authorship and citation networks. Based on these datasets, we (i) compare the social activities of researchers with their academic activities, (ii) investigate the consistency and similarity of communities within the social and academic activity networks, and (iii) investigate the information flow between different areas of Computer Science in and between both types of networks. Our findings show that if co-authors interact on Twitter, their relationship is reciprocal, increasing with the numbers of papers they co-authored. In general, the social and the academic activities are not correlated. In terms of community analysis, we found that the three social activity networks are most consistent with each other, with the highest consistency between the retweet and mention network. A study of information flow revealed that in the follow network, researchers from Data Management, Human-Computer Interaction, and Artificial Intelligence act as a source of information for other areas in Computer Science.
Dennerlein Sebastian, Kowald Dominik, Lex Elisabeth, Lacic Emanuel, Theiler Dieter, Ley Tobias
2015
Informal learning at the workplace includes a multitude of processes. Respective activities can be categorized into multiple perspectives on informal learning, such as reflection, sensemaking, help seeking and maturing of collective knowledge. Each perspective raises requirements with respect to the technical support, which is why an integrated solution relying on social, adaptive and semantic technologies is needed. In this paper, we present the Social Semantic Server, an extensible, open-source application server that equips client-side tools with services to support and scale informal learning at the workplace. More specifically, the Social Semantic Server semantically enriches social data that is created at the workplace in the context of user-to-user or user-artifact interactions. This enriched data can then in turn be exploited in informal learning scenarios to, e.g., foster help seeking by recommending collaborators, resources, or experts. Following the design-based research paradigm, the Social Semantic Server has been implemented based on design principles, which were derived from theories such as Distributed Cognition and Meaning Making. We illustrate the applicability and efficacy of the Social Semantic Server in the light of three real-world applications that have been developed using its social semantic services. Furthermore, we report preliminary results of two user studies that have been carried out recently.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2015
Whenever we gather or organize knowledge, the task of searching inevitably takes precedence. As exploration unfolds, it becomes cumbersome to reorganize resources along new interests, as any new search brings new results. Despite huge advances in retrieval and recommender systems from the algorithmic point of view, many real-world interfaces have remained largely unchanged: results appear in an infinite list ordered by relevance with respect to the current query. We introduce uRank, a user-driven visual tool for exploration and discovery of textual document recommendations. It includes a view summarizing the content of the recommendation set, combined with interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. We provide a formal experiment showing that uRank users can browse the document collection and efficiently gather items relevant to particular topics of interest with significantly lower cognitive load compared to traditional list-based representations.
Lacic Emanuel, Traub Matthias, Kowald Dominik, Lex Elisabeth
2015
In this paper, we present our approach towards an effective scalable recommender framework termed ScaR. Our framework is based on the microservices architecture and exploits search technology to provide real-time recommendations. Since it is our aim to create a system that can be used in a broad range of scenarios, we designed it to be capable of handling various data streams and sources. We show its efficacy and scalability with an initial experiment on how the framework can be used in a large-scale setting.
Lacic Emanuel, Luzhnica Granit, Simon Jörg Peter, Traub Matthias, Lex Elisabeth, Kowald Dominik
2015
In this paper, we present work-in-progress on a recommender system based on Collaborative Filtering that exploits location information gathered by indoor positioning systems. This approach allows us to provide recommendations for "extreme" cold-start users with absolutely no item interaction data available, where methods based on Matrix Factorization would not work. We simulate and evaluate our proposed system using data from the location-based FourSquare system and show that we can provide substantially better recommender accuracy results than a simple MostPopular baseline that is typically used when no interaction data is available.
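The core idea, using location overlap as a substitute for the missing item interactions of a cold-start user, can be sketched roughly as follows. This is an illustrative simplification, not the authors' implementation; all function names and data shapes are invented:

```python
from collections import defaultdict


def location_neighbors(user_locations, target, k=3):
    """Rank other users by the number of visited locations they share
    with the target user (a stand-in for interaction-based similarity)."""
    target_locs = user_locations[target]
    overlaps = {
        u: len(target_locs & locs)
        for u, locs in user_locations.items() if u != target
    }
    ranked = sorted(overlaps.items(), key=lambda x: -x[1])
    return [u for u, n in ranked[:k] if n > 0]


def recommend_cold_start(user_locations, user_items, target, n=5):
    """Score items by how many location-based neighbors interacted with
    them; the target user needs no item interactions of their own."""
    scores = defaultdict(int)
    for neighbor in location_neighbors(user_locations, target):
        for item in user_items.get(neighbor, set()):
            scores[item] += 1
    return [i for i, _ in sorted(scores.items(), key=lambda x: -x[1])[:n]]
```

Unlike Matrix Factorization, this scheme degrades gracefully: a user with zero item interactions still gets recommendations as long as some location data exists.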
Kowald Dominik, Lex Elisabeth
2015
To date, the evaluation of tag recommender algorithms has mostly been conducted in limited ways, including p-core pruned datasets, a small set of compared algorithms and solely based on recommender accuracy. In this study, we use an open-source evaluation framework to compare a rich set of state-of-the-art algorithms in six unfiltered, open datasets via various metrics, measuring not only accuracy but also the diversity, novelty and computational costs of the approaches. We therefore provide a transparent and reproducible tag recommender evaluation in real-world folksonomies. Our results suggest that the efficacy of an algorithm highly depends on the given needs and thus, they should be of interest to both researchers and developers in the field of tag-based recommender systems.
Schulze Gunnar, Horn Christopher, Kern Roman
2015
This paper presents an approach for matching cell phone trajectories of low spatial and temporal accuracy to the underlying road network. In this setting, only the position of the base station involved in a signaling event and the timestamp are known, resulting in a possible error of several kilometers. No additional information, such as signal strength, is available. The proposed solution restricts the set of admissible routes to a corridor by estimating the area within which a user is allowed to travel. The size and shape of this corridor can be controlled by various parameters to suit different requirements. The computed area is then used to select road segments from an underlying road network, for instance OpenStreetMap. These segments are assembled into a search graph, which additionally takes the chronological order of observations into account. A modified Dijkstra algorithm is applied for finding admissible candidate routes, from which the best one is chosen. We performed a detailed evaluation of 2249 trajectories with an average sampling time of 260 seconds. Our results show that, in urban areas, on average more than 44% of each trajectory is matched correctly. In rural and mixed areas, this value increases to more than 55%. Moreover, an in-depth evaluation was carried out to determine the optimal values for the tunable parameters and their effects on the accuracy, matching ratio and execution time. The proposed matching algorithm facilitates the use of large volumes of cell phone data in Intelligent Transportation Systems, in which accurate trajectories are desirable.
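The corridor-restricted route search can be pictured as a Dijkstra run that simply never expands road segments outside the estimated corridor. A minimal sketch, assuming the graph, node names and weights are invented for illustration and ignoring the chronological-order constraints of the full search graph:

```python
import heapq


def corridor_dijkstra(graph, corridor, start, goal):
    """Shortest path restricted to road nodes inside the corridor.

    graph: {node: [(neighbor, length_m), ...]} (directed adjacency list)
    corridor: set of admissible nodes derived from the travel-area estimate
    """
    if start not in corridor or goal not in corridor:
        return None
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            # Reconstruct the route by walking the predecessor chain.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, w in graph.get(node, []):
            if nb not in corridor:
                continue  # prune segments outside the corridor
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    return None  # no admissible route within the corridor
```

Tightening the corridor shrinks the search graph (faster matching) at the risk of excluding the true route, which is exactly the accuracy/runtime trade-off the tunable parameters control.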
Fessl Angela, Wesiak Gudrun, Feyertag Sandra, Rivera-Pelayo Verónica
2015
In-app reflection guidance for workplace learning means motivating and guiding users to reflect on their working and learning, based on users' activities captured by the app. In this paper, we present a generic concept for such in-app reflection guidance for workplace learning, its implementation in three different applications, and its evaluation in three different settings (one setting per app). From this experience, we draw the following lessons learned: First, the implemented in-app reflection guidance components are perceived as useful tools for reflective learning and their usefulness increases with higher usage rates. Second, smart technological support is sufficient to trigger reflection; however, with the different implemented components, reflective learning also takes place at different stages. A sophisticated, unobtrusive integration into the working environment is not trivial at all. Automatically created prompts need sensible timing in order to be perceived as useful and must not disrupt the current working processes.
Dennerlein Sebastian, Theiler Dieter, Marton Peter, Lindstaedt Stefanie , Lex Elisabeth, Santos Patricia, Cook John
2015
We present KnowBrain (KB), an open source Dropbox-like knowledge repository with social features for informal workplace learning. KB enables users (i) to share and collaboratively structure knowledge, (ii) to access knowledge via sophisticated content- and metadata-based search and recommendation, and (iii) to discuss artefacts by means of multimedia-enriched Q&A. As such, KB can support, integrate and foster various collaborative learning processes related to daily work-tasks.
Ziak Hermann, Kern Roman
2015
Cross-vertical aggregated search is a special form of meta search, where multiple search engines from different domains and with varying behaviour are combined to produce a single search result for each query. Such a setting poses a number of challenges, among them the question of how to best evaluate the quality of the aggregated search results. We devised an evaluation strategy together with an evaluation platform in order to conduct a series of experiments. In particular, we are interested in whether pseudo relevance feedback helps in such a scenario. Therefore we implemented a number of pseudo relevance feedback techniques based on knowledge bases, where the knowledge base is either Wikipedia or a combination of the underlying search engines themselves. While conducting the evaluations we gathered a number of qualitative and quantitative results and gained insights on how different users compare the quality of search result lists. In regard to the pseudo relevance feedback we found that using Wikipedia as knowledge base generally provides a benefit, except for entity-centric queries, which target single persons or organisations. Our results will help steer the development of cross-vertical aggregated search engines and will also help to guide large-scale evaluation strategies, for example using crowdsourcing techniques.
Pimas Oliver, Kröll Mark, Kern Roman
2015
Our system for the PAN 2015 authorship verification challenge is based upon a two-step pre-processing pipeline. In the first step we extract different features that observe stylometric properties, grammatical characteristics and pure statistical features. In the second step of our pre-processing we merge all those features into a single meta feature space. We train an SVM classifier on the generated meta features to verify the authorship of an unseen text document. We report the results from the final evaluation as well as on the training datasets.
Trattner Christoph, Balby Marinho Leandro, Parra Denis
2015
Large scale virtual worlds such as massive multiplayer online games or 3D worlds gained tremendous popularity over the past few years. With the large and ever increasing amount of content available, virtual world users face the information overload problem. To tackle this issue, game-designers usually deploy recommendation services with the aim of making the virtual world a more joyful environment to be connected at. In this context, we present in this paper the results of a project that aims at understanding the mobility patterns of virtual world users in order to derive place recommenders for helping them to explore content more efficiently. Our study focuses on the virtual world SecondLife, one of the largest and most prominent in recent years. Since SecondLife is comparable to real-world Location-based Social Networks (LBSNs), i.e., users can both check-in and share visited virtual places, a natural approach is to assume that place recommenders that are known to work well on real-world LBSNs will also work well on SecondLife. We have put this assumption to the test and found out that (i) while collaborative filtering algorithms have comparable performance in both environments, (ii) existing place recommenders based on geographic metadata are not useful in SecondLife.
Larrain Santiago, Parra Denis, Graells-Garrido Eduardo, Nørvåg Kjetil, Trattner Christoph
2015
In this paper, we present work-in-progress of a recently started project that aims at studying the effect of time in recommender systems in the context of social tagging. Despite the existence of previous work in this area, no research has yet made an extensive evaluation and comparison of time-aware recommendation methods. With this motivation, this paper presents results of a study where we focused on understanding (i) “when” to use the temporal information in traditional collaborative filtering (CF) algorithms, and (ii) “how” to weight the similarity between users and items by exploring the effect of different time-decay functions. As the results of our extensive evaluation conducted over five social tagging systems (Delicious, BibSonomy, CiteULike, MovieLens, and Last.fm) suggest, the step (when) in which time is incorporated in the CF algorithm has substantial effect on accuracy, and the type of time-decay function (how) plays a role on accuracy and coverage mostly under pre-filtering on user-based CF, while item-based CF shows stronger stability over the experimental conditions.
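The "how" question, weighting user similarity by a time-decay function, can be illustrated with an exponential decay applied to tag timestamps. This is a generic sketch under assumed names and parameters (e.g. the half-life), not the evaluated implementation:

```python
import math


def exp_decay(t_event, t_now, half_life_days=30.0):
    """Exponential time-decay weight in (0, 1]: an event half_life_days
    old weighs 0.5, one twice that old weighs 0.25, and so on."""
    age_days = (t_now - t_event) / 86400.0
    return 0.5 ** (age_days / half_life_days)


def decayed_user_similarity(tags_u, tags_v, t_now, half_life_days=30.0):
    """Cosine-style similarity over shared tags, where each tag usage
    (tag -> last-used unix timestamp) is weighted by its recency."""
    shared = set(tags_u) & set(tags_v)
    num = sum(exp_decay(tags_u[t], t_now, half_life_days) *
              exp_decay(tags_v[t], t_now, half_life_days) for t in shared)
    norm_u = math.sqrt(sum(exp_decay(ts, t_now, half_life_days) ** 2
                           for ts in tags_u.values()))
    norm_v = math.sqrt(sum(exp_decay(ts, t_now, half_life_days) ** 2
                           for ts in tags_v.values()))
    return num / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

The "when" question is orthogonal: the same decay could instead be applied before neighborhood computation (pre-filtering the rating data) rather than inside the similarity, which is exactly the distinction the study evaluates.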
Dennerlein Sebastian, Treasure-Jones Tamsin, Tomberg Vladimir, Theiler Dieter, Lex Elisabeth, Ley Tobias
2015
Sensemaking at the workplace and in educational contexts has been extensively studied for decades. Interestingly, making sense out of the own wealth of learning experiences at the workplace has been widely ignored. To tackle this issue, we have implemented a novel sensemaking interface for healthcare professionals to support learning at the workplace. The proposed prototype supports remembering of informal experiences from episodic memory followed by sensemaking in semantic memory. Results from an initial study conducted as part of an iterative co-design process reveal the prototype is being perceived as useful and supportive for informal sensemaking by study participants from the healthcare domain. Furthermore, we find first evidence that re-evaluation of collected information is a potentially necessary process that needs further exploration to fully understand and support sensemaking of informal learning experiences.
Dennerlein Sebastian, Kaiser Rene_DB, Barreiros Carla, Gutounig Robert , Rauter Romana
2015
Barcamps are events for open knowledge exchange. They are generally open to everyone, irrespective of background or discipline, and request no attendance fee. Barcamps are structured by only a small set of common rules and invite participants to an interactive and interdisciplinary discourse on an equal footing. In contrast to scientific conferences, the program is decided by the participants themselves on-site. Barcamps are often called un-conferences or ad-hoc conferences. Since barcamps are typically attended by people in their spare time, their motivation to actively engage and benefit from participating is very high. This paper presents a case study conducted at the annual Barcamp Graz in Austria. Within the case study, two field studies (quantitative and qualitative) and a parallel participant observation were carried out between 2010 and 2014. In these investigations, we elaborated on the differences between barcamps and scientific conferences, inferred characteristics of barcamps for knowledge generation, sharing and transfer in organizations, and propose three usages of barcamps in organizations: further education of employees, internal knowledge transfer, and getting outside knowledge in. Barcamps can be used as further education for employees, enabling not only knowledge sharing, generation and transfer via the participating employees, but also informal promotion of a company’s competences. With respect to internal knowledge transfer, hierarchical boundaries can be temporarily broken by allowing informal and interactive discussion. This can lead to the elicitation of ‘hidden’ knowledge and to knowledge transfer resulting in more efficient teamwork and interdepartmental cooperation. Finally, external stakeholders such as customers and partners can be included in this process to get outside knowledge in and to identify customer needs, sketch first solutions and start concrete projects.
As a result of the case study, we hypothesise as a step towards further research that organisations can benefit from utilising this format as knowledge strategy.
Rubien Raoul, Ziak Hermann, Kern Roman
2015
Underspecified search queries can be handled via result list diversification approaches, which are often computationally complex and require longer response times. In this paper, we explore an alternative, and more efficient way to diversify the result list based on query expansion. To that end, we used a knowledge base pseudo-relevance feedback algorithm. We compared our algorithm to IA-Select, a state-of-the-art diversification method, using its intent-aware version of the NDCG (Normalized Discounted Cumulative Gain) metric. The results indicate that our approach can guarantee a similar extent of diversification as IA-Select. In addition, we showed that the supported query language of the underlying search engines plays an important role in query-expansion-based diversification. Therefore, query expansion may be an alternative when result diversification is not feasible, for example in federated search systems where latency and the quantity of handled search results are critical issues.
Kraker Peter
2015
In this paper, I present the evaluation of a novel knowledge domain visualization of educational technology. The interactive visualization is based on readership patterns in the online reference management system Mendeley. It comprises 13 topic areas, spanning psychological, pedagogical, and methodological foundations, learning methods and technologies, and social and technological developments. The visualization was evaluated with (1) a qualitative comparison to knowledge domain visualizations based on citations, and (2) expert interviews. The results show that the co-readership visualization is a recent representation of pedagogical and psychological research in educational technology. Furthermore, the co-readership analysis covers more areas than comparable visualizations based on co-citation patterns. Areas related to computer science, however, are missing from the co-readership visualization and more research is needed to explore the interpretations of size and placement of research areas on the map.
Hasani-Mavriqi Ilire, Geigl Florian, Pujari Subhash Chandra, Lex Elisabeth, Helic Denis
2015
In this paper, we analyze the influence of social status on opinion dynamics and consensus building in collaboration networks. To that end, we simulate the diffusion of opinions in empirical collaboration networks by taking into account both the network structure and the individual differences of people reflected through their social status. For our simulations, we adapt a well-known Naming Game model and extend it with the Probabilistic Meeting Rule to account for the social status of individuals participating in a meeting. This mechanism is sufficiently flexible and allows us to model various situations in collaboration networks, such as the emergence or disappearance of social classes. In this work, we concentrate on studying three well-known forms of class society: egalitarian, ranked and stratified. In particular, we are interested in the way these society forms facilitate opinion diffusion. Our experimental findings reveal that (i) opinion dynamics in collaboration networks is indeed affected by the individuals’ social status and (ii) this effect is intricate and non-obvious. In particular, although the social status favors consensus building, relying on it too strongly can slow down the opinion diffusion, indicating that there is a specific setting for each collaboration network in which social status optimally benefits the consensus building process.
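A Naming Game extended with a status-dependent meeting rule can be sketched roughly as follows. The sigmoid form of the meeting probability and all parameter names are assumptions for illustration; the paper's exact rule may differ:

```python
import math
import random


def meeting_prob(status_speaker, status_hearer, beta=1.0):
    """Probabilistic Meeting Rule (sketch): the higher the speaker's
    status relative to the hearer's, the more likely the meeting takes
    effect; beta tunes how strongly status matters (beta=0: egalitarian)."""
    diff = status_speaker - status_hearer
    return 1.0 / (1.0 + math.exp(-beta * diff))


def naming_game_step(opinions, status, edges, rng, beta=1.0):
    """One speaker-hearer interaction on a random edge.

    opinions: {node: set of candidate opinion 'words'}
    status:   {node: social status value}
    """
    speaker, hearer = rng.choice(edges)
    if rng.random() > meeting_prob(status[speaker], status[hearer], beta):
        return  # meeting rejected due to status difference
    word = rng.choice(sorted(opinions[speaker]))
    if word in opinions[hearer]:
        # Success: both agents collapse their inventories to the word.
        opinions[speaker] = {word}
        opinions[hearer] = {word}
    else:
        # Failure: the hearer merely learns the word.
        opinions[hearer].add(word)
```

Iterating such steps until all inventories are identical singletons gives the consensus time, which is the quantity compared across the egalitarian, ranked and stratified settings.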
Trattner Christoph, Steurer Michael
2015
Existing approaches to identify the tie strength between users typically involve only one type of network. To date, no studies exist that investigate the intensity of social relations, and in particular partnership, between users across social networks. To fill this gap in the literature, we studied over 50 social proximity features to detect the tie strength of users, defined as partnership, in two different types of networks: location-based and online social networks. We compared user pairs in terms of partners and non-partners and found significant differences between those users. Following these observations, we evaluated the social proximity of users via supervised and unsupervised learning approaches and established that location-based social networks have great potential for the identification of a partner relationship. In particular, we established that location-based social networks, and correspondingly induced features based on events attended by users, could identify partnership with 0.922 AUC, while online social network data had a classification power of 0.892 AUC. When utilizing data from both types of networks, a partnership could be identified to a great extent with 0.946 AUC. This article is relevant for engineers, researchers and teachers who are interested in social network analysis and mining.
Lin Yi-ling, Trattner Christoph, Brusilovsky Peter , He Daqing
2015
Crowdsourcing has been emerging to harvest social wisdom from thousands of volunteers to perform series of tasks online. However, little research has been devoted to exploring the impact of various factors, such as the content of a resource or the crowdsourcing interface design, on user tagging behavior. While images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering an insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence users in their tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from Amazon Mechanical Turk crowdworkers – with and without image descriptions. Several properties of generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, etc. In addition, the study examined the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. A user study also revealed that different tag sets provided different support for search. Tags produced with descriptions shortened the path to the target results, while tags produced without descriptions increased user success in the search task.
Trattner Christoph, Parra Denis , Brusilovsky Peter, , Marinho Leandro
2015
Veas Eduardo Enrique, di Sciascio Maria Cecilia
2015
This paper presents a visual interface developed on the basis of control and transparency to elicit preferences in the scientific and cultural domain. Preference elicitation is a recognized challenge in user modeling for personalized recommender systems. The amount of feedback the user is willing to provide depends on how trustworthy the system seems to be and how invasive the elicitation process is. Our approach ranks a collection of items with a controllable text analytics model. It integrates control with the ranking and uses it as implicit preference for content-based recommendations.
Veas Eduardo Enrique, di Sciascio Maria Cecilia
2015
The ability to analyze and organize large collections, to draw relations between pieces of evidence, to build knowledge, are all part of an information discovery process. This paper describes an approach to interactive topic analysis, as an information discovery conversation with a recommender system. We describe a model that motivates our approach, and an evaluation comparing interactive topic analysis with state-of-the-art topic analysis methods.
Wertner Alfred, Czech Paul, Pammer-Schindler Viktoria
2015
Fall detection is a classical use case for mobile phone sensing. Nonetheless, no open dataset exists that could be used to train, test and compare fall detection algorithms. We present a dataset for mobile phone sensing-based fall detection. The dataset contains both accelerometer and gyroscope data. Data were labelled with four types of falls (e.g., "stumbling") and ten types of non-fall activities (e.g., "sit down"). The dataset was collected with martial artists who simulated falls. We used five different state-of-the-art Android smartphone models worn on the hip in a small bag. Due to the dataset's properties of using multiple devices and being labelled with multiple fall and non-fall categories, we argue that it is suitable to serve as a benchmark dataset.
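To illustrate what such a benchmark enables, the sketch below shows a naive threshold-based fall detector operating on accelerometer magnitudes. This is a hypothetical baseline one might evaluate on such a dataset, not an algorithm from the paper; the thresholds and the simulated trace are invented for illustration.

```python
import math

# Illustrative heuristic: a fall is flagged when a free-fall phase
# (magnitude near 0 g) is followed shortly by an impact spike.
FREE_FALL_G = 0.4   # below this magnitude (in g) we assume free fall
IMPACT_G = 2.5      # above this magnitude we assume impact
MAX_GAP = 30        # samples allowed between free fall and impact

def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample (in g)."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Return True if a free-fall phase is followed by an impact spike."""
    free_fall_at = None
    for i, s in enumerate(samples):
        m = magnitude(s)
        if m < FREE_FALL_G:
            free_fall_at = i
        elif m > IMPACT_G and free_fall_at is not None and i - free_fall_at <= MAX_GAP:
            return True
    return False

# Simulated trace: rest (~1 g), free fall (~0 g), impact (~3 g), rest.
trace = [(0, 0, 1.0)] * 10 + [(0, 0, 0.1)] * 5 + [(0.5, 0.5, 3.0)] + [(0, 0, 1.0)] * 10
print(detect_fall(trace))               # fall-like pattern
print(detect_fall([(0, 0, 1.0)] * 30))  # quiet standing
```

A benchmark dataset with labelled fall and non-fall activities is precisely what is needed to tune such thresholds and to compare this kind of heuristic against learned models.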
Rauch Manuela, Klieber Hans-Werner, Wozelka Ralph, Singh Santokh, Sabol Vedran
2015
The amount of information available on the internet and within enterprises has reached an incredible dimension.Efficiently finding and understanding information and thereby saving resources remains one of the major challenges in our daily work. Powerful text analysis methods, a scalable faceted retrieval engine and a well-designed interactive user interface are required to address the problem. Besides providing means for drilling-down to the relevant piece of information, a part of the challenge arises from the need of analysing and visualising data to discover relationships and correlations, gain an overview of data distributions and unveil trends. Visual interfaces leverage the enormous bandwidth of the human visual system to support pattern discovery in large amounts of data. Our Knowminer search builds upon the well-known faceted search approach which is extended with interactive visualisations allowing users to analyse different aspects of the result set. Additionally, our system provides functionality for organising interesting search results into portfolios, and also supports social features for rating and boosting search results and for sharing and annotating portfolios.
Tschinkel Gerwald, di Sciascio Maria Cecilia, Mutlu Belgin, Sabol Vedran
2015
Recommender systems are becoming common tools supporting automatic, context-based retrieval of resources. When the number of retrieved resources grows large, visual tools are required that leverage the capacity of human vision to analyse large amounts of information. We introduce a Web-based visual tool for exploring and organising recommendations retrieved from multiple sources along dimensions relevant to cultural heritage and educational context. Our tool provides several views supporting filtering in the result set and integrates a bookmarking system for organising relevant resources into topic collections. Building upon these features, we envision a system which derives the user's interests from performed actions and uses this information to support the recommendation process. We also report on results of the performed usability evaluation and derive directions for further development.
Veas Eduardo Enrique, Sabol Vedran, Singh Santokh, Ulbrich Eva Pauline
2015
An information landscape is commonly used to represent relatedness in large, high-dimensional datasets, such as text document collections. In this paper we present interactive metaphors, inspired by map reading and visual transitions, that enhance the landscape representation for the analysis of topical changes in dynamic text repositories. The goal of interactive visualizations is to elicit insight, to allow users to visually formulate hypotheses about the underlying data and to prove them. We present a user study that investigates how users can elicit information about topics in a large document set. Our study concentrated on building and testing hypotheses using the map reading metaphors. The results show that people indeed relate topics in the document set from spatial relationships shown in the landscape, and capture the changes to topics aided by map reading metaphors.
Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman
2015
The overwhelming majority of scientific publications are authored by multiple persons; yet, bibliographic metrics are only assigned to individual articles as single entities. In this paper, we aim at a more fine-grained analysis of scientific authorship. We therefore adapt a text segmentation algorithm to identify potential author changes within the main text of a scientific article, which we obtain by using existing PDF extraction techniques. To capture stylistic changes in the text, we employ a number of stylometric features. We evaluate our approach on a small subset of PubMed articles consisting of an approximately equal number of research articles written by a varying number of authors. Our results indicate that the more authors an article has, the more potential author changes are identified. These results can be considered as an initial step towards a more detailed analysis of scientific authorship, thereby extending the repertoire of bibliometrics.
Peters Isabella, Kraker Peter, Lex Elisabeth, Gumpenberger Christian, Gorraiz, Juan
2015
The study explores the citedness of research data, its distribution over time and how it is related to the availability of a DOI (Digital Object Identifier) in Thomson Reuters' DCI (Data Citation Index). We investigate if cited research data "impact" the (social) web, reflected by altmetrics scores, and if there is any relationship between the number of citations and the sum of altmetrics scores from various social media platforms. Three tools are used to collect and compare altmetrics scores, i.e. PlumX, ImpactStory, and Altmetric.com. In terms of coverage, PlumX is the most helpful altmetrics tool. While research data remain mostly uncited (about 85%), there has been a growing trend in citing data sets published since 2007. Surprisingly, the percentage of the number of cited research data with a DOI in DCI has decreased in recent years. Only nine repositories account for research data with DOIs and two or more citations. The number of cited research data with altmetrics scores is even lower (4 to 9%) but shows a higher coverage of research data from the last decade. However, no correlation between the number of citations and the total number of altmetrics scores is observable. Certain data types (i.e. survey, aggregate data, and sequence data) are more often cited and receive higher altmetrics scores.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph, Sabol Vedran
2015
Visualizations have a distinctive advantage when dealing with the information overload problem: being grounded in basic visual cognition, many people understand visualizations. However, when it comes to creating them, specific expertise of the domain and underlying data is required to determine the right representation. Although there are rules that help generate them, the results are too broad as these methods hardly account for varying user preferences. To tackle this issue, we propose a novel recommender system that suggests visualizations based on (i) a set of visual cognition rules and (ii) user preferences collected on Amazon Mechanical Turk. The main contribution of this paper is the introduction and the evaluation of a novel approach called VizRec that is able to suggest an optimal list of top-n visualizations for heterogeneous data sources in a personalized manner.
Kröll Mark, Strohmaier M.
2015
People willingly provide more and more information about themselves on social media platforms. This personal information about users’ emotions (sentiment) or goals (intent) is particularly valuable, for instance, for monitoring tools. So far, sentiment and intent analysis were conducted separately. Yet, both aspects can complement each other thereby informing processes such as explanation and reasoning. In this paper, we investigate the relation between intent and sentiment in weblogs. We therefore extract ~90,000 human goal instances from the ICWSM 2009 Spinn3r dataset and assign respective sentiments. Our results indicate that associating intent with sentiment represents a valuable addition to research areas such as text analytics and text understanding.
Klampfl Stefan, Kern Roman
2015
Scholarly publishing increasingly requires automated systems that semantically enrich documents in order to support management and quality assessment of scientific output. However, contextual information, such as the authors' affiliations, references, and funding agencies, is typically hidden within PDF files. To access this information we have developed a processing pipeline that analyses the structure of a PDF document incorporating a diverse set of machine learning techniques. First, unsupervised learning is used to extract contiguous text blocks from the raw character stream as the basic logical units of the article. Next, supervised learning is employed to classify blocks into different meta-data categories, including authors and affiliations. Then, a set of heuristics is applied to detect the reference section at the end of the paper and segment it into individual reference strings. Sequence classification is then utilised to categorise the tokens of individual references to obtain information such as the journal and the year of the reference. Finally, we make use of named entity recognition techniques to extract references to research grants, funding agencies, and EU projects. Our system is modular in nature. Some parts rely on models learnt on training data, and the overall performance scales with the quality of these data sets.
Horn Christopher, Kern Roman
2015
In this paper, we propose an approach to deriving public transportation timetables of a region (i.e. a country) based on (i) large-scale, non-GPS cell phone data and (ii) a dataset containing geographic information of public transportation stations. The presented algorithm is designed to work with movement data, which are scarce and have a low spatial accuracy but exist in vast amounts (large-scale). Since only aggregated statistics are used, our algorithm copes well with anonymized data. Our evaluation shows that 89% of the departure times of popular train connections are correctly recalled with an allowed deviation of 5 minutes. The timetable can be used as a feature for transportation mode detection to separate public from private transport when no public timetable is available.
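The 89% recall figure can be made concrete with a small sketch of the evaluation criterion. This is an assumed reading of the metric, not the authors' code: a ground-truth departure time counts as recalled if some estimated departure lies within 5 minutes of it; the example times are invented.

```python
# Recall of ground-truth departure times within a tolerance.
# Times are expressed as minutes after midnight.

def recall_within(truth, estimated, tolerance=5):
    """Fraction of true departures matched by an estimate within tolerance minutes."""
    matched = sum(
        1 for t in truth
        if any(abs(t - e) <= tolerance for e in estimated)
    )
    return matched / len(truth) if truth else 0.0

truth = [7 * 60 + 12, 8 * 60 + 12, 9 * 60 + 12]       # 07:12, 08:12, 09:12
estimated = [7 * 60 + 10, 8 * 60 + 20, 9 * 60 + 14]   # 07:10, 08:20, 09:14

print(recall_within(truth, estimated))  # 2 of 3 departures within 5 minutes
```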
Lex Elisabeth, Dennerlein Sebastian
2015
Today's complex scientific problems often require interdisciplinary, team-oriented approaches: the expertise of researchers from different disciplines is needed to collaboratively reach a solution. Interdisciplinary teams yet face many challenges such as differences in research practice, terminology, communication, and in the usage of tools. In this paper, we therefore study concrete mechanisms and tools of two real-world scientific projects with the aim to examine their efficacy and influence on interdisciplinary teamwork. For our study, we draw upon Bronstein's model of interdisciplinary collaboration. We found that it is key to use suitable environments for communication and collaboration, especially when teams are geographically distributed. Plus, the willingness to share (domain) knowledge is not a given and requires strong common goals and incentives. Besides, structural barriers such as financial aspects can hinder interdisciplinary work, especially in applied, industry-funded research. Furthermore, we observed a kind of cold-start problem in interdisciplinary collaboration, when there is no work history and when the disciplines are rather different, e.g. in terms of wording.
Kraker Peter, Schlögl C. , Jack K., Lindstaedt Stefanie
2015
Given the enormous amount of scientific knowledge that is produced each and every day, the need for better ways of gaining – and keeping – an overview of research fields is becoming more and more apparent. In a recent paper published in the Journal of Informetrics [1], we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating such overviews. First, we investigated the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we used co-readership patterns to map the field of educational technology. The resulting knowledge domain visualization, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent.
Kraker Peter, Enkhbayar Asuraa, Lex Elisabeth
2015
In a scientific publishing environment that is increasingly moving online, identifiers of scholarly work are gaining in importance. In this paper, we analysed identifier distribution and coverage of articles from the discipline of quantitative biology using arXiv, Mendeley and CrossRef as data sources. The results show that when retrieving arXiv articles from Mendeley, we were able to find more papers using the DOI than the arXiv ID. This indicates that the DOI may be a better identifier with respect to findability. We also find that coverage of articles on Mendeley decreases in the most recent years, whereas the coverage of DOIs does not decrease in the same order of magnitude. This hints at the fact that there is a certain time lag involved before articles are covered in crowd-sourced services on the scholarly web.
Vignoli Michela, Kraker Peter, Sevault A.
2015
Science 2.0 is the current trend towards using Web 2.0 tools in research and practising a more open science. We are currently at the beginning of a transition phase in which traditional structures, processes, value systems, and means of science communication are being put to the proof. New strategies and models under the label of "open" are being explored and partly implemented. This situation implies a number of insecurities for scientists as well as for policy makers and demands a rethinking and overcoming of some habits and conventions persisting since an era before the internet. This paper lists current barriers to practising Open Science from the point of view of researchers and reflects on which measures could help overcome them. The central question is which initiatives should be taken at the institutional or political level and which ones at the level of the community or the individual scientist to support the transition to Science 2.0.
Renner Bettina, Wesiak Gudrun, Cress, U.
2015
Purpose: This contribution relates the Quantified Self approach to computer-supported workplace learning. It shows results of a large field study where 12 different apps were used in several work contexts. Design/Methodology: Participants used the apps during their work and during training sessions to track their behaviour and mood at work and capture problematic experiences. Data capturing was either automatic, e.g. tracking program usage on a computer, or done by participants manually documenting their experiences. Users then reflected individually or collaboratively about their experiences. Results: Results show that participants liked the apps and used the opportunity to learn something from their work experiences. Users evaluated the apps as useful for professional training and as having long-term benefits when used in work life. Computer-supported reflection about one's own data and experiences seems to have particular potential where new processes happen, e.g. with inexperienced workers or in training settings. Limitations: Apps were used in the wild, so control over potential external influencing factors is limited. Research/Practical Implications: Results show a successful application of apps supporting individual learning in work life. This shows that the concept of Quantified Self is not limited to private life but also has the potential to foster vocational development. Originality/Value: This contribution combines the pragmatic Quantified Self approach with the theoretical background of reflective learning. It presents data from a broad-based study of using such apps in real work life. The results of the study give insights about its potential in this area and about possible influencing factors and barriers.
Kowald Dominik
2015
With the emergence of Web 2.0, tag recommenders have become important tools, which aim to support users in finding descriptive tags for their bookmarked resources. Although current algorithms provide good results in terms of tag prediction accuracy, they are often designed in a data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. This thesis aims at modeling these cognitive dynamics in social tagging in order to improve tag recommendations and to better understand the underlying processes. As a first attempt in this direction, we have implemented an interplay between individual micro-level (e.g., categorizing resources or temporal dynamics) and collective macro-level (e.g., imitating other users' tags) processes in the form of a novel tag recommender algorithm. The preliminary results for datasets gathered from BibSonomy, CiteULike and Delicious show that our proposed approach can outperform current state-of-the-art algorithms, such as Collaborative Filtering, FolkRank or Pairwise Interaction Tensor Factorization. We conclude that recommender systems can be improved by incorporating related principles of human cognition.
Mutlu Belgin, Sabol Vedran
2015
The steadily increasing amount of scientific publications demands more powerful, user-oriented technologies supporting querying and analyzing the scientific facts therein. Current digital libraries that provide services to access scientific content are rather closed in the way that they deploy their own meta-models and technologies to query and analyse the knowledge contained in scientific publications. The goal of the research project CODE is to realize a framework based on Linked Data principles which aims to provide methods for federated querying within scientific data, and interfaces enabling users to easily perform exploration and analysis tasks on received content. The main focus in this paper lies on the one hand on the extraction and organization of scientific facts embedded in publications and on the other hand on an intelligent framework facilitating search and visual analysis of scientific facts through suggesting visualizations appropriate for the underlying data.
Wesiak Gudrun, Al-Smadi Mohammad, Gütl Christian, Höfler Margit
2015
Computer-supported collaborative learning (CSCL) is already a central element of online learning environments, but is also gaining increasing importance in traditional classroom settings where course work is carried out in groups. For these situations social interaction, sharing and construction of knowledge among the group members are important elements of the learning process. The use of computers and the internet facilitates such group work by allowing asynchronous as well as synchronous contributions to a common learning object independent of the student's working time and location. One way to foster CSCL is the employment of Wiki systems, e.g. for collaboratively working on a writing assignment. We developed an enhanced Wiki system with self- and peer assessment, visualizations, and functionalities for continuous teacher feedback. First evaluations of this 'co-writing Wiki' with computer science students showed its usefulness for collaborative course work. However, results from studies with tech-savvy participants, who are typically familiar with the benefits as well as drawbacks of such tools, are often limited regarding the generalizability to other populations. Thus, we introduced the Wiki in a non-technological environment and evaluated it with respect to usability, usefulness, and motivational components. Thirty psychology students used the co-writing Wiki to work collaboratively on a short paper. Besides providing an interface for generating and changing a document, the co-writing Wiki offers tools for formative assessment activities (integrated self-, peer-, and group assessment activities) as well as monitoring the progress of the group's collaboration. The evaluation of the tool is based on log-data (activity tracking) as well as questionnaire data gathered before and after working with the Wiki. Additionally, the instructor evaluated the co-writing Wiki concerning its usefulness for CSCL activities in academic settings. Despite technical problems and consequently low system usability scores, participants perceived the offered functionalities as helpful to keep a good overview of the current status of their paper and the contributions of their group members. The integrated self-assessment tool helped them to become aware of their strengths and weaknesses. In addition, students showed a high intrinsic motivation while working with the co-writing Wiki, which did not change over the course of the study. From the instructor's perspective, the co-writing Wiki allowed to effectively monitor the progress of the groups and enabled formative feedback. Summarizing, the results indicate that using Wikis for CSCL is a promising way to also support students with no technological background.
Buschmann Katrin, Kasberger Stefan, Mayer Katja, Reckling Falk, Rieck Katharina, Vignoli Michela, Kraker Peter
2015
In the last two years in particular, Austria has made notable progress in the area of Open Science, especially with regard to Open Access and Open Data. The founding of the Open Access Network Austria (OANA) and the project e-Infrastructures Austria, launched at the beginning of 2014, can be seen as important cornerstones for the development of an Austrian Open Science landscape. The Austrian chapter of the Open Knowledge Foundation also carries out fundamental work in Open Science practice and awareness building. Among other things, these initiatives form the basis for establishing a national Open Access strategy as well as an infrastructure for Open Access and Open (Research) Data covering all of Austria. This contribution gives an overview of these and similar national and local Open Science projects and initiatives, and an outlook on the possible future of Open Science in Austria.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph, Sabol Vedran
2015
Identifying and using the information from distributed and heterogeneous information sources is a challenging task in many application fields. Even with services that offer well-defined structured content, such as digital libraries, it becomes increasingly difficult for a user to find the desired information. To cope with an overloaded information space, we propose a novel approach – VizRec – combining recommender systems (RS) and visualizations. VizRec suggests personalized visual representations for recommended data. One important aspect of our contribution, and a prerequisite for VizRec, are user preferences that build a personalization model. We present a crowd-based evaluation and show how such a model of preferences can be elicited.
Kraker Peter, Lex Elisabeth, Gorraiz Juan, Gumpenberger Christian, Peters Isabella
2015
Veas Eduardo Enrique, Mutlu Belgin, di Sciascio Maria Cecilia, Tschinkel Gerwald, Sabol Vedran
2015
Supporting individuals who lack the experience or competence to evaluate an overwhelming amount of information, such as cultural, scientific and educational content, makes recommender systems invaluable to cope with the information overload problem. However, even recommended information scales up, and users still need to consider a large number of items. Visualization takes a foreground role, letting the user explore possibly interesting results. It leverages the high bandwidth of the human visual system to convey massive amounts of information. This paper argues the need to automate the creation of visualizations for unstructured data, adapting it to the user's preferences. We describe a prototype solution, taking a radical approach considering both grounded visual perception guidelines and personalized recommendations to suggest the proper visualization.
Seitlinger Paul, Kowald Dominik, Kopeinik Simone, Hasani-Mavriqi Ilire, Ley Tobias, Lex Elisabeth
2015
Classic resource recommenders like Collaborative Filtering (CF) treat users as being just another entity, neglecting non-linear user-resource dynamics shaping attention and interpretation. In this paper, we propose a novel hybrid recommendation strategy that refines CF by capturing these dynamics. The evaluation results reveal that our approach substantially improves CF and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant.
Lacic Emanuel, Kowald Dominik, Eberhard Lukas, Trattner Christoph, Parra Denis, Marinho Leandro
2015
Recent research has unveiled the importance of online social networks for improving the quality of recommender systems and encouraged the research community to investigate better ways of exploiting social information for recommendations. To contribute to this sparse field of research, in this paper we exploit users' interactions along three data sources (marketplace, social network and location-based) to assess their performance in a barely studied domain: recommending products and domains of interest (i.e., product categories) to people in an online marketplace environment. To that end, we defined sets of content- and network-based user similarity features for each data source and studied them in isolation using a user-based Collaborative Filtering (CF) approach and in combination via a hybrid recommender algorithm, to assess which one provides the best recommendation performance. Interestingly, in our experiments conducted on a rich dataset collected from SecondLife, a popular online virtual world, we found that recommenders relying on user similarity features obtained from the social network data clearly yielded the best results in terms of accuracy when predicting products, whereas the features obtained from the marketplace and location-based data sources also obtained very good results when predicting categories. This finding indicates that all three types of data sources are important and should be taken into account depending on the level of specialization of the recommendation task.
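The combination of per-source user similarity features in a user-based CF scheme, as described above, can be sketched as follows. This is an illustrative sketch only: the feature names, weights, neighbourhood size and toy data are assumptions, not the paper's actual configuration.

```python
def recommend(target, users, purchases, sims, weights, k=2, n=3):
    """Rank products for `target` using similarity-weighted neighbour purchases."""
    def hybrid(u, v):
        # Weighted combination of per-source user similarities
        # (e.g. social network, marketplace, location-based).
        return sum(weights[s] * sim(u, v) for s, sim in sims.items())

    # k most similar users according to the hybrid similarity
    neighbours = sorted((u for u in users if u != target),
                        key=lambda u: hybrid(target, u), reverse=True)[:k]
    scores = {}
    for u in neighbours:
        w = hybrid(target, u)
        for item in purchases[u] - purchases[target]:  # only unseen items
            scores[item] = scores.get(item, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Toy data: a single "social" similarity source; "a" and "b" are close friends.
users = ["a", "b", "c"]
purchases = {"a": {"p1"}, "b": {"p1", "p2"}, "c": {"p3"}}
sims = {"social": lambda u, v: 1.0 if {u, v} == {"a", "b"} else 0.1}
weights = {"social": 1.0}
print(recommend("a", users, purchases, sims, weights))  # ['p2', 'p3']
```

Adding further entries to `sims` and `weights` yields the hybrid variant the abstract compares against the single-source recommenders.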
Kowald Dominik, Seitlinger Paul, Kopeinik Simone, Ley Tobias, Trattner Christoph
2015
We assume that recommender systems are more successful when they are based on a thorough understanding of how people process information. In the current paper we test this assumption in the context of social tagging systems. Cognitive research on how people assign tags has shown that they draw on two interconnected levels of knowledge in their memory: a conceptual level of semantic fields or LDA topics, and a lexical level that turns patterns on the semantic level into words. Another strand of tagging research reveals a strong impact of time-dependent forgetting on users' tag choices, such that recently used tags have a higher probability of being reused than "older" tags. In this paper, we align both strands by implementing a computational theory of human memory that integrates the two-level conception and the process of forgetting in the form of a tag recommender. Furthermore, we test the approach in three large-scale social tagging datasets drawn from BibSonomy, CiteULike and Flickr. As expected, our results reveal a selective effect of time: forgetting is much more pronounced on the lexical level of tags. Second, an extensive evaluation based on this observation shows that a tag recommender interconnecting the semantic and lexical level based on a theory of human categorization and integrating time-dependent forgetting on the lexical level results in highly accurate predictions and outperforms other well-established algorithms, such as Collaborative Filtering, Pairwise Interaction Tensor Factorization, FolkRank and two alternative time-dependent approaches. We conclude that tag recommenders will benefit from going beyond the manifest level of word co-occurrences, and from including forgetting processes on the lexical level.
Kowald Dominik, Kopeinik S., Seitlinger Paul, Trattner Christoph, Ley Tobias
2015
In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. Based on a theory of human memory, the approach estimates the probability of a tag being applied by a particular user as a function of the usage frequency and recency of the tag in the user's past. This probability is further refined by considering the influence of the current semantic context of the user's tagging situation. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how refining frequency-based estimates by considering usage recency and contextual influence outperforms conventional "most popular tags" approaches and another existing and very effective, but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We demonstrate how effective principles of recommender systems can be designed and implemented if human memory processes are taken into account.
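The frequency-and-recency estimate described above resembles the base-level activation equation from the ACT-R theory of human memory that this line of work draws on. A minimal sketch, assuming timestamped tag assignments; the decay parameter d=0.5 is a common ACT-R default, not a value from the paper:

```python
import math

def base_level_activation(timestamps, now, d=0.5):
    """Score a tag by usage frequency and recency (ACT-R base-level equation).

    timestamps: times (e.g. in seconds) at which the user applied the tag.
    d: power-law decay rate; each past usage contributes (now - t)**(-d).
    """
    return math.log(sum((now - t) ** (-d) for t in timestamps if now > t))

def rank_tags(tag_history, now, d=0.5):
    """Rank a user's tags by their combined frequency and recency score."""
    scores = {tag: base_level_activation(ts, now, d)
              for tag, ts in tag_history.items()}
    return sorted(scores, key=scores.get, reverse=True)

history = {
    "python": [10.0, 50.0, 90.0],  # used often, and recently
    "latex": [5.0],                # used once, long ago
}
print(rank_tags(history, now=100.0))  # ['python', 'latex']
```

The paper's full model additionally weights these scores by the current semantic context, which is omitted here.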
Simon Jörg Peter, Pammer-Schindler Viktoria, Schmidt Peter
2015
Synchronisation algorithms are central components of collaborative editing software. The energy efficiency of such algorithms is of interest to a wide community of mobile application developers. In this paper we explore the differential synchronisation (diffsync) algorithm with respect to energy consumption on mobile devices. We identify three areas for optimisation: (a) empty cycles, where diffsync is executed although no changes need to be processed; (b) tail energy, by adapting cycle intervals; and (c) computational complexity. We propose a push-based diffsync strategy in which synchronisation cycles are triggered when a device connects to the network or when a device is notified of changes. Discussions within this paper are based on real usage data of PDF annotations via the Mendeley iOS app.
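The push-based strategy described above can be sketched as an event-driven controller: instead of running diffsync on a fixed timer (which wastes energy on empty cycles), a cycle runs only on reconnect or on a change notification. This is a hypothetical sketch of that idea, not the authors' implementation; all class and method names are assumptions.

```python
class PushDiffSync:
    """Trigger diffsync cycles on events rather than on a fixed polling timer."""

    def __init__(self, sync_fn):
        self.sync_fn = sync_fn  # performs one diffsync cycle
        self.dirty = False      # local edits waiting to be synchronised
        self.online = False

    def on_local_edit(self):
        self.dirty = True
        if self.online:
            self._run()         # push immediately while connected

    def on_remote_notification(self):
        if self.online:
            self._run()         # pull changes we were notified about

    def on_connect(self):
        self.online = True
        self._run()             # single catch-up cycle after reconnect

    def on_disconnect(self):
        self.online = False     # no empty cycles while offline

    def _run(self):
        self.sync_fn()
        self.dirty = False

cycles = []
s = PushDiffSync(lambda: cycles.append("sync"))
s.on_local_edit()   # offline: edit is only marked dirty, no cycle spent
s.on_connect()      # triggers exactly one catch-up cycle
print(len(cycles))  # 1
```

A timer-based diffsync would have burned cycles during the offline period; here no work happens until an event arrives.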
Kraker Peter, Lindstaedt Stefanie , Schlögl C., Jack K.
2015
In this paper, we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating knowledge domain visualizations. First, we investigate the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we use co-readership patterns to map the field of educational technology. The resulting visualization prototype, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent.
Kern Roman, Frey Matthias
2015
Table recognition and table extraction are important tasks in information extraction, especially in the domain of scholarly communication. In this domain tables are commonplace and contain valuable information. Many different automatic approaches for table recognition and extraction exist. Common to many of these approaches is the need for ground truth datasets to train algorithms or to evaluate the results. In this paper we present the PDF Table Annotator, a web-based tool for annotating elements and regions in PDF documents, in particular tables. The annotated data is intended to serve as ground truth useful to machine learning algorithms for detecting table regions and table structure. To make the task of manual table annotation as convenient as possible, the tool is designed to allow an efficient annotation process that may span multiple sessions by multiple users. An evaluation is conducted where we compare our tool to three alternative ways of creating ground truth of tables in documents. Here we found that our tool overall provides an efficient and convenient way to annotate tables. In addition, our tool is particularly suitable for complex table structures, where it provided the lowest annotation time and the highest accuracy. Furthermore, our tool allows tables to be annotated following a logical or a functional model. Given that ground truth datasets for table recognition and extraction are easier to produce with our tool, the quality of automatic table extraction should greatly benefit.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph
2015
Visualizations have a distinctive advantage when dealing with the information overload problem: since they are grounded in basic visual cognition, many people understand them. However, creating the appropriate representation requires specific expertise of the domain and underlying data. Our quest in this paper is to study methods to suggest appropriate visualizations autonomously. To be appropriate, a visualization has to follow studied guidelines to find and distinguish patterns visually, and encode data therein. Thus, a visualization tells a story of the underlying data; yet, to be appropriate, it has to clearly represent those aspects of the data the viewer is interested in. Which aspects of a visualization are important to the viewer? Can we capture and use those aspects to recommend visualizations? This paper investigates strategies to recommend visualizations considering different aspects of user preferences. A multi-dimensional scale is used to estimate aspects of quality for charts for collaborative filtering. Alternatively, tag vectors describing charts are used to recommend potentially interesting charts based on content. Finally, a hybrid approach combines information on what a chart is about (tags) and how good it is (ratings). We present the design principles behind VizRec, our visual recommender. We describe its architecture, the data acquisition approach with a crowd-sourced study, and the analysis of strategies for visualization recommendation.
Stegmaier Florian, Seifert Christin, Kern Roman, Höfler Patrick, Bayerl Sebastian, Granitzer Michael, Kosch Harald, Lindstaedt Stefanie , Mutlu Belgin, Sabol Vedran, Schlegel Kai
2014
Research depends to a large degree on the availability and quality of primary research data, i.e., data generated through experiments and evaluations. While the Web in general and Linked Data in particular provide a platform and the necessary technologies for sharing, managing and utilizing research data, an ecosystem supporting those tasks is still missing. The vision of the CODE project is the establishment of a sophisticated ecosystem for Linked Data. Here, the extraction of knowledge encapsulated in scientific research papers along with its public release as Linked Data serves as the major use case. Further, Visual Analytics approaches empower end users to analyse, integrate and organize data. During these tasks, specific Big Data issues arise.
Fessl Angela, Bratic Marina, Pammer-Schindler Viktoria
2014
A continuous learning solution was sought which allows stroke nurses to keep the vast body of theoretical knowledge fresh, stay up-to-date with new knowledge, and relate theoretical knowledge to practical experience. Based on the theoretical background of learning in the medical domain, reflective and game-based learning, we carried out a user-oriented design process that involved a focus group and a design workshop. In this process, a quiz that includes both content-based and reflection questions was identified as a viable vehicle for theoretical knowledge. In this paper we present the results of trialling a quiz with both content-based and metacognitive (reflective) questions in two settings: in one trial the quiz was used by nurses as part of a qualification programme for stroke nurses, in the second trial by nurses outside such a formal continuous learning setting. Both trials were successful in terms of user acceptance, user satisfaction and learning. Beyond this success report, we discuss barriers to integrating a quiz into work processes within an emergency ward such as a stroke unit.
Pammer-Schindler Viktoria, Simon Jörg Peter, Wilding Karin, Keller Stephan, Scherer Reinhold
2014
Brain-computer interface (BCI) technology translates brain activity into machine-intelligible patterns, thus serving as an input "device" for computers. BCI training games make the process of acquiring training data for the machine learning more engaging for the users. In this work, we discuss the design space for BCI training games based on existing literature, and a training game in the form of a jigsaw puzzle. The game was trialled with four cerebral palsy patients. All patients were very accepting of the involved technology, which, we argue, relates back to the concept of BCI training games plus the adaptations we made. On the other hand, the data quality was unsatisfactory. Hence, in future work both concept and implementation need to be fine-tuned to achieve a balance between user acceptance and data quality.
Rauch Manuela, Wozelka Ralph, Veas Eduardo Enrique, Sabol Vedran
2014
Graphs are widely used to represent relationships between entities. Indeed, their simplicity in depicting connectedness, backed by a mathematical formalism, makes graphs an ideal metaphor to convey relatedness between entities irrespective of the domain. However, graphs pose several challenges for visual analysis. A large number of entities or a densely connected set quickly renders the graph unreadable due to clutter. Typed relationships leading to multigraphs cannot clearly be represented in hierarchical layouts or with edge bundling, common clutter reduction techniques. We propose a novel approach to visual analysis of complex graphs based on two metaphors: semantic blossom and selective expansion. Instead of showing the whole graph, we display only a small representative subset of nodes, each with a compressed summary of relations in a semantic blossom. Users apply selective expansion to traverse the graph and discover the subset of interest. A preliminary evaluation showed that our approach is intuitive and useful for graph exploration and provided insightful ideas for future improvements.
Tschinkel Gerwald, Veas Eduardo Enrique, Mutlu Belgin, Sabol Vedran
2014
Providing easy-to-use methods for the visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the "Vis Wizard", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.
Sabol Vedran, Albert Dietrich, Veas Eduardo Enrique, Mutlu Belgin, Granitzer Michael
2014
Linked Data has grown to become one of the largest available knowledge bases. Unfortunately, this wealth of data remains inaccessible to those without in-depth knowledge of semantic technologies. We describe a toolchain enabling users without a semantic technology background to explore and visually analyse Linked Data. We demonstrate its applicability in scenarios involving data from the Linked Open Data Cloud, and research data extracted from scientific publications. Our focus is on the Web-based front-end consisting of querying and visualisation tools. The performed usability evaluations unveil mainly positive results, confirming that the Query Wizard simplifies searching, refining and transforming Linked Data and, in particular, that people using the Visualisation Wizard quickly learn to perform interactive analysis tasks on the resulting Linked Data sets. In making Linked Data analysis effectively accessible to the general public, our tool has been integrated in a number of live services where people use it to analyse, discover and discuss facts with Linked Data.
Granitzer MIchael, Veas Eduardo Enrique, Seifert C.
2014
In an interconnected world, Linked Data is more important than ever before. However, it is still quite difficult to access this new wealth of semantic data directly without having in-depth knowledge about SPARQL and related semantic technologies. Also, most people are currently used to consuming data as 2-dimensional tables. Linked Data is by definition always a graph, and not that many people are used to handling data in graph structures. Therefore we present the Linked Data Query Wizard, a web-based tool for displaying, accessing, filtering, exploring, and navigating Linked Data stored in SPARQL endpoints. The main innovation of the interface is that it turns the graph structure of Linked Data into a tabular interface and provides easy-to-use interaction possibilities by using metaphors and techniques from current search engines and spreadsheet applications that regular web users are already familiar with.
Mutlu Belgin, Tschinkel Gerwald, Veas Eduardo Enrique, Sabol Vedran, Stegmaier Florian, Granitzer Michael
2014
Research papers are published in various digital libraries, which deploy their own meta-models and technologies to manage, query, and analyze the scientific facts therein. Commonly they only consider the meta-data provided with each article, but not the contents. Hence, reaching into the contents of publications is inherently a tedious task. On top of that, scientific data within publications are hardcoded in a fixed format (e.g. tables). So, even if one manages to get a glimpse of the data published in digital libraries, it is close to impossible to carry out any analysis on them other than what was intended by the authors. More effective querying and analysis methods are required to better understand scientific facts. In this paper, we present the web-based CODE Visualisation Wizard, which provides visual analysis of scientific facts with emphasis on automating the visualisation process, and present an experiment of its application. We also present the entire analytical process and the corresponding tool chain, including components for the extraction of scientific data from publications, an easy-to-use user interface for querying RDF knowledge bases, and a tool for semantic annotation of scientific data sets.
Silva Nelson
2014
Silva Nelson
2014
Silva Nelson, Settgast Volker, Eggeling Eva, Grill Florian, Zeh Theodor, Fellner Dieter W.
2014
Ullrich Torsten, Silva Nelson, Eggeling Eva, Fellner Dieter W.
2014
Lex Elisabeth, Kraker Peter, Dennerlein Sebastian
2014
Today's data-driven world requires interdisciplinary, team-oriented approaches: experts from different disciplines are needed to collaboratively solve complex real-world problems. Interdisciplinary teams face a set of challenges that are not necessarily encountered by unidisciplinary teams, such as organisational culture and mental and financial barriers. We share our experiences with interdisciplinary teamwork based on a real-world example. We found that models of interdisciplinary teamwork from Social Sciences and Web Science can guide interdisciplinary teamwork in the domain of pharmaceutical knowledge management. Additionally, we identified potential extensions of the models' components as well as novel influencing factors such as the willingness to explicate and share domain knowledge.
Dennerlein Sebastian, Cook John, Kravcik Milos, Kunzmann Christine, Pata Kai, Purma Jukka, Sandars John, Santos Patricia , Schmidt Andreas, Al-Smadi Mohammad, Trattner Christoph, Ley Tobias
2014
Workplace learning happens in the process and context of work, is multi-episodic, often informal, problem based and takes place on a just-in-time basis. While this is a very effective means of delivery, it also does not scale very well beyond the immediate context. We review three types of technologies that have been suggested to scale learning and three connected theoretical discourses around learning and its support. Based on these three strands and an in-depth contextual inquiry into two workplace learning domains, health care and building and construction, four design-based research projects were conducted that have given rise to designs for scaling informal learning with technology. The insights gained from the design and contextual inquiry contributed to a model that provides an integrative view on three informal learning processes at work and how they can be supported with technology: (1) task performance, reflection and sensemaking; (2) help seeking, guidance and support; and (3) emergence and maturing of collective knowledge. The model fosters our understanding of how informal learning can be scaled and how an orchestrated set of technologies can support this process.
Lindstaedt Stefanie , Reiter, T., Cik, M., Haberl, M., Breitwieser, C., Scherer, R., Kröll Mark, Horn Christopher, Müller-Putz, G., Fellendorf, M.
2013
Today, proper traffic incident management (IM) has to deal increasingly with problems such as traffic congestion and environmental sustainability. Therefore, IM intends to clear the road for traffic as quickly as possible after an incident has happened. Electronic data verifiably has great potential for supporting traffic incident management. As a consequence, this paper presents an innovative incident detection method using anonymized mobile communications data. The aim is to outline suitable methods for depicting the traffic situation of a designated test area. In order to be successful, the method needs to be able to calculate the traffic situation in time and report anomalies back to the motorway operator. The resulting procedures are compared to data from real incidents and are thus validated. Special attention is turned to the question of whether incidents can be detected more quickly with the aid of mobile phone data than with conventional methods. Also, a focus is laid on quicker deregistration of the incident, so that traffic management can respond sooner.
Divitini Monica, Lindstaedt Stefanie , Pammer-Schindler Viktoria, Ley Tobias
2013
With this workshop, we intend to bring together the European communities of technology-enhanced learning, which typically meets at ECTEL, and of computer-supported cooperative work, which typically meets at ECSCW. While the ECTEL community has traditionally focused on technology support for learning, be it in formal learning environments like schools, universities, etc. or in informal learning environments like workplaces, the ECSCW community has traditionally investigated how computers can and do mediate and influence collaborative work, in settings as diverse as typical "gainful employment" situations, project work within university courses, volunteer settings in NGOs, etc. Despite overlapping areas of concern, the two communities are also exploiting different theories and methodological approaches. Within this workshop, we discuss issues that are relevant for both communities and have the potential to contribute to a more lively communication between both communities.
Trattner Christoph, Smadi Mohammad, Theiler Dieter, Dennerlein Sebastian, Kowald Dominik, Rella Matthias, Kraker Peter, Barreto da Rosa Isaías, Tomberg Vladimir, Kröll Mark, Treasure-Jones Tamsin, Kerr Micky, Lindstaedt Stefanie , Ley Tobias
2013
Höfler Patrick, Granitzer Michael, Sabol Vedran, Lindstaedt Stefanie
2013
Linked Data has become an essential part of the Semantic Web. A lot of Linked Data is already available in the Linked Open Data cloud, which keeps growing due to an influx of new data from research and open government activities. However, it is still quite difficult to access this wealth of semantically enriched data directly without having in-depth knowledge about SPARQL and related semantic technologies. In this paper, we present the Linked Data Query Wizard, a prototype that provides a Linked Data interface for non-expert users, focusing on keyword search as an entry point and a tabular interface providing simple functionality for filtering and exploration.
Breitweiser Christian, Terbu Oliver, Holzinger Andreas, Brunner Clemens, Lindstaedt Stefanie , Müller-Putz Gernot
2013
We developed an iOS based application called iScope to monitor biosignals online. iScope is able to receive different signal types via a wireless network connection and is able to present them in the time or the frequency domain. Thus it is possible to inspect recorded data immediately during the recording process and detect potential artifacts early without the need to carry around heavy equipment like laptops or complete PC workstations. The iScope app has been tested during various measurements on the iPhone 3GS as well as on the iPad 1 and is fully functional.
Kraker Peter, Trattner Christoph, Jack Kris, Lindstaedt Stefanie , Schlgl Christian
2013
At the beginning of a scientific study, it is usually quite hard to get an overview of a research field. We aim to address this problem of classic literature search using web data. In this extended abstract, we present work-in-progress on an interactive visualization of research fields based on readership statistics from the social reference management system Mendeley. To that end, we use library co-occurrences as a measure of subject similarity. In a first evaluation, we find that the visualization covers current research areas within educational technology but presents a view that is biased by the characteristics of readers. With our presentation, we hope to elicit feedback from the WebSci'13 audience on (1) the usefulness of the prototype, and (2) how to overcome the aforementioned biases using collaborative construction techniques.
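The library co-occurrence measure described above can be sketched as a simple pair count over user libraries: two documents are considered subject-similar when many users keep both in their Mendeley library. This is an illustrative reconstruction under assumed data shapes, not the authors' implementation.

```python
from collections import Counter
from itertools import combinations

def co_readership(libraries):
    """Count how often each pair of documents co-occurs in a user library.

    libraries: iterable of sets of document ids, one set per user.
    Returns a Counter mapping sorted (doc_a, doc_b) pairs to co-read counts.
    """
    counts = Counter()
    for lib in libraries:
        # Each unordered pair within one library counts as one co-read.
        for a, b in combinations(sorted(lib), 2):
            counts[(a, b)] += 1
    return counts

# Toy data: three user libraries over three documents.
libs = [{"d1", "d2", "d3"}, {"d1", "d2"}, {"d2", "d3"}]
sims = co_readership(libs)
print(sims[("d1", "d2")])  # 2 — read together by two users
```

Raw counts could be normalised (e.g. by document popularity) before mapping the field, a step omitted here.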
Tatzgern Markus, Grasset Raphael, Veas Eduardo Enrique, Kalkofen Denis, Schmalstieg Dieter
2013
Augmented reality (AR) enables users to retrieve additional information about real-world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely using a virtual copy metaphor to explore large real-world objects. We evaluated our interfaces in controlled conditions and collected first experiences in a real-world pilot study. Based on our findings, we put forward design recommendations that should be considered by future generations of location-based AR browsers, 3D tourist guides, or in situated urban planning.
Kalkofen Denis, Veas Eduardo Enrique, Zollmann Stefanie, Steinberger Markus, Schmalstieg Dieter
2013
In Augmented Reality (AR), ghosted views allow a viewer to explore hidden structure within the real-world environment. A body of previous work has explored which features are suitable to support the structural interplay between occluding and occluded elements. However, the dynamics of AR environments pose serious challenges to the presentation of ghosted views. While a model of the real world may help determine distinctive structural features, changes in appearance or illumination degrade the composition of occluding and occluded structure. In this paper, we present an approach that considers the information value of the scene before and after generating the ghosted view. Hereby, a contrast adjustment of preserved occluding features is calculated, which adaptively varies their visual saliency within the ghosted view visualization. This allows us to not only preserve important features, but also to support their prominence after revealing occluded structure, thus achieving a positive effect on the perception of ghosted views.
Ullrich Torsten, Silva Nelson, Eggeling Eva, Fellner Dieter W.
2013
Ullrich Torsten, Silva Nelson, Eggeling Eva, Fellner Dieter W.
2013
Silva Nelson
2013
Dennerlein Sebastian, Gutounig Robert, Kraker Peter, Kaiser Rene_DB, Rauter Romana , Ausserhofer Julian
2013
Barcamps are informal conferences whose content is not defined in advance, often referred to as ad-hoc conferences or un-conferences. Therefore, the outcomes of a barcamp are largely unknown before the event. This raises the question of the participants' motivations to attend and contribute. To answer this question, we conducted an exploratory empirical study at Barcamp Graz 2012. We applied a mixed-method approach: first, we used a socio-demographic questionnaire (n=99), which allowed us to characterize the 'typical barcamper'. Second, we conducted qualitative interviews (n=10) to get a deeper understanding of the participants' motivations to attend, their expectations, and the use of social media in that context. We identified three concepts, which could be deduced from the interviews: people, format and topics. We found that the motivation to attend and even a common identity are quite strongly based on these three factors. Furthermore, the results indicate that participants share a set of activities and methods by following the barcamp's inherent rules and make extensive use of social media.
Cook John, Santos Patricia, Ley Tobias, Dennerlein Sebastian, Pata Kai, Colley Joanna, Sandars John, Treasure-Jones Tamsin
2013
Dennerlein Sebastian, Santos Patricia, Kämäräinen Pekka , Deitmer Ludger , Heinemann Lars , Campbell Melanie, Dertl Michael, Bachl Martin, Trattner Christoph, Bauters Merja
2013
Being able to connect informal and formal learning experiences is the key to successful apprenticeships. For instance, the knowledge emerging out of practice should be used to extend and refine formal learning experiences, and vice versa. Currently such scenarios are not supported appropriately with technology in many different domains. This paper focuses on the construction domain, which is one of the test-beds in the recently started large-scale EU project 'Learning Layers'. We suggest a model for bridging this gap between formal and informal learning by co-designing with construction sector representatives to identify how web services, apps and mobile devices can be orchestrated to connect informal and formal learning, with the goal of enhancing collaboration and supporting contextual learning at the workplace.
Dennerlein Sebastian
2013
This dissertation will elaborate on the understanding of intersubjective meaning making by analyzing the traces of collaborative knowledge construction that users leave behind in socio-technical systems. To this end, it will draw upon both theoretical and formal models from cognitive psychology to describe and explain the underlying process in detail. This is done with the goal of supporting intersubjective meaning making and thus elevating informal collaborative knowledge construction within the affordances of today's social media.
Kraker Peter, Dennerlein Sebastian
2013
In this position paper, we argue that the different disciplines in Web Science do not work together in an interdisciplinary way. We attribute this to a fundamental difference in approaching research between social scientists and computer scientists, which we call the patterns vs. model problem. We reason that interdisciplinary teamwork is needed to overcome the patterns vs. model problem. We then discuss two theoretical strands in social science which we see as relevant in the context of interdisciplinary teamwork. Finally, we sketch a model of interdisciplinary teamwork in Web Science based on the interplay of collaboration and cooperation.
Ley Tobias, Cook John, Dennerlein Sebastian, Kravcik Milos, Kunzmann Christine, Laanpere Mart, Pata Kai, Purma Jukka, Sandars John, Santos Patricia, Schmidt Andreas
2013
While several technological advances have been suggested to scale learning at the workplace, none has been successful to scale informal learning. We review three theoretical discourses and suggest an integrated systems model of scaffolding informal workplace learning that has been created to tackle this challenge. We derive research questions that emerge from this model and illustrate these with an in-depth analysis of two workplace learning domains.
Dennerlein Sebastian, Moskaliuk Johannes , Ley Tobias, Kump Barbara
2013
The co-evolution model of collaborative knowledge building by Cress & Kimmerle (2008) assumes that cognitive and social processes interact when users build knowledge with shared digital artifacts. While these assumptions have been tested in various lab experiments, a test under natural field conditions in educational settings has not been conducted. Here, we present a field experiment where we triggered knowledge co-evolution in an accommodation and an assimilation condition, and measured effects on student knowledge building outside the laboratory in the context of two university courses. To this end, 48 students received different kinds of prompts that triggered external accommodation and assimilation while writing a wiki text. Knowledge building was measured with a content analysis of the students' texts and comments (externalization), and with concept maps and association tests (internalization). The findings reveal that (a) different modes of externalization (accommodation and assimilation) could be triggered with prompts, (b) across both conditions, this externalization co-occurred with internalization (student learning), and (c) there is some evidence that external assimilation and accommodation had differential effects on internal assimilation and accommodation. Thus, the field experiment supports the assumptions of the co-evolution model in a realistic course setting. On a more general note, the study provides an example of how wikis can be used successfully for collaborative knowledge building within educational contexts.
Fellendorf Martin, Brandstätter Michael, Reiter Thomas, Lindstaedt Stefanie , Breitwieser Christian, Haberl Michael, Hebenstreit Cornelia, Scherer Reinhold, Kraschl-Hirschman Karin, Kröll Mark, Ruthner Thomas, Walther Bernhard
2012
The mobile traffic management system MOVEMENTS is to be developed as a simple and reliable system that can be deployed across the network thanks to mobile display units with decentralized control options and a central monitoring function. The display panels must ensure the legibility and comprehensibility of texts and pictograms so that they remain perceivable to road users even under poor visibility conditions. The mobile displays are intended both for plannable events (public events, road works, ...) and for unplanned events of longer duration (accidents affecting traffic flow, road closures due to natural events such as landslides, ...). In general, the deployment of MOVEMENTS is intended to improve ASFINAG's traffic guidance and information capabilities in parts of the network without traffic control systems.
Seitlinger Christian, Schöfegger Karin, Lindstaedt Stefanie , Ley Tobias
2012
Ravenscroft Andrew, Lindstaedt Stefanie , Delgado Kloos Carlos, Hernández-Leo Davinia
2012
This book constitutes the refereed proceedings of the 7th European Conference on Technology Enhanced Learning, EC-TEL 2012, held in Saarbrücken, Germany, in September 2012. The 26 revised full papers presented were carefully reviewed and selected from 130 submissions. The book also includes 12 short papers, 16 demonstration papers, 11 poster papers, and 1 invited paper. Specifically, the programme and organizing structure was formed through the themes: mobile learning and context; serious and educational games; collaborative learning; organisational and workplace learning; learning analytics and retrieval; personalised and adaptive learning; learning environments; academic learning and context; and, learning facilitation by semantic means.
Drachsler Hendrik, Verbert Katrien, Manouselis Nikos, Vuorikari Riina, Wolpers Martin, Lindstaedt Stefanie
2012
Technology Enhanced Learning is undergoing a significant shift in paradigm towards more data driven systems that will make educational systems more transparent and predictable. Data science and data-driven tools will change the evaluation of educational practice and didactical interventions for individual learners and educational institutions. We summarise these developments and new challenges in the preface of this Special Issue under the keyword dataTEL that stands for ‘Data-Supported Technology-Enhanced Learning’.
Kump Barbara, Seifert Christin, Beham Günter, Lindstaedt Stefanie, Ley Tobias
2012
User knowledge levels in adaptive learning systems can be assessed based on user interactions that are interpreted as Knowledge Indicating Events (KIE). Such an approach makes complex inferences that may be hard to understand for users, and that are not necessarily accurate. We present MyExperiences, an open learner model designed for showing the users the inferences about them, as well as the underlying data. MyExperiences is one of the first open learner models based on tree maps. It constitutes an example of how research into open learner models and information visualization can be combined in an innovative way.
Pammer-Schindler Viktoria, Kump Barbara, Lindstaedt Stefanie
2012
Collaborative tagging platforms allow users to describe resources with freely chosen keywords, so called tags. The meaning of a tag as well as the precise relation between a tag and the tagged resource are left open for interpretation to the user. Although human users mostly have a fair chance at interpreting this relation, machines do not. In this paper we study the characteristics of the problem to identify descriptive tags, i.e. tags that relate to visible objects in a picture. We investigate the feasibility of using a tag-based algorithm, i.e. an algorithm that ignores actual picture content, to tackle the problem. Given the theoretical feasibility of a well-performing tag-based algorithm, which we show via an optimal algorithm, we describe the implementation and evaluation of a WordNet-based algorithm as proof-of-concept. These two investigations lead to the conclusion that even relatively simple and fast tag-based algorithms can yet predict human ratings of which objects a picture shows. Finally, we discuss the inherent difficulty both humans and machines have when deciding whether a tag is descriptive or not. Based on a qualitative analysis, we distinguish between definitional disagreement, difference in knowledge, disambiguation and difference in perception as reasons for disagreement between raters.
Shahzad Syed K, Granitzer Michael, Helic Denis
2011
Ontologies and semantic frameworks have become pervasive in computer science. They have a huge impact on the database, business logic, and user interface layers of a range of computer applications. Such frameworks are also being introduced, presented, or plugged into the user interfaces of various software systems and websites. However, establishing a structured and standardized ontological-model-based user interface development environment is still a challenge. This paper argues for the necessity of such an environment based on a User Interface Ontology (UIO). To explore this phenomenon, this research focuses on user interface entities, their semantics, uses, and the relationships among them. The first part focuses on the development of the User Interface Ontology. In the second step, this ontology is mapped to the domain ontology to construct a User Interface Model. Finally, the resulting model is quantified and instantiated for a user interface development to support our framework. The UIO is an extendable framework that allows defining new sub-concepts with their ontological relationships and constraints.
Lindstaedt Stefanie , Christl Conny
2011
This chapter presents a domain-independent computational environment which supports work-integrated learning at the professional workplace. The Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE) provides learning support during the execution of work tasks (instead of beforehand), within the work environment of the user (instead of within a separate learning system), and repurposes content which was not originally intended for learning (instead of relying on the expensive manual creation of learning material). Since this definition of work-integrated learning might differ from other definitions employed within this book, a short summary of the theoretical background is provided. Along the example of the company Innovation Service Network (ISN), a network of SMEs, a rich and practical description of the deployment and usage of APOSDLE is given. The chapter provides the reader with firsthand experiences and discusses efforts and lessons learned, backed up with experiences gained in two other application settings, namely EADS in France and a Chamber of Commerce and Industry in Germany.
Lindstaedt Stefanie , Kraker Peter, Wild Fridolin, Ullmann Thomas, Duval Erik, Parra Gonzalo
2011
This deliverable reports on first usage experiences and evaluations of the STELLAR Science 2.0 Infrastructure. Usage experiences were available predominantly for the "mature" part of the infrastructure, provided by standard Web 2.0 tools adapted to STELLAR needs. Evaluations are provided for newly developed tools. We first provide an overview of the whole STELLAR Science 2.0 Infrastructure and the relationships between its building blocks. While the individual building blocks already benefit researchers, the integration between them is the key to a positive usage experience. The publication metadata ecosystem, for example, provides researchers with an easily retrievable set of TEL-related data. Tools like the ScienceTable, Muse, the STELLAR latest publication widget, and the STELLAR BuRST search already show several scenarios of how to make use of this infrastructure. In particular, a strong focus on analytical tools based on publication and social media data seems useful. In order to highlight the relevance of the infrastructure to the individual capacity building activities within STELLAR, the usage experiences of the individual building blocks are then reported with respect to Researcher Capacity (e.g. Deliverable Wikis, More! application), Doctoral Academy Capacity (e.g. DoCoP), Community Level Capacity (e.g. TELeurope), and Leadership Capacity (e.g. Meeting of Minds, Podcast Series). Here we draw from 11 published scientific papers; an overview of all these papers is given in the Appendix. Based on the usage experiences and evaluations, we have identified a number of ideas which might be worth considering for future developments.
For example, the experiences gained with the Deliverable Wikis show how modifying the standard wiki history can provide useful analytical insights into the collaboration on living deliverables and can return the focus to authorship (which is intentionally masked in wikis, because of their strong focus on the product rather than the authors). We conclude with the main findings and an outlook on the development and evaluation plans which are currently being developed and which will influence D6.6. In particular, we close with the notion of a Personal Research Environment (PRE), which draws on the concept of Personal Learning Environments (PLE).
Horn Christopher, Pimas Oliver, Granitzer Michael, Lex Elisabeth
2011
In this paper, we outline our experiments carried out at the TREC Microblog Track 2011. Our system is based on a plain text index extracted from Tweets crawled from twitter.com. This index has been used to retrieve candidate Tweets for the given topics. The resulting Tweets were post-processed and then analyzed using three different approaches: (i) a burst detection approach, (ii) a hashtag analysis, and (iii) a Retweet analysis. Our experiments consisted of four runs: firstly, a combination of the Lucene ranking with the burst detection; secondly, a combination of the Lucene ranking, the burst detection, and the hashtag analysis; thirdly, a combination of the Lucene ranking, the burst detection, the hashtag analysis, and the Retweet analysis; and fourthly, again a combination of the Lucene ranking with the burst detection, but in this case with a more sophisticated query language and post-processing. We achieved the best overall MAP values in the fourth run.
Lindstaedt Stefanie , Kump Barbara, Rath Andreas S.
2011
Within this chapter we first outline the important role learning plays within knowledge work and its impact on productivity. As a theoretical background we introduce the paradigm of Work-Integrated Learning (WIL) which conceptualizes informal learning at the workplace and takes place tightly intertwined with the execution of work tasks. Based on a variety of in-depth knowledge work studies we identify key requirements for the design of work-integrated learning support. Our focus is on providing learning support during the execution of work tasks (instead of beforehand), within the work environment of the user (instead of within a separate learning system), and by repurposing content for learning which was not originally intended for learning (instead of relying on the expensive manual creation of learning material). In order to satisfy these requirements we developed a number of context-aware knowledge services. These services integrate semantic technologies with statistical approaches which perform well in the face of uncertainty. These hybrid knowledge services include the automatic detection of a user’s work task, the ‘inference’ of the user’s competencies based on her past activities, context-aware recommendation of content and colleagues, learning opportunities, etc. A summary of a 3 month in-depth summative workplace evaluation at three testbed sites concludes the chapter.
Erdmann Michael, Hansch Daniel, Pammer-Schindler Viktoria, Rospocher Marco, Ghidini Chiara, Lindstaedt Stefanie , Serafini Luciano
2011
This chapter describes some extensions to and applications of the Semantic MediaWiki. It complements the discussion of the SMW in Chap. 3. Semantic enterprise wikis combine the strengths of traditional content management systems, databases, semantic knowledge management systems and collaborative Web 2.0 platforms. Section 12.1 presents SMW+, a product for developing semantic enterprise applications. The section describes a number of real-world applications that are realized with SMW+. These include content management, project management and semantic data integration. Section 12.2 presents MoKi, a semantic wiki for modeling enterprise processes and application domains. Example applications of MoKi include modeling tasks and topics for work-integrated learning, collaboratively building an ontology and modeling clinical protocols. The chapter illustrates the wealth of activities which semantic wikis support.
Kump Barbara, Knipfer Kristin, Pammer-Schindler Viktoria, Schmidt Andreas, Maier Ronald, Kunzmann Christine, Cress Ulrike, Lindstaedt Stefanie
2011
The Knowledge Maturing Phase Model has been presented as a model aligning knowledge management and organizational learning. The core argument underlying the present paper is that maturing organizational know-how requires individual and collaborative reflection at work. We present an explorative interview study that analyzes reflection at the workplace in four organizations in different European countries. Our qualitative findings suggest that reflection is not equally self-evident in different settings. A deeper analysis of the findings leads to the hypothesis that different levels of maturity of processes come along with different expectations towards the workers with regard to compliance and flexibility, and with different ways of how learning at work takes place. Furthermore, reflection in situations where the processes are in early maturing phases seems to lead to consolidation of best practice, while reflection in situations where processes are highly standardized may lead to a modification of these standard processes. Therefore, in order to support the maturing of organizational know-how by providing reflection support, one should take into account the degree of standardisation of the processes in the target group.
Seifert Christin, Ulbrich Eva Pauline, Granitzer Michael
2011
In text classification, the amount and quality of training data is crucial for the performance of the classifier. The generation of training data is done by human labelers - a tedious and time-consuming task. We propose to use condensed representations of text documents instead of the full-text documents to reduce the labeling time for single documents. These condensed representations are key sentences and key phrases and can be generated in a fully unsupervised way. The key phrases are presented in a layout similar to a tag cloud. In a user study with 37 participants, we evaluated whether document labeling with these condensed representations can be done faster and equally accurately by the human labelers. Our evaluation shows that the users labeled word clouds twice as fast but as accurately as full-text documents. While further investigations for different classification tasks are necessary, this insight could potentially reduce the costs of the labeling process for text documents.
Granitzer Michael, Lindstaedt Stefanie
2011
Moskaliuk Johannes, Rath Andreas S., Devaurs Didier, Weber Nicolas, Lindstaedt Stefanie, Kimmerle Joachim, Cress Ulrike
2011
Jointly working on shared digital artifacts – such as wikis – is a well-tried method of developing knowledge collectively within a group or organization. Our assumption is that such knowledge maturing is an accommodation process that can be measured by taking the writing process itself into account. This paper describes the development of a tool that detects accommodation automatically with the help of machine learning algorithms. We applied a software framework for task detection to the automatic identification of accommodation processes within a wiki. To set up the learning algorithms and test their performance, we conducted an empirical study, in which participants had to contribute to a wiki and, at the same time, identify their own tasks. Two domain experts evaluated the participants' micro-tasks with regard to accommodation. We then applied an ontology-based task detection approach that identified accommodation with a rate of 79.12%. The potential use of our tool for measuring knowledge maturing online is discussed.
Kraker Peter, Wagner Claudia, Jeanquartier Fleur, Lindstaedt Stefanie
2011
This paper presents an adaptable system for detecting trends based on the micro-blogging service Twitter, and sets out to explore to what extent such a tool can support researchers. Twitter has high uptake in the scientific community, but there is a need for a means of extracting the most important topics from a Twitter stream. There are too many tweets to read them all, and there is no organized way of keeping up with the backlog. Following the cues of visual analytics, we use visualizations to show both the temporal evolution of topics, and the relations between different topics. The Twitter Trend Detection was evaluated in the domain of Technology Enhanced Learning (TEL). The evaluation results indicate that our prototype supports trend detection but reveals the need for refined preprocessing, and further zooming and filtering facilities.
Kern Roman, Zechner Mario, Granitzer Michael
2011
Author disambiguation is a prerequisite for utilizing bibliographic metadata in citation analysis. Automatic disambiguation algorithms mostly rely on cluster-based disambiguation strategies for identifying unique authors given their names and publications. However, most approaches rely on knowing the correct number of unique authors a priori, which is rarely the case in real-world settings. In this publication, we analyse cluster-based disambiguation strategies and develop a model selection method to estimate the number of distinct authors based on co-authorship networks. We show that, given clean textual features, the developed model selection method provides accurate guesses of the number of unique authors.
Scheir Peter, Prettenhofer Peter, Lindstaedt Stefanie , Ghidini Chiara
2010
While it is agreed that semantic enrichment of resources would lead to better search results, at present the low coverage of resources on the web with semantic information presents a major hurdle in realizing the vision of search on the Semantic Web. To address this problem, this chapter investigates how to improve retrieval performance in settings where resources are sparsely annotated with semantic information. Techniques from soft computing are employed to find relevant material that was not originally annotated with the concepts used in a query. The authors present an associative retrieval model for the Semantic Web and evaluate if and to which extent the use of associative retrieval techniques increases retrieval performance. In addition, the authors present recent work on adapting the network structure based on relevance feedback by the user to further improve retrieval effectiveness. The evaluation of new retrieval paradigms - such as retrieval based on technology for the Semantic Web - presents an additional challenge since no off-the-shelf test corpora exist. Hence, this chapter gives a detailed description of the approach taken to evaluate the information retrieval service the authors have built.
Balacheff Nicolas, Bottino Rosa, Fischer Frank, Hofmann Lena, Joubert Marie, Kieslinger Barbara, Lindstaedt Stefanie, Manca Stefanie, Ney Muriel, Pozzi Francesca, Sutherland Rosamund, Verbert Katrien, Timmis Sue, Wild Fridolin, Scott Peter, Specht Marcus
2010
This First TEL Grand Challenge Vision and Strategy Report aims to: • provide a unifying framework for members of STELLAR (including doctoral candidates) to develop their own research agenda; • engage the STELLAR community in scientific debate and discussion with the long-term aim of developing awareness of and respect for different theoretical and methodological perspectives; • build knowledge related to the STELLAR grand challenges through the construction of a wiki that is iteratively co-edited throughout the life of the STELLAR network; • develop understandings of the way in which Web 2.0 technologies can be used to construct knowledge within a research community (Science 2.0); • develop strategies for ways in which the STELLAR instruments can feed into the ongoing development of the wiki and how they can be used to address the challenges highlighted in this report.
Pozzi Francesca, Persico Donatella, Fischer Frank, Hofmann Lena, Lindstaedt Stefanie, Cress Ulrike, Rath Andreas S., Moskaliuk Johannes, Weber Nicolas, Kimmerle Joachim, Devaurs Didier, Ney Muriel, Gonçalves Celso, Balacheff Nicolas, Schwartz Claudine, Bosson Jean-Luc, Dillenbourg Pierre, Jermann Patrick, Zufferey Guillaume, Brown Elisabeth, Sharples Mike, Windrum Caroline, Specht Marcus, Börner Dirk, Glahn Christian, Fiedler Sebastian, Fisichella Marco, Herder Eelco, Marenzi Ivana, Nejdl Wolfgang, Kawese Ricardo, Papadakis George
2010
In this first STELLAR trend report we survey the more distant future of TEL, as reflected in the roadmaps; we compare the visions with trends in TEL research and TEL practice. This generic overview is complemented by a number of small-scale studies, which focus on a specific technology, approach or pedagogical model.
Granitzer Michael, Kienreich Wolfgang, Sabol Vedran, Lex Elisabeth
2010
Technological advances and paradigmatic changes in the utilization of the World Wide Web have transformed the information seeking strategies of media consumers and invalidated traditional business models of media providers. We discuss relevant aspects of this development and present a knowledge relationship discovery pipeline to address the requirements of media providers and media consumers. We also propose visually enhanced access methods to bridge the gap between complex media services and the information needs of the general public. We conclude that a combination of advanced processing methods and visualizations will enable media providers to take the step from content-centered to service-centered business models and, at the same time, will help media consumers to better satisfy their personal information needs.
Wolpers Martin, Kirschner Paul A., Scheffel Maren, Lindstaedt Stefanie , Dimitrova Vania
2010
Lindstaedt Stefanie, Duval Erik, Ullmann Thomas, Wild Fridolin, Scott Peter
2010
Research 2.0 is in essence a Web 2.0 approach to how we do research. Research 2.0 creates conversations between researchers, enables them to discuss their findings, and connects them with others. Thus, Research 2.0 can accelerate the diffusion of knowledge. As concluded during the workshop, at least four challenges are vital for future research. The first area is concerned with the availability of data. Access to sanitized data, and conventions on how to describe publication-related metadata provided from divergent sources, are enablers for researchers to develop new views on their publications and their research area. In addition, social media data are gaining more and more attention. Reaching a widespread agreement about this for the field of technology-enhanced learning would already be a major step, but it is also important to focus on the next steps: what are the success-critical added values driving uptake in the research community as a whole? The second area of challenges is seen in Research 2.0 practices. As technology-enhanced learning is a multidisciplinary field, practices developed in one area could be valuable for others. To extract the essence of successful multidisciplinary Research 2.0 practice, though, multidimensional and longitudinal empirical work is needed. It is also an open question whether we should support practice by fostering the usage of existing tools or the development of new tools which follow Research 2.0 principles. What makes a practice sustainable? What are the driving factors? The third challenge deals with impact. What are the criteria of impact for research results (and other research artefacts) published on the Web? How can this be related to the world of print publishing? Is a link equal to a citation, or a download equal to a subscription? Can we develop a Research 2.0-specific position on impact measurement?
This includes questions of authority, quality and re-evaluation of quality, and trust. The tension between openness and privacy spans the fourth challenge. The functionality of mash-ups often relies on the use of third-party services. What happens with the data if such a source is no longer available? What about the hidden exchange of data among backend services?
Lindstaedt Stefanie , Rath Andreas S., Devaurs Didier
2010
Supporting learning activities during work has gained momentum for organizations since work-integrated learning (WIL) has been shown to increase the productivity of knowledge workers. WIL aims at fostering learning at the workplace, during work, for enhancing task performance. A key challenge for enabling task-specific, contextualized, personalized learning and work support is to automatically detect the user's task. In this paper we utilize our ontology-based user task detection approach for studying the factors influencing task detection performance. We describe three laboratory experiments we have performed in two domains, including over 40 users and more than 500 recorded task executions. The insights gained from our evaluation are: (i) the J48 decision tree and Naïve Bayes classifiers perform best, (ii) six features can be isolated which provide good classification accuracy, (iii) knowledge-intensive tasks can be classified as well as routine tasks, and (iv) a classifier trained by experts on standardized tasks can be used to classify users' personal tasks.
Kern Roman, Granitzer Michael, Muhr M.
2010
Word sense induction and discrimination (WSID) identifies the senses of an ambiguous word and assigns instances of this word to one of these senses. We have built a WSID system that exploits syntactic and semantic features based on the results of a natural language parser component. To achieve high robustness and good generalization capabilities, we designed our system to work on a restricted, but grammatically rich, set of features. Based on the results of the evaluations, our system provides promising performance and robustness.
Granitzer Michael, Sabol Vedran, Onn K., Lukose D.
2010
Schachner W.
2010
Schachner W.
2010
Schachner W.
2010
Stern Hermann, Kaiser Rene, Hofmair P., Lindstaedt Stefanie, Scheir Peter, Kraker Peter
2010
One of the success factors of Work-Integrated Learning (WIL) is to provide the appropriate content to the users, suitable both for the topics they are currently working on and for their experience level in these topics. Our main contributions in this paper are (i) overcoming the problem of sparse content annotation by using a network-based recommendation approach called the Associative Network, which exploits the user context as input; (ii) using snippets not only for highlighting relevant parts of documents, but also as a basic concept enabling the WIL system to handle text-based and audiovisual content the same way; and (iii) using the Web Tool for Ontology Evaluation (WTE) toolkit for finding the best default semantic similarity measure of the Associative Network for new domains. The approach presented is employed in the software platform APOSDLE, which is designed to enable knowledge workers to learn at work.
Lindstaedt Stefanie , Kump Barbara, Beham Günter, Pammer-Schindler Viktoria, Ley Tobias, de Hoog R., Dotan A.
2010
We present a work-integrated learning (WIL) concept which aims at empowering employees to learn while performing their work tasks. Within three usage scenarios, we introduce the APOSDLE environment, which embodies the WIL concept and helps knowledge workers move fluidly along the whole spectrum of WIL activities. By doing so, they experience varying degrees of learning guidance: from building awareness, over exposing knowledge structures and contextualizing cooperation, to triggering reflection and systematic competence development. Four key APOSDLE components are responsible for providing this variety of learning guidance. The challenge in their design lies in offering learning guidance without being domain-specific and without relying on manually created learning content. Our three-month summative workplace evaluation within three application organizations suggests that learners prefer awareness building functionalities and descriptive learning guidance, and reveals that they benefited from it.
Lindstaedt Stefanie , Beham Günter, Stern Hermann, Drachsler H., Bogers T., Vuorikari R., Verbert K., Duval E., Manouselis N., Friedrich M., Wolpers M.
2010
This paper raises the issue of missing data sets for recommender systems in Technology Enhanced Learning that can be used as benchmarks to compare different recommendation approaches. It discusses how suitable data sets could be created according to some initial suggestions, and investigates a number of steps that may be followed in order to develop reference data sets that will be adopted and reused within a scientific community. In addition, policies are discussed that are needed to enhance the sharing of data sets by taking into account legal protection rights. Finally, an initial elaboration of a representation and exchange format for sharable TEL data sets is carried out. The paper concludes with future research needs.
Beham Günter, Kump Barbara, Lindstaedt Stefanie , Ley Tobias
2010
According to studies into learning at work, interpersonal help seeking is the most important strategy by which people acquire knowledge at their workplaces. Finding knowledgeable persons, however, can often be difficult for several reasons. Expert finding systems can support the process of identifying knowledgeable colleagues, thus facilitating communication and collaboration within an organization. In order to provide the expert finding functionality, an underlying user model is needed that represents the characteristics of each individual user. In our article we discuss requirements for user models for the work-integrated learning (WIL) situation. Then, we present the APOSDLE People Recommender Service, which is based on an underlying domain model and on the APOSDLE User Model. We describe the APOSDLE People Recommender Service on the basis of the Intuitive Domain Model of expert finding systems, and explain how this service can support interpersonal help seeking at workplaces.
Lindstaedt Stefanie , Kraker Peter, Höfler Patrick, Fessl Angela
2010
In this paper we present an ecosystem for the lightweight exchange of publication metadata based on the principles of Web 2.0. At the heart of this ecosystem, semantically enriched RSS feeds are used for dissemination. These feeds are complemented by services for creation and aggregation, as well as widgets for retrieval and visualization of publication metadata. In two scenarios, we show how these publication feeds can benefit institutions, researchers, and the TEL community. We then present the formats, services, and widgets developed for the bootstrapping of the ecosystem. We conclude with an outline of the integration of publication feeds with the STELLAR Network of Excellence and an outlook on future developments.
Beham Günter, Lindstaedt Stefanie , Ley Tobias, Kump Barbara, Seifert C.
2010
When inferring a user’s knowledge state from naturally occurring interactions in adaptive learning systems, one has to make complex assumptions that may be hard to understand for users. We suggest MyExperiences, an open learner model designed for these specific requirements. MyExperiences is based on some of the key design principles of information visualization to help users understand the complex information in the learner model. It further allows users to edit their learner models in order to improve the accuracy of the information represented there.
Ley Tobias, Seitlinger Paul
2010
Researching the emergence of semantics in social systems needs to take into account how users process information in their cognitive system. We report results of an experimental study in which we examined the interaction between individual expertise and the basic level advantage in collaborative tagging. The basic level advantage describes the availability in memory of certain preferred levels of taxonomic abstraction when categorizing objects and has been shown to vary with level of expertise. In the study, groups of students tagged internet resources for a 10-week period. We measured the availability of tags in memory with an association test and a relevance rating and found a basic level advantage for tags from more general as opposed to specific levels of the taxonomy. An interaction with expertise also emerged. Contrary to our expectations, groups that spent less time to develop a shared understanding shifted to more specific levels as compared to groups that spent more time on a topic. We attribute this to impaired collaboration in the groups. We discuss implications for personalized tag and resource recommendations.
Beham Günter, Jeanquartier Fleur, Lindstaedt Stefanie
2010
This paper introduces iAPOSDLE, a mobile application enabling the use of work-integrated learning services without being limited by location. iAPOSDLE makes use of the APOSDLE WIL system for self-directed work-integrated learning support, and extends its range of application to mobile learning. Core features of iAPOSDLE are described and possible extensions are discussed.
Kern Roman, Granitzer Michael, Muhr M.
2010
Cluster label quality is crucial for browsing topic hierarchies obtained via document clustering. Intuitively, the hierarchical structure should influence the labeling accuracy. However, most labeling algorithms ignore such structural properties, and therefore the impact of hierarchical structures on the labeling accuracy is yet unclear. In our work we integrate hierarchical information, i.e. sibling and parent-child relations, in the cluster labeling process. We adapt standard labeling approaches, namely Maximum Term Frequency, Jensen-Shannon Divergence, χ2 Test, and Information Gain, to make use of those relationships and evaluate their impact on 4 different datasets, namely the Open Directory Project, Wikipedia, TREC Ohsumed, and the CLEF-IP European Patent dataset. We show that hierarchical relationships can be exploited to increase labeling accuracy, especially on high-level nodes.
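The sibling-aware variant of a Jensen-Shannon labeler can be illustrated with a small sketch. This is not the paper's implementation; the toy corpus, the per-term scoring shortcut, and the function names are illustrative assumptions. Each candidate term is scored by its pointwise contribution to the JS divergence between the node's term distribution and the pooled distribution of its siblings, so terms that separate a node from its siblings rank highest as labels.

```python
import math
from collections import Counter

def term_dist(docs):
    """Unigram term distribution of a document cluster."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def label_node(node_docs, sibling_docs, k=3):
    """Rank candidate labels for a hierarchy node by each term's pointwise
    contribution to the Jensen-Shannon divergence between the node's term
    distribution and the pooled distribution of its sibling nodes."""
    p, q = term_dist(node_docs), term_dist(sibling_docs)
    scores = {}
    for t, pt in p.items():
        m = 0.5 * (pt + q.get(t, 0.0))   # mixture distribution
        scores[t] = 0.5 * pt * math.log2(pt / m)
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

Terms frequent in the node but absent from the siblings receive the maximal per-term contribution, which is the structural signal the abstract argues plain (node-only) labelers ignore.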
Lex Elisabeth, Granitzer Michael, Juffinger A.
2010
In the blogosphere, the amount of digital content is expanding and for search engines, new challenges have been imposed. Due to the changing information need, automatic methods are needed to support blog search users to filter information by different facets. In our work, we aim to support blog search with genre and facet information. Since we focus on the news genre, our approach is to classify blogs into news versus rest. Also, we assess the emotionality facet in news related blogs to enable users to identify people’s feelings towards specific events. Our approach is to evaluate the performance of text classifiers with lexical and stylometric features to determine the best performing combination for our tasks. Our experiments on a subset of the TREC Blogs08 dataset reveal that classifiers trained on lexical features perform consistently better than classifiers trained on the best stylometric features.
Kröll Mark, Strohmaier M.
2010
In this paper, we introduce the idea of Intent Analysis, which is to create a profile of the goals and intentions present in textual content. Intent Analysis, similar to Sentiment Analysis, represents a type of document classification that differs from traditional topic categorization by focusing on classification by intent. We investigate the extent to which the automatic analysis of human intentions in text is feasible, report our preliminary results, and discuss potential applications. In addition, we present results from a study that focused on evaluating intent profiles generated from transcripts of American presidential candidate speeches in 2008.
Stocker A., Mueller J.
2010
Ley Tobias, Kump Barbara, Gerdenitsch C.
2010
Adaptive scaffolding has been proposed as an efficient means for supporting self-directed learning both in educational as well as in adaptive learning systems research. However, the effects of adaptation on self-directed learning and the differential contributions of different adaptation models have not been systematically examined. In this paper, we examine whether personalized scaffolding in the learning process improves learning. We conducted a controlled lab study in which 29 students had to solve several tasks and learn with the help of an adaptive learning system in a within-subjects control condition design. In the learning process, participants obtained recommendations for learning goals from the system in three conditions: fixed scaffolding where learning goals were generated from the domain model, personalized scaffolding where these recommendations were ranked according to the user model, and random suggestions of learning goals (control condition). Students in the two experimental conditions clearly outperformed students in the control condition and felt better supported by the system. Additionally, students who received personalized scaffolding selected fewer learning goals than participants from the other groups.
Lex Elisabeth, Granitzer Michael, Juffinger A.
2010
In this paper, we outline our experiments carried out at the TREC 2009 Blog Distillation Task. Our system is based on a plain text index extracted from the XML feeds of the TREC Blogs08 dataset. This index was used to retrieve candidate blogs for the given topics. The resulting blogs were classified using a Support Vector Machine that was trained on a manually labelled subset of the TREC Blogs08 dataset. Our experiments included three runs on different features: firstly on nouns, secondly on stylometric properties, and thirdly on punctuation statistics. The facet identification based on our approach was successful, although a significant number of candidate blogs were not retrieved at all.
Granitzer Michael, Kienreich Wolfgang
2010
Granitzer Michael
2010
Term weighting strongly influences the performance of text mining and information retrieval approaches. Usually term weights are determined through statistical estimates based on static weighting schemes. Such static approaches lack the capability to generalize to different domains and different data sets. In this paper, we introduce an on-line learning method for adapting term weights in a supervised manner. Via stochastic optimization we determine a linear transformation of the term space to approximate expected similarity values among documents. We evaluate our approach on 18 standard text data sets and show that the performance improvement of a k-NN classifier ranges between 1% and 12% by using adaptive term weighting as a preprocessing step. Further, we provide empirical evidence that our approach is efficient enough to cope with larger problems.
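The idea of supervised term-weight adaptation can be sketched in a few lines. The paper learns a linear transformation of the term space via stochastic optimization; the toy below restricts that transformation to a diagonal (one weight per term) and takes plain gradient steps over document pairs, so the corpus, learning rate, and this restriction are illustrative assumptions rather than the published method.

```python
def adapt_term_weights(docs, labels, vocab, epochs=50, lr=0.1):
    """Learn one non-negative weight per term so that the weighted dot
    product of two L2-normalized bag-of-words vectors approaches 1.0 for
    same-class document pairs and 0.0 for different-class pairs."""
    def vec(doc):
        v = [0.0] * len(vocab)
        for w in doc.split():
            if w in vocab:
                v[vocab[w]] += 1.0
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        return [x / norm for x in v]

    X = [vec(d) for d in docs]
    w = [1.0] * len(vocab)           # start from unweighted similarity
    for _ in range(epochs):
        for i in range(len(docs)):
            for j in range(i + 1, len(docs)):
                target = 1.0 if labels[i] == labels[j] else 0.0
                sim = sum(w[k] * X[i][k] * X[j][k] for k in range(len(vocab)))
                err = sim - target   # squared-error gradient
                for k in range(len(vocab)):
                    w[k] = max(0.0, w[k] - lr * err * X[i][k] * X[j][k])
    return w
```

On such a toy corpus, a term shared by all classes (a stopword) is driven down by the different-class pairs, while class-discriminative terms are only ever pushed up — the adaptive effect a static scheme can only approximate.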
Ley Tobias, Kump Barbara, Albert D.
2010
Ley Tobias, Lindstaedt Stefanie , Schöfegger Karin, Seitlinger Paul, Weber Nicolas, Hu Bo, Riss Uwe, Brun Roman, Hinkelmann Knut, Thönssen Barbara, Maier Ronald, Schmidt Andreas
2009
Kröll Mark, Prettenhofer P., Strohmaier M.
2009
Access to knowledge about user goals represents a critical component for realizing the vision of intelligent agents acting upon user intent on the web. Yet, the manual acquisition of knowledge about user goals is costly and often infeasible. In a departure from existing approaches, this paper proposes Goal Mining as a novel perspective for knowledge acquisition. The research presented in this chapter makes the following contributions: (a) it presents Goal Mining as an emerging field of research and a corresponding automatic method for the acquisition of user goals from web corpora (in the case of this paper, search query logs), (b) it provides insights into the nature and some characteristics of these goals, and (c) it shows that the goals acquired from query logs exhibit traits of a long tail distribution, thereby providing access to a broad range of user goals. Our results suggest that search query logs represent a viable, yet largely untapped resource for acquiring knowledge about explicit user goals.
Körner C., Kröll Mark, Strohmaier M.
2009
Understanding search intent is often assumed to represent a critical barrier to the level of service that search engine providers can achieve. Previous research has shown that search queries differ with regard to intentional explicitness. We build on this observation and introduce Intentional Query Suggestion as a novel idea that aims to make searchers’ intent more explicit during search. In this paper, we present an algorithm for Intentional Query Suggestion and corresponding data from comparative experiments with traditional query suggestion mechanisms. Our results suggest that Intentional Query Suggestion 1) diversifies search result sets (i.e. it reduces result set overlap) and 2) exhibits interesting differences in terms of click-through rates.
Kröll Mark, Strohmaier M.
2009
Knowledge about human goals has been found to be an important kind of knowledge for a range of challenging problems, such as goal recognition from peoples’ actions or reasoning about human goals. Necessary steps towards conducting such complex tasks involve (i) acquiring a broad range of human goals and (ii) making them accessible by structuring and storing them in a knowledge base. In this work, we focus on extracting goal knowledge from weblogs, a largely untapped resource that can be expected to contain a broad variety of human goals. We annotate a small sample of weblogs and devise a set of simple lexico-syntactic patterns that indicate the presence of human goals. We then evaluate the quality of our patterns by conducting a human subject study. Resulting precision values favor patterns that are not merely based on part-of-speech tags. In future steps, we intend to improve these preliminary patterns based on our observations.
Kröll Mark, Koerner C.
2009
Annotations represent an increasingly popular means for organizing, categorizing and finding resources on the “social” web. Yet, only a small portion of the total resources available on the web are annotated. In this paper, we describe a prototype, iTAG, for automatically annotating textual resources with human intent, a novel dimension of tagging. We investigate the extent to which the automatic analysis of human intentions in textual resources is feasible. To address this question, we present selected evidence from a study aiming to automatically annotate intent in a simplified setting, that is, transcripts of speeches given by US presidential candidates in 2008.
Kröll Mark
2009
Access to knowledge about common human goals has been found critical for realizing the vision of intelligent agents acting upon user intent on the web. Yet, the acquisition of knowledge about common human goals represents a major challenge. In a departure from existing approaches, this paper investigates a novel resource for knowledge acquisition: the utilization of search query logs for this task. By relating goals contained in search query logs with goals contained in existing commonsense knowledge bases such as ConceptNet, we aim to shed light on the usefulness of search query logs for capturing knowledge about common human goals. The main contribution of this paper consists of an empirical study comparing common human goals contained in two large search query logs (AOL and Microsoft Research) with goals contained in the commonsense knowledge base ConceptNet. The paper sketches ways how goals from search query logs could be used to address the goal acquisition and goal coverage problem related to commonsense knowledge bases.
Afzal M. T., Latif A., Us Saeed A., Sturm P., Aslam S., Andrews K., Maurer H.
2009
In numerous contexts and environments, it is necessary to identify and assign (potential) experts to subject fields. In the context of an academic journal for computer science (J.UCS), papers and reviewers are classified using the ACM classification scheme. This paper describes a system to identify and present potential reviewers for each category from the entire body of paper’s authors. The topical classification hierarchy is visualized as a hyperbolic tree and currently assigned reviewers are listed for a selected node (computer science category). In addition, a spiral visualization is used to overlay a ranked list of further potential reviewers (high-profile authors) around the currently selected category. This new interface eases the task of journal editors in finding and assigning reviewers. The system is also useful for users who want to find research collaborators in specific research areas.
Schoefegger K., Weber Nicolas, Lindstaedt Stefanie , Ley Tobias
2009
The changes in the dynamics of the economy and the corresponding mobility and fluctuations of knowledge workers within organizations make continuous social learning an essential factor for an organization. Within the underlying organizational processes, Knowledge Maturing refers to the corresponding evolutionary process in which knowledge objects are transformed from informal and highly contextualized artifacts into explicitly linked and formalized learning objects. In this work, we introduce a definition of Knowledge (Maturing) Services and present a collection of sample services that can be divided into service functionality classes supporting Knowledge Maturing in content networks. Furthermore, we developed an application of these sample services, a demonstrator which supports quality assurance within a highly content-based organisational context.
Beham Günter, Lindstaedt Stefanie , Kump Barbara, Resanovic D.
2009
Jeanquartier Fleur, Kröll Mark, Strohmaier M.
2009
Getting a quick impression of the author's intention of a text is a task often performed. An author's intention plays a major role in successfully understanding a text. For supporting readers in this task, we present an intentional approach to visual text analysis, making use of tag clouds. The objective of tag clouds is presenting meta-information in a visually appealing way. However, there is also much uncertainty associated with tag clouds, such as giving the wrong impression. It is not clear whether the author's intent can be grasped clearly while looking at a corresponding tag cloud. Therefore it is interesting to ask to what extent it is possible, with tag clouds, to support the user in understanding the intentions expressed. In order to answer this question, we construct an intentional perspective on textual content. Based on an existing algorithm for extracting intent annotations from textual content, we present a prototypical implementation to produce intent tag clouds, and describe a formative testing, illustrating how intent visualizations may support readers in understanding a text successfully. With the initial prototype, we conducted user studies of our intentional tag cloud visualization and a comparison with a traditional one that visualizes frequent terms. The evaluation's results indicate that intent tag clouds have a positive effect on supporting users in grasping an author's intent.
Granitzer Michael, Rath Andreas S., Kröll Mark, Ipsmiller D., Devaurs Didier, Weber Nicolas, Lindstaedt Stefanie , Seifert C.
2009
Increasing the productivity of a knowledge worker via intelligent applications requires the identification of a user’s current work task, i.e. the current work context a user resides in. In this work we present and evaluate machine learning-based work task detection methods. By viewing a work task as a sequence of digital interaction patterns of mouse clicks and key strokes, we present (i) a methodology for recording those user interactions and (ii) an in-depth analysis of supervised classification models for classifying work tasks in two different scenarios: a task-centric scenario and a user-centric scenario. We analyze different supervised classification models, feature types and feature selection methods on a laboratory as well as a real-world data set. Results show satisfactory accuracy and high user acceptance by using relatively simple types of features.
Latif A., Afzal M. T., Höfler Patrick, Us Saeed A.
2009
The Semantic Web strives to add structure and meaning to the Web, thereby providing better results and easier interfaces for its users. One important foundation of the Semantic Web is Linked Data, the concept of interconnected data, describing resources by use of RDF and URIs. Linked Data (LOD) provides the opportunity to explore and combine datasets on a global scale -- something which has never been possible before. However, at its current stage, the Linked Data cloud yields little benefit for end users who know nothing of ontologies, triples and SPARQL. This paper presents an intelligent technique for locating desired URIs from the huge repository of Linked Data. Search keywords provided by users are utilized intelligently for locating the intended URI. The proposed technique has been applied in a simplified end user interface for LOD. The system evaluation shows that the proposed technique has reduced the user's cognitive load in finding relevant information.
Pammer-Schindler Viktoria, Serafini L., Lindstaedt Stefanie
2009
Lindstaedt Stefanie , Aehnelt M., de Hoog R.
2009
Gras R., Devaurs Didier, Wozniak A., Aspinall A.
2009
We present an individual-based predator-prey model with, for the first time, each agent behavior being modeled by a fuzzy cognitive map (FCM), allowing the evolution of the agent behavior through the epochs of the simulation. The FCM enables the agent to evaluate its environment (e.g., distance to predator or prey, distance to potential breeding partner, distance to food, energy level) and its internal states (e.g., fear, hunger, curiosity), and to choose several possible actions such as evasion, eating, or breeding. The FCM of each individual is unique and is the result of the evolutionary process. The notion of species is also implemented in such a way that species emerge from the evolving population of agents. To our knowledge, our system is the only one that allows the modeling of links between behavior patterns and speciation. The simulation produces a lot of data, including number of individuals, level of energy by individual, choice of action, age of the individuals, and average FCM associated with each species. This study investigates patterns of macroevolutionary processes, such as the emergence of species in a simulated ecosystem, and proposes a general framework for the study of specific ecological problems such as invasive species and species diversity patterns. We present promising results showing coherent behaviors of the whole simulation with the emergence of strong correlation patterns also observed in existing ecosystems.
Lindstaedt Stefanie , de Hoog R., Aehnelt M.
2009
This contribution briefly introduces the collaborative APOSDLE environment for integrated knowledge work and learning. It proposes a video presentation and the presentation of the third APOSDLE prototype.
Lex Elisabeth, Granitzer Michael, Juffinger A., Seifert C.
2009
Text classification is one of the core applications in data mining due to the huge amount of uncategorized digital data available. Training a text classifier generates a model that reflects the characteristics of the domain. However, if no training data is available, labeled data from a related but different domain might be exploited to perform cross-domain classification. In our work, we aim to accurately classify unlabeled blogs into commonly agreed newspaper categories using labeled data from the news domain. The labeled news and the unlabeled blog corpus are highly dynamic and hourly growing with a topic drift, so a trade-off between accuracy and performance is required. Our approach is to apply a fast novel centroid-based algorithm, the Class-Feature-Centroid Classifier (CFC), to perform efficient cross-domain classification. Experiments showed that this algorithm achieves accuracy comparable to k-NN and slightly better than Support Vector Machines (SVM), yet at linear time cost for training and classification. The benefit of this approach is that the linear time complexity enables us to efficiently generate an accurate classifier, reflecting the topic drift, several times per day on a huge dataset.
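The centroid scheme the abstract refers to can be sketched compactly. The weighting below follows the published CFC idea of combining an inner-class factor (a base b raised to the term's within-class document frequency) with an inter-class factor (a log ratio over the number of classes containing the term); the toy corpus, the parameter value, and the function names are assumptions for illustration, not the authors' exact code.

```python
import math
from collections import Counter

def train_cfc(docs, labels, b=1.7):
    """One centroid per class; the weight of term t in class c combines an
    inner-class factor b ** (df(t, c) / |c|) with an inter-class factor
    log(|C| / cf(t)), where cf(t) counts the classes containing t."""
    classes = sorted(set(labels))
    size = Counter(labels)
    df = {c: Counter() for c in classes}
    for doc, lab in zip(docs, labels):
        for t in set(doc.split()):
            df[lab][t] += 1
    cf = Counter(t for c in classes for t in df[c])
    return {c: {t: (b ** (df[c][t] / size[c])) * math.log(len(classes) / cf[t])
                for t in df[c]}
            for c in classes}

def classify(doc, centroids):
    """Pick the class whose centroid maximizes the dot product with the
    document's raw term-frequency vector (linear in document length)."""
    tf = Counter(doc.split())
    return max(centroids,
               key=lambda c: sum(f * centroids[c].get(t, 0.0)
                                 for t, f in tf.items()))
```

Training is one pass over the corpus and classification is one pass over the document, which is the linear time cost that lets the abstract's pipeline retrain several times per day; terms occurring in every class get weight zero, acting as a built-in stopword filter.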
Pellegrini T., Auer S., Schaffert S.
2009
Schmidt A., Hinkelmann K., Ley Tobias, Lindstaedt Stefanie , Maier R., Riss U.
2009
Effective learning support in organizations requires a flexible and personalized toolset that brings together the individual and the organizational perspective on learning. Such toolsets need a service-oriented infrastructure of reusable knowledge and learning services as an enabler. This contribution focuses on conceptual foundations for such an infrastructure as it is being developed within the MATURE IP and builds on the knowledge maturing process model on the one hand, and the seeding-evolutionary growth-reseeding model on the other hand. These theories are used to derive maturing services, for which initial examples are presented.
Weber Nicolas, Ley Tobias, Lindstaedt Stefanie , Schoefegger K., Bimrose J., Brown A., Barnes S.
2009
Lindstaedt Stefanie , Beham Günter, Ley Tobias, Kump Barbara
2009
Work-integrated learning (WIL) poses unique challenges for user model design: on the one hand, users’ knowledge levels need to be determined based on their work activities (testing is not a viable option); on the other hand, users interact with a multitude of different work applications (there is no central learning system). This contribution introduces a user model and corresponding services (based on SOA) geared to enable unobtrusive adaptability within WIL environments. Our hybrid user model services interpret usage data in the context of enterprise models (semantic approaches) and utilize heuristics (scruffy approaches) in order to determine knowledge levels, identify subject matter experts, etc. We give an overview of different types of user model services (logging, production, inference, control), provide a reference implementation within the APOSDLE project, and discuss early evaluation results.
Lindstaedt Stefanie , Rospocher M., Ghidini C., Pammer-Schindler Viktoria, Serafini L.
2009
Enterprise modelling focuses on the construction of a structured description of relevant aspects of an enterprise, the so-called enterprise model. Within this contribution we describe a wiki-based tool for enterprise modelling, called MoKi (Modelling wiKi). It specifically facilitates collaboration between actors with different expertise to develop an enterprise model by using structural (formal) descriptions as well as more informal and semi-formal descriptions of knowledge. It also supports the integrated development of interrelated models covering different aspects of an enterprise.
Lindstaedt Stefanie , Rath Andreas S., Devaurs Didier
2009
‘Understanding context is vital’ [1] and ‘context is key’ [2] signal the key interest in the context detection field. One important challenge in this area is automatically detecting the user’s task, because once it is known it is possible to support her better. In this paper we propose an ontology-based user interaction context model (UICO) that enhances the performance of task detection on the user’s computer desktop. Starting from low-level contextual attention metadata captured from the user’s desktop, we utilize rule-based, information extraction and machine learning approaches to automatically populate this user interaction context model. Furthermore, we automatically derive relations between the model’s entities and automatically detect the user’s task. We present evaluation results of a large-scale user study we carried out in a knowledge-intensive business environment, which support our approach.
Thurner-Scheuerer Claudia
2009
Schachner W., Koubek A.
2009
Lindstaedt Stefanie , Ghidini C., Kump Barbara, Mahbub N., Pammer-Schindler Viktoria, Rospocher M., Serafini L.
2009
Enterprise modelling focuses on the construction of a structured description, the so-called enterprise model, which represents aspects relevant to the activity of an enterprise. Although it has become clearer recently that enterprise modelling is a collaborative activity, involving a large number of people, most of the enterprise modelling tools still only support very limited degrees of collaboration. Within this contribution we describe a tool for enterprise modelling, called MoKi (MOdelling wiKI), which supports agile collaboration between all different actors involved in the enterprise modelling activities. MoKi is based on a Semantic Wiki and enables actors with different expertise to develop an enterprise model not only using structural (formal) descriptions but also adopting more informal and semi-formal descriptions of knowledge.
Granitzer Michael, Lex Elisabeth, Juffinger A.
2009
People use weblogs to express thoughts, present ideas and share knowledge. However, weblogs can also be misused to influence and manipulate the readers. Therefore the credibility of a blog has to be validated before the available information is used for analysis. The credibility of a blog entry is derived from the content, the credibility of the author or blog itself, respectively, and the external references or trackbacks. In this work we introduce an additional dimension for assessing credibility, namely the quantity structure. Our blog analysis system therefore derives credibility from two dimensions: firstly, the quantity structure of a set of blogs and a reference corpus is compared, and secondly, we analyse each separate blog's content and examine its similarity with a verified news corpus. From the content similarity values we derive a ranking function. Our evaluation showed that one can filter out non-credible blogs by quantity structure alone, without deeper analysis. Besides, the content-based ranking function sorts the blogs by credibility with high accuracy. Our blog analysis system is therefore capable of providing credibility levels per blog.
Lindstaedt Stefanie , Hambach S., Müsebeck P., de Hoog R., Kooken J., Musielak M.
2009
Computational support for work-integrated learning will gain more and more attention. We understand informal self-directed work-integrated learning of knowledge workers as a by-product of their knowledge work activities and propose a conceptual as well as a technical approach for supporting learning from documents and learning in interaction with fellow knowledge workers. The paper focuses on contextualization and scripting as two means to specifically address the latter interaction type.
Lex Elisabeth, Juffinger A.
2009
People use weblogs to express thoughts, present ideas and share knowledge; weblogs are therefore extraordinarily valuable resources, among others, for trend analysis. Trends are derived from the chronological sequence of blog post counts per topic. The comparison with a reference corpus allows qualitative statements about identified trends. We propose a cross-language blog mining and trend visualisation system to analyse blogs across languages and topics. The trend visualisation facilitates the identification of trends and the comparison with the reference news article corpus. To verify the correctness of our system, we computed the correlation between trends in blogs and news articles for a subset of blogs and topics. The evaluation corroborated our hypothesis of a high correlation coefficient for these subsets, demonstrating the correctness of our system for different languages and topics.
Neidhart T., Granitzer Michael, Kern Roman, Weichselbraun A., Wohlgenannt G., Scharl A., Juffinger A.
2009
Lindstaedt Stefanie , Moerzinger R., Sorschag R. , Pammer-Schindler Viktoria, Thallinger G.
2009
Automatic image annotation is an important and challenging task, and becomes increasingly necessary when managing large image collections. This paper describes techniques for automatic image annotation that take advantage of collaboratively annotated image databases, so-called visual folksonomies. Our approach applies two techniques based on image analysis: first, classification annotates images with a controlled vocabulary, and second, tag propagation along visually similar images. The latter propagates user-generated, folksonomic annotations and is therefore capable of dealing with an unlimited vocabulary. Experiments with a pool of Flickr images demonstrate the high accuracy and efficiency of the proposed methods in the task of automatic image annotation. Both techniques were applied in the prototypical tag recommender “tagr”.
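Tag propagation along visually similar images can be illustrated as a similarity-weighted neighbour vote. The feature vectors, the neighbourhood size, and the function names below are illustrative assumptions; the paper's actual visual features and matching procedure are not specified here.

```python
import math
from collections import Counter

def propagate_tags(query_vec, annotated, k=3, top=3):
    """Suggest tags for an unannotated image: find its k visually nearest
    annotated neighbours (cosine over feature vectors) and let each
    neighbour vote for its tags, weighted by its similarity."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    neighbours = sorted(annotated,
                        key=lambda item: -cosine(query_vec, item[0]))[:k]
    votes = Counter()
    for vec, tags in neighbours:
        sim = cosine(query_vec, vec)
        for t in tags:
            votes[t] += sim
    return [t for t, _ in votes.most_common(top)]
```

Because the vote is over whatever tags the neighbours happen to carry, the vocabulary is open-ended — the property the abstract contrasts with controlled-vocabulary classification.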
Lindstaedt Stefanie , Pammer-Schindler Viktoria, Mörzinger Roland, Kern Roman, Mülner Helmut, Wagner Claudia
2008
Imagine you are a member of an online social system and want to upload a picture into the community pool. In current social software systems, you can probably tag your photo, share it or send it to a photo printing service, among other things. The system creates around you a space full of pictures, other interesting content (descriptions, comments), and full of users as well. The one thing current systems do not do is understand what your pictures are about. We present here a collection of functionalities that, put together to be consumed by a tag recommendation system for pictures, make a step in that direction. We use the data richness inherent in social online environments for recommending tags by analysing different aspects of the same data (text, visual content and user context). We also give an assessment of the quality of the tags recommended this way.
Ley Tobias, Ulbrich Armin, Lindstaedt Stefanie , Scheir Peter, Kump Barbara, Albert Dietrich
2008
Purpose – The purpose of this paper is to suggest a way to support work-integrated learning for knowledge work, which poses a great challenge for current research and practice.
Design/methodology/approach – The authors first suggest a workplace learning context model, which has been derived by analyzing knowledge work and the knowledge sources used by knowledge workers. The authors then focus on the part of the context that specifies competencies by applying the competence performance approach, a formal framework developed in cognitive psychology. From the formal framework, a methodology is then derived of how to model competence and performance in the workplace. The methodology is tested in a case study for the learning domain of requirements engineering.
Findings – The Workplace Learning Context Model specifies an integrative view on knowledge workers’ work environment by connecting learning, work and knowledge spaces. The competence performance approach suggests that human competencies be formalized with a strong connection to workplace performance (i.e. the tasks performed by the knowledge worker). As a result, competency diagnosis and competency gap analysis can be embedded into the normal working tasks and learning interventions can be offered accordingly. The results of the case study indicate that experts were generally in moderate to high agreement when assigning competencies to tasks.
Research limitations/implications – The model needs to be evaluated with regard to the learning outcomes in order to test whether the learning interventions offered benefit the user. Also, the validity and efficiency of competency diagnosis need to be compared to other standard practices in competency management.
Practical implications – Use of competence performance structures within organizational settings has the potential to more closely relate the diagnosis of competency needs to actual work tasks, and to embed it into work processes.
Originality/value – The paper connects the latest research in cognitive psychology and in the behavioural sciences with a formal approach that makes it appropriate for integration into technology-enhanced learning environments.
Keywords – Competences, Learning, Workplace learning, Knowledge management
Paper type – Research paper
Strohmaier M., Prettenhofer P., Kröll Mark
2008
On the web, search engines represent a primary instrument through which users exercise their intent. Understanding the specific goals users express in search queries could improve our theoretical knowledge about strategies for search goal formulation and search behavior, and could equip search engine providers with better descriptions of users’ information needs. However, the degree to which goals are explicitly expressed in search queries can be suspected to exhibit considerable variety, which poses a series of challenges for researchers and search engine providers. This paper introduces a novel perspective on analyzing user goals in search query logs by proposing to study different degrees of intentional explicitness. To explore the implications of this perspective, we studied two different degrees of explicitness of user goals in the AOL search query log containing more than 20 million queries. Our results suggest that different degrees of intentional explicitness represent an orthogonal dimension to existing search query categories and that understanding these different degrees is essential for effective search. The overall contribution of this paper is the elaboration of a set of theoretical arguments and empirical evidence that makes a strong case for further studies of different degrees of intentional explicitness in search query logs.
Scharl A., Stern Hermann, Weichselbraun A.
2008
This paper presents the IDIOM Media Watch on Climate Change (www.ecoresearch.net/climate), a prototypical implementation of an environmental portal that emphasizes the importance of location data for advanced Web applications. The introductory section outlines the process of retrofitting existing knowledge repositories with geographical context information, a process also referred to as geotagging. The paper then describes the portal’s functionality, which aggregates, annotates and visualizes environmental articles from 150 Anglo-American news media sites. From 300,000 news media articles gathered in weekly intervals, the system selects about 10,000 focusing on environmental issues. The crawled data is indexed and stored in a central repository. Geographic location represents a central aspect of the application, but not the only dimension used to organize and filter content. Applying the concepts of location and topography to semantic similarity, the paper concludes with discussing information landscapes as alternative interface metaphor for accessing large Web repositories.
Lex Elisabeth, Kienreich Wolfgang, Granitzer Michael, Seifert C.
2008
Granitzer Michael, Granitzer Gisela, Lindstaedt Stefanie, Rath Andreas S., Groiss W.
2008
It is a well-known fact that a wealth of knowledge lies in the heads of employees, making them one of the most valuable assets of organisations, if not the most valuable one. But often this knowledge is not documented and organised in knowledge systems as required by the organisation, but informally shared. Of course this runs against the organisation's aim of keeping knowledge reusable as well as easily and permanently available, independent of individual knowledge workers. In this contribution we suggest a solution which captures this collective knowledge to the benefit of both the organisation and the knowledge worker. By automatically identifying activity patterns and aggregating them to tasks, as well as by assigning resources to these tasks, our proposed solution fulfils the organisation's need for documentation and structuring of knowledge work. On the other hand, it fulfils the knowledge worker's need for relevant, currently needed knowledge by automatically mining the entire corporate knowledge base and providing relevant, context-dependent information based on his/her current task.
Strohmaier M., Horkoff Jennifer, Yu E., Aranda Jorge, Easterbrook Steve
2008
A considerable amount of effort has been placed into the investigation of i* modeling as a tool for early stage requirements engineering. However, widespread adoption of i* models in the requirements process has been hindered by issues such as the effort required to create the models, coverage of the problem context, and model complexity. In this work, we explore the feasibility of pattern application to address these issues. To this end, we perform both an exploratory case study and initial experiment to investigate whether the application of patterns improves aspects of i* modeling. Furthermore, we develop a methodology which guides the adoption of patterns for i* modeling. Our findings suggest that applying model patterns can increase model coverage, but increases complexity, and may increase modeling effort depending on the experience of the modeler. Our conclusions indicate situations where pattern application to i* models may be beneficial.
Stocker A., Höfler Patrick, Granitzer Gisela, Willfort R., Anna Maria Köck, Pammer-Schindler Viktoria
2008
Social web platforms have become very popular in the so-called Web 2.0, and there is no end in sight. However, very few systematic models for the constitution of such sociotechnical infrastructures exist in the scientific literature. We therefore present a generic framework for building social web platforms based on the creation of value for individuals, communities and social networks. We applied this framework in the Neurovation project, aiming to establish a platform for creative knowledge workers. This paper describes work in progress and the lessons we have learned so far.
Granitzer Gisela, Höfler Patrick
2008
Even though it was only about three years ago that Social Software became a trend, utilizing Social Software in learning institutions has become common practice. It has brought about many advantages, but also challenges. Large amounts of distributed and often unstructured user-generated content make it difficult to meaningfully process and find relevant information. In the authors' estimation, the solution lies in underpinning Social Software with structure, resulting in Social Semantic Software. In this contribution we introduce the central concepts of Social Software, the Semantic Web and the Social Semantic Web and show how Social Semantic Technologies might be utilized in the higher education context.
Sabol Vedran, Scharl A.
2008
Jones S., Lynch P., Maiden N., Lindstaedt Stefanie
2008
In this paper, we describe a creativity workshop that was used in a large research project, called APOSDLE, to generate creative ideas and requirements for a work-integrated learning system. We present an analysis of empirical data collected during and after the workshop. On the basis of this analysis, we conclude that the workshop was an efficient way of generating ideas for future system development. These ideas, on average, were used at least as much as requirements from other sources in writing use cases, and 18 months after the workshop were seen to have a similar degree of influence on the project to other requirements. We make some observations about the use of more and less creative ideas, and about the techniques used to generate them. We end with suggestions for further work.
Granitzer Michael, Lux M., Spaniol M.
2008
Ulbrich Armin, Höfler Patrick, Lindstaedt Stefanie
2008
The goal of this chapter is to identify and name shared usage scenarios of the Semantic Web and the Social Web. One sub-aspect of the topic is examined in detail: the use of services that analyse observations of user behaviour in order to derive machine-interpretable information from it and to organise this information as models. First, some properties and distinguishing features of user behaviour and of organised models are presented. The possible mutual benefit of user behaviour and models is then discussed. The chapter closes with a look at some exemplary software services that are already in use today to transfer user behaviour into models.
Granitzer Michael
2008
Lindstaedt Stefanie, Lokaiczyk R., Kump Barbara, Beham Günter, Pammer-Schindler Viktoria
2008
In order to support work-integrated learning scenarios, task- and competency-aware knowledge services are needed. In this paper we introduce three key knowledge services of the APOSDLE system and illustrate how they interact. The context determination daemon observes user interactions and infers the current work task of the user. The user profile service uses the identified work tasks to determine the competences of the user. And finally, the associative retrieval service utilizes both the current work task and the inferred competences to identify relevant (learning) content. All of these knowledge services improve through user feedback.
Christl C., Ghidini C., Guss J., Lindstaedt Stefanie, Pammer-Schindler Viktoria, Scheir Peter, Serafini L.
2008
Modern businesses operate in a rapidly changing environment. Continuous learning is an essential ingredient in order to stay competitive in such environments. The APOSDLE system utilizes semantic web technologies to create a generic system for supporting knowledge workers in different domains to learn at work. Since APOSDLE relies on three interconnected semantic models to achieve this goal, the question of how to efficiently create high-quality semantic models has become one of the major research challenges. On the basis of two concrete examples, namely the deployment of such a learning system at EADS, a large corporation, and the deployment at ISN, a network of SMEs, we report in detail the issues a company has to face when it wants to deploy a modern learning environment relying on semantic web technology.
Zinnen A., Hambach S., Faatz A., Lindstaedt Stefanie, Beham Günter, Godehardt E., Goertz M., Lokaiczyk R.
2008
Rath Andreas S., Weber Nicolas, Kröll Mark, Granitzer Michael, Dietzel O., Lindstaedt Stefanie
2008
Improving the productivity of knowledge workers is an open research challenge. Our approach is based on providing a large variety of knowledge services which take the current work task and information need (work context) of the knowledge worker into account. In the following we present the DYONIPOS application, which strives to automatically identify a user's work task and then contextualizes different types of knowledge services accordingly. These knowledge services then provide information (documents, people, locations) both from the user's personal and from the organizational environment. The utility and functionality is illustrated along a real-world application scenario at the Ministry of Finance in Austria.
Lindstaedt Stefanie, Ley Tobias, Scheir Peter, Ulbrich Armin
2008
This contribution introduces the concept of work-integrated learning, which distinguishes itself from traditional e-Learning in that it provides learning support (i) during work task execution and tightly contextualized to the work context, (ii) within the work environment, and (iii) utilizing knowledge artefacts available within the organizational memory for learning. We argue that in order to achieve this highly flexible learning support we need to turn to "scruffy" methods (such as associative retrieval, genetic algorithms, Bayesian and other probabilistic methods) which can provide good results in the presence of uncertainty and the absence of fine-granular models. Hybrid approaches to user context determination, user profile management, and learning material identification are discussed and first results are reported.
Granitzer Michael, Kröll Mark, Seifert Christin, Rath Andreas S., Weber Nicolas, Dietzel O., Lindstaedt Stefanie
2008
'Context is key' conveys the importance of capturing the digital environment of a knowledge worker. Knowing the user's context offers various possibilities for support, like for example enhancing information delivery or providing work guidance. Hence, user interactions have to be aggregated and mapped to predefined task categories. Without machine learning tools, such an assignment has to be done manually. The identification of suitable machine learning algorithms is necessary in order to ensure accurate and timely classification of the user's context without inducing additional workload. This paper provides a methodology for recording user interactions and an analysis of supervised classification models, feature types and feature selection for automatically detecting the current task and context of a user. Our analysis is based on a real-world data set and shows the applicability of machine learning techniques.
Aehnelt M., Ebert M., Beham Günter, Lindstaedt Stefanie, Paschen A.
2008
Knowledge work in companies is increasingly carried out by teams of knowledge workers. They interact within and between teams with the common goal to acquire, apply, create and share knowledge. In this paper we propose a socio-technical model to support intra-organizational collaboration which specifically takes into account the social and collaborative nature of knowledge work. Our aim is to support in particular the efficiency of collaborative knowledge work processes through an automated recommendation of collaboration partners and collaboration media. We report on the theoretical as well as practical aspects of such a socio-technical model.
Ley Tobias, Kump Barbara, Ulbrich Armin, Scheir Peter, Lindstaedt Stefanie
2008
The paper suggests a way to support work-integrated learning for knowledge work, which poses a great challenge for current research and practice. We first present a Workplace Learning Context Model which has been derived by analyzing knowledge work and the knowledge sources used by knowledge workers. The model specifies an integrative view on knowledge workers' work environment by connecting learning, work and knowledge spaces. We then focus on the part of the context which specifies learning goals and their interrelations to task and domain models. Our purpose is to support learning needs analysis which is based on a comparison of tasks performed in the past to those tasks to be tackled in the future. A first implementation in the APOSDLE project is presented, including the models generated for five real-world applications and the software prototype. We close with an outlook on future work.
Zimmermann Volker, Fredrich Helge, Grohmann Guido, Hauer Dominik, Sprenger Peter, Leyking Katrina, Martin Gunnar, Loos Peter, Naeve Ambjörn, Karapidis Alexander, Pack Jochen, Lindstaedt Stefanie, Chatti Mohamed Amine, Klamma Ralf, Jarke Matthias, Lefere Paul
2007
Given the importance of an organisation's human capital to business success, aligning training and competency development with business needs is a key challenge. Many companies have in the past initiated knowledge management activities or founded corporate universities intended to help them face this challenge. In this deliverable, we talk about knowledge work and learning management as a concept to "increase business performance" through a better short- and long-term learning approach for people at management level. The aim is to provide a guideline for corporate users based on our and others' experiences of implementing solutions for knowledge work and learning. This is connected to many forms and methods of learning: formal learning processes, informal learning, team learning, collaboration, social networking, community building etc. In many companies, managers think that knowledge work can be supported solely by offering courses and enabling access to content on demand. In this deliverable this aspect (ACQUIRING knowledge) will not be in focus, as managing courses and catalogues is more the job of a training department. Instead we focus on APPLYING knowledge. The concept of knowledge work management comes into place when companies see the ability of their employees to APPLY their education and knowledge as a strategic instrument to create competitiveness, and look for tools to provide learning and knowledge at the workplace on demand, fitting individual needs. This objective is very topical, as globalization creates pressure on companies and the knowledge and experience of employees becomes the most important differentiator from competitors, leading to better innovation, faster processes, higher productivity and lower costs. In this deliverable, an overall approach and guideline for companies will be provided on how to implement knowledge work management and provide learning according to the needs arising in business and resulting from business processes.
Scheir Peter, Granitzer Michael, Lindstaedt Stefanie
2007
Evaluation of information retrieval systems is a critical aspect of information retrieval research. New retrieval paradigms, such as retrieval in the Semantic Web, present an additional challenge for system evaluation, as no off-the-shelf test corpora for evaluation exist. This paper describes the approach taken to evaluate an information retrieval system built for the Semantic Desktop and demonstrates how standard measures from information retrieval research are employed for evaluation.
Lux M.
2007
Is Web 2.0 just hype, or just a buzzword which might disappear in the near future? One way to find answers to these questions is to investigate the actual benefit of the Web 2.0 for real use cases. Within this contribution we study a very special aspect of the Web 2.0, the folksonomy, and its use within self-directed learning. Guided by conceptual principles of emergent computing, we point out methods which might be able to let semantics emerge from folksonomies, and discuss the effect of the results in self-directed learning.
Strohmaier M., Lindstaedt Stefanie
2007
Purpose: The purpose of this contribution is to motivate a new, rapid approach to modeling knowledge work in organizational settings and to introduce a software tool that demonstrates the viability of the envisioned concept.
Approach: Based on existing modeling structures, the KnowFlow Toolset, which aids knowledge analysts in rapidly conducting interviews and in conducting multi-perspective analysis of organizational knowledge work, is introduced.
Findings: It is demonstrated how rapid knowledge work visualization can be conducted largely without human modelers by developing an interview structure that allows for self-service interviews. Two application scenarios illustrate the pressing need for, and the potentials of, rapid knowledge work visualizations in organizational settings.
Research implications: The efforts necessary for traditional modeling approaches in the area of knowledge management are often prohibitive. This contribution argues that future research needs to take the economic constraints of organizational settings into account in order to realize the full potential of knowledge work management.
Practical implications: This work picks up a problem identified in practice and proposes the novel concept of rapid knowledge work visualization for making knowledge work modeling in organizations more feasible.
Value: This work develops a vision of rapid knowledge work visualization and introduces a tool-supported approach that addresses some of the identified challenges.
Rollett H., Lux M., Strohmaier M., Dösinger G.
2007
While there is a lot of hype around various concepts associated with the term Web 2.0 in industry, little academic research has so far been conducted on the implications of this new approach for the domain of education. Much of what goes by the name of Web 2.0 can, in fact, be regarded as new kinds of learning technologies, and can be utilised as such. This paper explains the background of Web 2.0, investigates the implications for knowledge transfer in general, and then discusses its particular use in eLearning contexts with the help of short scenarios. The main challenge in the future will be to maintain essential Web 2.0 attributes, such as trust, openness, voluntariness and self-organisation, when applying Web 2.0 tools in institutional contexts.
Kröll Mark, Rath Andreas S., Weber Nicolas, Lindstaedt Stefanie, Granitzer Michael
2007
Knowledge-intensive work plays an increasingly important role in organisations of all types. Knowledge workers contribute their effort to achieve a common purpose; they are part of (business) processes. Workflow Management Systems support them during their daily work, featuring guidance and providing intelligent resource delivery. However, the emergence of richly structured, heterogeneous datasets requires a reassessment of existing mining techniques, which do not take possible relations between individual instances into account. Neglecting these relations might lead to inappropriate conclusions about the data. In order to uphold the quality of support for knowledge workers, the application of mining methods that consider structure information rather than content information is necessary. In the scope of the research project DYONIPOS, user interaction patterns, e.g., relations between users, resources and tasks, are mapped in the form of graphs. We utilize graph kernels to exploit structural information and apply Support Vector Machines to classify task instances into task models.
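The combination of a graph kernel with a kernel-based classifier can be illustrated with a minimal sketch. This is not the DYONIPOS implementation: the toy interaction graphs, the node-label histogram kernel, and the nearest-neighbour rule (standing in for an SVM) are all invented for illustration.

```python
from collections import Counter

def label_histogram_kernel(g1, g2):
    # Graphs are dicts: node -> label. The kernel is the inner product
    # of node-label count vectors (a valid positive semi-definite kernel).
    c1, c2 = Counter(g1.values()), Counter(g2.values())
    return sum(c1[label] * c2[label] for label in c1)

def classify(graph, training):
    # 1-nearest-neighbour in the normalised kernel-induced feature space;
    # any PSD kernel like this one could equally feed an SVM.
    def sim(g):
        k = label_histogram_kernel(graph, g)
        norm = (label_histogram_kernel(graph, graph) *
                label_histogram_kernel(g, g)) ** 0.5
        return k / norm if norm else 0.0
    return max(training, key=lambda item: sim(item[0]))[1]

# Toy "interaction graphs": nodes are events, labels are resource types.
writing = {0: "doc", 1: "doc", 2: "editor"}
mailing = {0: "mail", 1: "mail", 2: "contact"}
query   = {0: "doc", 1: "editor", 2: "doc", 3: "doc"}

training = [(writing, "authoring task"), (mailing, "communication task")]
print(classify(query, training))  # -> authoring task
```

Note that a histogram kernel ignores edges entirely; real graph kernels (e.g., random-walk or Weisfeiler-Lehman kernels) also compare structure, which is the point the abstract makes about structure information versus content information.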
Burgsteiner H., Kröll Mark, Leopold A., Steinbauer G.
2007
The prediction of time series is an important task in finance, economy, object tracking, state estimation and robotics. Prediction is in general either based on a well-known mathematical description of the system behind the time series or learned from previously collected time series. In this work we introduce a novel approach to learn predictions of real world time series like object trajectories in robotics. In a sequence of experiments we evaluate whether a liquid state machine in combination with a supervised learning algorithm can be used to predict ball trajectories with input data coming from a video camera mounted on a robot participating in the RoboCup. The pre-processed video data is fed into a recurrent spiking neural network. Connections to some output neurons are trained by linear regression to predict the position of a ball in various time steps ahead. The main advantages of this approach are that due to the nonlinear projection of the input data to a high-dimensional space simple learning algorithms can be used, that the liquid state machine provides temporal memory capabilities and that this kind of computation appears biologically more plausible than conventional methods for prediction. Our results support the idea that learning with a liquid state machine is a generic powerful tool for prediction.
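The computational scheme described above, a fixed random recurrent network whose output connections alone are trained by linear regression, can be sketched with a conventional (non-spiking) echo state network. Everything here (reservoir size, spectral radius, the sine input standing in for ball-trajectory data) is an invented illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, horizon = 100, 5                        # reservoir size, look-ahead steps

W_in = rng.uniform(-0.5, 0.5, (n, 1))      # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n, n))         # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(u):
    # Drive the fixed recurrent network and record its state trajectory
    # (the "liquid" states providing temporal memory).
    x, states = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

t = np.arange(600)
signal = np.sin(0.1 * t)                   # toy stand-in for a trajectory
states = run_reservoir(signal)

# Only the readout is learned: ridge-regularised linear regression from
# reservoir states to the signal `horizon` steps ahead.
X, y = states[100:400], signal[100 + horizon:400 + horizon]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)

pred = states[400:600 - horizon] @ w       # held-out continuation
target = signal[400 + horizon:]
mse = np.mean((pred - target) ** 2)
# Baseline: "no change" prediction (repeat the current value).
baseline = np.mean((signal[400:600 - horizon] - target) ** 2)
print(mse < baseline)
```

The design point the abstract stresses carries over: because the input is projected nonlinearly into a high-dimensional state space, a simple linear learning algorithm suffices for the prediction itself.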
Rath Andreas S., Kröll Mark, Lindstaedt Stefanie, Granitzer Michael
2007
Knowledge-intensive organizations demand a rethinking of business process awareness. Their employees are knowledge workers, who perform their tasks in a weakly structured way. Stiff organizational processes have to be relaxed, adapted and made more flexible in order to provide the essential freedom requested by knowledge workers. For effectively and efficiently supporting this type of creative worker, the hidden patterns, i.e. how they reach their goals, have to be discovered. This paper focuses on perceiving knowledge workers' work habits in an automatic way in order to bring their work patterns to the surface. Capturing low-level operating system events, observing user interactions on a fine-granular level and performing in-depth application inspection provide the opportunity to interrelate the received data. In the scope of the research project DYONIPOS, these interrelation abilities are utilized to semantically relate and enrich the captured data to picture the actual task of a knowledge worker. Once the goal of a knowledge worker is clear, intelligent information delivery can be applied.
Strohmaier M., Lux M., Granitzer Michael, Scheir Peter, Liaskos S., Yu E.
2007
Kooken J., Ley Tobias, de Hoog R.
2007
Any software development project is based on assumptions about the state of the world that probably will hold when it is fielded. Investigating whether they are true can be seen as an important task. This paper describes how an empirical investigation was designed and conducted for the EU funded APOSDLE project. This project aims at supporting informal learning during work. Four basic assumptions are derived from the project plan and subsequently investigated in a two-phase study using several methods, including workplace observations and a survey. The results show that most of the assumptions are valid in the current work context of knowledge workers. In addition more specific suggestions for the design of the prospective APOSDLE application could be derived. Though requiring a substantial effort, carrying out studies like this can be seen as important for longer term software development projects.
Lokaiczyk R., Godehardt E., Faatz A., Goertz M., Kienle A., Wessner M., Ulbrich Armin
2007
Scheir Peter, Granitzer Michael, Lindstaedt Stefanie, Hofmair P.
2006
In this contribution we present a tool for annotating documents, which are used for work-integrated learning, with concepts from an ontology. To allow for annotating directly while creating or editing an ontology, the tool was realized as a plug-in for the ontology editor Protégé. Annotating documents with semantic metadata is a laborious task: most of the time, knowledge representations are created independently from the resources that should be annotated, and additionally, in most work environments a high number of documents exist. To increase the efficiency of the person annotating, in our tool the process of assigning concepts to text documents is supported by automatic text classification.
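Classifier-supported concept assignment of this kind can be sketched minimally. The centroid text classifier below, the example snippets, and the concept names are invented for illustration; they are not the tool's actual method.

```python
import math
from collections import Counter

def tf(text):
    # Bag-of-words term frequencies.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[term] * b[term] for term in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_concepts(document, examples, threshold=0.2):
    # Build one term-frequency centroid per ontology concept from
    # already-annotated example snippets, then rank concepts by
    # cosine similarity to the new document.
    centroids = {}
    for concept, snippet in examples:
        centroids.setdefault(concept, Counter()).update(tf(snippet))
    scores = {c: cosine(tf(document), v) for c, v in centroids.items()}
    return [c for c, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s >= threshold]

examples = [
    ("requirements engineering", "elicit and validate stakeholder requirements"),
    ("requirements engineering", "requirements specification and traceability"),
    ("usability testing", "think aloud usability test with end users"),
]
doc = "how to validate requirements with stakeholder interviews"
print(suggest_concepts(doc, examples))  # -> ['requirements engineering']
```

The annotator then confirms or rejects the suggested concepts, which is where the efficiency gain described in the abstract comes from.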
Rath Andreas S., Kröll Mark, Andrews K., Lindstaedt Stefanie, Granitzer Michael
2006
In a knowledge-intensive business environment, knowledge workers perform their tasks in highly creative ways. This essential freedom required by knowledge workers often conflicts with their organization's need for standardization, control, and transparency. Within this context, the research project DYONIPOS aims to mitigate this contradiction by supporting the process engineer with insights into the process executer's working behavior. These insights constitute the basis for balanced process modeling. DYONIPOS provides a process engineer support environment with advanced process modeling services, such as process visualization, standard process validation, and ad-hoc process analysis and optimization services.
Lindstaedt Stefanie, Ulbrich Armin
2006
Granitzer Michael, Lindstaedt Stefanie, Tochtermann K., Kröll Mark, Rath Andreas S.
2006
Knowledge-intensive work plays an increasingly important role in organisations of all types. This work is characterized by a defined input and a defined output, but not by the way the input is transformed into an output. Within this context, the research project DYONIPOS aims at encouraging the two crucial roles in a knowledge-intensive organization: the process executer and the process engineer. Ad-hoc support will be provided for the knowledge worker by synergizing the development of context-sensitive, intelligent, and agile semantic technologies with contextual retrieval. DYONIPOS provides process executers with guidance through business processes and just-in-time resource support based on the current user context, which are the focus of this paper.
Ley Tobias, Kump Barbara, Lindstaedt Stefanie, Albert D., Maiden N. A. M., Jones S.
2006
Challenges for learning in knowledge work are discussed. These include the challenge to better support self-directed learning while addressing organizational goals and constraints at the same time, and to provide guidance for learning. The use of competencies is introduced as a way to deal with these challenges. Specifically, the competence performance approach offers ways to better leverage organizational context and to support informal learning interventions. A case study illustrates the application of the competence performance approach for the learning domain of requirements engineering. We close with conclusions and an outlook on future work.
Lindstaedt Stefanie, Mayer H.
2006
The goal of the APOSDLE (Advanced Process-Oriented Self-Directed Learning environment) project is to enhance knowledge worker productivity by supporting informal learning activities in the context of knowledge workers' everyday work processes and within their work environments. This contribution seeks to communicate the ideas behind this abstract vision to the reader by using a storyboard, scenarios and mock-ups. The project just started in March 2006 and is funded within the European Commission's 6th Framework Program under the IST work program. APOSDLE is an Integrated Project jointly coordinated by the Know-Center, Austria's Competence Centre for Knowledge Management, and Joanneum Research. APOSDLE brings together 12 partners from 7 European countries.
Ulbrich Armin, Lindstaedt Stefanie, Scheir Peter, Goertz M.
2006
This contribution introduces the so-called Workplace Learning Context as an essential conceptualisation supporting self-directed learning experiences directly at the workplace. The Workplace Learning Context is to be analysed and exploited for retrieving 'learning' material that best matches a knowledge worker's current learning needs. In doing so, several different 'flavours' of work-integrated learning can be realised, including task learning, competency-gap-based support and domain-related support. The Workplace Learning Context Model, which is also outlined in this contribution, forms the technical representation of the Workplace Learning Context.
Strohmaier M.
2005
Ley Tobias, Lindstaedt Stefanie, Albert D.
2005
This paper seeks to suggest ways to support informal, self-directed, work-integrated learning within organizations. We focus on a special type of learning in organizations, namely competency development, that is, the purposeful development of employee capabilities to perform well in a large array of situations. As competency development is inherently a self-directed development activity, we seek to support these activities primarily in an informal learning context. AD-HOC environments, which allow employees context-specific access to documents in a knowledge repository, have been suggested to support learning in the workplace. In this paper, we suggest using the competence performance framework as a means to enhance the capabilities of AD-HOC environments to support competency development. The framework formalizes the tasks employees are working on and the competencies needed to perform the tasks. Relating tasks and competencies results in a competence performance structure, which structures both tasks and competencies in terms of learning prerequisites. We conclude with two scenarios that make use of methods established in informal learning research. The scenarios show how competence performance structures enhance feedback mechanisms in a coaching process between supervisor and employee and provide assistance for self-directed learning from a knowledge repository.
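The competence performance formalism lends itself to a small worked example. The task and competency names below are invented; the sketch only illustrates the kind of structure the framework induces: a competency-gap computation and a prerequisite ordering of tasks by competency-set inclusion.

```python
# Toy task -> required-competency mapping (invented for illustration).
tasks = {
    "write use case":       {"domain knowledge", "UML"},
    "model goal hierarchy": {"domain knowledge", "UML", "goal modelling"},
    "review requirements":  {"domain knowledge"},
}

def competency_gap(task, possessed):
    # Competencies required for the task that the employee still lacks.
    return tasks[task] - possessed

def is_prerequisite(easier, harder):
    # Learning-prerequisite ordering induced by competency-set inclusion.
    return tasks[easier] <= tasks[harder]

def performance_implies(performed_task):
    # Mastering a task implies mastery of every task whose competency
    # set is contained in its own (the surmise relation on tasks).
    return [t for t in tasks if tasks[t] <= tasks[performed_task]]

employee = {"domain knowledge", "UML"}
print(sorted(competency_gap("model goal hierarchy", employee)))
# -> ['goal modelling']
print(is_prerequisite("review requirements", "write use case"))  # -> True
```

A coaching scenario as described in the abstract would read the gap set as the next learning goals, while the prerequisite ordering sequences the supporting material.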
Timbrell G., Koller S., Schefe N., Lindstaedt Stefanie
2005
This paper explores a process view of call-centres and the knowledge infrastructures that support these processes. As call-centres grow and become more complex in their function and organisation, so do the knowledge infrastructures required to support their size and complexity. This study suggests a knowledge-based hierarchy of 'advice-type' call-centres and discusses associated knowledge management strategies for different-sized centres. It introduces a Knowledge Infrastructure Hierarchy model, with which it is possible to analyze and classify call-centre knowledge infrastructures. The model also demonstrates different types of interventions supporting knowledge management in call-centres. Finally, the paper discusses the possibilities of applying traditional maturity model approaches in this context.
Lindstaedt Stefanie , Ley Tobias, Farmer Johannes
2004
2004
Maurer H.
2004
Dösinger G.
2004
Andrews K., Kienreich Wolfgang, Sabol Vedran, Granitzer Michael
2004
2004
Ley Tobias, Albert D.
2004
Maurer H.
2004
Ley Tobias
2004
2004
Dösinger G., Gissing B.
2004
Göstinger G., Puntschart I.
2004
Gissing B.
2004
Hrastnik J., Rollett H., Strohmaier M.
2004
Granitzer Michael, Kienreich Wolfgang, Sabol Vedran, Andrews K.
2004
Lindstaedt Stefanie , Koller S., Krämer T.
2004
Maurer H.
2004
Lux M., Granitzer Michael, Kienreich Wolfgang, Sabol Vedran, Klieber Hans-Werner, Sarka W.
2004
Bailer Werner, Mayer H., Neuschmied H., Haas W., Lux M., Klieber Hans-Werner
2004
Retrieval in current multimedia databases is usually limited to browsing and searching based on low-level visual features and explicit textual descriptors. Semantic aspects of visual information are mainly described in full-text attributes or mapped onto specialized, application-specific description schemes. Result lists of queries are commonly represented by textual descriptions and single key frames. This approach is valid for text documents and images, but is often insufficient to represent video content in a meaningful way. In this paper we present a multimedia retrieval framework focusing on video objects, which fully relies on the MPEG-7 standard as its information base. It provides a content-based retrieval interface which uses hierarchical content-based video summaries to allow for quick viewing and browsing through search results, even in bandwidth-limited Web applications. Additionally, semantic meaning about video content can be annotated based on domain-specific ontologies, enabling a more targeted search for content. Our experiences and results with these techniques are discussed in this paper.
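The segment-level semantic annotation and retrieval described above can be illustrated with a small sketch. The XML below is only MPEG-7-style, not schema-exact, and the element names, segment times, and labels are hypothetical, chosen for illustration rather than taken from the paper's framework.

```python
import xml.etree.ElementTree as ET

# Simplified MPEG-7-style description (element names illustrative,
# not schema-exact): video segments carrying free-text annotations.
DESCRIPTION = """
<Video id="v1">
  <Segment start="00:00" end="01:30">
    <Annotation>goal scene, penalty kick</Annotation>
  </Segment>
  <Segment start="01:30" end="03:00">
    <Annotation>interview with coach</Annotation>
  </Segment>
</Video>
"""

def segments_with_label(xml_text, label):
    """Return (start, end) pairs of segments whose annotation
    mentions the given semantic label."""
    root = ET.fromstring(xml_text)
    hits = []
    for seg in root.iter("Segment"):
        note = seg.findtext("Annotation", default="")
        if label.lower() in note.lower():
            hits.append((seg.get("start"), seg.get("end")))
    return hits
```

Filtering on annotations like this lets a client jump directly to matching segments instead of scanning whole videos, which is the point of targeted, content-based search.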
Lux M., Klieber Hans-Werner, Granitzer Michael
2004
Farmer J., Lindstaedt Stefanie , Droschl G., Luttenberger P.
2004
Carrying out today’s knowledge work without information and communication technology (ICT) is unimaginable. ICT makes it possible to process and exchange information quickly and efficiently. However, accomplishing tasks with ICT is often tedious: colleagues have to be asked how best to proceed. Necessary resources have to be searched for in the intranet and internet. And one has to become familiar with the various systems and tools. This way, solving a simple task can become a time-consuming process for inexperienced employees and also for those who are asked for their expertise. Therefore, at the Know-Center Graz, Austria, the AD-HOC methodology has been developed to support knowledge workers in task-oriented learning and teaching situations. This methodology is used to analyse the work processes, to identify the needed resources, tools, and systems, and finally to design an AD-HOC Environment. In this environment, systems and tools are arranged for specific work processes. Users are then guided through their work tasks and are provided with the necessary resources instantly. This article presents the AD-HOC methodology. It analyses the obstacles that hamper efficient knowledge work and how AD-HOC overcomes them. Finally, the support of users at their specific work tasks by deployed AD-HOC Environments is shown in two field studies.
Timbrell G., Koller S., Lindstaedt Stefanie
2004
Lindstaedt Stefanie , Farmer J.
2004
Lindstaedt Stefanie , Farmer J., Ley Tobias
2004
Granitzer Michael, Kienreich Wolfgang, Sabol Vedran, Dösinger G.
2003
Lux M., Granitzer Michael, Sabol Vedran, Kienreich Wolfgang, Becker J.
2003
Kienreich Wolfgang, Sabol Vedran, Granitzer Michael, Becker J.
2003
2003
Woels K., Kirchpal S., Ley Tobias
2003
2003
Maurer H.
2003
Westbomke J.
2003
Ley Tobias
2003
Ley Tobias, Albert D.
2003
Ley Tobias, Albert D.
2003
Andrews K., Kienreich Wolfgang, Sabol Vedran, Granitzer Michael
2003
Lindstaedt Stefanie , Farmer J., Hrastnik J., Rollett H., Strohmaier M.
2003
Personalizable portals as windows into organizational memories are increasingly used in practice. When designing these portals, the question arises of how to structure the information: on the one hand, the daily work process should be supported; on the other hand, information should also be made accessible that places the process in a larger context. Our experience in portal development has shown that two different strategies are applied: process focus and knowledge focus. The choice of strategy depends on the one hand on the goals the portal is meant to fulfil, and on the other hand on the initial situation in the adopting organization. This contribution presents the two design strategies and identifies framework conditions that can help in choosing between them.
Kappe F., Droschl G., Kienreich Wolfgang, Sabol Vedran, Andrews K., Granitzer Michael, Auer P.
2003
Kienreich Wolfgang, Sabol Vedran, Granitzer Michael, Kappe F., Andrews K.
2003
Klieber Hans-Werner, Lux M., Mayer H., Neuschmied H., Haas W.
2003
Ley Tobias, Albert D.
2003
We present a formalisation for employee competencies which is based on a psychological framework separating the overt behavioural level from the underlying competence level. On the competence level, employees draw on action potentials (knowledge, skills and abilities) which in a given situation produce performance outcomes on the behavioural level. Our conception is based on the competence performance approach by [Korossy 1997] and [Korossy 1999], which uses mathematical structures to establish prerequisite relations on the competence and the performance level. From this framework, a methodology for assessing competencies in dynamic work domains is developed, which uses documents employees have created as evidence of the competencies they have been acquiring. By means of a case study, we show how the methodology and the resulting structures can be validated in an organisational setting. From the resulting structures, employee competency profiles can be derived and development planning can be supported. The structures also provide the means for making inferences within the competency assessment process, which in turn facilitates continuous updating of competency profiles and maintenance of the structures.
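The prerequisite relations underlying a competence performance structure can be sketched as a small data structure. The competencies and tasks below are hypothetical examples, not those of the case study; the sketch only illustrates how mapping tasks to required competency sets induces a prerequisite ordering and supports inference from observed performance.

```python
# Minimal sketch of a competence-performance structure (hypothetical
# competencies and tasks, not taken from the case study).
# Each task on the performance level requires a set of competencies
# on the competence level; the subset relation between these sets
# induces the prerequisite ordering between tasks.

TASK_COMPETENCIES = {
    "write_report": {"domain_knowledge", "writing"},
    "review_report": {"domain_knowledge", "writing", "quality_assurance"},
    "plan_project": {"domain_knowledge", "project_management"},
}

def feasible_tasks(profile):
    """Tasks an employee can perform given their competency profile."""
    return {t for t, req in TASK_COMPETENCIES.items() if req <= profile}

def inferred_profile(performed_tasks):
    """Competencies inferred from tasks observably performed,
    e.g. evidenced by documents the employee has created."""
    comps = set()
    for t in performed_tasks:
        comps |= TASK_COMPETENCIES[t]
    return comps

def is_prerequisite(task_a, task_b):
    """task_a is a learning prerequisite of task_b when its required
    competencies are a subset of task_b's."""
    return TASK_COMPETENCIES[task_a] <= TASK_COMPETENCIES[task_b]
```

In this reading, observing that someone has reviewed a report licenses the inference that they also hold the competencies for writing one, which is what makes continuous profile updating from work artifacts possible.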
Ulbrich Armin, Kandpal D.
2003
Sabol Vedran, Kienreich Wolfgang, Granitzer Michael, Becker J.
2003
Tochtermann K., Zirm K., Lindstaedt Stefanie
2003
Ulbrich Armin, Kandpal D.
2003
Strohmaier M.
2003
Clancy J.M. , Elliott G., Ley Tobias, Odomei M.M., Wearing A.J., McLennan J., Thorsteinsson E.B.
2003
Kandpal D., Ulbrich Armin
2003
Farmer J.
2003
Strohmaier M.
2003
Lux M., Klieber Hans-Werner, Becker J., Mayer H., Neuschmied H., Haas W.
2002
Lux M., Klieber Hans-Werner, Becker J., Mayer H., Neuschmied H., Haas W.
2002
The evolution of the Web is accompanied not only by an increasing diversity of multimedia but also by new requirements for intelligent search capabilities, user-specific assistance, intuitive user interfaces and platform-independent information presentation. To meet these and further upcoming requirements, new standardized Web technologies and XML-based description languages are used. The Web information space has transformed into a knowledge marketplace where participants located worldwide take part in the creation, annotation and consumption of knowledge. This paper describes the design of semantic retrieval frameworks and a prototype implementation for audio and video annotation, storage and retrieval using the MPEG-7 standard and Semantic Web reference implementations. MPEG-7 plays an important role in the standardized enrichment of multimedia with semantics on higher abstraction levels and a related improvement of query results.
2002
Lindstaedt Stefanie
2002
Dösinger G., Ley Tobias
2002
Becker J., Granitzer Michael, Kienreich Wolfgang, Sabol Vedran
2002
Andrews K., Kienreich Wolfgang, Sabol Vedran, Becker J., Kappe F., Droschl G., Granitzer Michael, Auer P.
2002
Maurer H.
2002
2002
Maurer H.
2002
Pillmann W.
2002
Ulbrich Armin, Ausserhofer A., Dietinger T., Raback W., Hoitsch P.
2002
2002
Maurer H.
2002
Lindstaedt Stefanie , Fischer M.
2002
Maurer H.
2002
Westbomke J., Kussmaul A., Raiber A., Haase M., Hicks D., Lindstaedt Stefanie
2002
This paper presents research results obtained from the project Personal Adaptable Digital Library Environment (PADDLE). The main focus of the DFG-funded research project is to apply concepts of knowledge management to digital libraries by introducing personalization techniques. The idea is that the specific needs, experiences, skills and tasks of a knowledge worker using a digital library can be taken into account. Metadata is the key issue for doing this. Therefore the PADDLE system architecture describes a metadata manager, which allows the association of metadata with the knowledge objects stored in distributed information resources. Based on this architecture, several personalization concepts like workspaces and profiles are introduced. Finally, a geographic information portal is described that realizes a new way of seeking and accessing geodata-related knowledge objects within a digital library.
Becker J., Lux M., Klieber Hans-Werner, Sabol Vedran, Kienreich Wolfgang
2002
Lindstaedt Stefanie , Strohmaier M., Rollett Herwig, Hrastnik Janez, Bruhnsen Karin, Droschl Georg, Gerold Markus
2002
One of the first questions each knowledge management project faces is: Which concrete activities are referred to under the name of knowledge management, and how do they relate to each other? To help answer this question and to provide guidance when introducing knowledge management, we have developed KMap. KMap is an environment which supports a practitioner in the interactive exploration of a map of knowledge management activities. The interaction helps trigger interesting questions crucial to the exploration of the solution space and makes hidden argumentation lines visible. KMap is not a new theory of knowledge management but a pragmatic “object to think with” and is currently in use in two case studies.
Lindstaedt Stefanie , Strohmaier M.
2002
Becker J., Lux M., Klieber Hans-Werner
2002
Sabol Vedran, Kienreich Wolfgang, Granitzer Michael, Becker J.
2002
Lindstaedt Stefanie
2002
Kappe F., Droschl G., Kienreich Wolfgang, Sabol Vedran, Becker J., Andrews K., Granitzer Michael, Auer P.
2002
2002
Ley Tobias, Rollett H., Dösinger G., Bruhnsen K., Droschl G.
2002
Riekert W.
2002
Lindstaedt Stefanie , Scheir Peter, Sarka W.
2002
2002
2002
2002
Lindstaedt Stefanie
2002
We consider cooperative learning and teaching situations in the context of daily work processes from a knowledge management perspective. In a case study at DaimlerChrysler, scenarios were developed in which knowledge is developed and passed on in groups in order to accomplish concrete work tasks under time pressure. In the context of a knowledge management system, we are currently developing methods and technical tools to support these task-oriented cooperative learning and teaching processes.
Ulbrich Armin, Ausserhofer A.
2002
Sabol Vedran, Kienreich Wolfgang, Granitzer Michael, Becker J., Andrews K.
2002
2002
2002
Ley Tobias, Ulbrich Armin
2002
Ley Tobias, Rollett H.
2001
Lindstaedt Stefanie
2001
Sabol Vedran
2001
Rollett H., Ley Tobias
2001
Andrews K., Gütl Christian, Moser J., Sabol Vedran, Lackner W.
2001
The xFIND gatherer-broker architecture provides a wealth of metadata, which can be used to provide sophisticated search functionality. Local or remote documents are indexed, and summaries and metadata are stored on an xFIND broker (server). An xFIND client can search a particular broker and access rich metadata for search result presentation, without having to fetch the original documents themselves. Search result sets are presented not only as a traditional ranked list, but also in an interactive scatterplot (Search Result Explorer) and using dynamic thematic clustering (VisIslands).
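The gatherer-broker split described above can be sketched in a few lines. This is a hypothetical illustration of the division of labour, not xFIND's actual API: the class and function names, the record fields, and the naive summarisation are all assumptions made for the sketch.

```python
# Hypothetical sketch of a gatherer-broker split: the gatherer indexes
# documents and pushes summaries/metadata to the broker; the client
# queries the broker without ever fetching the original documents.

class Broker:
    def __init__(self):
        self.index = []  # list of metadata records

    def register(self, record):
        self.index.append(record)

    def search(self, term):
        # Return rich metadata for result presentation,
        # not the documents themselves.
        term = term.lower()
        return [r for r in self.index
                if term in r["summary"].lower()
                or term in r["title"].lower()]

def gather(broker, url, title, text):
    """Index a local or remote document, storing only its metadata."""
    summary = text[:120]  # naive summary, for the sketch only
    broker.register({"url": url, "title": title, "summary": summary})
```

The design point is that search and presentation operate entirely on broker-side metadata, which is what keeps result browsing cheap for clients.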
Ulbrich Armin, Ausserhofer A., Dietinger T., Raback W., Hoitsch P.
2001
Lindstaedt Stefanie
2000
Scher Sebastian, Trügler Andreas, Abermann Jakob
Machine Learning (ML) and AI techniques, especially methods based on Deep Learning, have long been considered black boxes that might be good at predicting, but not at explaining predictions. This has changed recently, with more techniques becoming available that explain predictions by ML models, known as Explainable AI (XAI). These have also been adopted in climate science, because they could have the potential to help us understand the physics behind phenomena in geoscience. It is, however, unclear how large that potential really is and how these methods can be incorporated into the scientific process. In our study, we use the exemplary research question of which aspects of the large-scale atmospheric circulation affect specific local conditions. We compare the different answers to this question obtained with a range of different methods, from the traditional approach of targeted data analysis based on physical knowledge (such as dimensionality reduction based on physical reasoning) to purely data-driven and physics-unaware methods using Deep Learning with XAI techniques. Based on these insights, we discuss the usefulness and potential pitfalls of XAI for understanding and explaining phenomena in the geosciences.
Pammer-Schindler Viktoria, Lindstaedt Stefanie
Cicchinelli Analia, Pammer-Schindler Viktoria
Purpose – The goal of this study is to understand what drives people (i.e., their motivations, autonomous learning attitudes and learning interests) to volunteer as mentors for a program that helps families to ideate technological solutions to community problems.
Design/methodology/approach – A three-phase method was used to i) create volunteer mentor profiles; ii) elicit topics of interest; and iii) establish relationships between those. The mentor profiles were based on self-assessments of motivation, attitudes towards lifelong learning and self-regulated learning strategies. The topics of interest were defined by analyzing answers to reflection questions. Statistical methods were applied to analyze the relationships between the interests and the mentor profiles.
Findings – Three mentor groups (G1 “low,” G2 “high” and G3 “medium”) were identified based on pre-survey data via bottom-up clustering. Content analysis was used to define the topics of interest: communication skills; learning AI; mentoring; prototype development; problem-solving skills; and working with families. Examining relationships between the mentor profiles and the topics of interest showed that group G3 “medium,” with strong intrinsic motivation, had significantly more interest in working with families. The group with the overall highest scores (G2 “high”) expressed substantial interest in learning about AI. However, there was high variability between members of this group.
Originality/value – The study established different types of learning interests of volunteer mentors and related them to the mentor profiles based on motivation, self-regulated learning strategies and attitudes towards lifelong learning. Such knowledge can help organizations shape the volunteering experience, offering more value to volunteers. Furthermore, the reflection questions can be used by: i) volunteers as an instrument of reflection; and ii) organizations for eliciting the learning interests of volunteers.
Fessl Angela, Thalmann Stefan
In times of globalization, the workforce also needs to be able to go global. This holds true especially for technical experts with exclusive expertise. Together with a global manufacturing company, we addressed the challenge of sending staff to foreign countries to manage technical projects in the foreign language. We developed a socio-technical language learning concept that combines an online language learning platform with gamification features and conventional individual coaching sessions conducted virtually. We report on a project we conducted with an international manufacturing company in which native Spanish speakers learned English within two months. The approach was tested in a four-week trial with 10 participants. The target audience for this talk are HR professionals, educational technologists and all people interested in language learning. We expect that our talk will spark discussions about the combination of ICT-mediated learning and face-to-face learning in language learning, and also about the role of gamification in this process.