Razouk Houssam, Liu Xinglan, Kern Roman
2023
The Failure Mode Effect Analysis (FMEA) process is widely used in industry for risk assessment, as it effectively captures and documents domain-specific knowledge. This process is mainly concerned with causal domain knowledge. In practical applications, FMEAs encounter challenges in terms of comprehensibility, particularly related to inadequate coverage of listed failure modes and their corresponding effects and causes. This can be attributed to the limitations of the traditional brainstorming approaches typically employed in the FMEA process. Depending on the size and disciplinary diversity of the team conducting the analysis, these approaches may not adequately capture a comprehensive range of failure modes, leading to gaps in coverage. To this end, methods for improving FMEA knowledge comprehensibility are highly needed. A potential approach to address this gap is rooted in recent advances in common-sense knowledge graph completion, which have demonstrated the effectiveness of text-aware graph embedding techniques. However, the applicability of such methods in an industrial setting is limited. This paper addresses this issue for FMEA documents in an industrial environment. Here, the application of common-sense knowledge graph completion methods on FMEA documents from semiconductor manufacturing is studied. These methods achieve over 20% MRR on the test set, and 70% of the top 10 predictions were manually assessed to be plausible by domain experts. Based on this evaluation, the paper confirms that text-aware knowledge graph embeddings for common-sense knowledge graph completion are more effective than structure-only knowledge graph embeddings for improving FMEA knowledge comprehensibility. Additionally, we found that in-domain fine-tuning of the language model is beneficial for extracting more meaningful embeddings, thus improving the overall model performance.
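Since the abstract reports MRR and top-10 plausibility figures, a minimal sketch of how the standard link-prediction metrics MRR and Hits@10 are typically computed may help; the function names and the ranking callback are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's code): computing MRR and Hits@10 for
# knowledge graph completion, given a ranking function over candidate tails.
from typing import Callable, Iterable, List, Tuple

def mrr_and_hits10(
    test_triples: Iterable[Tuple[str, str, str]],
    rank_of_true_tail: Callable[[str, str, str], int],
) -> Tuple[float, float]:
    """rank_of_true_tail returns the 1-based rank of the gold tail entity
    among all candidate entities scored by the model (filtered setting)."""
    reciprocal_ranks: List[float] = []
    hits = 0
    for head, relation, tail in test_triples:
        rank = rank_of_true_tail(head, relation, tail)
        reciprocal_ranks.append(1.0 / rank)
        if rank <= 10:
            hits += 1
    n = len(reciprocal_ranks)
    return sum(reciprocal_ranks) / n, hits / n
```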
Malinverno Luca, Barros Vesna, Ghisoni Francesco, Visonà Giovanni, Kern Roman, Nickel Philip, Ventura Barbara Elvira, Simic Ilija, Stryeck Sarah, Manni Francesca, Ferri Cesar, Jean-Quartier Clair, Genga Laura, Schweikert Gabriele, Lovric Mario, Rosen-Zvi Michal
2023
Understanding the inner workings of machine-learning models has become a crucial point of discussion regarding the fairness and reliability of artificial intelligence (AI). In this perspective, we reveal insights from recently published scientific works on explainable AI (XAI) within the biomedical sciences. Specifically, we speculate that the COVID-19 pandemic is associated with the rate of publications in the field. Current research efforts seem to be directed more toward explaining black-box machine-learning models than designing novel interpretable architectures. Notably, an inflection period in the publication rate was observed in October 2020, when the quantity of XAI research in biomedical sciences surged upward significantly. While a universally accepted definition of explainability is unlikely, ongoing research efforts are pushing the biomedical field toward improving the robustness and reliability of applied machine learning, which we consider a positive trend.
Siddiqi Shafaq, Qureshi Faiza, Lindstaedt Stefanie, Kern Roman
2023
Outlier detection in non-independent and identically distributed (non-IID) data refers to identifying unusual or unexpected observations in datasets that do not follow the independent and identically distributed (IID) assumption. This presents a challenge in real-world datasets where correlations, dependencies, and complex structures are common. In recent literature, several methods have been proposed to address this issue; each method has its own strengths and limitations, and the selection depends on the data characteristics and application requirements. However, there is a lack of a comprehensive categorization of these methods in the literature. This study addresses this gap by systematically reviewing methods for outlier detection in non-IID data published from 2015 to 2023. The study focuses on three major aspects: data characteristics, methods, and evaluation measures. Regarding data characteristics, we discuss the differentiating properties of non-IID data. Then we review the recent methods proposed for outlier detection in non-IID data, covering their theoretical foundations and algorithmic approaches. Finally, we discuss the evaluation metrics proposed to measure the performance of these methods. Additionally, we present a taxonomy for organizing these methods and highlight the application domains of outlier detection in non-IID categorical data, outlier detection in federated learning, and outlier detection in attributed graphs. We provide a comprehensive overview of the datasets used in the selected literature. Moreover, we discuss open challenges in outlier detection for non-IID data to shed light on future research directions. By synthesizing the existing literature, this study contributes to advancing the understanding and development of outlier detection techniques in non-IID data settings.
Jantscher Michael, Gunzer Felix, Kern Roman, Hassler Eva, Tschauner Sebastian, Reishofer Gernot
2023
Recent advances in deep learning and natural language processing (NLP) have opened many new opportunities for automatic text understanding and text processing in the medical field. This is of great benefit as many clinical downstream tasks rely on information from unstructured clinical documents. However, for low-resource languages like German, the use of modern text processing applications that require a large amount of training data proves to be difficult, as only a few data sets are available, mainly due to legal restrictions. In this study, we present an information extraction framework that was initially pre-trained on real-world computed tomographic (CT) reports of head examinations, followed by domain-adaptive fine-tuning on reports from different imaging examinations. We show that in the pre-training phase, the semantic and contextual meaning of one clinical reporting domain can be captured and effectively transferred to foreign clinical imaging examinations. Moreover, we introduce an active learning approach with an intrinsic strategic sampling method to generate highly informative training data at low human annotation cost. We see that the model performance can be significantly improved by an appropriate selection of the data to be annotated, without the need to train the model on a specific downstream task. With a general annotation scheme that can be used not only in the radiology field but also in a broader clinical setting, we contribute to a more consistent labeling and annotation process that also facilitates the verification and evaluation of language models in the German clinical setting.
Gabler Philipp, Geiger Bernhard, Schuppler Barbara, Kern Roman
2023
Superficially, read and spontaneous speech—the two main kinds of training data for automatic speech recognition—appear as complementary, but are equal: pairs of texts and acoustic signals. Yet, spontaneous speech is typically harder for recognition. This is usually explained by different kinds of variation and noise, but there is a more fundamental deviation at play: for read speech, the audio signal is produced by recitation of the given text, whereas in spontaneous speech, the text is transcribed from a given signal. In this review, we embrace this difference by presenting a first introduction of causal reasoning into automatic speech recognition, and describing causality as a tool to study speaking styles and training data. After breaking down the data generation processes of read and spontaneous speech and analysing the domain from a causal perspective, we highlight how data generation by annotation must affect the interpretation of inference and performance. Our work discusses how various results from the causality literature regarding the impact of the direction of data generation mechanisms on learning and prediction apply to speech data. Finally, we argue how a causal perspective can support the understanding of models in speech processing regarding their behaviour, capabilities, and limitations.
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2023
This research presents an approach that combines stacked Gaussian processes (stacked GP) with target vector Bayesian optimization (BO) to solve multi-objective inverse problems of chained manufacturing processes. In this context, GP surrogate models represent individual manufacturing processes and are stacked to build a unified surrogate model that represents the entire manufacturing process chain. Using stacked GPs, epistemic uncertainty can be propagated through all chained manufacturing processes. To perform target vector BO, acquisition functions make use of a noncentral χ-squared distribution of the squared Euclidean distance between a given target vector and the surrogate model output. In BO of chained processes, one can either use a single unified surrogate model that represents the entire joint chain, or use a surrogate model for each individual process and cascade the optimization from the last to the first process. Literature suggests that a joint optimization approach using stacked GPs overestimates uncertainty, whereas a cascaded approach underestimates it. For improved target vector BO results of chained processes, we present an approach that combines methods which under- or overestimate uncertainties in an ensemble for rank aggregation. We present a thorough analysis of the proposed methods and evaluate them on two artificial use cases and on a typical manufacturing process chain: preforming and final pressing of an Inconel 625 superalloy billet.
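The acquisition functions mentioned above build on the noncentral χ²-distributed squared distance between the surrogate output and the target vector. Below is a hedged sketch of one such acquisition under the simplifying assumption of an isotropic Gaussian posterior N(μ, σ²I) over the k process outputs; the names, example values, and exact acquisition form are illustrative, not the paper's implementation.

```python
# Hedged sketch of a target-vector acquisition built on the noncentral
# chi-squared distribution. Assumes an isotropic GP posterior N(mu, sigma2 * I)
# over the k outputs; not the paper's code.
import numpy as np
from scipy.stats import ncx2

def target_vector_acquisition(mu: np.ndarray, sigma2: float,
                              target: np.ndarray, best_sq_dist: float) -> float:
    """Probability that the squared Euclidean distance between the (random)
    surrogate output and the target falls below the best distance seen so far.
    ||Y - t||^2 / sigma2 follows a noncentral chi-squared distribution with
    df = k and noncentrality ||mu - t||^2 / sigma2."""
    k = mu.shape[0]
    nc = float(np.sum((mu - target) ** 2)) / sigma2
    return float(ncx2.cdf(best_sq_dist / sigma2, df=k, nc=nc))

# Example: 3 outputs, posterior mean [1.0, 0.9, 1.1], variance 0.05,
# target [1, 1, 1], best squared distance observed so far 0.1.
print(target_vector_acquisition(np.array([1.0, 0.9, 1.1]), 0.05,
                                np.array([1.0, 1.0, 1.0]), 0.1))
```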
Gursch Heimo, Körner Stefan, Thaler Franz, Waltner Georg, Ganster Harald, Rinnhofer Alfred, Oberwinkler Christian, Meisenbichler Reinhard, Bischof Horst, Kern Roman
2022
Refuse separation and sorting is currently done by recycling plants that are manually optimised for a fixed refuse composition. Since refuse compositions constantly change, these plants either deliver suboptimal sorting performance or require constant monitoring and adjustments by the plant operators. Image recognition offers the possibility to continuously monitor the refuse composition on the conveyor belts in a sorting facility. When information about the refuse composition is combined with parameters and measurements of the sorting machinery, the sorting performance of a plant can be continuously monitored, problems detected, optimisations suggested and trends predicted. This article describes solutions for multispectral and 3D image capturing of refuse streams and evaluates the performance of image segmentation models. The image segmentation models are trained with synthetic training data to reduce the manual labelling effort and thus the cost of introducing image recognition. Furthermore, an outlook on the combination of image recognition data with parameters and measurements of the sorting machinery in a combined time series analysis is provided.
Liu Xinglan, Hussain Hussain, Razouk Houssam, Kern Roman
2022
Graph embedding methods have emerged as effective solutions for knowledge graph completion. However, such methods are typically tested on benchmark datasets such as Freebase, but show limited performance when applied to sparse knowledge graphs with orders of magnitude lower density. To compensate for the lack of structure in a sparse graph, low-dimensional representations of textual information such as word2vec or BERT embeddings have been used. This paper proposes a BERT-based method (BERT-ConvE) to exploit transfer learning of BERT in combination with the convolutional network model ConvE. Compared to existing text-aware approaches, we effectively make use of the context dependency of BERT embeddings by optimizing the feature extraction strategies. Experiments on ConceptNet show that the proposed method outperforms strong baselines by 50% on knowledge graph completion tasks. The proposed method is suitable for sparse graphs, as also demonstrated by empirical studies on the ATOMIC and sparsified-FB15k-237 datasets. Its effectiveness and simplicity make it appealing for industrial applications.
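A minimal sketch of the kind of text-aware feature extraction described here: contextual BERT embeddings of node labels, obtained with the Hugging Face transformers API, that could feed a ConvE-style scorer. This illustrates the general idea only and is not the released BERT-ConvE code; the example label is made up.

```python
# Illustrative sketch: mean-pooled BERT features for knowledge graph node labels.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def node_embedding(label: str) -> torch.Tensor:
    """Mean-pool the last hidden states of the node's textual label."""
    inputs = tokenizer(label, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)     # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 768)

emb = node_embedding("take a nap")  # e.g. a ConceptNet-style node label
print(emb.shape)                    # torch.Size([1, 768])
```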
Salhofer Eileen, Liu Xinglan, Kern Roman
2022
State of the art performances for entity extraction tasks are achieved by supervised learning, specifically by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering the frequently asked questions: (1) how many training samples to annotate? (2) which examples to annotate? We found that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, especially in the biomedical domain. The key features for guiding the selection of high-performing training instances are identified to be pseudo-perplexity and sentence length. The best training dataset constructed using our proposed selection strategy shows an F1 score that is equivalent to a random selection with twice the sample size. The requirement of only a small number of training data implies cheaper implementations and opens the door to a wider range of applications.
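Pseudo-perplexity, one of the two selection features identified above, can be computed with a masked language model by masking one token at a time and scoring the gold token. The sketch below is a hedged illustration of that general recipe, not the authors' exact setup; the model checkpoint and example sentence are placeholders.

```python
# Hedged sketch of pseudo-perplexity with a masked language model.
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    log_probs = []
    # Skip [CLS] (first) and [SEP] (last) when masking.
    for i in range(1, ids.shape[0] - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, dim=-1)[ids[i]].item())
    return math.exp(-sum(log_probs) / len(log_probs))

print(pseudo_perplexity("The tumor was located in the left frontal lobe."))
```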
Razouk Houssam, Kern Roman
2022
Digitalization of causal domain knowledge is crucial, especially since the inclusion of causal domain knowledge in data analysis processes helps to avoid biased results. To extract such knowledge, Failure Mode Effect Analysis (FMEA) documents represent a valuable data source. Originally, FMEA documents were designed to be exclusively produced and interpreted by human domain experts. As a consequence, these documents often suffer from data consistency issues. This paper argues that, due to the transitive perception of causal relations, discordant and merged information cases are likely to occur. Thus, we propose to improve the consistency of FMEA documents as a step towards more efficient use of causal domain knowledge. In contrast to other work, this paper focuses on the consistency of the causal relations expressed in FMEA documents. To this end, based on an explicit scheme of types of inconsistencies derived from the causal perspective, novel methods to enhance the data quality in FMEA documents are presented. Data quality improvement will significantly improve downstream tasks, such as root cause analysis and automatic process control.
Sousa Samuel, Kern Roman
2022
Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), thereby enforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categories to organize them have been introduced yet, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy for classifying the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, this review presents future research directions to guide successive research and development of privacy-preserving NLP models.
Koutroulis Georgios, Mutlu Belgin, Kern Roman
2022
In prognostics and health management (PHM), the task of constructing comprehensive health indicators (HI) from huge amounts of condition monitoring data plays a crucial role. HIs may influence both the accuracy and reliability of remaining useful life (RUL) prediction, and ultimately the assessment of the system's degradation status. Most of the existing methods assume a priori an oversimplified degradation law of the investigated machinery, which in practice may not appropriately reflect the reality. Especially for safety-critical engineered systems with a high level of complexity that operate under time-varying external conditions, degradation labels are not available, and hence, supervised approaches are not applicable. To address the above-mentioned challenges for extrapolating HI values, we propose a novel anticausal-based framework with reduced model complexity, by predicting the cause from the causal model's effects. Two heuristic methods are presented for inferring the structural causal models. First, the causal driver is identified from the complexity estimate of the time series, and second, the set of effect-measuring parameters is inferred via Granger causality. Once the causal models are known, off-line anticausal learning with only a few healthy cycles ensures strong generalization capabilities that help obtain robust online predictions of HIs. We validate and compare our framework on NASA's N-CMAPSS dataset with real-world operating conditions as recorded on board of a commercial jet, which are utilized to further enhance the CMAPSS simulation model. The proposed framework with anticausal learning outperforms existing deep learning architectures by reducing the average root-mean-square error (RMSE) across all investigated units by nearly 65%.
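For the Granger-causality step used to infer the effect-measuring parameters, a minimal sketch with statsmodels is shown below; the synthetic driver/effect series and variable names are illustrative and not the N-CMAPSS data.

```python
# Minimal sketch, assuming Granger causality is used to select effect parameters:
# statsmodels' grangercausalitytests checks whether lags of column 1 ("driver")
# help predict column 0 ("effect"). Data is synthetic for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
driver = rng.normal(size=500)                              # hypothetical causal driver
effect = np.roll(driver, 2) + 0.1 * rng.normal(size=500)   # lagged response

df = pd.DataFrame({"effect": effect, "driver": driver})
# Tests H0: "driver" does NOT Granger-cause "effect", for lags 1..5.
results = grangercausalitytests(df[["effect", "driver"]], maxlag=5)
p_values = {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}
print(p_values)  # small p-values suggest the driver Granger-causes the effect
```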
Hoffer Johannes Georg, Ofner Andreas Benjamin, Rohrhofer Franz Martin, Lovric Mario, Kern Roman, Lindstaedt Stefanie, Geiger Bernhard
2022
Most engineering domains abound with models derived from first principles that have been proven to be effective for decades. These models are not only a valuable source of knowledge, but they also form the basis of simulations. The recent trend of digitization has complemented these models with data in all forms and variants, such as process monitoring time series, measured material characteristics, and stored production parameters. Theory-inspired machine learning combines the available models and data, reaping the benefits of established knowledge and the capabilities of modern, data-driven approaches. Compared to purely physics- or purely data-driven models, the models resulting from theory-inspired machine learning are often more accurate and less complex, extrapolate better, or allow faster model training or inference. In this short survey, we introduce and discuss several prominent approaches to theory-inspired machine learning and show how they were applied in the fields of welding, joining, additive manufacturing, and metal forming.
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2022
The avoidance of scrap and the adherence to tolerances is an important goal in manufacturing. This requires a good engineering understanding of the underlying process. To achieve this, real physical experiments can be conducted. However, they are expensive in time and resources, and can slow down production. A promising way to overcome these drawbacks is process exploration through simulation, where the finite element method (FEM) is a well-established and robust simulation method. While FEM simulation can provide high-resolution results, it requires extensive computing resources to do so. In addition, the simulation design often depends on unknown process properties. To circumvent these drawbacks, we present a Gaussian Process surrogate model approach that accounts for real physical manufacturing process uncertainties and acts as a substitute for expensive FEM simulation, resulting in a fast and robust method that adequately depicts reality. We demonstrate that active learning can be easily applied with our surrogate model to make more efficient use of computational resources. On top of that, we present a novel optimization method that treats aleatoric and epistemic uncertainties separately, allowing for greater flexibility in solving inverse problems. We evaluate our model using a typical manufacturing use case, the preforming of an Inconel 625 superalloy billet on a forging press.
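A hedged sketch of a variance-based active-learning loop around a Gaussian process surrogate, in the spirit of the abstract; the cheap test function stands in for the expensive FEM simulation, and all names and hyperparameters are assumptions for illustration.

```python
# Hedged sketch: GP surrogate with variance-based active learning (not the
# paper's implementation; the "simulation" is a cheap stand-in for FEM).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2   # stand-in for an FEM run

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(5, 1))             # initial design
y_train = expensive_simulation(X_train)
X_pool = np.linspace(-2, 2, 200).reshape(-1, 1)       # candidate inputs

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
for _ in range(10):                                    # active-learning loop
    gp.fit(X_train, y_train)
    _, std = gp.predict(X_pool, return_std=True)
    x_next = X_pool[[np.argmax(std)]]                  # most uncertain candidate
    X_train = np.vstack([X_train, x_next])
    y_train = np.append(y_train, expensive_simulation(x_next))

print(f"final design size: {len(X_train)}")
```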
Gursch Heimo, Pramhas Martin, Knopper Bernhard, Brandl Daniel, Gratzl Markus, Schlager Elke, Kern Roman
2021
The COMFORT project (Comfort Orientated and Management Focused Operation of Room condiTions) investigates the comfort of office rooms using simulations and data-driven methods. While the data-driven methods rely on measurement data, the simulations require extensive descriptions of the office rooms, which largely overlap with the information captured in the Building Information Model (BIM). Despite major advances in recent years, the integration of BIM and simulation is not yet fully automated. Using the case study of adding a storey to an office building of Thomas Lorenz ZT GmbH, the transfer of BIM data to Building Energy Simulation (BES) and Computational Fluid Dynamics (CFD) simulations is examined. For the building under investigation, the entire planning process was carried out based on the BIM. This made it possible to derive the permit planning, the tender planning for all trades including quantity take-offs, and execution drawings such as site supervision, formwork and reinforcement plans from the model, and to link the building services model with the architectural and structural engineering models at an early stage. Starting from the BIM, the required data could be handed over to the BES in the IFC format. However, the software used could not yet perform an automatic transfer, so manual post-processing of the rooms was necessary. For the CFD simulation, only selected rooms were considered, because the additional effort required for the transfer in the STEP format is still very high under normal BIM workflows. The free air volume has to be modelled separately in the BIM and certain geometric boundary conditions have to be fulfilled. Likewise, information on heat sources and furniture must be available at a very high level of planning detail. The exchange of boundary conditions at the interfaces between air and building envelope still had to be done manually. In terms of their informative value, the BES and CFD simulation results are to be regarded as identical to those obtained from conventional, manually created simulation models. An automatic adoption of parameter values currently still fails due to the limited interpretability and assignability of the data in the simulation software. In the future, the establishment of IFC 4 and additional Industry Foundation Classes (IFC) parameters should make it easier to store the required data in the model in a structured way. Particular attention should be paid to the integration of room book data into BIM, as this information is of great use not only for simulation. These information integrations are not limited to a one-off transfer, but aim at an integration that automatically propagates changes between BIM, simulation and connected areas.
Egger Jan, Pepe Antonio, Gsaxner Christina, Jin Yuan, Li Jianning, Kern Roman
2021
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth during the last years. This growth has resulted in well over 10,000 publications per year in recent years. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact that they already have had during a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Lovric Mario, Duricic Tomislav, Tran Thi Ngoc Han, Hussain Hussain, Lacic Emanuel, Rasmussen Morten A., Kern Roman
2021
Methods for dimensionality reduction are showing significant contributions to knowledge generation in high-dimensional modeling scenarios throughout many disciplines. By achieving a lower-dimensional representation (also called embedding), fewer computing resources are needed in downstream machine learning tasks, thus leading to a faster training time, lower complexity, and statistical flexibility. In this work, we investigate the utility of three prominent unsupervised embedding techniques, namely principal component analysis (PCA), uniform manifold approximation and projection (UMAP), and variational autoencoders (VAEs), for solving classification tasks in the domain of toxicology. To this end, we compare these embedding techniques against a set of molecular fingerprint-based models that do not utilize additional preprocessing of features. Inspired by the success of transfer learning in several fields, we further study the performance of embedders when trained on an external dataset of chemical compounds. To gain a better understanding of their characteristics, we evaluate the embedders with different embedding dimensionalities, and with different sizes of the external dataset. Our findings show that the recently popularized UMAP approach can be utilized alongside known techniques such as PCA and VAE as a pre-compression technique in the toxicology domain. Nevertheless, the generative model of the VAE shows an advantage in pre-compressing the data with respect to classification accuracy.
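A minimal sketch of the pre-compression setup compared in the paper: an unsupervised embedder (PCA here; UMAP or a VAE could be swapped in) in front of a downstream classifier, evaluated against a raw-fingerprint baseline. The random bit matrix stands in for molecular fingerprints and toxicity labels.

```python
# Illustrative sketch: unsupervised pre-compression before classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1024)).astype(float)  # mock 1024-bit fingerprints
y = rng.integers(0, 2, size=500)                         # mock toxicity labels

baseline = RandomForestClassifier(random_state=0)
compressed = make_pipeline(PCA(n_components=32), RandomForestClassifier(random_state=0))

print("fingerprints only:", cross_val_score(baseline, X, y, cv=5).mean())
print("PCA(32) + RF:     ", cross_val_score(compressed, X, y, cv=5).mean())
```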
Hoffer Johannes Georg, Geiger Bernhard, Ofner Patrick, Kern Roman
2021
The technical world of today fundamentally relies on structural analysis in the form of design and structural mechanic simulations. A traditional and robust simulation method is the physics-based Finite Element Method (FEM) simulation. FEM simulations in structural mechanics are known to be very accurate; however, the higher the desired resolution, the more computational effort is required. Surrogate modeling provides a robust approach to address this drawback. Nonetheless, finding the right surrogate model and its hyperparameters for a specific use case is not a straightforward process. In this paper, we discuss and compare several classes of mesh-free surrogate models based on traditional and thriving Machine Learning (ML) and Deep Learning (DL) methods. We show that relatively simple algorithms (such as k-nearest neighbor regression) can be competitive in applications with low geometrical complexity and extrapolation requirements. With respect to tasks exhibiting higher geometric complexity, our results show that recent DL methods at the forefront of the literature (such as physics-informed neural networks) are complicated to train and to parameterize and thus require further research before they can be put to practical use. In contrast, we show that already well-researched DL methods such as the multi-layer perceptron are superior with respect to interpolation use cases and can be easily trained with available tools. With our work, we thus present a basis for the selection and practical implementation of surrogate models.
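As an illustration of the "relatively simple algorithms" finding, a k-nearest-neighbor regression surrogate can be set up in a few lines with scikit-learn; the sinusoidal test function and feature names below are placeholders for a real FEM response, not the paper's use case.

```python
# Minimal sketch: k-nearest-neighbor regression as a mesh-free surrogate.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 3))             # e.g. load, thickness, modulus
y = X[:, 0] ** 2 + np.sin(4 * X[:, 1]) + X[:, 2]  # stand-in for a simulated response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, knn.predict(X_te)))
```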
Gursch Heimo, Ganster Harald, Rinnhofer Alfred, Waltner Georg, Payer Christian, Oberwinkler Christian, Meisenbichler Reinhard, Kern Roman
2021
Refuse sorting is a key technology to increase the recycling rate and reduce the growth of landfills worldwide. The project KI-Waste combines image recognition with time series analysis to monitor and optimise processes in sorting facilities. The image recognition captures the refuse category distribution and particle size of the refuse streams in the sorting facility. The time series analysis focuses on insights derived from machine parameters and sensor values. The combination of results from the image recognition and the time series analysis creates a new holistic view of the complete sorting process and the performance of a sorting facility. This is the basis for comprehensive monitoring, data-driven optimisations, and performance evaluations supporting workers in sorting facilities. Digital solutions allowing the workers to monitor the sorting process remotely are very desirable, since the working conditions in sorting facilities are potentially harmful due to dust, bacteria, and fungal spores. Furthermore, the introduction of objective sorting performance measures enables workers to make informed decisions to improve the sorting parameters and react quicker to changes in the refuse composition. This work describes the ideas and objectives of the KI-Waste project, summarises techniques and approaches used in KI-Waste, gives preliminary findings, and closes with an outlook on future work.
Lovric Mario, Kern Roman, Fadljevic Leon, Gerdenitsch Johann, Steck Thomas, Peche Ernst
2021
In industrial electro galvanizing lines, the performance of the dimensionally stable anodes (Ti + IrOx) is a crucial factor for product quality. Ageing of the anodes causes worsened zinc coating distribution on the steel strip and a significant increase in production costs due to a higher resistivity of the anodes. Up to now, the end of the anode lifetime has been detected by visual inspection every several weeks. The voltage of the rectifiers increases much earlier, indicating the deterioration of anode performance. Therefore, monitoring rectifier voltage has the potential for a premature determination of the end of anode lifetime. Anode condition is only one of many parameters affecting the rectifier voltage. In this work we employed machine learning to predict expected baseline rectifier voltages for a variety of steel strips and operating conditions at an industrial electro galvanizing line. In the plating section the strip passes twelve "Gravitel" cells and zinc from the electrolyte is deposited on the surface at high current densities. Data, collected on one exemplary rectifier unit equipped with two anodes, have been studied for a period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables describing electrolyte, current and steel strip characteristics. For predictive modelling, we used Random Forest regression. Training was conducted on intervals after the plating cell was equipped with new anodes. Our results show a Normalized Root Mean Square Error of Prediction (NRMSEP) of 1.4% for baseline rectifier voltage during good anode condition. When anode condition was estimated as bad (by manual inspection), we observe a large distinctive deviation with regard to the predicted baseline voltage. The gained information about the observed deviation can be used for early detection or classification of anode ageing to recognize the onset of damage and reduce total operation cost.
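A hedged sketch of the modelling step described above: a Random Forest predicting a baseline rectifier voltage from process variables, evaluated with a normalized RMSE of prediction (NRMSEP). The column names, the synthetic data and the range-based normalization are assumptions for illustration, not the plant data or the paper's exact definition.

```python
# Hedged sketch: Random Forest baseline-voltage model with an NRMSEP metric.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "current_density": rng.uniform(20, 60, n),    # illustrative predictors
    "electrolyte_temp": rng.uniform(40, 60, n),
    "strip_width": rng.uniform(900, 1600, n),
})
df["voltage"] = (0.15 * df["current_density"] + 0.05 * df["electrolyte_temp"]
                 + 0.002 * df["strip_width"] + rng.normal(0, 0.2, n))

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="voltage"),
                                          df["voltage"], random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
nrmsep = 100 * rmse / (y_te.max() - y_te.min())   # one common normalization choice
print(f"NRMSEP: {nrmsep:.2f} %")
```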
Lovric Mario, Meister Richard, Steck Thomas, Fadljevic Leon, Gerdenitsch Johann, Schuster Stefan, Schiefermüller Lukas, Lindstaedt Stefanie, Kern Roman
2020
In industrial electro galvanizing lines, aged anodes deteriorate zinc coating distribution over the strip width, leading to an increase in electricity and zinc cost. We introduce a data-driven approach for the predictive maintenance of anodes to replace the cost- and labor-intensive manual inspection, which is still common for this task. The approach is based on parasitic resistance as an indicator of anode condition, which might be aged or mis-installed. The parasitic resistance is indirectly observable via the voltage difference between the measured and baseline (theoretical) voltage for a healthy anode. Here we calculate the baseline voltage by means of two approaches: (1) a physical model based on electrical and electrochemical laws, and (2) advanced machine learning techniques including boosting and bagging regression. The data was collected on one exemplary rectifier unit equipped with two anodes being studied for a total period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables used in the models, observing electrical current, electrolyte, and steel strip characteristics. For predictive modelling, we used Random Forest, Partial Least Squares and AdaBoost regression. The model training was conducted on intervals where the anodes were in good condition and validated on other segments, which served as a proof of concept that bad anode conditions can be identified using the parasitic resistance predicted by our models. Our results show an RMSE of 0.24 V for baseline rectifier voltage with a mean ± standard deviation of 11.32 ± 2.53 V for the best model on the validation set. The best-performing model is a hybrid version of a Random Forest which incorporates meta-variables computed from the physical model. We found that a large predicted parasitic resistance coincides well with the results of the manual inspection. The results of this work will be implemented in online monitoring of anode conditions to reduce operational cost at a production site.
Kern Roman, Al-Ubaidi Tarek, Sabol Vedran, Krebs Sarah, Khodachenko Maxim, Scherf Manuel
2020
Scientific progress in the area of machine learning, in particular advances in deep learning, has led to an increase in interest in eScience and related fields. While such methods achieve great results, an in-depth understanding of these new technologies and concepts is still often lacking, and domain knowledge and subject matter expertise play an important role. With regard to space science there is a vast variety of application areas, in particular the analysis of observational data. This chapter aims at introducing a number of promising approaches to analyze time series data via the introduction of query by example, i.e., any signal can be provided to the system, which then responds with a ranked list of datasets containing similar signals. Building on top of this ability, the system can then be trained using annotations provided by expert users, with the goal of detecting similar features and hence providing a semiautomated analysis and classification. A prototype built to work on MESSENGER data, based on existing background implementations by the Know-Center in cooperation with the Space Research Institute in Graz, is presented. Further, several representations of time series data that demonstrated to be required for analysis tasks, as well as techniques for preprocessing, frequent pattern mining, outlier detection, and classification of segmented and unsegmented data, are discussed. Screenshots of the developed prototype, detailing various techniques for the presentation of signals, complete the discussion.
Schrunner Stefan, Geiger Bernhard, Zernig Anja, Kern Roman
2020
Classification has been tackled by a large number of algorithms, predominantly following a supervised learning setting. Surprisingly little research has been devoted to the problem setting where a dataset is only partially labeled, including even instances of entirely unlabeled classes. Algorithmic solutions that are suited for such problems are especially important in practical scenarios, where the labelling of data is prohibitively expensive, or the understanding of the data is lacking, including cases, where only a subset of the classes is known. We present a generative method to address the problem of semi-supervised classification with unknown classes, whereby we follow a Bayesian perspective. In detail, we apply a two-step procedure based on Bayesian classifiers and exploit information from both a small set of labeled data in combination with a larger set of unlabeled training data, allowing that the labeled dataset does not contain samples from all present classes. This represents a common practical application setup, where the labeled training set is not exhaustive. We show in a series of experiments that our approach outperforms state-of-the-art methods tackling similar semi-supervised learning problems. Since our approach yields a generative model, which aids the understanding of the data, it is particularly suited for practical applications.
Arslanovic Jasmina, Ajana Löw, Lovric Mario, Kern Roman
2020
Previous studies have suggested that artistic (synchronized) swimming athletes might show eating disorder symptoms. However, systematic research on eating disorders in artistic swimming is limited, and knowledge about the nature and antecedents of the development of eating disorders in this specific population of athletes is still scarce. Hence, the aim of our research was to investigate eating disorder symptoms in artistic swimming athletes using the EAT-26 instrument, and to examine the relation of the incidence and severity of these symptoms to body mass index and body image dissatisfaction. Furthermore, we wanted to compare artistic swimmers with athletes of a non-leanness (but also aquatic) sport; therefore, we also included a group of female water polo athletes of the same age. The sample consisted of 36 artistic swimmers and 34 female water polo players (both aged 13-16). To test for the presence of eating disorder symptoms, the EAT-26 was used. The Mann-Whitney U Test (MWU) was used to test for differences in EAT-26 scores. The EAT-26 total score and the Dieting subscale (one of the three subscales) showed significant differences between the two groups. The median value for the EAT-26 total score was higher in the artistic swimmers' group (C = 11) than in the water polo players' group (C = 8). A decision tree classifier was used to discriminate the artistic swimmers and female water polo players based on the features from the EAT-26 and calculated features. The most discriminative features were the BMI, the Dieting subscale and the habit of post-meal vomiting. Our results suggest that artistic swimmers, at their typical competing age, show a higher risk of developing eating disorders than female water polo players and that they are also prone to dieting weight-control behaviors to achieve a desired weight. Furthermore, the results indicate that purgative behaviors, such as binge eating or self-induced vomiting, might not be common weight-control behaviors among these athletes. The results corroborate the findings that the sport environment in leanness sports might contribute to the development of eating disorders. The results are also in line with evidence that leanness sports athletes are more at risk of developing restrictive than purgative eating behaviors, as the latter usually do not contribute to body weight reduction. As sport environment factors in artistic swimming include judging criteria that emphasize a specific body shape and performance, it is important to raise awareness of the mental health risks that such an environment might encourage.
Chiancone Alessandro, Cuder Gerald, Geiger Bernhard, Harzl Annemarie, Tanzer Thomas, Kern Roman
2019
This paper presents a hybrid model for the prediction of magnetostriction in power transformers by leveraging the strengths of a data-driven approach and a physics-based model. Specifically, a non-linear physics-based model for magnetostriction as a function of the magnetic field is employed, the parameters of which are estimated as linear combinations of electrical coil measurements and coil dimensions. The model is validated in a practical scenario with coil data from two different suppliers, showing that the proposed approach captures the different magnetostrictive properties of the two suppliers and provides an estimation of magnetostriction in agreement with the measurement system in place. It is argued that the combination of a non-linear physics-based model with few parameters and a linear data-driven model to estimate these parameters is attractive both in terms of model accuracy and because it allows training the data-driven part with comparably small datasets.
Santos Tiago, Schrunner Stefan, Geiger Bernhard, Pfeiler Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2019
Semiconductor manufacturing is a highly innovative branch of industry, where a high degree of automation has already been achieved. For example, devices tested to be outside of their specifications in electrical wafer test are automatically scrapped. In this paper, we go one step further and analyze test data of devices still within the limits of the specification, by exploiting the information contained in the analog wafermaps. To that end, we propose two feature extraction approaches with the aim to detect patterns in the wafer test dataset. Such patterns might indicate the onset of critical deviations in the production process. The studied approaches are: 1) classical image processing and restoration techniques in combination with sophisticated feature engineering and 2) a data-driven deep generative model. The two approaches are evaluated on both a synthetic and a real-world dataset. The synthetic dataset has been modeled based on real-world patterns and characteristics. We found both approaches to provide similar overall evaluation metrics. Our in-depth analysis helps to choose one approach over the other depending on data availability as a major aspect, as well as on available computing power and required interpretability of the results.
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Such catchphrases as "Industry 4.0" and "smart manufacturing" reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaboration across existing department and process boundaries. Converting the process input, internal and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop-floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management, since that is formalised through various approaches, e.g., Six-Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be continuously improved.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie, Kern Roman, Lex Elisabeth
2019
Toller Maximilian, Santos Tiago, Kern Roman
2019
Season length estimation is the task of identifying the number of observations in the dominant repeating pattern of seasonal time series data. As such, it is a common pre-processing task crucial for various downstream applications. Inferring season length from a real-world time series is often challenging due to phenomena such as slightly varying period lengths and noise. These issues may, in turn, lead practitioners to dedicate considerable effort to preprocessing of time series data since existing approaches either require dedicated parameter-tuning or their performance is heavily domain-dependent. Hence, to address these challenges, we propose SAZED: spectral and average autocorrelation zero distance density. SAZED is a versatile ensemble of multiple, specialized time series season length estimation approaches. The combination of various base methods selected with respect to domain-agnostic criteria and a novel seasonality isolation technique allows a broad applicability to real-world time series of varied properties. Further, SAZED is theoretically grounded and parameter-free, with a computational complexity of O(n log n), which makes it applicable in practice. In our experiments, SAZED was statistically significantly better than every other method on at least one dataset. The datasets we used for the evaluation consist of time series data from various real-world domains, sterile synthetic test cases and synthetic data that were designed to be seasonal and yet have no finite statistical moments of any order.
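SAZED itself is an ensemble, but one of its ingredients, estimating the season length from the autocorrelation function, can be sketched in a few lines; the simple peak-picking baseline below is illustrative and not the full method.

```python
# Simplified sketch of one ingredient of SAZED: season length from the lag of
# the strongest local autocorrelation peak. Not the full ensemble method.
import numpy as np

def acf_season_length(x, max_lag=None):
    x = np.asarray(x, dtype=float) - np.mean(x)
    max_lag = max_lag or len(x) // 2
    acf = np.array([np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
                    for lag in range(1, max_lag)])
    # lag of the highest autocorrelation value that is a local maximum
    peaks = [i for i in range(1, len(acf) - 1) if acf[i - 1] < acf[i] > acf[i + 1]]
    return 1 + max(peaks, key=lambda i: acf[i]) if peaks else 1

t = np.arange(1000)
series = np.sin(2 * np.pi * t / 37) + 0.3 * np.random.default_rng(0).normal(size=1000)
print(acf_season_length(series))  # expected to be close to 37
```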
Toller Maximilian, Geiger Bernhard, Kern Roman
2019
Distance-based classification is among the most competitive classification methods for time series data. The most critical component of distance-based classification is the selected distance function. Past research has proposed various different distance metrics or measures dedicated to particular aspects of real-world time series data, yet there is an important aspect that has not been considered so far: robustness against arbitrary data contamination. In this work, we propose a novel distance metric that is robust against arbitrarily "bad" contamination and has a worst-case computational complexity of O(n log n). We formally argue why our proposed metric is robust, and demonstrate in an empirical evaluation that the metric yields competitive classification accuracy when applied in k-Nearest Neighbor time series classification.
Winter Kevin, Kern Roman
2019
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish, respectively.
Geiger Bernhard, Schrunner Stefan, Kern Roman
2019
Schrunner and Geiger have contributed equally to this work.
Lovric Mario, Molero Perez Jose Manuel, Kern Roman
2019
The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge of Python programming and an understanding of resilient distributed datasets (RDD). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity and calculating molecular descriptors. The source code for the PySpark-RDKit implementation is provided. The use cases showed that Spark provides reasonable scalability depending on the use case and can be a suitable choice for datasets too big to be processed with current low-end workstations.
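A minimal sketch of the fingerprint-similarity use case: SMILES strings distributed over a Spark RDD and scored with RDKit Morgan fingerprints and Tanimoto similarity. The SMILES list and the query are examples, not the datasets used in the paper.

```python
# Illustrative sketch: distributed Tanimoto similarity with PySpark and RDKit.
from pyspark.sql import SparkSession
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

spark = SparkSession.builder.appName("rdkit-similarity").getOrCreate()

smiles = ["CCO", "CCN", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCCCO"]
query_fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles("CCO"), 2, nBits=2048)

def tanimoto(smi: str):
    mol = Chem.MolFromSmiles(smi)
    if mol is None:                      # skip unparsable structures
        return smi, None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return smi, DataStructs.TanimotoSimilarity(query_fp, fp)

results = spark.sparkContext.parallelize(smiles).map(tanimoto).collect()
print(sorted(results, key=lambda r: r[1] or 0, reverse=True))
spark.stop()
```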
Fernández Alonso, Miguel Yuste, Kern Roman
2018
Collection of environmental datasets recorded with Tinkerforge sensors and used in the development of a bachelor thesis on the topic of frequent pattern mining. The data was collected in several locations in the city of Graz, Austria, as well as an additional dataset recorded in Santander, Spain.
Gursch Heimo, Silva Nelson, Reiterer Bernhard, Paletta Lucas, Bernauer Patrick, Fuchs Martin, Veas Eduardo Enrique, Kern Roman
2018
The project Flexible Intralogistics for Future Factories (FlexIFF) investigates human-robot collaboration in intralogistics teams in the manufacturing industry, which form a cyber-physical system consisting of human workers, mobile manipulators, manufacturing machinery, and manufacturing information systems. The workers use Virtual Reality (VR) and Augmented Reality (AR) devices to interact with the robots and machinery. The right information at the right time is key to making this collaboration successful. Hence, task scheduling for mobile manipulators and human workers must be closely linked with the enterprise's information systems, offering all actors on the shop floor a common view of the current manufacturing status. FlexIFF will provide useful, well-tested, and sophisticated solutions for cyber-physical systems in intralogistics, with humans and robots making the most of their strengths, working collaboratively and helping each other.
Cuder Gerald, Breitfuß Gert, Kern Roman
2018
Electric vehicles have enjoyed substantial growth in recent years. One essential part of ensuring their success in the future is a well-developed and easy-to-use charging infrastructure. Since charging stations generate a lot of (big) data, gaining useful information out of this data can help to push the transition to E-Mobility. In a joint research project, the Know-Center, together with the has.to.be GmbH, applied data analytics methods and visualization technologies to the provided data sets. One objective of the research project is to provide a consumption forecast based on the historical consumption data. Based on this information, the operators of charging stations are able to optimize the energy supply. Additionally, the infrastructure data were analysed with regard to "predictive maintenance", aiming to optimize the availability of the charging stations. Furthermore, advanced prediction algorithms were applied to provide services to the end user regarding the availability of charging stations.
Andrusyak Bohdan, Kugi Thomas, Kern Roman
2018
The stock and foreign exchange markets are the two fundamental financial markets in the world and play a crucial role in international business. This paper examines the possibility of predicting the foreign exchange market via machine learning techniques, taking the stock market into account. We compare prediction models based on algorithms from the fields of shallow and deep learning. Our models of foreign exchange markets based on information from the stock market have been shown to be able to predict the future of foreign exchange markets with an accuracy of over 60%. This can be seen as an indicator of a strong link between the two markets. Our insights offer a chance of a better understanding guiding the future of market predictions. We found the accuracy depends on the time frame of the forecast and the algorithms used, where deep learning tends to perform better for farther-reaching forecasts.
Lovric Mario, Krebs Sarah, Cemernek David, Kern Roman
2018
The use of big data technologies has a deep impact on today's research (Tetko et al., 2016) and industry (Li et al., n.d.), but also on public health (Khoury and Ioannidis, 2014) and economy (Einav and Levin, 2014). These technologies are particularly important for manufacturing sites, where complex processes are coupled with large amounts of data, for example in the chemical and steel industries. This data originates from sensors, processes, and quality testing. Typical applications of these technologies are related to predictive maintenance and the optimisation of production processes. The media makes the term "big data" a hot buzzword without going too deep into the topic. We noted a lack in users' understanding of the technologies and techniques behind it, making the application of such technologies challenging. In practice the data is often unstructured (Gandomi and Haider, 2015) and a lot of resources are devoted to cleaning and preparation, but also to understanding causalities and relevance among features. The latter requires domain knowledge, making big data projects not only challenging from a technical perspective, but also from a communication perspective. Therefore, there is a need to rethink the big data concept among researchers and manufacturing experts, including topics like data quality, knowledge exchange and the technology required. The scope of this presentation is to present the main pitfalls in applying big data technologies among users from industry, explain scaling principles in big data projects, and demonstrate common challenges in an industrial big data project.
Santos Tiago, Kern Roman
2018
Semiconductor manufacturing processes critically depend on hundreds of highly complex process steps, which may cause critical deviations in the end-product. Hence, a better understanding of wafer test data patterns, which represent stress tests conducted on devices in semiconductor material slices, may lead to an improved production process. However, the shapes and types of these wafer patterns, as well as their relation to single process steps, are unknown. In a first step to address these issues, we tailor and apply a variational auto-encoder (VAE) to wafer pattern images. We find the VAE's generator allows for explorative wafer pattern analysis, and its encoder provides an effective dimensionality reduction algorithm, which, in a clustering application, performs better than several baselines such as t-SNE and yields interpretable clusters of wafer patterns.
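A compact, illustrative VAE in PyTorch showing how the encoder's latent mean can serve as a low-dimensional representation for clustering wafer patterns; the fully connected architecture, input size and hyperparameters are placeholders, not the paper's tailored model.

```python
# Illustrative VAE sketch (not the paper's architecture); the encoder output
# is used as a low-dimensional representation for downstream clustering.
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, in_dim: int = 52 * 52, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Toy training loop on random "wafer images"; replace with real wafer test data.
x = torch.rand(512, 52 * 52)
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The encoder mean serves as the low-dimensional code, e.g. for clustering.
with torch.no_grad():
    codes = model.encode(x)[0]
print(codes.shape)  # torch.Size([512, 16])
```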
Urak Günter, Ziak Hermann, Kern Roman
2018
The task of federated search is to combine results from multiple knowledge bases into a single, aggregated result list, where the items typically range from textual documents to images. These knowledge bases are also called sources, and the process of choosing the actual subset of sources for a given query is called source selection. A scenario where these sources do not provide information about their content in a standardized way is called an uncooperative setting. In our work we focus on knowledge bases providing long tail content, i.e., rather specialized sources offering a low number of relevant documents. These sources are often neglected in favor of more popular knowledge sources, both by today's Web users as well as by most of the existing source selection techniques. We propose a system for source selection which i) can be utilized to automatically detect long tail knowledge bases and ii) generates aggregated search results that tend to incorporate results from these long tail sources. Starting from the current state of the art, we developed components that allow us to adjust the amount of contribution from long tail sources. Our evaluation is conducted on the TREC 2014 Federated WebSearch dataset. As this dataset also favors the most popular sources, systems that include many long tail knowledge bases will yield low performance measures. Here, we propose a system where just a few relevant long tail sources are integrated into the list of more popular knowledge bases. Additionally, we evaluated the implications of an uncooperative setting, where only minimal information about the sources is available to the federated search system. Here a severe drop in performance is observed once the share of long tail sources is higher than 40%. Our work is intended to steer the development of federated search systems that aim at increasing the diversity and coverage of the aggregated search result.
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2018
The goal of our work is inspired by the task of associating segments of text to their real authors. In this work, we focus on analyzing the way humans judge different writing styles. This analysis can help to better understand this process and thus to simulate/mimic such behavior accordingly. Unlike the majority of the work done in this field (i.e., authorship attribution, plagiarism detection, etc.), which uses content features, we focus only on the stylometric, i.e. content-agnostic, characteristics of authors. Therefore, we conducted two pilot studies to determine if humans can identify authorship among documents with high content similarity. The first was a quantitative experiment involving crowd-sourcing, while the second was a qualitative one executed by the authors of this paper. Both studies confirmed that this task is quite challenging. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis on the results of the studies. In the first experiment, we compared the decisions against content features and stylometric features, while in the second, the evaluators described the process and the features on which their judgment was based. The findings of our detailed analysis could (i) help to improve algorithms such as automatic authorship attribution as well as plagiarism detection, (ii) assist forensic experts or linguists in creating profiles of writers, (iii) support intelligence applications in analyzing aggressive and threatening messages and (iv) help editors check conformity with, for instance, a journal-specific writing style.
Bassa Akim, Kröll Mark, Kern Roman
2018
Open Information Extraction (OIE) is the task of extracting relations from text without the need of domain-specific training data. Currently, most of the research on OIE is devoted to the English language, but little or no research has been conducted on other languages including German. We tackled this problem and present GerIE, an OIE parser for the German language. Therefore we started by surveying the available literature on OIE with a focus on concepts which may also apply to the German language. Our system is built upon the output of a dependency parser, on which a number of hand-crafted rules are executed. For the evaluation we created two dedicated datasets, one derived from news articles and one based on texts from an encyclopedia. Our system achieves F-measures of up to 0.89 for sentences that have been correctly preprocessed.
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
In this pilot study, we tried to capture humans' behavior when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by single authors. We then presented a source and four target snippets to the evaluators and asked them to rank the target snippets from the most to the least similar in terms of writing style. The dataset is composed of 66 experiments, manually checked to ensure they contain no obvious hints that could guide the evaluators' ranking. For each experiment, we have evaluations from three different evaluators. We present each experiment in a single line (in the CSV file), where we first list the metadata of the source article (Journal, Title, Authorship, Snippet), then the metadata for the four target snippets (Journal, Title, Authorship, Snippet, Written by the same Author, Published in the same Journal) and the ranking given by each evaluator. This task was performed on the crowd-sourcing platform CrowdFlower. The headers of the CSV are self-explanatory. In the TXT file, you can find a human-readable version of the experiment. For more information about the extraction of the data, please consider reading our paper: "Extending Scientific Literature Search by Including the Author’s Writing Style" @BIR: http://www.gesis.org/en/services/events/events-archive/conferences/ecir-workshops/ecir-workshop-2017
Breitfuß Gert, Kaiser Rene_DB, Kern Roman, Kowald Dominik, Lex Elisabeth, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2017
Proceedings of the Workshop Papers of i-Know 2017, co-located with International Conference on Knowledge Technologies and Data-Driven Business 2017 (i-Know 2017), Graz, Austria, October 11-12, 2017.
Seifert Christin, Bailer Werner, Orgel Thomas, Gantner Louis, Kern Roman, Ziak Hermann, Petit Albin, Schlötterer Jörg, Zwicklbauer Stefan, Granitzer Michael
2017
The digitization initiatives in the past decades have led to a tremendous increase in digitized objects in the cultural heritage domain. Although digitally available, these objects are often not easily accessible for interested users because of the distributed allocation of the content in different repositories and the variety in data structures and standards. When users search for cultural content, they first need to identify the specific repository and then need to know how to search within this platform (e.g., usage of specific vocabulary). The goal of the EEXCESS project is to design and implement an infrastructure that enables ubiquitous access to digital cultural heritage content. Cultural content should be made available in the channels that users habitually visit and be tailored to their current context without the need to manually search multiple portals or content repositories. To realize this goal, open-source software components and services have been developed that can either be used as an integrated infrastructure or as modular components suitable to be integrated in other products and services. The EEXCESS modules and components comprise (i) Web-based context detection, (ii) information retrieval-based, federated content aggregation, (iii) meta-data definition and mapping, and (iv) a component responsible for privacy preservation. Various applications have been realized based on these components that bring cultural content to the user in content consumption and content creation scenarios. For example, content consumption is realized by a browser extension generating automatic search queries from the current page context and the focus paragraph and presenting related results aggregated from different data providers. A Google Docs add-on allows retrieval of relevant content aggregated from multiple data providers while collaboratively writing a document. These relevant resources can then be included in the current document either as a citation, an image, or a link (with preview) without having to disrupt the current writing task for an explicit search in various content providers’ portals.
Kern Roman, Falk Stefan, Rexha Andi
2017
This paper describes our participation in SemEval-2017 Task 10, named ScienceIE (Machine Reading for Scientists). We competed in Subtasks 1 and 2, which consist of identifying all the key phrases in scientific publications and labelling them with one of three categories: Task, Process, and Material. These scientific publications are selected from the Computer Science, Material Sciences, and Physics domains. We followed a supervised approach for both subtasks by using a sequential classifier (CRF - Conditional Random Fields). For generating our solution we used a web-based application implemented in the EU-funded research project named CODE. Our system achieved an F1 score of 0.39 for Subtask 1 and 0.28 for Subtask 2.
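As a hedged illustration of a CRF-based key phrase tagger in the spirit of the approach above (not the CODE-based system itself), the following sketch uses sklearn-crfsuite; the feature template, the toy data and the BIO-style labels are assumptions made for this example.

```python
# Hedged sketch of a CRF sequence tagger for key phrase identification.
# Feature templates and the BIO label scheme are assumptions for illustration.
import sklearn_crfsuite

def token_features(sentence, i):
    word = sentence[i]
    return {
        'lower': word.lower(),
        'is_upper': word.isupper(),
        'is_title': word.istitle(),
        'suffix3': word[-3:],
        'prev': sentence[i - 1].lower() if i > 0 else '<BOS>',
        'next': sentence[i + 1].lower() if i < len(sentence) - 1 else '<EOS>',
    }

def sent2features(sentence):
    return [token_features(sentence, i) for i in range(len(sentence))]

# Toy training data: tokenised sentences with BIO labels per key phrase type.
train_sents = [['We', 'apply', 'conditional', 'random', 'fields', '.']]
train_labels = [['O', 'O', 'B-Process', 'I-Process', 'I-Process', 'O']]

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100)
crf.fit([sent2features(s) for s in train_sents], train_labels)

print(crf.predict([sent2features(['They', 'use', 'conditional', 'random', 'fields'])]))
```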
Rexha Andi, Kern Roman, Ziak Hermann, Dragoni Mauro
2017
Retrieval of domain-specific documents became attractive for the Semantic Web community due to the possibility of integrating classic Information Retrieval (IR) techniques with semantic knowledge. Unfortunately, the gap between the construction of a full semantic search engine and the possibility of exploiting a repository of ontologies covering all possible domains is far from being filled. Recent solutions focused on the aggregation of different domain-specific repositories managed by third parties. In this paper, we present a semantic federated search engine developed in the context of the EEXCESS EU project. Through the developed platform, users are able to perform federated queries over repositories in a transparent way, i.e. without knowing how their original queries are transformed before being actually submitted. The platform implements a facility for plugging in new repositories and for creating, with the support of general purpose knowledge bases, knowledge graphs describing the content of each connected repository. Such knowledge graphs are then exploited for enriching queries performed by users.
Schrunner Stefan, Bluder Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2017
In the semiconductor industry it is of paramount importance to check whether a manufactured device fulfills all quality specifications and is therefore suitable for being sold to the customer. The occurrence of specific spatial patterns within the so-called wafer test data, i.e. analog electric measurements, might point to production issues. However, the shape of these critical patterns is unknown. In this paper different kinds of process patterns are extracted from wafer test data by an image processing approach using Markov Random Field models for image restoration. The goal is to develop an automated procedure to identify visible patterns in wafer test data to improve pattern matching. This step is a necessary precondition for a subsequent root-cause analysis of these patterns. The developed pattern extraction algorithm yields a more accurate discrimination between distinct patterns, resulting in an improved pattern comparison compared to the original dataset. In a next step, pattern classification will be applied to improve production process control.
Cemernek David, Gursch Heimo, Kern Roman
2017
The catchphrase “Industry 4.0” is widely regarded as a methodology for succeeding in modern manufacturing. This paper provides an overview of the history, technologies and concepts of Industry 4.0. One of the biggest challenges to implementing the Industry 4.0 paradigms in manufacturing is the heterogeneity of system landscapes and the integration of data from various sources, such as different suppliers and different data formats. These issues have been addressed in the semiconductor industry since the early 1980s and some solutions have become well-established standards. Hence, the semiconductor industry can provide guidelines for a transition towards Industry 4.0 in other manufacturing domains. In this work, the methodologies of Industry 4.0, cyber-physical systems and big data processes are discussed. Based on a thorough literature review and experiences from the semiconductor industry, we offer implementation recommendations for Industry 4.0, using the manufacturing process of an electronics manufacturer as an example.
Gursch Heimo, Cemernek David, Kern Roman
2017
In manufacturing environments today, automated machinery works alongside human workers. In many cases computers and humans oversee different aspects of the same manufacturing steps, sub-processes, and processes. This paper identifies and describes four feedback loops in manufacturing and organises them in terms of their time horizon and degree of automation versus human involvement. The data flow in the feedback loops is further characterised by features commonly associated with Big Data. Velocity, volume, variety, and veracity are used to establish, describe and compare differences in the data flows.
Traub Matthias, Gursch Heimo, Lex Elisabeth, Kern Roman
2017
New business opportunities in the digital economy are established when datasets describing a problem, data services solving the said problem, the required expertise and the infrastructure come together. For most real-world problems, finding the right data sources, services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform to bring datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender system included in DMA analyses all offerings to derive suggestions for collaboration between them, for example which dataset could be best processed by which data service. The suggestions should help the customers on DMA to identify new collaborations reaching beyond traditional industry boundaries and to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system to match different offers and set up data value chains solving problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial and research domain, to overcome traditional domain boundaries.
Ziak Hermann, Kern Roman
2017
The combination of different knowledge bases in the field of information retrieval is called federated or aggregated search. It has several benefits over single-source retrieval but poses some challenges as well. This work focuses on the challenge of result aggregation, especially in a setting where the final result list should include a certain degree of diversity and serendipity. Both concepts have been shown to have an impact on how users perceive an information retrieval system. In particular, we want to assess if common procedures for result list aggregation can be utilized to introduce diversity and serendipity. Furthermore, we study whether blocking or interleaving for result aggregation yields better results. In a cross-vertical aggregated search the so-called verticals could be news, multimedia content or text. Block ranking is one approach to combine such heterogeneous results. It relies on the idea that these verticals are combined into a single result list as blocks of several adjacent items. An alternative approach is interleaving, where the verticals are blended into one result list on an item-by-item basis, i.e. adjacent items in the result list may come from different verticals. To generate the diverse and serendipitous results we relied on a query reformulation technique which we showed to be beneficial for generating diversified results in previous work. To conduct this evaluation we created a dedicated dataset. This dataset served as a basis for three different evaluation settings on a crowdsourcing platform, with over 300 participants. Our results show that query-based diversification can be adapted to generate serendipitous results in a similar manner. Further, we discovered that both approaches, interleaving and block ranking, appear to be beneficial for introducing diversity and serendipity, though it seems that queries benefit from either one approach or the other, but not from both.
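To make the two aggregation strategies concrete, the following toy sketch (not the paper's implementation) contrasts block ranking with round-robin interleaving for a few hypothetical verticals.

```python
# Illustrative sketch (not the paper's implementation) of the two aggregation
# strategies discussed above: block ranking vs. round-robin interleaving.
from itertools import zip_longest

def block_ranking(verticals):
    """Concatenate each vertical's results as one contiguous block."""
    return [item for results in verticals for item in results]

def interleave(verticals):
    """Blend verticals item by item (round-robin)."""
    merged = []
    for row in zip_longest(*verticals):
        merged.extend(item for item in row if item is not None)
    return merged

news = ['n1', 'n2', 'n3']
images = ['i1', 'i2']
text = ['t1', 't2', 't3', 't4']

print(block_ranking([news, images, text]))  # blocks of adjacent items
print(interleave([news, images, text]))     # n1 i1 t1 n2 i2 t2 n3 t3 t4
```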
Toller Maximilian, Kern Roman
2017
The in-depth analysis of time series has gained a lot of research interest in recent years, with the identification of periodic patterns being one important aspect. Many of the methods for identifying periodic patterns require the time series' season length as an input parameter. There exist only a few algorithms for automatic season length approximation. Many of these rely on simplifications such as data discretization. This paper presents an algorithm for season length detection that is designed to be sufficiently reliable to be used in practical applications. The algorithm estimates a time series' season length by interpolating, filtering and detrending the data. This is followed by analyzing the distances between zeros in the directly corresponding autocorrelation function. Our algorithm was tested against a comparable algorithm and outperformed it by passing 122 out of 165 tests, while the existing algorithm passed 83 tests. The robustness of our method can be jointly attributed to both the algorithmic approach and also to design decisions taken at the implementational level.
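As a rough sketch of the underlying idea, the snippet below detrends a series, computes its autocorrelation and derives a season length estimate from the distances between sign changes; the interpolation and filtering steps of the published algorithm are deliberately omitted, so this is an illustration rather than a reimplementation.

```python
# Rough sketch of autocorrelation-based season length estimation.
# Filtering/interpolation steps of the published algorithm are omitted;
# this only illustrates the zero-distance idea on a detrended series.
import numpy as np

def estimate_season_length(y):
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    trend = np.polyval(np.polyfit(t, y, 1), t)   # linear detrending
    x = y - trend
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    acf /= acf[0]
    # positions where the autocorrelation changes sign
    zeros = np.where(np.diff(np.sign(acf)) != 0)[0]
    if len(zeros) < 2:
        return None
    # the distance between consecutive zeros is roughly half a period
    return int(round(2 * np.median(np.diff(zeros))))

series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
print(estimate_season_length(series))  # ~100 samples per season
```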
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
Our work is motivated by the idea to extend the retrieval of related scientific literature to cases where the relatedness also incorporates the writing style of individual scientific authors. Therefore we conducted a pilot study to answer the question whether humans can identify authorship once the topical clues have been removed. As a first result, we found that this task is challenging, even for humans. We also found some agreement between the annotators. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis. Here, we compared the decisions against a number of topical and stylometric features. The outcome of our work should help to improve automatic authorship identification algorithms and to shape potential follow-up studies.
Rexha Andi, Kern Roman, Dragoni Mauro , Kröll Mark
2016
With different social media and commercial platforms, users express their opinion about products in textual form. Automatically extracting the polarity (i.e. whether the opinion is positive or negative) of a user can be useful for both actors: the online platform incorporating the feedback to improve their product as well as the client who might get recommendations according to his or her preferences. Different approaches for tackling the problem have been suggested, mainly using syntactic features. The “Challenge on Semantic Sentiment Analysis” aims to go beyond word-level analysis by using semantic information. In this paper we propose a novel approach by employing the semantic information of the grammatical unit called the proposition. We try to derive the target of the review from the summary information, which serves as an input to identify the proposition in it. Our implementation relies on the hypothesis that the proposition expressing the target of the summary usually contains the main polarity information.
Ziak Hermann, Kern Roman
2016
This work documents our approach for the Social Book Search Lab 2016, where we took part in the suggestion track. The main goal of the track was to create book recommendations for readers based only on their stated request within a forum. The forum entry contained further contextual information, like the user’s catalogue of already read books and the list of example books mentioned in the user’s request. The presented approach is mainly based on the metadata included in the book catalogue provided by the organizers of the task. With the help of a dedicated search index we extracted several potential book recommendations which were re-ranked by the use of an SVD-based approach. Although our results did not meet our expectations, we consider them a first iteration towards a competitive solution.
Gursch Heimo, Körner Stefan, Krasser Hannes, Kern Roman
2016
Painting a modern car involves applying many coats during a highly complex and automated process. The individual coats not only serve a decorative purpose but are also crucial for protection from damage due to environmental influences, such as rust. For an optimal paint job, many parameters have to be optimised simultaneously. A forecasting model was created which predicts the paint flaw probability for a given set of process parameters, to help the production managers modify the process parameters to achieve an optimal result. The mathematical model was based on historical process and quality observations. Production managers who are not familiar with the mathematical concept of the model can use it via an intuitive Web-based Graphical User Interface (Web-GUI). The Web-GUI offers production managers the ability to test process parameters and forecast the expected quality. The model can be used for optimising the process parameters in terms of quality and costs.
Gursch Heimo, Kern Roman
2016
Many different sensing, recording and transmitting platforms are offered on today’s market for Internet of Things (IoT) applications. But taking and transmitting measurements is just one part of a complete system. Long-term storage and processing of recorded sensor values are also vital for IoT applications. Big Data technologies provide a rich variety of processing capabilities to analyse the recorded measurements. In this paper an architecture for recording, searching, and analysing sensor measurements is proposed. This architecture combines existing IoT and Big Data technologies to bridge the gap between recording, transmission, and persistency of raw sensor data on one side, and the analysis of data on Hadoop clusters on the other side. The proposed framework emphasises scalability and persistence of measurements as well as easy access to the data from a variety of different data analytics tools. To achieve this, a distributed architecture is designed offering three different views on the recorded sensor readouts. The proposed architecture is not targeted at one specific use-case, but is able to provide a platform for a large number of different services.
Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman
2016
To bring bibliometrics and information retrieval closer together, we propose to add the concept of author attribution into the pre-processing of scientific publications. Presently, common bibliographic metrics often attribute the entire article to all of its authors, affecting author-specific retrieval processes. We envision a more fine-grained analysis of scientific authorship by attributing particular segments to authors. To realize this vision, we propose a new feature representation of scientific publications that captures the distribution of stylometric features. In a classification setting, we then seek to predict the number of authors of a scientific article. We evaluate our approach on a data set of ~6100 PubMed articles and achieve best results by applying random forests, i.e., 0.76 precision and 0.76 recall averaged over all classes.
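A hedged sketch of this classification setting is given below: per-segment stylometric features are summarised into a fixed-length vector and a random forest predicts the author count. The concrete features and the toy data are simplistic stand-ins, not the representation used in the paper.

```python
# Hedged sketch: predicting the number of authors of an article from a
# fixed-length summary of per-segment stylometric features. The features
# below are simplistic stand-ins, not the representation used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stylometric_summary(segments):
    """Mean and std of a few shallow style features over text segments."""
    feats = []
    for seg in segments:
        words = seg.split()
        feats.append([
            np.mean([len(w) for w in words]),          # avg word length
            len(set(words)) / max(len(words), 1),      # type-token ratio
            seg.count(',') / max(len(words), 1),       # comma rate
        ])
    feats = np.array(feats)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# toy corpus: list of (segments, number_of_authors)
articles = [(["Short plain text.", "More plain text here."], 1),
            (["Dense, elaborate phrasing, clearly different.",
              "Terse notes follow."], 2)]

X = np.array([stylometric_summary(segs) for segs, _ in articles])
y = np.array([n for _, n in articles])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))
```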
Rexha Andi, Kröll Mark, Kern Roman
2016
Monitoring (social) media represents one means for companies to gain access to knowledge about, for instance, competitors, products as well as markets. As a consequence, social media monitoring tools have been gaining attention to handle the amounts of data nowadays generated in social media. These tools also include summarisation services. However, most summarisation algorithms tend to focus on (i) first and last sentences respectively or (ii) sentences containing keywords. In this work we approach the task of summarisation by extracting 4W (who, when, where, what) information from (social) media texts. Presenting 4W information allows for a more compact content representation than traditional summaries. In addition, we depart from mere named entity recognition (NER) techniques to answer these four question types by including non-rigid designators, i.e. expressions which do not refer to the same thing in all possible worlds, such as “at the main square” or “leaders of political parties”. To do that, we employ dependency parsing to identify grammatical characteristics for each question type. Every sentence is then represented as a 4W block. We perform two different preliminary studies: selecting sentences that better summarise texts, achieving an F1-measure of 0.343, as well as a 4W block extraction, for which we achieve F1-measures of 0.932, 0.900, 0.803 and 0.861 for the “who”, “when”, “where” and “what” categories respectively. In a next step the 4W blocks are ranked by relevance. The top three ranked blocks, for example, then constitute a summary of the entire textual passage. The relevance metric can be customised to the user’s needs, for instance, ranked by up-to-dateness where the sentences’ tense is taken into account. In a user study we evaluate different ranking strategies including (i) up-to-dateness, (ii) text sentence rank, (iii) selecting the first and last sentences or (iv) coverage of named entities, i.e. based on the number of named entities in the sentence. Our 4W summarisation method presents a valuable addition to a company’s (social) media monitoring toolkit, thus supporting decision making processes.
Pimas Oliver, Rexha Andi, Kröll Mark, Kern Roman
2016
The PAN 2016 author profiling task is a supervised classification problem on cross-genre documents (tweets, blog and social media posts). Our system makes use of concreteness, sentiment and syntactic information present in the documents. We train a random forest model to identify gender and age of a document’s author. We report the evaluation results received by the shared task.
Kern Roman, Klampfl Stefan, Rexha Andi
2016
This report describes our contribution to the 2nd Computational Linguistics Scientific Document Summarization Shared Task (CL-SciSumm 2016), which asked to identify the relevant text span in a reference paper that corresponds to a citation in another document that cites this paper. We developed three different approaches based on summarisation and classification techniques. First, we applied a modified version of an unsupervised summarisation technique, TextSentenceRank, to the reference document, which incorporates the similarity of sentences to the citation on a textual level. Second, we employed classification to select from candidates previously extracted through the original TextSentenceRank algorithm. Third, we used unsupervised summarisation of the relevant sub-part of the document that was previously selected in a supervised manner.
Gursch Heimo, Ziak Hermann, Kröll Mark, Kern Roman
2016
Modern knowledge workers need to interact with a large number of different knowledge sources with restricted or public access. Knowledge workers are thus burdened with the need to familiarise themselves with and query each source separately. The EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project aims at developing a recommender system providing relevant and novel content to its users. Based on the user’s work context, the EEXCESS system can either automatically recommend useful content, or support users by providing a single user interface for a variety of knowledge sources. In the design process of the EEXCESS system, recommendation quality, scalability and security were the three most important criteria. This paper investigates the scalability aspect achieved by the federated design of the EEXCESS recommender system. This means that content in different sources is not replicated but its management is done in each source individually. Recommendations are generated based on the context describing the knowledge worker’s information need. Each source offers result candidates, which are merged and re-ranked into a single result list. This merging is done in a vector representation space to achieve high recommendation quality. To ensure security, user credentials can be set individually by each user for each source. Hence, access to the sources can be granted and revoked for each user and source individually. The scalable architecture of the EEXCESS system handles up to 100 requests querying up to 10 sources in parallel without notable performance deterioration. The re-ranking and merging of results have a smaller influence on the system's responsiveness than the average source response rates. The EEXCESS recommender system offers a common entry point for knowledge workers to a variety of different sources, with response times only marginally higher than those of the individual sources on their own. Hence, familiarisation with individual sources and their query languages is not necessary.
Rexha Andi, Dragoni Mauro, Kern Roman, Kröll Mark
2016
Ontology matching in a multilingual environment consists of finding alignments between ontologies modeled by using more than one language. Such a research topic combines traditional ontology matching algorithms with the use of multilingual resources, services, and capabilities for easing multilingual matching. In this paper, we present a multilingual ontology matching approach based on Information Retrieval (IR) techniques: ontologies are indexed through an inverted index algorithm and candidate matches are found by querying such indexes. We also exploit the hierarchical structure of the ontologies by adopting the PageRank algorithm for our system. The approaches have been evaluated using a set of domain-specific ontologies belonging to the agricultural and medical domain. We compare our results with existing systems following an evaluation strategy closely resembling a recommendation scenario. The version of our system using PageRank showed an increase in performance in our evaluations.
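As a toy illustration of the IR flavour of such matching (leaving out the PageRank component and the multilingual resources of the actual system), one can index the concept labels of one ontology and query this index with labels from another; the labels below are invented for the example.

```python
# Toy sketch of IR-based ontology matching: index the concept labels of one
# ontology and retrieve candidate matches for labels of another ontology.
# The PageRank component and multilingual resources of the actual system
# are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

target_labels = ['crop rotation', 'soil fertility', 'plant disease']
source_labels = ['rotation of crops', 'illness of plants']

vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(3, 4))
index = vectorizer.fit_transform(target_labels)           # the indexed ontology
queries = vectorizer.transform(source_labels)

similarities = linear_kernel(queries, index)               # cosine on TF-IDF
for label, row in zip(source_labels, similarities):
    best = row.argmax()
    print(f'{label!r} -> {target_labels[best]!r} (score {row[best]:.2f})')
```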
Mutlu Belgin, Sabol Vedran, Gursch Heimo, Kern Roman
2016
Graphical interfaces and interactive visualisations are typical mediators between human users and data analytics systems. HCI researchers and developers have to be able to understand both human needs and back-end data analytics. Participants of our tutorial will learn how visualisation and interface design can be combined with data analytics to provide better visualisations. In the first of three parts, the participants will learn about visualisations and how to appropriately select them. In the second part, restrictions and opportunities associated with different data analytics systems will be discussed. In the final part, the participants will have the opportunity to develop visualisations and interface designs under given scenarios of data and system settings.
Santos Tiago, Kern Roman
2016
This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity in the application of deep neural networks to the field of computer vision, research has focused on developing deep learning architectures for time series classification as well. The literature in the field of deep learning for time series classification has shown promising results. Early time series classification aims to classify a time series with as few temporal observations as possible, while keeping the loss of classification accuracy at a minimum. Prominent early classification frameworks reviewed by this paper include, but are not limited to, ECTS, RelClass and ECDIRE. These works have shown that early time series classification may be feasible and performant, but they also show room for improvement.
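For reference, a minimal sketch of the 1-Nearest Neighbor baseline with a dynamic time warping distance is shown below; it omits the lower-bounding and warping-window optimisations usually applied in practice.

```python
# Minimal sketch of the 1-NN time series classifier with a DTW distance,
# as reviewed above. Purely illustrative; no lower-bounding or warping
# window optimisations are included.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

def one_nn_dtw(train_series, train_labels, query):
    dists = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(dists))]

train = [np.sin(np.linspace(0, 2 * np.pi, 50)),
         np.linspace(-1, 1, 50)]
labels = ['periodic', 'ramp']
print(one_nn_dtw(train, labels, np.sin(np.linspace(0, 2 * np.pi, 60)) + 0.05))
```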
Kern Roman, Ziak Hermann
2016
Context-driven query extraction for content-based recommender systems faces the challenge of dealing with queries of multiple topics. In contrast to manually entered queries, this is a more frequent problem for automatically generated queries, for instance if the information need is inferred indirectly via the user's current context. Especially for federated search systems, where the connected knowledge sources might react vastly differently to such queries, an algorithmic way of dealing with such queries is of high importance. One such method is to split mixed queries into their individual subtopics. To gain insight into how a multi-topic query can be split into its subtopics, we conducted an evaluation where we compared a naive approach against two more complex approaches based on word embedding techniques: one created using Word2Vec and one created using GloVe. To evaluate these approaches we used the Webis-QSeC-10 query set, consisting of about 5,000 multi-term queries. Queries of this set were concatenated and passed through the algorithms with the goal to split those queries again. The naive approach splits the queries into several groups according to the number of joined queries, assuming the topics are of equal query term count. In the case of the Word2Vec and GloVe based approaches we relied on already pre-trained models: the Google News model and a model trained on a Wikipedia dump and the English Gigaword newswire text archive. The query term vectors obtained from these models were grouped into subtopics using k-Means clustering. We show that a clustering approach based on word vectors achieves better results, in particular when the query is not in topical order. Furthermore, we could demonstrate the importance of the underlying dataset.
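The clustering idea can be sketched as follows; the tiny embedding table stands in for a pre-trained Word2Vec or GloVe model, and the vectors as well as the fixed number of subtopics are assumptions made for illustration only.

```python
# Hedged sketch of splitting a mixed query into subtopics by clustering its
# term vectors with k-means. `embedding` stands in for a pre-trained model
# (e.g. loaded via gensim); the toy vectors below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

embedding = {            # hypothetical 3-d word vectors
    'cheap':   [0.9, 0.1, 0.0], 'flights':  [0.8, 0.2, 0.1],
    'vienna':  [0.7, 0.3, 0.0], 'python':   [0.0, 0.9, 0.8],
    'pandas':  [0.1, 0.8, 0.9], 'tutorial': [0.0, 0.7, 0.7],
}

def split_query(terms, n_topics=2):
    vectors = np.array([embedding[t] for t in terms])
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(vectors)
    groups = {}
    for term, label in zip(terms, labels):
        groups.setdefault(label, []).append(term)
    return list(groups.values())

print(split_query(['cheap', 'python', 'flights', 'pandas', 'vienna', 'tutorial']))
# e.g. [['cheap', 'flights', 'vienna'], ['python', 'pandas', 'tutorial']]
```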
Klampfl Stefan, Kern Roman
2016
Semantic enrichment of scientific publications has an increasing impact on scholarly communication. This document describes our contribution to Semantic Publishing Challenge 2016, which aims at investigating novel approaches for improving scholarly publishing through semantic technologies. We participated in Task 2 of this challenge, which requires the extraction of information from the content of a paper given as PDF. The extracted information allows answering queries about the paper’s internal organisation and the context in which it was written. We build upon our contribution to the previous edition of the challenge, where we categorised meta-data, such as authors and affiliations, and extracted funding information. Here we use unsupervised machine learning techniques in order to extend the analysis of the logical structure of the document as to identify section titles and captions of figures and tables. Furthermore, we employ clustering techniques to create the hierarchical table of contents of the article. Our system is modular in nature and allows a separate training of different stages on different training sets.
Urak Günter, Ziak Hermann, Kern Roman
2016
The core approach to distributed knowledge bases is federated search. Two of the main challenges for federated search are the source representation and source selection. Different solutions to these problems were proposed in the literature. Within this work we present our novel approach for query-based sampling by relying on knowledge bases. We show the basic correctness of our approach and we came to the insight that the ambiguity of the probing terms has just a minor impact on the representation of the collection. Finally, we show that our method can be used to distinguish between niche and encyclopedic knowledge bases.
Horn Christopher, Gursch Heimo, Kern Roman, Cik Michael
2016
Models describing human travel patterns are indispensable to plan and operate road, rail and public transportation networks. For most kinds of analyses in the field of transportation planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination zones in the network. The preparation of OD matrices is traditionally a time consuming and cumbersome task. The presented system, QZTool, reduces the necessary effort as it is capable of generating OD matrices automatically. These matrices are produced starting from floating phone data (FPD) as raw input. This raw input is processed by a Hadoop-based big data system. A graphical user interface allows for an easy usage and hides the complexity from the operator. For evaluation, we compare an FPD-based OD matrix to an OD matrix created by a traffic demand model. Results show that both matrices agree to a high degree, indicating that FPD-based OD matrices can be used to create new, or to validate or amend existing, OD matrices.
Falk Stefan, Rexha Andi, Kern Roman
2016
This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge demands finding domain-specific target expressions on sentence level that refer to reviewed entities. The detection of target words is achieved by using word vectors and their grammatical dependency relationships to classify each word in a sentence as target or non-target. A heuristic-based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% for this task.
Dragoni Mauro, Rexha Andi, Kröll Mark, Kern Roman
2016
Twitter is one of the most popular micro-blogging services on the web. The service allows sharing, interaction and collaboration via short, informal and often unstructured messages called tweets. Polarity classification of tweets refers to the task of assigning a positive or a negative sentiment to an entire tweet. Quite similar is predicting the polarity of a specific target phrase, for instance @Microsoft or #Linux, which is contained in the tweet. In this paper we present a Word2Vec approach to automatically predict the polarity of a target phrase in a tweet. In our classification setting, we thus do not have any polarity information but use only semantic information provided by a Word2Vec model trained on Twitter messages. To evaluate our feature representation approach, we apply well-established classification algorithms such as the Support Vector Machine and Naive Bayes. For the evaluation we used the SemEval 2016 Task #4 dataset. Our approach achieves F1-measures of up to ~90% for the positive class and ~54% for the negative class without using polarity information about single words.
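A hedged sketch of this feature representation is given below: tweet tokens are averaged in the Word2Vec space and an SVM is trained on the resulting vectors. For self-containedness a tiny Word2Vec model is trained on toy data using the gensim 4 API, whereas the paper relies on a model trained on Twitter messages.

```python
# Hedged sketch of the feature idea: represent the context around a target
# phrase as the average of its Word2Vec vectors and feed it to an SVM.
# A tiny Word2Vec model is trained here for self-containedness (gensim >= 4);
# the paper instead uses a model trained on Twitter messages.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

tweets = [['love', 'the', 'new', 'update'],
          ['terrible', 'support', 'never', 'again'],
          ['great', 'update', 'love', 'it'],
          ['support', 'was', 'terrible']]
polarities = ['positive', 'negative', 'positive', 'negative']

w2v = Word2Vec(tweets, vector_size=25, window=3, min_count=1, seed=0, epochs=50)

def tweet_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.array([tweet_vector(t) for t in tweets])
clf = SVC(kernel='linear').fit(X, polarities)
print(clf.predict([tweet_vector(['love', 'the', 'support'])]))
```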
Pimas Oliver, Klampfl Stefan, Kohl Thomas, Kern Roman, Kröll Mark
2016
Patents and patent applications are important parts of a company’s intellectual property. Thus, companies put a lot of effort into designing and maintaining an internal structure for organizing their own patent portfolios, but also into keeping track of competitors’ patent portfolios. Yet, official classification schemas offered by patent offices (i) are often too coarse and (ii) are not mappable, for instance, to a company’s functions, applications, or divisions. In this work, we present a first step towards generating tailored classifications. To automate the generation process, we apply key term extraction and topic modelling algorithms to 2,131 publications of German patent applications. To infer categories, we apply topic modelling to the patent collection. We evaluate the mapping of the topics found via the Latent Dirichlet Allocation method to the classes present in the patent collection as assigned by the domain expert.
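To illustrate the topic modelling step, the sketch below fits an LDA model with scikit-learn on a few invented patent-like snippets; the corpus, the vocabulary handling and the number of topics are assumptions and do not reproduce the setup used for the patent collection above.

```python
# Illustrative sketch of inferring patent categories with LDA
# (scikit-learn's LatentDirichletAllocation); the toy documents and the
# number of topics are assumptions, not the corpus used in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

patents = [
    'battery cell cooling circuit for electric vehicles',
    'cooling system and battery housing arrangement',
    'image sensor pixel readout circuit',
    'readout electronics for cmos image sensors',
]

counts = CountVectorizer(stop_words='english').fit(patents)
X = counts.transform(patents)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f'topic {topic_id}: {top}')
print(lda.transform(X).argmax(axis=1))   # most likely topic per patent
```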
Ziak Hermann, Rexha Andi, Kern Roman
2016
This paper describes our system for the mining task of the Social Book Search Lab in 2016. The track consisted of two tasks: the classification of book request postings and the task of linking book identifiers with references mentioned within the text. For the classification task we used text mining features like n-grams and vocabulary size, but also included advanced features like the average number of spelling errors found within the text. Two datasets were provided by the organizers for this task, which were evaluated separately. The second task, the linking of book titles to a work identifier, was addressed by an approach based on lookup tables. For the first dataset of the classification task our approach was ranked third, following two baseline approaches of the organizers, with an accuracy of 91 percent. For the second dataset we achieved second place with an accuracy of 82 percent. Our approach secured the first place with an F-score of 33.50 for the second task.
Gursch Heimo, Ziak Hermann, Kern Roman
2015
The objective of the EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project is to develop a system that can automatically recommend helpful and novel content to knowledge workers. The EEXCESS system can be integrated into existing software user interfaces as plugins which extract topics and suggest relevant material automatically. This recommendation process simplifies the information gathering of knowledge workers. Recommendations can also be triggered manually via web frontends. EEXCESS hides the potentially large number of knowledge sources by semi- or fully automatically providing content suggestions. Hence, users only have to be able to use the EEXCESS system and not every source individually. For each user, relevant sources can be set or auto-selected individually. EEXCESS offers open interfaces, making it easy to connect additional sources and user program plugins.
Schulze Gunnar, Horn Christopher, Kern Roman
2015
This paper presents an approach for matching cell phone trajectories of low spatial and temporal accuracy to the underlying road network. In this setting, only the position of the base station involved in a signaling event and the timestamp are known, resulting in a possible error of several kilometers. No additional information, such as signal strength, is available. The proposed solution restricts the set of admissible routes to a corridor by estimating the area within which a user is allowed to travel. The size and shape of this corridor can be controlled by various parameters to suit different requirements. The computed area is then used to select road segments from an underlying road network, for instance OpenStreetMap. These segments are assembled into a search graph, which additionally takes the chronological order of observations into account. A modified Dijkstra algorithm is applied for finding admissible candidate routes, from which the best one is chosen. We performed a detailed evaluation of 2249 trajectories with an average sampling time of 260 seconds. Our results show that, in urban areas, on average more than 44% of each trajectory is matched correctly. In rural and mixed areas, this value increases to more than 55%. Moreover, an in-depth evaluation was carried out to determine the optimal values for the tunable parameters and their effects on the accuracy, matching ratio and execution time. The proposed matching algorithm facilitates the use of large volumes of cell phone data in Intelligent Transportation Systems, in which accurate trajectories are desirable.
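The corridor idea can be sketched as follows: only road segments whose nodes fall inside the estimated corridor are kept, and a shortest-path search is run on the remaining graph. The simple radius test, the toy road network and the coordinates below are assumptions; the paper's corridor geometry and its modified Dijkstra variant are more elaborate.

```python
# Hedged sketch of the corridor idea: keep only road segments inside an
# estimated corridor and run a shortest-path search on the remaining graph.
# The corridor test below is a plain radius check around observed positions,
# a simplification of the geometry described in the paper.
import math
import networkx as nx

def within_corridor(node_pos, observations, radius_km):
    return any(math.dist(node_pos, obs) <= radius_km for obs in observations)

def match_route(road_graph, positions, observations, radius_km, start, end):
    allowed = [n for n in road_graph
               if within_corridor(positions[n], observations, radius_km)]
    corridor = road_graph.subgraph(allowed)
    return nx.shortest_path(corridor, start, end, weight='length')

# toy road network with planar coordinates (km) and segment lengths
G = nx.Graph()
pos = {'a': (0, 0), 'b': (1, 0), 'c': (2, 0), 'd': (1, 5)}
G.add_edge('a', 'b', length=1.0)
G.add_edge('b', 'c', length=1.0)
G.add_edge('a', 'd', length=5.1)
G.add_edge('d', 'c', length=5.1)

cells = [(0.2, 0.1), (1.8, 0.2)]       # coarse base-station observations
print(match_route(G, pos, cells, radius_km=2.0, start='a', end='c'))  # ['a', 'b', 'c']
```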
Ziak Hermann, Kern Roman
2015
Cross-vertical aggregated search is a special form of meta search, where multiple search engines from different domains and with varying behaviour are combined to produce a single search result for each query. Such a setting poses a number of challenges, among them the question of how to best evaluate the quality of the aggregated search results. We devised an evaluation strategy together with an evaluation platform in order to conduct a series of experiments. In particular, we are interested in whether pseudo relevance feedback helps in such a scenario. Therefore we implemented a number of pseudo relevance feedback techniques based on knowledge bases, where the knowledge base is either Wikipedia or a combination of the underlying search engines themselves. While conducting the evaluations we gathered a number of qualitative and quantitative results and gained insights on how different users compare the quality of search result lists. With regard to the pseudo relevance feedback we found that using Wikipedia as knowledge base generally provides a benefit, except for entity-centric queries, which target single persons or organisations. Our results will help steer the development of cross-vertical aggregated search engines and will also help to guide large-scale evaluation strategies, for example using crowd sourcing techniques.
Pimas Oliver, Kröll Mark, Kern Roman
2015
Our system for the PAN 2015 authorship verification challenge is based upon a two-step pre-processing pipeline. In the first step we extract different features that observe stylometric properties, grammatical characteristics and pure statistical features. In the second step of our pre-processing we merge all those features into a single meta feature space. We train an SVM classifier on the generated meta features to verify the authorship of an unseen text document. We report the results from the final evaluation as well as on the training datasets.
Rubien Raoul, Ziak Hermann, Kern Roman
2015
Underspecified search queries can be performed via result list diversification approaches, which are often computationally complex and require longer response times. In this paper, we explore an alternative, and more efficient, way to diversify the result list based on query expansion. To that end, we used a knowledge base pseudo-relevance feedback algorithm. We compared our algorithm to IA-Select, a state-of-the-art diversification method, using its intent-aware version of the NDCG (Normalized Discounted Cumulative Gain) metric. The results indicate that our approach can guarantee a similar extent of diversification as IA-Select. In addition, we showed that the supported query language of the underlying search engines plays an important role in query expansion based diversification. Therefore, query expansion may be an alternative when result diversification is not feasible, for example in federated search systems where latency and the quantity of handled search results are critical issues.
Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman
2015
The overwhelming majority of scientific publications are authored by multiple persons; yet, bibliographic metrics are only assigned to individual articles as single entities. In this paper, we aim at a more fine-grained analysis of scientific authorship. We therefore adapt a text segmentation algorithm to identify potential author changes within the main text of a scientific article, which we obtain by using existing PDF extraction techniques. To capture stylistic changes in the text, we employ a number of stylometric features. We evaluate our approach on a small subset of PubMed articles consisting of an approximately equal number of research articles written by a varying number of authors. Our results indicate that the more authors an article has the more potential author changes are identified. These results can be considered as an initial step towards a more detailed analysis of scientific authorship, thereby extending the repertoire of bibliometrics.
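As a loose illustration of the general idea (not the segmentation algorithm used in the paper), the sketch below computes shallow stylometric features over a sliding sentence window and flags positions where the style vector jumps; the features and the threshold are invented for this example.

```python
# Hedged sketch of the general idea: compute shallow stylometric features per
# sentence window and flag positions where the style vector changes sharply.
# Thresholds and features are illustrative, not the ones used in the paper.
import numpy as np

def style_vector(text):
    words = text.split()
    return np.array([
        np.mean([len(w) for w in words]),        # average word length
        len(set(words)) / max(len(words), 1),    # type-token ratio
        text.count(',') / max(len(words), 1),    # comma rate
    ])

def potential_author_changes(sentences, window=3, threshold=1.0):
    vectors = [style_vector(' '.join(sentences[i:i + window]))
               for i in range(len(sentences) - window + 1)]
    changes = []
    for i in range(1, len(vectors)):
        if np.linalg.norm(vectors[i] - vectors[i - 1]) > threshold:
            changes.append(i + window - 1)       # sentence index of the jump
    return changes

doc = ['Short claim.', 'Another short claim.', 'Brief note.',
       'In contrast, this elaborately constructed, heavily punctuated sentence, '
       'with numerous clauses, reads rather differently.',
       'It continues in the same ornate, verbose register, naturally.']
print(potential_author_changes(doc))   # [3] – style shifts at the elaborate sentence
```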
Klampfl Stefan, Kern Roman
2015
Scholarly publishing increasingly requires automated systems that semantically enrich documents in order to support management and quality assessment of scientific output. However, contextual information, such as the authors' affiliations, references, and funding agencies, is typically hidden within PDF files. To access this information we have developed a processing pipeline that analyses the structure of a PDF document incorporating a diverse set of machine learning techniques. First, unsupervised learning is used to extract contiguous text blocks from the raw character stream as the basic logical units of the article. Next, supervised learning is employed to classify blocks into different meta-data categories, including authors and affiliations. Then, a set of heuristics is applied to detect the reference section at the end of the paper and segment it into individual reference strings. Sequence classification is then utilised to categorise the tokens of individual references to obtain information such as the journal and the year of the reference. Finally, we make use of named entity recognition techniques to extract references to research grants, funding agencies, and EU projects. Our system is modular in nature. Some parts rely on models learnt on training data, and the overall performance scales with the quality of these data sets.
Horn Christopher, Kern Roman
2015
In this paper, we propose an approach to deriving the public transportation timetables of a region (i.e. country) based on (i) large-scale, non-GPS cell phone data and (ii) a dataset containing geographic information of public transportation stations. The presented algorithm is designed to work with movement data which is scarce and has a low spatial accuracy but exists in vast amounts (large-scale). Since only aggregated statistics are used, our algorithm copes well with anonymized data. Our evaluation shows that 89% of the departure times of popular train connections are correctly recalled with an allowed deviation of 5 minutes. The timetable can be used as a feature for transportation mode detection to separate public from private transport when no public timetable is available.
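The aggregation idea can be illustrated with a small sketch: departure events at a station are binned by minute of day and the peaks of the resulting histogram are read off as timetable entries. The synthetic counts and the peak rule below are assumptions, not the published algorithm.

```python
# Hedged sketch of recovering departure times from aggregated, anonymised
# counts: bin observed station departures by minute of day and keep the
# peaks. The synthetic counts and the peak rule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
minute_counts = rng.poisson(1.0, size=24 * 60)        # background noise
for departure in (7 * 60 + 15, 12 * 60 + 15, 17 * 60 + 15):   # hidden timetable
    minute_counts[departure] += 40                     # bursts of departing trips

def detect_departures(counts, min_count=20):
    peaks = [m for m in range(1, len(counts) - 1)
             if counts[m] >= min_count
             and counts[m] >= counts[m - 1] and counts[m] >= counts[m + 1]]
    return [f'{m // 60:02d}:{m % 60:02d}' for m in peaks]

print(detect_departures(minute_counts))   # ['07:15', '12:15', '17:15']
```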
Kern Roman, Frey Matthias
2015
Table recognition and table extraction are important tasks in information extraction, especially in the domain of scholarly communication. In this domain tables are commonplace and contain valuable information. Many different automatic approaches for table recognition and extraction exist. Common to many of these approaches is the need for ground truth datasets to train algorithms or to evaluate the results. In this paper we present the PDF Table Annotator, a web-based tool for annotating elements and regions in PDF documents, in particular tables. The annotated data is intended to serve as ground truth useful to machine learning algorithms for detecting table regions and table structure. To make the task of manual table annotation as convenient as possible, the tool is designed to allow an efficient annotation process that may span multiple sessions by multiple users. An evaluation is conducted where we compare our tool to three alternative ways of creating ground truth of tables in documents. Here we found that our tool overall provides an efficient and convenient way to annotate tables. In addition, our tool is particularly suitable for complex table structures, where it provided the lowest annotation time and the highest accuracy. Furthermore, our tool allows annotating tables following a logical or a functional model. Given that, by the use of our tool, ground truth datasets for table recognition and extraction are easier to produce, the quality of automatic table extraction should greatly benefit.
Stegmaier Florian, Seifert Christin, Kern Roman, Höfler Patrick, Bayerl Sebastian, Granitzer Michael, Kosch Harald, Lindstaedt Stefanie , Mutlu Belgin, Sabol Vedran, Schlegel Kai
2014
Research depends to a large degree on the availability and quality of primary research data, i.e., data generated through experiments and evaluations. While the Web in general and Linked Data in particular provide a platform and the necessary technologies for sharing, managing and utilizing research data, an ecosystem supporting those tasks is still missing. The vision of the CODE project is the establishment of a sophisticated ecosystem for Linked Data. Here, the extraction of knowledge encapsulated in scientific research papers along with its public release as Linked Data serves as the major use case. Further, Visual Analytics approaches empower end users to analyse, integrate and organize data. During these tasks, specific Big Data issues are present.
Kern Roman, Zechner Mario, Granitzer Michael
2011
Author disambiguation is a prerequisite for utilizing bibliographic metadata in citation analysis. Automatic disambiguation algorithms mostly rely on cluster-based disambiguation strategies for identifying unique authors given their names and publications. However, most approaches rely on knowing the correct number of unique authors a-priori, which is rarely the case in real world settings. In this publication we analyse cluster-based disambiguation strategies and develop a model selection method to estimate the number of distinct authors based on co-authorship networks. We show that, given clean textual features, the developed model selection method provides accurate guesses of the number of unique authors.
Kern Roman, Granitzer Michael, Muhr M.
2010
Word sense induction and discrimination (WSID) identifies the senses of an ambiguous word and assigns instances of this word to one of these senses. We have built a WSID system that exploits syntactic and semantic features based on the results of a natural language parser component. To achieve high robustness and good generalization capabilities, we designed our system to work on a restricted, but grammatically rich set of features. Based on the results of the evaluations, our system provides a promising performance and robustness.
Kern Roman, Granitzer Michael, Muhr M.
2010
Cluster label quality is crucial for browsing topic hierarchies obtained via document clustering. Intuitively, the hierarchical structure should influence the labeling accuracy. However, most labeling algorithms ignore such structural properties and therefore, the impact of hierarchical structure on the labeling accuracy is yet unclear. In our work we integrate hierarchical information, i.e. sibling and parent-child relations, in the cluster labeling process. We adapt standard labeling approaches, namely Maximum Term Frequency, Jensen-Shannon Divergence, χ2 Test, and Information Gain, to make use of those relationships and evaluate their impact on 4 different datasets, namely the Open Directory Project, Wikipedia, TREC Ohsumed and the CLEF-IP European Patent dataset. We show that hierarchical relationships can be exploited to increase labeling accuracy, especially on high-level nodes.
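As a hedged sketch of one such adaptation (a simplification, not the exact formulation evaluated in the paper), the snippet below ranks a cluster's terms by their contribution to the Jensen-Shannon divergence between the cluster's term distribution and that of its siblings.

```python
# Hedged sketch: rank a cluster's terms by how much its term distribution
# diverges from the combined distribution of its sibling clusters
# (per-term Jensen-Shannon divergence contribution). Simplified illustration.
import numpy as np
from collections import Counter

def term_distribution(docs, vocabulary):
    counts = Counter(w for doc in docs for w in doc.split())
    total = sum(counts[w] for w in vocabulary) or 1
    return np.array([counts[w] / total for w in vocabulary])

def label_terms(cluster_docs, sibling_docs, top_k=3):
    vocab = sorted({w for doc in cluster_docs + sibling_docs for w in doc.split()})
    p = term_distribution(cluster_docs, vocab)
    q = term_distribution(sibling_docs, vocab)
    m = 0.5 * (p + q)
    with np.errstate(divide='ignore', invalid='ignore'):
        contrib = np.where(p > 0, 0.5 * p * np.log2(p / m), 0.0)  # per-term JSD share
    return [vocab[i] for i in np.argsort(contrib)[::-1][:top_k]]

cluster = ['neural network training', 'deep neural models']
siblings = ['database query optimisation', 'query planning in databases']
print(label_terms(cluster, siblings))   # e.g. ['neural', 'deep', 'models']
```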
Neidhart T., Granitzer Michael, Kern Roman, Weichselbraun A., Wohlgenannt G., Scharl A., Juffinger A.
2009
Lindstaedt Stefanie , Pammer-Schindler Viktoria, Mörzinger Roland, Kern Roman, Mülner Helmut, Wagner Claudia
2008
Imagine you are a member of an online social system and want to upload a picture into the community pool. In current social software systems, you can probably tag your photo, share it or send it to a photo printing service, and do much more. The system creates around you a space full of pictures, other interesting content (descriptions, comments) and full of users as well. The one thing current systems do not do is understand what your pictures are about. We present here a collection of functionalities that, put together to be consumed by a tag recommendation system for pictures, make a step in that direction. We use the data richness inherent in social online environments for recommending tags by analysing different aspects of the same data (text, visual content and user context). We also give an assessment of the quality of the tags thus recommended.