Chiancone Alessandro, Cuder Gerald, Geiger Bernhard, Harzl Annemarie, Tanzer Thomas, Kern Roman
2019
This paper presents a hybrid model for the prediction of magnetostriction in power transformers by leveraging the strengths of a data-driven approach and a physics-based model. Specifically, a non-linear physics-based model for magnetostriction as a function of the magnetic field is employed, the parameters of which are estimated as linear combinations of electrical coil measurements and coil dimensions. The model is validated in a practical scenario with coil data from two different suppliers, showing that the proposed approach captures the different magnetostrictive properties of the two suppliers and provides an estimation of magnetostriction in agreement with the measurement system in place. It is argued that the combination of a non-linear physics-based model with few parameters and a linear data-driven model to estimate these parameters is attractive both in terms of model accuracy and because it allows training the data-driven part with comparatively small datasets.
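A minimal sketch of this two-stage idea, assuming a hypothetical saturating curve as the physics-based model and scikit-learn for the linear, data-driven part; the paper's actual model form, parameters, and coil features are not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def magnetostriction(H, lam_sat, k):
    # hypothetical saturating nonlinearity standing in for the
    # physics-based magnetostriction curve with two parameters
    return lam_sat * np.tanh(k * H) ** 2

# Stage 1: fit the linear data-driven part, mapping coil features
# (electrical measurements, dimensions) to the physics parameters.
X = np.random.rand(20, 5)        # placeholder coil features
theta = np.random.rand(20, 2)    # placeholder per-coil fitted parameters
param_model = LinearRegression().fit(X, theta)

# Stage 2: predict the physics parameters for a new coil, then
# evaluate the nonlinear model over the magnetic field range.
lam_sat_hat, k_hat = param_model.predict(X[:1])[0]
H_grid = np.linspace(0.0, 2.0, 100)
lam_pred = magnetostriction(H_grid, lam_sat_hat, k_hat)
```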
Santos Tiago, Schrunner Stefan, Geiger Bernhard, Pfeiler Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2019
Semiconductor manufacturing is a highly innovative branch of industry, where a high degree of automation has already been achieved. For example, devices tested to be outside of their specifications in the electrical wafer test are automatically scrapped. In this paper, we go one step further and analyze test data of devices still within the limits of the specification, by exploiting the information contained in the analog wafermaps. To that end, we propose two feature extraction approaches with the aim of detecting patterns in the wafer test dataset. Such patterns might indicate the onset of critical deviations in the production process. The studied approaches are: 1) classical image processing and restoration techniques in combination with sophisticated feature engineering and 2) a data-driven deep generative model. The two approaches are evaluated on both a synthetic and a real-world dataset. The synthetic dataset has been modeled based on real-world patterns and characteristics. We found both approaches to provide similar overall evaluation metrics. Our in-depth analysis helps to choose one approach over the other, with data availability as the major aspect, alongside available computing power and the required interpretability of the results.
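As an illustration of the first approach, a minimal feature-engineering sketch, assuming a wafermap arrives as a 2-D array of analog test values with NaN outside the wafer; the paper's concrete feature set is not specified here:

```python
import numpy as np

def wafermap_features(wafer: np.ndarray) -> dict:
    """Simple statistics plus spatial features that respond to
    systematic patterns such as radial or linear drift."""
    mask = ~np.isnan(wafer)
    vals = wafer[mask]
    rows, cols = np.nonzero(mask)
    # distance of each die from the wafer centre exposes radial patterns
    r = np.hypot(rows - rows.mean(), cols - cols.mean())
    return {
        "mean": vals.mean(),
        "std": vals.std(),
        # a large magnitude hints at a centre-to-edge deviation pattern
        "radial_corr": np.corrcoef(vals, r)[0, 1],
        # row/column correlations capture linear drift across the wafer
        "row_corr": np.corrcoef(vals, rows)[0, 1],
        "col_corr": np.corrcoef(vals, cols)[0, 1],
    }

# synthetic wafermap: random values on a disc, NaN outside
wafer = np.random.rand(64, 64)
yy, xx = np.indices(wafer.shape)
wafer[np.hypot(yy - 31.5, xx - 31.5) > 32] = np.nan
print(wafermap_features(wafer))
```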
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Catchphrases such as “Industry 4.0” and “smart manufacturing” reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaborating across existing department and process boundaries. Converting the process input, internal, and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management, which is formalised through approaches such as Six Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be improved continuously.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
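A minimal sketch of a DDPG-style actor for this setting, assuming a telemetry vector (speed, track sensors, wheel spin, etc.) as observation and a bounded continuous action (steering, throttle/brake); the ten DDPG variants and their exact network configurations are not reproduced here:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy mapping telemetry to continuous actions."""
    def __init__(self, obs_dim: int = 29, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

actor = Actor()
telemetry = torch.randn(1, 29)              # placeholder telemetry frame
steering, throttle = actor(telemetry)[0].tolist()
```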
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie, Kern Roman, Lex Elisabeth
2019
Toller Maximilian, Santos Tiago, Kern Roman
2019
Season length estimation is the task of identifying the number of observations in the dominant repeating pattern of seasonal time series data. As such, it is a common pre-processing task crucial for various downstream applications. Inferring season length from a real-world time series is often challenging due to phenomena such as slightly varying period lengths and noise. These issues may, in turn, lead practitioners to dedicate considerable effort to the preprocessing of time series data, since existing approaches either require dedicated parameter-tuning or their performance is heavily domain-dependent. Hence, to address these challenges, we propose SAZED: spectral and average autocorrelation zero distance density. SAZED is a versatile ensemble of multiple, specialized time series season length estimation approaches. The combination of various base methods, selected with respect to domain-agnostic criteria, and a novel seasonality isolation technique allows broad applicability to real-world time series of varied properties. Further, SAZED is theoretically grounded and parameter-free, with a computational complexity of O(n log n), which makes it applicable in practice. In our experiments, SAZED was statistically significantly better than every other method on at least one dataset. The datasets we used for the evaluation consist of time series data from various real-world domains, sterile synthetic test cases, and synthetic data that were designed to be seasonal and yet have no finite statistical moments of any order.
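In the spirit of SAZED's autocorrelation-based components, a minimal season length estimator with the same O(n log n) complexity; the actual ensemble, its base methods, and the seasonality isolation technique are not reproduced here:

```python
import numpy as np

def estimate_season_length(x: np.ndarray) -> int:
    """Simple period guess from the autocorrelation function (ACF)."""
    x = x - x.mean()
    n = len(x)
    # linear autocorrelation via zero-padded FFT: O(n log n)
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= acf[0]
    # skip the initial positive run around lag 0, then take the global
    # maximum of the remaining ACF as the season length estimate
    neg = np.where(acf < 0)[0]
    if len(neg) == 0 or neg[0] >= n // 2:
        return 0  # no seasonality detected
    start = int(neg[0])
    return start + int(np.argmax(acf[start:n // 2]))

t = np.arange(400)
x = np.sin(2 * np.pi * t / 25) + 0.2 * np.random.randn(400)
print(estimate_season_length(x))  # expected near 25
```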
Toller Maximilian, Geiger Bernhard, Kern Roman
2019
Distance-based classification is among the most competitive classification methods for time series data. The most critical component of distance-based classification is the selected distance function. Past research has proposed various distance metrics or measures dedicated to particular aspects of real-world time series data, yet there is an important aspect that has not been considered so far: robustness against arbitrary data contamination. In this work, we propose a novel distance metric that is robust against arbitrarily “bad” contamination and has a worst-case computational complexity of O(n log n). We formally argue why our proposed metric is robust, and demonstrate in an empirical evaluation that the metric yields competitive classification accuracy when applied in k-Nearest Neighbor time series classification.
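The metric itself is not given in the abstract; as a hedged stand-in, the sketch below uses the median of coordinate-wise absolute differences, a simple O(n log n) dissimilarity that a bounded fraction of contaminated points cannot inflate arbitrarily (unlike the Euclidean distance). Note that, unlike the proposed metric, this stand-in need not satisfy the triangle inequality:

```python
import numpy as np

def median_distance(x: np.ndarray, y: np.ndarray) -> float:
    # the median of |x_i - y_i| ignores up to half of the coordinates,
    # so a few wildly contaminated points cannot dominate the result;
    # sorting inside np.median gives the O(n log n) worst case
    return float(np.median(np.abs(x - y)))

def nn_predict(train_X, train_y, query):
    # 1-nearest-neighbor classification with the robust dissimilarity
    d = [median_distance(x, query) for x in train_X]
    return train_y[int(np.argmin(d))]

train_X = np.random.randn(10, 100)   # placeholder time series
train_y = np.array([0, 1] * 5)       # placeholder labels
print(nn_predict(train_X, train_y, np.random.randn(100)))
```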
Winter Kevin, Kern Roman
2019
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose, our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish respectively.
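A minimal sketch of the described architecture, assuming pretrained word embeddings of dimension 300 and PyTorch for illustration; the submitted system's exact filter counts, kernel sizes, and dilation rates are not reproduced here:

```python
import torch
import torch.nn as nn

class HateSpeechCNN(nn.Module):
    """Parallel traditional and dilated 1-D convolutions over embeddings."""
    def __init__(self, emb_dim: int = 300, n_filters: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # dilation widens the receptive field without extra parameters
        self.dilated = nn.Conv1d(emb_dim, n_filters, kernel_size=3,
                                 padding=2, dilation=2)
        self.out = nn.Linear(2 * n_filters, 1)

    def forward(self, x):            # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)        # Conv1d expects (batch, channels, seq)
        h = torch.cat([torch.relu(self.conv(x)),
                       torch.relu(self.dilated(x))], dim=1)
        h = h.max(dim=2).values      # global max pooling over time
        return torch.sigmoid(self.out(h))  # P(hateful)

model = HateSpeechCNN()
tweet = torch.randn(1, 50, 300)      # one embedded tweet, 50 tokens
print(model(tweet).item())
```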
Geiger Bernhard, Schrunner Stefan, Kern Roman
2019
Schrunner and Geiger have contributed equally to this work.
Lovric Mario, Molero Perez Jose Manuel, Kern Roman
2019
The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge of Python programming and an understanding of resilient distributed datasets (RDDs). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity, and calculating molecular descriptors. The source code for the PySpark-RDKit implementation is provided. The use cases showed that Spark provides reasonable scalability depending on the use case and can be a suitable choice for datasets too big to be processed on current low-end workstations.
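A minimal sketch of the fingerprint-similarity use case, assuming SMILES strings as input; cluster configuration is omitted, and the substructure and descriptor use cases follow the same map pattern:

```python
from pyspark import SparkContext
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

sc = SparkContext(appName="rdkit-similarity")

smiles = ["CCO", "CCN", "c1ccccc1", "CC(=O)O"]   # placeholder molecules
query_fp = AllChem.GetMorganFingerprintAsBitVect(
    Chem.MolFromSmiles("CCO"), 2)                # reference fingerprint

def tanimoto(smi):
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        return (smi, None)   # unparsable SMILES are kept but flagged
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2)
    return (smi, DataStructs.TanimotoSimilarity(query_fp, fp))

# fingerprints are computed in parallel across the cluster partitions
similarities = sc.parallelize(smiles).map(tanimoto).collect()
print(similarities)
```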