Lovric Mario, Antunović Mario, Šunić Iva, Vuković Matej, Kecorius Simon, Kröll Mark, Bešlić Ivan, Godec Ranka, Pehnec Gordana, Geiger Bernhard, Grange Stuart K, Šimić Iva
2022
In this paper, the authors investigated changes in mass concentrations of particulate matter (PM) during the Coronavirus Disease 2019 (COVID-19) lockdown. Daily samples of the PM1, PM2.5 and PM10 fractions were measured at an urban background sampling site in Zagreb, Croatia, from 2009 to late 2020. For the purpose of meteorological normalization, the mass concentrations were fed alongside meteorological and temporal data to Random Forest (RF) and LightGBM (LGB) models tuned by Bayesian optimization. The models' predictions were subsequently de-weathered by meteorological normalization using repeated random resampling of all predictive variables except the trend variable. Three pollution periods in 2020 were examined in detail: January and February as the pre-lockdown period, April as the lockdown period, and June and July as the "new normal". An evaluation using the normalized mass concentrations and analysis of variance (ANOVA) was conducted. The results showed no significant differences for PM1, PM2.5 and PM10 in April 2020 compared to the same period in 2018 and 2019, and no significant changes for the "new normal" either. The results thus indicate that the reduction in mobility during the COVID-19 lockdown in Zagreb, Croatia, did not significantly affect particulate matter concentrations in the long term.
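As an illustration of the de-weathering step described above, the following is a minimal Python sketch of meteorological normalization by repeated random resampling, assuming a scikit-learn RandomForestRegressor stands in for the tuned RF/LGB models; the column names (including date_unix as the trend variable) and the number of resamples are assumptions, not the paper's actual configuration.

```python
# Minimal sketch: train on meteorological + temporal predictors, then average
# each day's prediction over repeated random resamples of all predictors
# except the trend variable, removing short-term weather influence.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def normalize(df: pd.DataFrame, target: str, trend_col: str = "date_unix",
              n_samples: int = 200, seed: int = 0) -> pd.Series:
    rng = np.random.default_rng(seed)
    predictors = [c for c in df.columns if c != target]
    met_cols = [c for c in predictors if c != trend_col]

    model = RandomForestRegressor(n_estimators=300, random_state=seed)
    model.fit(df[predictors], df[target])

    preds = np.zeros(len(df))
    for _ in range(n_samples):
        resampled = df[predictors].copy()
        # Shuffle the weather/temporal columns; keep the trend column intact.
        idx = rng.integers(0, len(df), len(df))
        resampled[met_cols] = df[met_cols].values[idx]
        preds += model.predict(resampled)
    return pd.Series(preds / n_samples, index=df.index, name=f"{target}_normalized")
```

Averaging over many resamples leaves only the variation attributable to the trend, which is what period comparisons such as the ANOVA above then operate on.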
Reichel Robert, Gursch Heimo, Kröll Mark
2022
The trend in health care toward replacing paper records with digital formats lays the foundation for the electronic processing of health data. This article describes the technical fundamentals of the semantic preparation and analysis of textual content in the medical domain. The particular properties of medical texts make extracting and aggregating relevant information more challenging than in other application areas. In addition, there is a need for specialized methods, especially for the anonymization and pseudonymization of personal data. Nevertheless, the use of computational linguistics methods, combined with advancing digitization, holds enormous potential to support health-care personnel.
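As a loose illustration of the pseudonymization requirement mentioned above, here is a minimal rule-based sketch; the regular expressions and placeholder scheme are illustrative assumptions, and a production system would combine trained named-entity recognition with curated rule sets.

```python
# Minimal sketch: replace dates and titled names in a clinical note with
# stable placeholders so the text remains analyzable (pseudonymization,
# since the mapping is kept and could be reversed under access control).
import re

def pseudonymize(text: str):
    mapping = {}

    def repl(match, kind):
        key = match.group(0)
        if key not in mapping:
            mapping[key] = f"<{kind}_{len(mapping) + 1}>"
        return mapping[key]

    # Dates such as 03.05.2021, common in German-language records.
    text = re.sub(r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b", lambda m: repl(m, "DATE"), text)
    # Titles followed by a capitalized surname, e.g. "Dr. Huber".
    text = re.sub(r"\b(?:Dr|Prof)\.\s+[A-ZÄÖÜ][a-zäöüß]+", lambda m: repl(m, "NAME"), text)
    return text, mapping

note = "Dr. Huber untersuchte den Patienten am 03.05.2021."
print(pseudonymize(note))
# ('<NAME_2> untersuchte den Patienten am <DATE_1>.', {...})
```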
Lovric Mario, Šimić Iva, Godec Ranka, Kröll Mark, Bešlić Ivan
2020
Narrow city streets surrounded by tall buildings favor a general "canyon" effect in which pollutants accumulate strongly in a relatively small area because of weak or nonexistent ventilation. In this study, levels of nitrogen dioxide (NO2) as well as elemental carbon (EC) and organic carbon (OC) mass concentrations in PM10 particles were determined to compare seasons and years. Daily samples were collected at one such street canyon location in the center of Zagreb in 2011, 2012 and 2013. By applying machine learning methods, we showed seasonal and yearly variations of the mass concentrations of the carbon species in PM10 and of NO2, as well as their covariations and relationships. Furthermore, we compared the predictive capabilities of five regressors (Lasso, Random Forest, AdaBoost, Support Vector Machine and Partial Least Squares), with Lasso regression being the overall best-performing algorithm. By showing the feature importance for each model, we revealed the true predictors per target. These measurements and this application of machine learning to pollutant data were carried out for the first time at a street canyon site in the city of Zagreb, Croatia.
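A minimal sketch of such a five-regressor comparison with scikit-learn might look as follows; the hyperparameters, the feature matrix X and the target y are placeholders, not the study's actual setup.

```python
# Minimal sketch: evaluate five regressors on the same feature matrix with
# cross-validation, mirroring the comparison described above.
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

models = {
    "Lasso": Lasso(alpha=0.1),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "SVM": SVR(kernel="rbf"),
    "PLS": PLSRegression(n_components=2),
}

def compare(X, y):
    # X: meteorological and co-pollutant features; y: e.g. OC mass concentration.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name:15s} R2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Feature importances (or coefficients, for the linear models) of the fitted estimators would then reveal the predictors per target, as described above.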
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2018
The goal of our work is inspired by the task of associating segments of text with their real authors. In this work, we focus on analyzing the way humans judge different writing styles. This analysis can help to better understand this process and thus to simulate/mimic such behavior accordingly. Unlike the majority of work in this field (i.e., authorship attribution, plagiarism detection, etc.), which uses content features, we focus only on the stylometric, i.e. content-agnostic, characteristics of authors. We therefore conducted two pilot studies to determine whether humans can identify authorship among documents with high content similarity. The first was a quantitative experiment involving crowd-sourcing, while the second was a qualitative one carried out by the authors of this paper. Both studies confirmed that this task is quite challenging. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis of the study results. In the first experiment, we compared the decisions against content features and stylometric features; in the second, the evaluators described the process and the features on which their judgment was based. The findings of our detailed analysis could (i) help to improve algorithms such as automatic authorship attribution and plagiarism detection, (ii) assist forensic experts or linguists in creating profiles of writers, (iii) support intelligence applications in analyzing aggressive and threatening messages and (iv) help check editorial conformity with, for instance, a journal-specific writing style.
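For illustration, here is a minimal sketch of content-agnostic features of the kind contrasted with content features above; the concrete feature set is an assumption, not the one used in the studies.

```python
# Minimal sketch: a few simple stylometric features that characterize *how*
# a text is written rather than *what* it is about.
import re
from statistics import mean

def stylometric_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\b\w+\b", text)
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences),
        "avg_word_len": mean(len(w) for w in words),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
        "comma_rate": text.count(",") / len(words),
        "uppercase_rate": sum(w[0].isupper() for w in words) / len(words),
    }

print(stylometric_features("The style, not the content, gives the author away."))
```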
Bassa Akim, Kröll Mark, Kern Roman
2018
Open Information Extraction (OIE) is the task of extracting relations from text without the need of domain-specific training data. Currently, most of the research on OIE is devoted to the English language, but little or no research has been conducted on other languages, including German. We tackled this problem and present GerIE, an OIE parser for the German language. Therefore we started by surveying the available literature on OIE with a focus on concepts which may also apply to the German language. Our system is built upon the output of a dependency parser, on which a number of hand-crafted rules are executed. For the evaluation we created two dedicated datasets, one derived from news articles and one based on texts from an encyclopedia. Our system achieves F-measures of up to 0.89 for sentences that have been correctly preprocessed.
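For illustration, here is a minimal sketch of rule-based extraction over a dependency parse in the spirit of GerIE; the single subject-verb-object rule and the use of spaCy's German model are assumptions standing in for the paper's hand-crafted rule set.

```python
# Minimal sketch: extract (subject, verb, object) triples from the dependency
# parse of a German sentence. Requires spaCy and its German model:
#   python -m spacy download de_core_news_sm
import spacy

nlp = spacy.load("de_core_news_sm")

def extract_triples(sentence: str):
    doc = nlp(sentence)
    triples = []
    for token in doc:
        # Rule: a subject (sb) and an object (oa/da, TIGER labels) attached
        # to the same verb form a triple.
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ == "sb"]
            objects = [c for c in token.children if c.dep_ in ("oa", "da")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.text, o.text))
    return triples

print(extract_triples("Die Katze frisst die Maus."))
# e.g. [('Katze', 'frisst', 'Maus')]
```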
Fukazawa Yusuke, Kröll Mark, Strohmaier M., Ota Jun
2016
Task-models concretize general requests to support users in real-world scenarios. In this paper, we present an IR-based algorithm (IRTML) to automate the construction of hierarchically structured task-models. In contrast to other approaches, our algorithm is capable of assigning general tasks closer to the top and specific tasks closer to the bottom. Connections between tasks are established by extending Turney's PMI-IR measure. To evaluate our algorithm, we manually created a ground truth in the health-care domain consisting of 14 domains. We compared the IRTML algorithm to three state-of-the-art algorithms for generating hierarchical structures, i.e., BiSection K-means, Formal Concept Analysis and Bottom-Up Clustering. Our results show that IRTML achieves a 25.9% taxonomic overlap with the ground truth, a 32.0% improvement over the compared algorithms.
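For context, Turney's PMI-IR estimates the association between two phrases from (search-engine) hit counts; a minimal sketch of the underlying formula, with made-up counts, follows. How IRTML extends the measure is not spelled out in this abstract.

```python
# Minimal sketch of PMI-IR: pointwise mutual information estimated from raw
# co-occurrence counts. Turney's original formulation obtained the counts
# from search-engine queries; the numbers below are invented for illustration.
import math

def pmi_ir(hits_xy: int, hits_x: int, hits_y: int, total: int) -> float:
    """PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), with p(.) = hits / total."""
    p_xy = hits_xy / total
    p_x, p_y = hits_x / total, hits_y / total
    return math.log2(p_xy / (p_x * p_y))

# Example: how strongly two task phrases co-occur in a document collection.
print(pmi_ir(hits_xy=120, hits_x=1500, hits_y=900, total=1_000_000))
```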
Granitzer Michael, Rath Andreas S., Kröll Mark, Ipsmiller D., Devaurs Didier, Weber Nicolas, Lindstaedt Stefanie , Seifert C.
2009
Increasing the productivity of a knowledge worker via intelligent applications requires the identification of a user's current work task, i.e., the current work context a user resides in. In this work we present and evaluate machine learning based work task detection methods. By viewing a work task as a sequence of digital interaction patterns of mouse clicks and key strokes, we present (i) a methodology for recording those user interactions and (ii) an in-depth analysis of supervised classification models for classifying work tasks in two different scenarios: a task-centric scenario and a user-centric scenario. We analyze different supervised classification models, feature types and feature selection methods on a laboratory as well as a real-world data set. Results show satisfactory accuracy and high user acceptance when relatively simple types of features are used.
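A minimal sketch of how such interaction logs could be turned into features for a supervised classifier follows; the event format, the feature set and the choice of classifier are assumptions for illustration.

```python
# Minimal sketch: aggregate a window of interaction events (mouse clicks,
# key strokes) into a feature vector and classify the current work task.
from sklearn.ensemble import RandomForestClassifier

def window_features(events: list[dict]) -> list[float]:
    clicks = [e for e in events if e["type"] == "click"]
    keys = [e for e in events if e["type"] == "key"]
    duration = (events[-1]["t"] - events[0]["t"]) or 1.0  # seconds, guard /0
    return [
        len(clicks) / duration,                  # click rate
        len(keys) / duration,                    # typing rate
        len({e.get("window") for e in events}),  # distinct application windows
    ]

# X: one feature vector per labeled window; y: task labels ("writing", ...).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X_train, y_train); clf.predict(X_new)
```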
Burgsteiner H., Kröll Mark, Leopold A., Steinbauer G.
2007
The prediction of time series is an important task in finance, economics, object tracking, state estimation and robotics. Prediction is in general either based on a well-known mathematical description of the system behind the time series or learned from previously collected time series. In this work we introduce a novel approach to learning predictions of real-world time series such as object trajectories in robotics. In a sequence of experiments we evaluate whether a liquid state machine in combination with a supervised learning algorithm can be used to predict ball trajectories from input data coming from a video camera mounted on a robot participating in the RoboCup. The pre-processed video data is fed into a recurrent spiking neural network. Connections to some output neurons are trained by linear regression to predict the position of the ball at various time steps ahead. The main advantages of this approach are that, due to the nonlinear projection of the input data into a high-dimensional space, simple learning algorithms can be used; that the liquid state machine provides temporal memory capabilities; and that this kind of computation appears biologically more plausible than conventional methods for prediction. Our results support the idea that learning with a liquid state machine is a generic, powerful tool for prediction.
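A minimal sketch of the reservoir-plus-linear-readout idea follows: the paper uses a spiking liquid state machine, so the non-spiking echo-state-style reservoir below is a deliberate substitution that keeps the key property that only the linear readout is trained.

```python
# Minimal sketch: a fixed random recurrent reservoir projects the input stream
# into a high-dimensional state space; only a linear readout is trained
# (by least squares) to predict the trajectory k steps ahead.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 2, 200                           # e.g. a 2D ball position as input
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # keep spectral radius below 1

def run_reservoir(inputs: np.ndarray) -> np.ndarray:
    """Nonlinearly project the input stream into reservoir states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)          # recurrence = temporal memory
        states.append(x.copy())
    return np.array(states)

k = 5                                          # prediction horizon in steps
traj = rng.standard_normal((500, n_in)).cumsum(axis=0)  # stand-in trajectory
states = run_reservoir(traj)
W_out, *_ = np.linalg.lstsq(states[:-k], traj[k:], rcond=None)
pred = states[:-k] @ W_out                     # k-step-ahead position estimates
```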