Publications

Here you can find scientific publications written by Know-Center employees

2016

Ziak Hermann, Rexha Andi, Kern Roman

KNOW At The Social Book Search Lab 2016 Mining Track

CLEF 2016 Social Book Search Lab, Krisztian Balog, Linda Cappellato, Nicola Ferro, Craig Macdonald, Springer, Évora, Portugal, 2016

Conference
This paper describes our system for the mining task of the Social Book Search Lab in 2016. The track consisted of two tasks: the classification of book request postings, and the task of linking book identifiers with references mentioned within the text. For the classification task we used text mining features like n-grams and vocabulary size, but also included advanced features like the average number of spelling errors found within the text. Two datasets were provided by the organizers for this task, which were evaluated separately. The second task, the linking of book titles to a work identifier, was addressed by an approach based on lookup tables. For the first dataset of the classification task our approach was ranked third, following two baseline approaches of the organizers, with an accuracy of 91 percent. For the second dataset we achieved second place with an accuracy of 82 percent. Our approach secured first place with an F-score of 33.50 for the second task.
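The classification features named in the abstract (n-grams, vocabulary size, spelling-error rate) are not spelled out in code. A minimal sketch of how such features might be extracted, with a toy word list standing in for a real spell-checking dictionary (all names here are illustrative, not from the paper):

```python
def extract_features(text, dictionary, n=2):
    """Toy feature extraction: word n-grams, vocabulary size, and the
    fraction of tokens missing from a dictionary (a stand-in for an
    average spelling-error feature)."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    vocab_size = len(set(tokens))
    misspelled = sum(1 for t in tokens if t not in dictionary)
    error_rate = misspelled / len(tokens) if tokens else 0.0
    return {"ngrams": ngrams, "vocab_size": vocab_size, "error_rate": error_rate}

# Hypothetical dictionary and book-request posting.
dictionary = {"looking", "for", "a", "good", "book", "about", "dragons"}
feats = extract_features("looking for a godo book about dragons", dictionary)
```

A real system would feed such feature dictionaries into a standard classifier; the paper does not specify this exact pipeline.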
2016

Gursch Heimo, Ziak Hermann, Kröll Mark, Kern Roman

Context-Driven Federated Recommendations for Knowledge Workers

Proceedings of the 17th European Conference on Knowledge Management (ECKM), Dr. Sandra Moffett and Dr. Brendan Galbraith, Academic Conferences and Publishing International Limited, Belfast, Northern Ireland, UK, 2016

Conference
Modern knowledge workers need to interact with a large number of different knowledge sources with restricted or public access. Knowledge workers are thus burdened with the need to familiarise themselves with and query each source separately. The EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project aims at developing a recommender system providing relevant and novel content to its users. Based on the user’s work context, the EEXCESS system can either automatically recommend useful content, or support users by providing a single user interface for a variety of knowledge sources. In the design process of the EEXCESS system, recommendation quality, scalability and security were the three most important criteria. This paper investigates the scalability aspect achieved by the federated design of the EEXCESS recommender system. This means that content in different sources is not replicated; each source manages its content individually. Recommendations are generated based on the context describing the knowledge worker’s information need. Each source offers result candidates, which are merged and re-ranked into a single result list. This merging is done in a vector representation space to achieve high recommendation quality. To ensure security, user credentials can be set individually by each user for each source. Hence, access to the sources can be granted and revoked for each user and source individually. The scalable architecture of the EEXCESS system handles up to 100 requests querying up to 10 sources in parallel without notable performance deterioration. The re-ranking and merging of results have a smaller influence on the system's responsiveness than the average source response rates. The EEXCESS recommender system offers a common entry point for knowledge workers to a variety of different sources with only marginally higher response times than the individual sources on their own. Hence, familiarisation with individual sources and their query languages is not necessary.
2016

Ziak Hermann, Kern Roman

KNOW At The Social Book Search Lab 2016 Suggestion Track

CLEF 2016 Social Book Search Lab, Krisztian Balog, Linda Cappellato, Nicola Ferro, Craig Macdonald, CEUR Workshop Proceedings, Évora, Portugal, 2016

Conference
This work documents our approach at the Social Book Search Lab 2016, where we took part in the suggestion track. The main goal of the track was to create book recommendations for readers based only on their stated request within a forum. The forum entry contained further contextual information, like the user’s catalogue of already read books and the list of example books mentioned in the user’s request. The presented approach is mainly based on the metadata included in the book catalogue provided by the organizers of the task. With the help of a dedicated search index we extracted several potential book recommendations, which were re-ranked using an SVD-based approach. Although our results did not meet our expectations, we consider them a first iteration towards a competitive solution.
2016

Gursch Heimo, Körner Stefan, Krasser Hannes, Kern Roman

Parameter Forecasting for Vehicle Paint Quality Optimisation

Mensch und Computer 2016 – Workshopband, Benjamin Weyers, Anke Dittmar, Gesellschaft für Informatik e.V., Aachen, 2016

Conference
Painting a modern car involves applying many coats during a highly complex and automated process. The individual coats not only serve a decorative purpose but are also crucial for protection from damage due to environmental influences, such as rust. For an optimal paint job, many parameters have to be optimised simultaneously. A forecasting model was created, which predicts the paint flaw probability for a given set of process parameters, to help the production managers modify the process parameters to achieve an optimal result. The mathematical model was based on historical process and quality observations. Production managers who are not familiar with the mathematical concept of the model can use it via an intuitive Web-based Graphical User Interface (Web-GUI). The Web-GUI offers production managers the ability to test process parameters and forecast the expected quality. The model can be used for optimising the process parameters in terms of quality and costs.
2016

Gursch Heimo, Kern Roman

Internet of Things meets Big Data: An Infrastructure to Collect, Connect, and Analyse Sensor Data

VDE Kongress 2016: Internet der Dinge (VDE Kongress 2016), VDE Verlag GmbH, Berlin - Offenbach, Congress Center Rosengarten, Mannheim, Germany, 2016

Conference
Many different sensing, recording and transmitting platforms are offered on today’s market for Internet of Things (IoT) applications. But taking and transmitting measurements is just one part of a complete system. Also long time storage and processing of recorded sensor values are vital for IoT applications. Big Data technologies provide a rich variety of processing capabilities to analyse the recorded measurements. In this paper an architecture for recording, searching, and analysing sensor measurements is proposed. This architecture combines existing IoT and Big Data technologies to bridge the gap between recording, transmission, and persistency of raw sensor data on one side, and the analysis of data on Hadoop clusters on the other side. The proposed framework emphasises scalability and persistence of measurements as well as easy access to the data from a variety of different data analytics tools. To achieve this, a distributed architecture is designed offering three different views on the recorded sensor readouts. The proposed architecture is not targeted at one specific use-case, but is able to provide a platform for a large number of different services.
2016

Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman

Towards a more fine-grained analysis of scientific authorship: Predicting the number of authors using stylometric features

BIR 2016 Workshop on Bibliometric-enhanced Information Retrieval, Atanassova, I.; Bertin, M.; Mayr, P., Springer, Padova, Italy, 2016

Conference
To bring bibliometrics and information retrieval closer together, we propose to add the concept of author attribution into the pre-processing of scientific publications. Presently, common bibliographic metrics often attribute the entire article to all the authors, affecting author-specific retrieval processes. We envision a more fine-grained analysis of scientific authorship by attributing particular segments to authors. To realize this vision, we propose a new feature representation of scientific publications that captures the distribution of stylometric features. In a classification setting, we then seek to predict the number of authors of a scientific article. We evaluate our approach on a data set of ~6,100 PubMed articles and achieve best results by applying random forests, i.e., 0.76 precision and 0.76 recall averaged over all classes.
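The stylometric representation is described only at a high level in the abstract. A minimal illustration of segment-level stylometric features (the paper's actual feature set and the random forest setup are not reproduced here; this is an assumed simplification):

```python
import re

def stylometric_features(segment):
    """Compute three simple stylometric features for a text segment:
    average word length, average sentence length (in words), and
    type-token ratio (a rough measure of vocabulary richness)."""
    sentences = [s for s in re.split(r"[.!?]+", segment) if s.strip()]
    words = re.findall(r"[A-Za-z']+", segment)
    avg_word_len = sum(len(w) for w in words) / len(words)
    avg_sent_len = len(words) / len(sentences)
    type_token_ratio = len({w.lower() for w in words}) / len(words)
    return avg_word_len, avg_sent_len, type_token_ratio

features = stylometric_features("We propose a method. The method works well.")
```

Collecting such features per segment, and summarising their distribution over a whole article, yields the kind of fixed-length vector a classifier could be trained on.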
2016

Rexha Andi, Kröll Mark, Kern Roman

Social Media Monitoring for Companies: A 4W Summarisation Approach

European Conference on Knowledge Management, Dr. Sandra Moffett and Dr. Brendan Galbraith, Academic Conferences and Publishing International Limited, Belfast, Northern Ireland, UK, 2016

Conference
Monitoring (social) media represents one means for companies to gain access to knowledge about, for instance, competitors, products as well as markets. As a consequence, social media monitoring tools have been gaining attention to handle the amounts of data nowadays generated in social media. These tools also include summarisation services. However, most summarisation algorithms tend to focus on (i) first and last sentences respectively or (ii) sentences containing keywords. In this work we approach the task of summarisation by extracting 4W (who, when, where, what) information from (social) media texts. Presenting 4W information allows for a more compact content representation than traditional summaries. In addition, we depart from mere named entity recognition (NER) techniques to answer these four question types by including non-rigid designators, i.e. expressions which do not refer to the same thing in all possible worlds, such as “at the main square” or “leaders of political parties”. To do that, we employ dependency parsing to identify grammatical characteristics for each question type. Every sentence is then represented as a 4W block. We perform two different preliminary studies: selecting sentences that better summarise texts, achieving an F1-measure of 0.343, as well as a 4W block extraction for which we achieve F1-measures of 0.932, 0.900, 0.803 and 0.861 for the “who”, “when”, “where” and “what” categories respectively. In a next step the 4W blocks are ranked by relevance. The top three ranked blocks, for example, then constitute a summary of the entire textual passage. The relevance metric can be customised to the user’s needs, for instance ranked by up-to-dateness, where the sentences’ tense is taken into account. In a user study we evaluate different ranking strategies including (i) up-to-dateness, (ii) text sentence rank, (iii) selecting the first and last sentences or (iv) coverage of named entities, i.e. based on the number of named entities in the sentence. Our 4W summarisation method presents a valuable addition to a company’s (social) media monitoring toolkit, thus supporting decision-making processes.
2016

Pimas Oliver, Rexha Andi, Kröll Mark, Kern Roman

Profiling microblog authors using concreteness and sentiment - Know-Center at PAN 2016 author profiling

PAN 2016, Krisztian Balog, Linda Cappellato, Nicola Ferro, Craig Macdonald, Springer, Evora, Portugal, 2016

Conference
The PAN 2016 author profiling task is a supervised classification problem on cross-genre documents (tweets, blog and social media posts). Our system makes use of concreteness, sentiment and syntactic information present in the documents. We train a random forest model to identify the gender and age of a document’s author. We report the evaluation results received by the shared task.
2016

Kern Roman, Klampfl Stefan, Rexha Andi

Identifying Referenced Text in Scientific Publications by Summarisation and Classification Techniques

BIRNDL 2016 Joint Workshop on Bibliometric-enhanced Information Retrieval and NLP for Digital Libraries, G. Cabanac, Muthu Kumar Chandrasekaran, Ingo Frommholz , Kokil Jaidka, Min-Yen Kan, Philipp Mayr, Dietmar Wolfram, ACM, New Jersey, USA, 2016

Conference
This report describes our contribution to the 2nd Computational Linguistics Scientific Document Summarization Shared Task (CL-SciSumm 2016), which asked to identify the relevant text span in a reference paper that corresponds to a citation in another document that cites this paper. We developed three different approaches based on summarisation and classification techniques. First, we applied a modified version of an unsupervised summarisation technique, TextSentenceRank, to the reference document, which incorporates the similarity of sentences to the citation on a textual level. Second, we employed classification to select from candidates previously extracted through the original TextSentenceRank algorithm. Third, we used unsupervised summarisation of the relevant sub-part of the document that was previously selected in a supervised manner.
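The textual-similarity component of the first approach can be illustrated by ranking reference-paper sentences by bag-of-words cosine similarity to the citing sentence. This is an assumed simplification for illustration only, not the TextSentenceRank algorithm itself:

```python
import math
import re
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_sentences(sentences, citation):
    """Rank candidate sentences by similarity to the citing text."""
    cit = Counter(re.findall(r"\w+", citation.lower()))
    return sorted(
        sentences,
        key=lambda s: cosine(Counter(re.findall(r"\w+", s.lower())), cit),
        reverse=True,
    )

# Invented toy sentences; the shared task used real reference papers.
ranked = rank_sentences(
    ["We introduce a parsing model.", "The weather data was collected daily."],
    "their parsing model improves accuracy",
)
```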
2016

Rexha Andi, Dragoni Mauro, Kern Roman, Kröll Mark

An Information Retrieval Based Approach for Multilingual Ontology Matching

International Conference on Applications of Natural Language to Information Systems, Métais E., Meziane F., Saraee M., Sugumaran V., Vadera S. , Springer , Salford, UK, 2016

Conference
Ontology matching in a multilingual environment consists of finding alignments between ontologies modeled by using more than one language. Such a research topic combines traditional ontology matching algorithms with the use of multilingual resources, services, and capabilities for easing multilingual matching. In this paper, we present a multilingual ontology matching approach based on Information Retrieval (IR) techniques: ontologies are indexed through an inverted index algorithm and candidate matches are found by querying such indexes. We also exploit the hierarchical structure of the ontologies by adopting the PageRank algorithm for our system. The approaches have been evaluated using a set of domain-specific ontologies belonging to the agricultural and medical domain. We compare our results with existing systems following an evaluation strategy closely resembling a recommendation scenario. The version of our system using PageRank showed an increase in performance in our evaluations.
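The indexing-and-querying idea can be sketched as a bare-bones inverted index over ontology labels, with candidates ranked by token overlap. This is a stand-in for the paper's IR pipeline, not its actual implementation; all entry names and labels below are invented:

```python
from collections import defaultdict

def build_index(labels):
    """Map each label token to the set of ontology entries containing it."""
    index = defaultdict(set)
    for entry, label in labels.items():
        for token in label.lower().split():
            index[token].add(entry)
    return index

def candidates(index, query_label):
    """Entries sharing at least one token with the query, most overlap first."""
    hits = defaultdict(int)
    for token in query_label.lower().split():
        for entry in index.get(token, ()):
            hits[entry] += 1
    return sorted(hits, key=hits.get, reverse=True)

index = build_index({"c1": "crop rotation", "c2": "animal disease", "c3": "crop disease"})
ranked = candidates(index, "crop disease")
```

A real multilingual matcher would additionally normalise and translate labels before indexing, and the paper's PageRank variant further reweights candidates by ontology structure.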
2016

Pimas Oliver, Klampfl Stefan, Kohl Thomas, Kern Roman, Kröll Mark

Generating Tailored Classification Schemas for German Patents

21st International Conference on Applications of Natural Language to Information Systems, NLDB 2016, Springer-Verlag, Salford, UK, 2016

Conference
Patents and patent applications are important parts of a company’s intellectual property. Thus, companies put a lot of effort into designing and maintaining an internal structure for organizing their own patent portfolios, but also into keeping track of competitors’ patent portfolios. Yet, official classification schemas offered by patent offices (i) are often too coarse and (ii) are not mappable, for instance, to a company’s functions, applications, or divisions. In this work, we present a first step towards generating tailored classifications. To automate the generation process, we apply key term extraction and topic modelling algorithms to 2,131 publications of German patent applications. To infer categories, we apply topic modelling to the patent collection. We evaluate the mapping of the topics found via the Latent Dirichlet Allocation method to the classes present in the patent collection as assigned by the domain expert.
2016

Mutlu Belgin, Sabol Vedran, Gursch Heimo, Kern Roman

From Data to Visualisations and Back: Selecting Visualisations Based on Data and System Design Considerations

arXiv, 2016

Conference
Graphical interfaces and interactive visualisations are typical mediators between human users and data analytics systems. HCI researchers and developers have to be able to understand both human needs and back-end data analytics. Participants of our tutorial will learn how visualisation and interface design can be combined with data analytics to provide better visualisations. In the first of three parts, the participants will learn about visualisations and how to appropriately select them. In the second part, restrictions and opportunities associated with different data analytics systems will be discussed. In the final part, the participants will have the opportunity to develop visualisations and interface designs under given scenarios of data and system settings.
2016

Santos Tiago, Kern Roman

A Literature Survey of Early Time Series Classification and Deep Learning

SamI40 workshop at i-KNOW'16, 2016

Conference
This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity of the application of deep neural networks to the field of computer vision, research has focused on developing deep learning architectures for time series classification as well. The literature in the field of deep learning for time series classification has shown promising results. Early time series classification aims to classify a time series with as few temporal observations as possible, while keeping the loss of classification accuracy at a minimum. Prominent early classification frameworks reviewed by this paper include, but are not limited to, ECTS, RelClass and ECDIRE. These works have shown that early time series classification may be feasible and performant, but they also show room for improvement.
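The 1-Nearest Neighbor baseline with dynamic time warping, which the survey reviews, can be sketched in a few lines. This is the textbook DTW recurrence, not any specific surveyed implementation:

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences,
    using absolute difference as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def nn_classify(query, train):
    """1-NN: return the label of the DTW-closest training series."""
    return min(train, key=lambda pair: dtw(query, pair[1]))[0]

# Toy labelled series for illustration.
train = [("up", [1, 2, 3, 4]), ("down", [4, 3, 2, 1])]
label = nn_classify([1, 2, 2, 4], train)
```

Early classification methods such as ECTS wrap a classifier like this one with a rule for deciding, at each new observation, whether the prefix seen so far is already reliable enough to classify.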
2016

Kern Roman, Ziak Hermann

Query Splitting For Context-Driven Federated Recommendations

Database and Expert Systems Applications (DEXA), 2016 27th International Workshop on, IEEE, Porto, Portugal, 2016

Conference
Context-driven query extraction for content-based recommender systems faces the challenge of dealing with queries of multiple topics. In contrast to manually entered queries, this is a more frequent problem for automatically generated queries, for instance if the information need is inferred indirectly via the user's current context. Especially for federated search systems, where the connected knowledge sources might react vastly differently to such queries, an algorithmic way of dealing with such queries is of high importance. One such method is to split mixed queries into their individual subtopics. To gain insight into how a multi-topic query can be split into its subtopics, we conducted an evaluation where we compared a naive approach against more complex approaches based on word embedding techniques: one created using Word2Vec and one created using GloVe. To evaluate these two approaches we used the Webis-QSeC-10 query set, consisting of about 5,000 multi-term queries. Queries of this set were concatenated and passed through the algorithms with the goal of splitting those queries again. The naive approach splits the queries into several groups according to the number of joined queries, assuming the topics are of equal query term count. In the case of the Word2Vec and GloVe based approaches we relied on already pre-trained datasets: the Google News model, and a model trained on a Wikipedia dump and the English Gigaword newswire text archive. The query term vectors resulting from these datasets were grouped into subtopics using k-Means clustering. We show that a clustering approach based on word vectors achieves better results, in particular when the query is not in topical order. Furthermore, we could demonstrate the importance of the underlying dataset.
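The clustering step can be illustrated with a tiny k-means over toy word vectors. The actual experiments used pre-trained Word2Vec and GloVe embeddings; the 2-D vectors below are invented purely for illustration (two "cooking" terms near one corner, two "sports" terms near the other):

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means: returns a cluster index for each input vector."""
    rnd = random.Random(seed)
    centroids = rnd.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared Euclidean).
        labels = [
            min(range(k), key=lambda c: sum((v - w) ** 2
                                            for v, w in zip(vec, centroids[c])))
            for vec in vectors
        ]
        # Recompute centroids as the mean of their members.
        for c in range(k):
            members = [vectors[i] for i, l in enumerate(labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Hypothetical 2-D "embeddings" for the terms of a mixed two-topic query.
query_vectors = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels = kmeans(query_vectors, k=2)
```

Splitting the query then amounts to grouping the query terms by their cluster label.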
2016

Klampfl Stefan, Kern Roman

Reconstructing the Logical Structure of a Scientific Publication using Machine Learning

Semantic Web Challenges, Communications in Computer and Information Science, Springer Link, Springer-Verlag, 2016

Conference
Semantic enrichment of scientific publications has an increasing impact on scholarly communication. This document describes our contribution to Semantic Publishing Challenge 2016, which aims at investigating novel approaches for improving scholarly publishing through semantic technologies. We participated in Task 2 of this challenge, which requires the extraction of information from the content of a paper given as PDF. The extracted information allows answering queries about the paper’s internal organisation and the context in which it was written. We build upon our contribution to the previous edition of the challenge, where we categorised meta-data, such as authors and affiliations, and extracted funding information. Here we use unsupervised machine learning techniques in order to extend the analysis of the logical structure of the document as to identify section titles and captions of figures and tables. Furthermore, we employ clustering techniques to create the hierarchical table of contents of the article. Our system is modular in nature and allows a separate training of different stages on different training sets.
2016

Urak Günter, Ziak Hermann, Kern Roman

Do Ambiguous Words Improve Probing For Federated Search?

International Conference on Theory and Practice of Digital Libraries, TPDL 2016, Springer-Verlag, 2016

Conference
The core approach to distributed knowledge bases is federated search. Two of the main challenges for federated search are the source representation and source selection. Different solutions to these problems were proposed in the literature. Within this work we present our novel approach for query-based sampling by relying on knowledge bases. We show the basic correctness of our approach and we came to the insight that the ambiguity of the probing terms has just a minor impact on the representation of the collection. Finally, we show that our method can be used to distinguish between niche and encyclopedic knowledge bases.
2016

Horn Christopher, Gursch Heimo, Kern Roman, Cik Michael

QZTool – Automatically generated Origin-Destination Matrices from Cell Phone Trajectories

Advances in The Human Side of Service Engineering: Proceedings of the AHFE 2016 International Conference on Human Factors and Sustainable Infrastructure, July 27-31, 2016, Walt Disney World®, Florida, USA, Jerzy Charytonowicz (series Editor), Neville A. Stanton and Steven Landry and Giuseppe Di Bucchianico and Andrea Vallicelli, Springer International Publishing, Cham, Switzerland, 2016

Conference
Models describing human travel patterns are indispensable to plan and operate road, rail and public transportation networks. For most kinds of analyses in the field of transportation planning, there is a need for origin-destination (OD) matrices, which specify the travel demands between the origin and destination zones in the network. The preparation of OD matrices is traditionally a time consuming and cumbersome task. The presented system, QZTool, reduces the necessary effort as it is capable of generating OD matrices automatically. These matrices are produced starting from floating phone data (FPD) as raw input. This raw input is processed by a Hadoop-based big data system. A graphical user interface allows for easy usage and hides the complexity from the operator. For evaluation, we compare an FPD-based OD matrix to an OD matrix created by a traffic demand model. Results show that both matrices agree to a high degree, indicating that FPD-based OD matrices can be used to create new OD matrices, or to validate or amend existing ones.
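The core aggregation behind an OD matrix can be illustrated as counting trips per (origin zone, destination zone) pair. Extracting trips and zones from raw cell phone trajectories is far more involved in practice; the zone-labelled trips below are hypothetical:

```python
from collections import Counter

def od_matrix(trips):
    """Count trips per (origin zone, destination zone) pair."""
    return Counter((origin, dest) for origin, dest in trips)

# Hypothetical trips already mapped from trajectories to zones.
trips = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")]
matrix = od_matrix(trips)
```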
2016

Falk Stefan, Rexha Andi, Kern Roman

Know-Center at SemEval-2016 Task 5: Using Word Vectors with Typed Dependencies for Opinion Target Expression Extraction

Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), ACL Anthology, San Diego, USA, 2016

Conference
This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge demands to find domain-specific target expressions on sentence level that refer to reviewed entities. The detection of target words is achieved by using word vectors and their grammatical dependency relationships to classify each word in a sentence into target or non-target. A heuristic-based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% for this task.
2016

Dragoni Mauro, Rexha Andi, Kröll Mark, Kern Roman

Polarity Classification for Target Phrases in Tweets: A Word2Vec approach

The Semantic Web, ESWC 2016 Satellite Events, ESWC 2016, Springer-Verlag, Crete, Greece, 2016

Conference
Twitter is one of the most popular micro-blogging services on the web. The service allows sharing, interaction and collaboration via short, informal and often unstructured messages called tweets. Polarity classification of tweets refers to the task of assigning a positive or a negative sentiment to an entire tweet. Quite similar is predicting the polarity of a specific target phrase, for instance @Microsoft or #Linux, which is contained in the tweet. In this paper we present a Word2Vec approach to automatically predict the polarity of a target phrase in a tweet. In our classification setting, we thus do not have any polarity information but use only semantic information provided by a Word2Vec model trained on Twitter messages. To evaluate our feature representation approach, we apply well-established classification algorithms such as the Support Vector Machine and Naive Bayes. For the evaluation we used the SemEval 2016 Task #4 dataset. Our approach achieves F1-measures of up to ~90% for the positive class and ~54% for the negative class without using polarity information about single words.
2016

Rexha Andi, Kern Roman, Dragoni Mauro , Kröll Mark

Exploiting Propositions for Opinion Mining

ESWC-16 Challenge on Semantic Sentiment Analysis, Springer Link, Springer-Verlag, Crete, Greece, 2016

Conference
With different social media and commercial platforms, users express their opinions about products in textual form. Automatically extracting the polarity (i.e. whether the opinion is positive or negative) of a user can be useful for both actors: the online platform incorporating the feedback to improve their product, as well as the client who might get recommendations according to his or her preferences. Different approaches for tackling the problem have been suggested, mainly using syntactic features. The “Challenge on Semantic Sentiment Analysis” aims to go beyond word-level analysis by using semantic information. In this paper we propose a novel approach by employing the semantic information of a grammatical unit called the proposition. We try to derive the target of the review from the summary information, which serves as an input to identify the proposition in it. Our implementation relies on the hypothesis that the proposition expressing the target of the summary usually contains the main polarity information.