Scientific Theses

Here you will find scientific theses written by Know-Center staff members.

2016

Bassa Akim

Title: GerIE: Open Information Extraction for German Texts
Thesis type: Master
Supervisor: Kern Roman (TUG)
Second supervisor: Kröll Mark (Know-Center)
Started: 22.02.2016

Abstract: Open Information Extraction (OIE) targets domain- and relation-independent discovery of relations in text, scalable to the Web. Although German is a major European language, no research has been conducted on German OIE yet. In this thesis we fill this gap and present GerIE, the first German OIE system. As OIE has received increasing attention lately and various potent approaches have already been proposed, we surveyed to what extent these methods can be applied to the German language and which additional principles could be valuable in a new system. The most promising approach, hand-crafted rules working on dependency-parsed sentences, was implemented in GerIE. We also created two German OIE evaluation datasets, which showed that GerIE achieves at least 0.88 precision and recall on correctly parsed sentences, while errors made by the dependency parser can reduce precision to 0.54 and recall to 0.48.

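The abstract names hand-crafted rules over dependency-parsed sentences as GerIE's core approach. The sketch below is purely illustrative, not GerIE's actual rule set: one rule pairs the subject (`sb`) and accusative object (`oa`) dependents of each verb, TIGER-style German dependency labels, with a hand-built token list standing in for a real parser's output.

```python
# Minimal sketch of rule-based open information extraction over a
# dependency parse. The rule and data layout are illustrative only;
# GerIE's actual rules and its parser's output format will differ.

def extract_triples(tokens):
    """One hand-crafted rule: for every verb, pair its subject ('sb')
    and accusative object ('oa') dependents into a relation triple."""
    triples = []
    for i, tok in enumerate(tokens):
        if tok["pos"] != "VERB":
            continue
        subjects = [t["text"] for t in tokens if t["head"] == i and t["dep"] == "sb"]
        objects = [t["text"] for t in tokens if t["head"] == i and t["dep"] == "oa"]
        for s in subjects:
            for o in objects:
                triples.append((s, tok["text"], o))
    return triples

# "Anna liest ein Buch" (Anna reads a book), parsed by hand:
# 'head' is the index of each token's governing word.
sentence = [
    {"text": "Anna",  "pos": "NOUN", "dep": "sb",   "head": 1},
    {"text": "liest", "pos": "VERB", "dep": "root", "head": 1},
    {"text": "ein",   "pos": "DET",  "dep": "nk",   "head": 3},
    {"text": "Buch",  "pos": "NOUN", "dep": "oa",   "head": 1},
]

print(extract_triples(sentence))  # [('Anna', 'liest', 'Buch')]
```

The abstract's precision figures make the dependency on parse quality concrete: a rule like this can only be as good as the `head` and `dep` values the parser delivers.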
2016

Fessl Angela

Title: Individual Reflection Guidance to Support Reflective Learning at Work
Thesis type: PhD/Dissertation
Supervisor: Lindstaedt Stefanie (TUG)
Second supervisor: Pammer-Schindler Viktoria (TUG)
Keywords: reflective learning, reflection guidance, workplace learning, reflection guidance concept, framework for reflection
Started: 28.04.2009

Abstract: Reflective learning can be seen as the conscious re-evaluation of past situations or experiences with the goal to learn from them and to use the gained insights to guide future behaviour. In the context of workplace learning, reflective learning has been identified as a core process that aims at gaining new insights, deriving better practices and ultimately improving one's own work. Reflective learning, a cognitive process based on the individual's intrinsic motivation, cannot be directly enforced, but guidance techniques such as prompts, journal or diary writing, and visuals can foster reflection while tools or software applications are used during work. The goal of this thesis is to conceptualise reflection guidance as adaptive software components that provide technologically supported guidance independent of the application and the working environment. To achieve this, a literature review was conducted to identify the key challenges in providing meaningful technological support for guiding reflective learning at work. Based on those challenges, technologies were investigated and analysed to identify those most suitable for providing reflection guidance and able to trigger reflective learning. Finally, core components and an architecture were derived to present a generally applicable reflection guidance framework. The theoretical underpinning is grounded in existing reflective learning theory and in theoretical models and processes supporting reflective learning. The design science research methodology is used as the underlying research method to thoroughly present the conducted research.

Altogether fifteen field studies, consisting of one focus group, two design studies, six formative field studies and six summative field studies, were conducted in different work-related settings. The field studies, together with an extensive literature review, led to five iteration cycles of two different reflective learning applications designed to trigger reflective learning. The thesis resulted in 9 publications (7 accepted and 2 under major revision). The research was divided into three phases. First, from the extensive literature the following key challenges emerged: (i) the timing of reflection (when to motivate reflection: during or after an activity), (ii) the appropriate tool to motivate reflection (prompts vs. diaries vs. visuals vs. contextualisation) and (iii) the work-related context of reflection (not disrupting the workflow). Second, an in-app reflection guidance concept was developed, which provides reflection guidance in the form of adaptive components. To illustrate how the concept can be instantiated in work-related settings, different components of the concept were implemented in three applications adopting various approaches to support reflective learning. The results showed that (i) prompts, diaries and contextualisation are effective tools for initiating reflection when presented at the right time and in the right place, and (ii) their integration into work processes needs to be carefully considered in order not to interrupt or annoy the user during work. Third, a generally applicable conceptual reflection guidance framework called “Reflector” was elaborated, including the requirements, lessons learned and features necessary for providing meaningful technologically supported reflection guidance. This framework can be seen as a technical summary of the insights gained from the literature review and from the implemented and evaluated reflection guidance concept.

This thesis contributes scientifically to the area of technology-enhanced learning and provides a novel approach to meaningful technologically supported individual reflection guidance at work.

2016

Steinbauer Florian

Title: German Sentiment Analysis on Facebook Posts
Thesis type: Bachelor (Bakk)
Supervisor: Kern Roman (TUG)
Second supervisor: Kröll Mark (Know-Center)
Started: 30.07.2015

Abstract: Social media monitoring has become an important means for business analytics and trend detection, for comparing companies with each other, and for maintaining a healthy customer relationship. While sentiment analysis for English is closely researched, little work has been done on German data. In this work we (i) annotate ~700 posts from 15 corporate Facebook pages, (ii) evaluate existing approaches capable of processing German data against the annotated data set and (iii) given their insufficient results, train a two-step hierarchical classifier capable of predicting posts with an accuracy of 70%. The first, binary classifier decides whether a post is opinionated at all. If the outcome is not neutral, the second classifier predicts the polarity of the document. Further, we apply the algorithm in two application scenarios, analysing German Facebook posts of the fashion retail chain Peek&Cloppenburg and the Austrian railway operators OeBB and Westbahn.

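The two-step hierarchical setup described in the abstract (a binary subjectivity classifier, then a polarity classifier run only on opinionated posts) can be sketched as follows. The tiny word lists are stand-ins for the models the thesis trains on its ~700 annotated posts; they are illustrative only.

```python
import re

# Stand-in word lists; the thesis trains both stages on ~700 annotated
# German Facebook posts instead of using fixed lexicons like these.
OPINION_WORDS = {"super", "toll", "danke", "schlecht", "schrecklich", "leider"}
POSITIVE_WORDS = {"super", "toll", "danke"}

def tokens(post):
    return re.findall(r"\w+", post.lower())

def is_opinionated(post):
    # Step 1: binary classifier -- does the post carry an opinion at all?
    return any(w in OPINION_WORDS for w in tokens(post))

def polarity(post):
    # Step 2: run only on opinionated posts -- positive or negative?
    words = tokens(post)
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in OPINION_WORDS - POSITIVE_WORDS for w in words)
    return "positive" if pos >= neg else "negative"

def classify(post):
    return polarity(post) if is_opinionated(post) else "neutral"

print(classify("Der Zug kommt um 8 Uhr"))  # neutral
print(classify("Super Service, danke!"))   # positive
print(classify("Leider wieder schlecht"))  # negative
```

The hierarchical split lets each stage solve an easier binary problem than a single three-way (neutral/positive/negative) classifier would face.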
2016

Bischofter Heimo

Title: Vergleich der Leistungsfähigkeit von Graphen-Datenbanken für Informationsvernetzung anhand der Abbildbarkeit von Berechtigungskonzepten (Comparison of the Performance of Graph Databases for Information Linking Based on the Mappability of Permission Concepts)
Thesis type: Master
Supervisor: Kern Roman (TUG)
Keywords: graph databases, permission concepts, information linking, fine-grained permission concept
Started: 01.07.2015

Abstract (translated from German): Linked data and structures are attracting growing interest and are pushing established methods of data storage into the background. Graph databases offer a new approach to the challenges posed by managing large, highly linked data sets. This Master's thesis evaluates the performance of graph databases against an established relational database. Performance is determined through benchmark tests on the processing of highly linked data, taking into account an implemented fine-grained permission concept. The theoretical part first covers the fundamentals of databases and graph theory, which provide the basis for assessing the feature set and functionality of the graph databases selected for evaluation. The permission concepts described give an overview of different access-control approaches and of how access control is implemented in the graph databases. Based on this information, a Java framework was implemented that makes it possible to test both the graph databases and the relational database under the implemented fine-grained permission concept. Running suitable test runs determines the performance of write and read operations: write benchmarks are run on data sets of different sizes, and predefined search queries over those data sizes measure read performance. The results show that the relational database scales better than the graph databases when writing data; creating nodes and edges in a graph database is more expensive than creating a new table row in a relational database. The evaluation of the search queries under the implemented access concept showed that graph databases scale significantly better than the relational database on large, highly linked data sets. The more highly linked the data, the more pronounced the JOIN problem of the relational database becomes.

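The JOIN problem the abstract refers to can be shown in miniature: in a relational edge table, every extra traversal hop is another self-join over the whole table, whereas a graph model follows adjacency pointers directly, so its cost depends only on the local neighbourhood. This is a conceptual Python sketch, not the thesis's Java benchmark framework.

```python
# Miniature illustration of the JOIN problem: finding nodes two hops
# away, relational style vs. graph style, over the same toy data.

edges = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "e")]

def two_hops_relational(edges, start):
    """Relational style: each hop is another self-join, scanning the
    full edge table again (O(|E|) per hop)."""
    first = [dst for src, dst in edges if src == start]   # join 1
    return {dst for src, dst in edges if src in first}    # join 2

# Graph style: index neighbours once, then follow pointers.
adjacency = {}
for src, dst in edges:
    adjacency.setdefault(src, []).append(dst)

def two_hops_graph(adjacency, start):
    """Graph style: cost depends on the node's neighbourhood size,
    not on the total number of edges in the database."""
    return {dst for mid in adjacency.get(start, [])
                for dst in adjacency.get(mid, [])}

print(two_hops_relational(edges, "a"))  # {'c', 'd'}
print(two_hops_graph(adjacency, "a"))   # {'c', 'd'}
```

Both give the same answer; the difference the thesis measures is how the cost of the relational version grows as the data becomes larger and more densely linked.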
2016

Fraz Koini Josef

Title: Study on Health Trackers
Thesis type: Bachelor (Bakk)
Supervisor: Kern Roman (Know-Center)
Started: 06.04.2016

Abstract: The rising distribution of compact devices with numerous sensors over the last decade has made tracking fitness and health data, and storing those data sets in apps and cloud environments for further evaluation, increasingly popular. However, this massive collection of data is becoming more and more interesting for companies seeking to reduce costs and increase productivity, which may have problematic impacts on people's privacy in the future. Hence, the main research question of this bachelor's thesis is: “To what extent are people aware of the processing and protection of their personal health data when using various health tracking solutions?” The thesis investigates the historical development of personal fitness and health tracking, gives an overview of the options currently available to users, and presents potential problems and possible solutions regarding the use of health tracking technology. Furthermore, it outlines the societal impact and legal issues. The results of an online survey on the distribution and usage of health tracking solutions, as well as the participants' views on sharing data with service and insurance providers, advertisers and employers, are presented. Given these results, the participants' fierce opposition to various data-sharing scenarios underlines the necessity and importance of data protection.

2016

Suschnigg Josef

Title: Mobile Unterstützung zur Reflexion der Übungspraxis bei Musikstudierenden (Mobile Support for Reflecting on the Practice Routine of Music Students)
Thesis type: Bachelor (Bakk)
Supervisor: Lindstaedt Stefanie (TUG)
Started: 01.03.2013

Abstract (translated from German): A mobile application is developed that supports music students in learning an instrument reflectively. The user should be able to assess their practice progress through self-observation in order to subsequently find practice strategies that optimize their practice routine. In the short term, the application provides the user with interfaces for the different action phases of a practice session (pre-actional, actional and post-actional). With the help of guiding questions, or questions formulated by the user, practice is organized, structured, self-reflected upon and evaluated. Ideally, the user can also follow their learning process on the basis of audio recordings. In the long term, all user input can be retrieved again; it is presented journal-style and can be evaluated for self-reflection or together with a teacher.

2016

Bassa Kevin

Title: Validation of Information: On-The-Fly Data Set Generation for Single Fact Validation
Thesis type: Master
Supervisor: Kern Roman (Know-Center)
Second supervisor: Kröll Mark (Know-Center)
Started: 22.02.2016

Abstract: Information validation is the process of determining whether a certain piece of information is true or false. Existing research in this area focuses on specific domains but neglects cross-domain relations. This work attempts to fill this gap and examines how various domains deal with the validation of information, providing a big picture across multiple domains. To that end, we study how research areas and application domains in the field of information validation relate to each other and how they define the relevant terms, and show that there is no uniform use of the key terms. In addition, we give an overview of existing fact finding approaches, with a focus on the data sets used for evaluation. We show that even baseline methods already achieve very good results, and that more sophisticated methods often improve the results only when they are tailored to specific data sets. Finally, we present the first step towards a new dynamic approach to information validation, which generates a data set for existing fact finding methods on the fly by utilizing web search engines and information extraction tools. We show that, with some limitations, it is possible to use existing fact finding methods to validate facts without a pre-existing data set. We generate four different data sets with this approach and use them to compare seven existing fact finding methods with each other. We find that the performance of the fact validation process depends strongly on the type of fact to be validated as well as on the quality of the information extraction tool used.

Validation of Information: On-The-Fly Data Set Generation for Single Fact Validation

Master

Information validation is the process of determining whether a certain piece of information is true or false. Existing research in this area focuses on specific domains but neglects cross-domain relations. This work attempts to fill that gap and examines how various domains deal with the validation of information, providing a big picture across multiple domains. To this end, we study how research areas, application domains, and their definitions of related terms in the field of information validation relate to each other, and show that there is no uniform use of the key terms. In addition, we give an overview of existing fact-finding approaches, with a focus on the data sets used for evaluation. We show that even baseline methods already achieve very good results, and that more sophisticated methods often improve the results only when they are tailored to specific data sets. Finally, we present the first step towards a new dynamic approach to information validation, which generates a data set for existing fact-finding methods on the fly by utilizing web search engines and information extraction tools. We show that, with some limitations, it is possible to use existing fact-finding methods to validate facts without a preexisting data set. We generate four different data sets with this approach and use them to compare seven existing fact-finding methods. We find that the performance of the fact validation process depends strongly on the type of fact to be validated as well as on the quality of the information extraction tool used.
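A typical baseline fact-finding method of the kind surveyed above is simple majority voting over the values asserted by retrieved sources. The sketch below is only an illustration of that baseline idea (the source names and claims are invented), not the thesis's own method:

```python
from collections import Counter

def majority_vote(claims):
    """Baseline fact validation: accept the value asserted by
    the largest number of independent sources, with the fraction
    of agreeing sources as a crude confidence score."""
    counts = Counter(claims.values())
    value, support = counts.most_common(1)[0]
    confidence = support / len(claims)
    return value, confidence

# Hypothetical extractions for a single fact from four web sources.
claims = {
    "source_a": "Canberra",
    "source_b": "Canberra",
    "source_c": "Sydney",
    "source_d": "Canberra",
}
print(majority_vote(claims))  # expect ('Canberra', 0.75)
```

More sophisticated fact finders weight sources by estimated trustworthiness instead of counting them equally, which is where tailoring to a specific data set comes in.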
2016

Ivantstits Matthias

Start: 02.03.2015 · Supervisor: Kern Roman (TUG)

Quantitative & qualitative Market-Analysis

Bakk

The buzzword big data is ubiquitous and has a large impact on our everyday lives and on many businesses. Since the beginnings of the financial markets, the aim has been to find explanatory factors that contribute to the development of stock prices, and big data offers a new opportunity to do so. Gathering a vast amount of data about the financial market, then filtering and analysing it, is naturally tied to predicting future stock prices. A lot of work with noticeable outcomes has already been done in this field of research. However, the question arose whether it is possible to build a tool that indexes a large number of companies and news items and uses a natural language processing component suitable for everyday applications. The sentiment analysis tool utilised in this implementation is sensium.io. To achieve this goal, two main modules were built. The first is responsible for constructing a filtered company index and for gathering detailed information about the companies, for example news, balance-sheet figures, and stock prices. The second preprocesses and analyses the collected data: filtering unwanted news, translating them, calculating text polarity, and predicting price development based on these facts. Using all these modules, the optimal period for buying and selling shares was found to be three days, i.e. buying shares on the day of a news publication and selling them three days later. According to this analysis, the expected return is 0.07 percent per day, which may not seem much but would result in an annualised performance of 30.18 percent. The same idea can also be applied in the opposite direction, telling users when to sell their shares, which could help an investor find the ideal time to sell company shares.
2016

Toller Maximilian

Start: 29.02.2016 · Supervisor: Kern Roman (Know-Center)

Automated Season Length Detection in Time Series

Bakk

The in-depth analysis of time series has been a central research topic in recent years. Many of the existing methods for finding periodic patterns and features require the user to input the time series' season length. Today there are a few algorithms for automated season length approximation, yet many of them rely on simplifications such as data discretization. This thesis aims to develop an algorithm for season length detection that is more reliable than existing methods. The process developed in this thesis estimates a time series' season length by interpolating, filtering, and detrending the data and then analyzing the distances between zeros in the corresponding autocorrelation function. This method was tested against the only comparable open-source algorithm and outperformed it, passing 94 out of 125 tests while the existing algorithm passed only 62. The results do not necessarily suggest a superiority of the new autocorrelation-based method as such, but rather of the new implementation. Further related studies might assess and compare the value of the theoretical concept.
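The core idea described above (detrend the data, compute the autocorrelation function, and read the season length off the spacing of its zeros) can be sketched as follows. This is an illustrative reconstruction under simplified assumptions (linear detrending only, no interpolation or filtering step), not the thesis implementation:

```python
import numpy as np

def estimate_season_length(series):
    """Estimate a season length from the spacing of zero crossings
    in the autocorrelation function (illustrative sketch)."""
    x = np.asarray(series, dtype=float)
    # Detrend: subtract a least-squares linear fit.
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    x = x - (slope * t + intercept)
    # Autocorrelation for all non-negative lags, normalized at lag 0.
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    # Lags at which the ACF changes sign (zero crossings).
    zeros = np.where(np.diff(np.sign(acf)) != 0)[0]
    if len(zeros) < 2:
        return None
    # Consecutive zeros of a periodic ACF lie half a period apart,
    # so twice the median spacing approximates the season length.
    return int(round(2 * np.median(np.diff(zeros))))

# A noisy sine wave with a known period of 50 samples.
rng = np.random.default_rng(0)
t = np.arange(1000)
y = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(1000)
print(estimate_season_length(y))
```

Using the median of the zero spacings keeps the estimate robust against a few spurious crossings introduced by noise.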
2016

Hasitschka Peter

Start: 01.01.2016 · Supervisor: Sabol Vedran (TUG) · Keywords: Visualization; Visual Analytics; Query-History

Visualisierung und Analyse von Empfehlungs-Historien unter Einsatz von WebGL

Master

Content-based recommender systems are commonly used to automatically provide context-based resource suggestions to users. This work introduces ECHO (Explorer of Collection HistOries), a visual tool supporting the visualization of a recommender system's entire query history. It provides an interactive three-dimensional scene resembling the CoverFlow layout to browse through all collections at several levels of detail, compare collections, and find similarities in previous result sets. The user can analyze a single collection through an intuitive visual representation of the results and their metadata, which is embedded into the 3D scene. These visualizations give insights into the metadata distribution of a collection and support the application of faceted filters on the whole query history. Search results can be explored by the user in detail, organised into bookmark collections for later use, and may also be used in external tools such as editors. The ECHO implementation uses the WebGL technology for graphics-card acceleration, avoiding rendering-performance issues and providing smooth, animated transitions.
2016

Teixeira dos Santos Tiago Filipe

Start: 01.05.2016 · Supervisor: Kern Roman (Know-Center) · Keywords: Time series classification, Early time series classification, Deep Learning

Early Classification on Time Series Using Deep Learning

Master

This thesis aims to shed light on the early classification of time series problem by deriving the trade-off between classification accuracy and time series length for a number of different time series types and classification algorithms. Previous research on early classification of time series focused on keeping the classification accuracy of reduced time series roughly at the level of the complete ones, and did not employ cutting-edge approaches like Deep Learning. This work fills that research gap by computing trade-off curves of classification "earliness" vs. accuracy and by empirically comparing algorithm performance in that context, with a focus on comparing Deep Learning with classical approaches. Such early classification trade-off curves are calculated for univariate and multivariate time series and the following algorithms: 1-Nearest Neighbor search with both the Euclidean and Frobenius distance, 1-Nearest Neighbor search with forecasts from ARIMA and linear models, and Deep Learning. The results indicate that early classification is feasible in all types of time series considered. The derived trade-off curves all share the common trait of decreasing slowly at first and featuring sharp drops as time series lengths become exceedingly short. Deep Learning models were able to maintain higher classification accuracies for larger time series length reductions than the other algorithms. However, their long run-times, coupled with complex parameter configuration, imply that faster, albeit less accurate, baseline algorithms like 1-Nearest Neighbor search may still be a sensible choice on a case-by-case basis. This thesis draws its motivation from areas like predictive maintenance, where the early classification of multivariate time series data may boost the performance of early warning systems, for example in manufacturing processes.
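The trade-off curve idea behind the 1-Nearest Neighbor baseline can be illustrated on toy data: classify each test series using only its first k points and record the accuracy per prefix length. The sketch below uses synthetic two-class data and Euclidean 1-NN; it is a simplified illustration, not the thesis's experimental setup:

```python
import numpy as np

def knn1_accuracy(train_x, train_y, test_x, test_y, prefix):
    """1-NN (Euclidean) accuracy using only the first `prefix` points."""
    tr, te = train_x[:, :prefix], test_x[:, :prefix]
    # Pairwise Euclidean distances between test and train prefixes.
    d = np.linalg.norm(te[:, None, :] - tr[None, :, :], axis=2)
    pred = train_y[np.argmin(d, axis=1)]
    return float(np.mean(pred == test_y))

# Two synthetic classes: noisy sine vs. noisy cosine waves.
rng = np.random.default_rng(1)
n, length = 100, 60
t = np.linspace(0, 2 * np.pi, length)
a = np.sin(t) + 0.3 * rng.standard_normal((n, length))
b = np.cos(t) + 0.3 * rng.standard_normal((n, length))
X = np.vstack([a, b])
y = np.array([0] * n + [1] * n)
idx = rng.permutation(2 * n)
train, test = idx[:150], idx[150:]

# Trade-off curve: accuracy as a function of observed prefix length.
for prefix in (5, 15, 30, 60):
    acc = knn1_accuracy(X[train], y[train], X[test], y[test], prefix)
    print(prefix, round(acc, 2))
```

On real data the interesting quantity is where the curve starts its sharp drop, since that marks the shortest prefix at which early classification remains reliable.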
2016

Herrera Timoteo

Start: 01.01.2016 · Co-supervisor: Veas Eduardo Enrique (TUG)

Development of an augmented reality supported positioning system for radiotherapy

Master

2016

Hirv Jaanika

Start: 01.01.2016

Digital Transformation: Learning Practices and Organisational Change in a Regional VET Centre

Master

2016

Vega Bayo Marta

Start: 01.10.2015 · Supervisor: Kern Roman (Know-Center)

Reference Recommendation for Scientific Articles

Master

During the last decades, the amount of information available to researchers has increased severalfold, making searches more difficult; thus, Information Retrieval (IR) systems are needed. In this master's thesis, a tool has been developed to create a dataset with metadata of scientific articles. The tool parses the articles of PubMed, extracts metadata from them, and saves the metadata in a relational database. Once all the articles have been parsed, the tool generates three XML files with that metadata: Articles.xml, ExtendedArticles.xml, and Citations.xml. The first file contains the title, authors, and publication date of the parsed articles and of the articles referenced by them. The second contains the abstract, keywords, body, and reference list of the parsed articles. Finally, Citations.xml contains the citations found within the articles and their context. The tool has been used to parse 45,000 articles; after the parsing, the database contains 644,906 articles with their title, authors, and publication date. The articles of the dataset form a digraph in which the articles are the nodes and the references are the arcs. The in-degree of the network follows a power-law distribution: a small set of articles is referenced very often, while most articles are rarely referenced. Two IR systems have been developed to search the dataset: the Title Based IR and the Citation Based IR. The first compares the user's query to the titles of the articles, computes the Jaccard index as a similarity measure, and ranks the articles according to their similarity. The second compares the query to the paragraphs in which the citations were found. The analysis of both IRs showed that the Citation Based IR needed more execution time; nevertheless, its recommendations were much better, which showed that parsing the citations was worthwhile.
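The Title Based IR described above ranks articles by the Jaccard index between the query's word set and each title's word set. A minimal sketch of that ranking step (the function names and sample titles are illustrative, not taken from the thesis code):

```python
def jaccard(a, b):
    """Jaccard index between two token sequences, as sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_by_title(query, titles):
    """Rank titles by Jaccard similarity to the query, highest first."""
    q = query.lower().split()
    scored = [(jaccard(q, t.lower().split()), t) for t in titles]
    return sorted(scored, key=lambda s: s[0], reverse=True)

titles = [
    "Reference Recommendation for Scientific Articles",
    "Automated Season Length Detection in Time Series",
    "Open Information Extraction for German Texts",
]
for score, title in rank_by_title("recommendation of scientific references", titles):
    print(round(score, 2), title)
```

The Citation Based IR applies the same similarity computation, only against the paragraphs surrounding citations instead of against titles, which explains its higher execution time.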