Academic Theses

Here you can find academic theses written by Know-Center staff members.

2018

Anthofer Daniel

A Neural Network for Open Information Extraction from German Text

Master
Systems that extract information from natural language texts usually need to consider language-dependent aspects like vocabulary and grammar. Compared to the development of individual systems for different languages, the development of multilingual information extraction (IE) systems has the potential to reduce cost and effort. One path towards IE from different languages is to port an IE system from one language to another. PropsDE is an open IE (OIE) system that has been ported from the English system PropS to the German language. Only a few OIE methods are available for German. Our goal is to develop a neural network that mimics the rules of an existing rule-based OIE system. For that, we need to learn about OIE from German text. By analysing and comparing the rule-based systems PropS and PropsDE, we can observe a step towards multilinguality and learn about German OIE. We then present a deep-learning-based OIE system for German, which mimics the behaviour of PropsDE. The precision in directly imitating PropsDE is 28.1%. Our model produces many extractions that appear promising, but are not fully correct.
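As a minimal illustration of the evaluation idea above (a sketch, not code from the thesis): imitation precision can be computed by treating each extraction as a (subject, relation, object) triple and counting how many of the model's outputs the rule-based reference also produces. All names and toy triples are invented.

```python
def imitation_precision(model_extractions, reference_extractions):
    """Fraction of model extractions that the reference system also produces."""
    reference = set(reference_extractions)
    if not model_extractions:
        return 0.0
    matched = sum(1 for triple in model_extractions if triple in reference)
    return matched / len(model_extractions)

# Toy example (illustrative triples, not thesis data):
reference = [("PropsDE", "ist", "ein OIE-System"),
             ("PropS", "verarbeitet", "englischen Text")]
predicted = [("PropsDE", "ist", "ein OIE-System"),
             ("PropS", "verarbeitet", "deutschen Text")]
print(imitation_precision(predicted, reference))  # 0.5
```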
2018

Ziak Hermann

Context Driven Federated Recommender in Uncooperative Settings

Master
Today's search engines are tailored to the information needs of the average user. This leads to the problem that valuable information sources specialising in a particular topic can be underrepresented, since search engines generally optimise for returning the most popular results. One possible solution is to develop a system that integrates such sources in a single framework, typically called a federated search engine. Since simply returning all information from all sources may neither satisfy the user's specific information need nor reflect the user's domain-specific knowledge, personalisation is required. This could be achieved by adapting algorithms from the field of recommender engines. Ideally, such a system should recognise the user's need for support from the context, without requiring explicit intervention. Starting from the initial question of how to build such a system, one that automatically recognises the user's information need and returns personalised result lists, four research questions were formalised. First, the means necessary to extract topics from the content context had to be identified. Second, the challenges arising from automatically generated queries had to be identified, along with possible methods for addressing them and the query processing steps that are beneficial in this setting. Third, whether and how the collection representation can be further optimised, particularly in volatile, uncooperative settings. Fourth, how distributed document retrieval can be personalised, especially when high-precision results are of lesser importance, and which aggregation techniques are beneficial in this setting. Based on these questions, a total of six experiments were conducted and published in six conference and workshop publications. The results of this work show that the framework of a federated search engine can be shifted towards a federated recommender system that automatically recommends articles to users, tailored to their information needs and context.
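One building block such a federated system needs is merging ranked result lists from uncooperative sources whose relevance scores are not directly comparable. The sketch below shows a common baseline, per-source min-max normalisation followed by a global re-ranking; it is an illustration under assumed inputs, not the aggregation technique evaluated in the thesis.

```python
def normalise(results):
    """Min-max normalise (doc_id, score) pairs from one source."""
    scores = [score for _, score in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against a constant-score list
    return [(doc, (score - lo) / span) for doc, score in results]

def merge(result_lists, k=10):
    """Merge per-source lists into one ranking by normalised score."""
    pooled = [item for results in result_lists for item in normalise(results)]
    pooled.sort(key=lambda item: item[1], reverse=True)
    return pooled[:k]

# Invented scores on incomparable scales (e.g. BM25 vs. a probability):
source_a = [("a1", 42.0), ("a2", 17.0)]
source_b = [("b1", 0.93), ("b2", 0.21)]
print(merge([source_a, source_b], k=3))  # [('a1', 1.0), ('b1', 1.0), ('a2', 0.0)]
```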
2017

Huysman Dorien

Ambulant Stress Detection in Patients with Stress Complaint

Master
This thesis demonstrates the potential and benefits of unsupervised learning with Self-Organizing Maps for stress detection in laboratory and free-living environments. The general increase in the pace of life, in both the personal and work environment, leads to an intensification of work, constant time pressure and pressure to excel. This can cause psychosocial problems and negative health outcomes. Providing personal information about one's stress level can counteract the adverse health effects of stress. Currently, the most common way to detect stress is by means of questionnaires. This is time-consuming, subjective and captures stress only at discrete moments in time. The literature has shown that, in a laboratory environment, physiological signals can be used to detect stress in a continuous and objective way. Advances in wearable technology now make it feasible to continuously monitor physiological signals in daily life, allowing stress detection in a free-living environment. Ambulant stress detection is associated with several challenges. Data acquisition with wearables is less accurate compared to sensors used in a controlled environment, and physical activity influences the physiological signals. Furthermore, validating stress detection with questionnaires provides an unreliable labelling of the data, as it is subjective and delayed. This thesis explores an unsupervised learning technique, the Self-Organizing Map (SOM), to avoid the use of subjective labels. The provided data set originated from stress-inducing experiments in a controlled environment and ambulant data measured during daily-life activities. Blood volume pulse (BVP), skin temperature (ST), galvanic skin response (GSR), electromyogram (EMG), respiration, electrocardiogram (ECG) and acceleration were measured using both wearable and static devices. First, supervised learning with Random Decision Forests (RDF) was applied to the laboratory data to provide a gold standard for the unsupervised learning outcomes. A classification accuracy of 83.04% was reached using ECG and GSR features, and 76.89% using ECG features only. Then the feasibility of SOMs was tested on the laboratory data and compared a posteriori with the objective labels. Using a subset of ECG features, the classification accuracy was 76.42%. This is similar to supervised learning with ECG features, indicating the principal functioning of SOMs for stress detection. In the last phase of this thesis, the SOM was applied to the ambulant data. Training the SOM with ECG features from the ambulant data enabled clustering from the feature space. The clusters were well separated with large cohesion (average silhouette coefficient of 0.49). Moreover, the clusters were similar over different test persons and days. According to the literature, the center values of the features in each cluster can indicate stress and relax phases. By mapping test samples onto the trained and clustered SOM, stress predictions were made. Agreement with the subjective stress levels was, however, poor, with a root mean squared error (RMSE) of 0.50. It is suggested to further explore the use of Self-Organizing Maps, as the approach relies solely on the physiological data, excluding subjective labelling. Improvements can be made by applying multimodal feature sets, including for example GSR.
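A minimal NumPy sketch of the SOM training loop described above, assuming standardised feature vectors (for example, ECG-derived features). The grid size, learning-rate schedule and synthetic two-cluster data are illustrative choices, not the thesis configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0):
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Pull the BMU and its grid neighbours towards the sample.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
        lr *= 0.9  # decay the learning rate each epoch
    return weights

# Two synthetic clusters standing in for "relax" and "stress" samples.
data = np.vstack([rng.normal(0, 0.3, (50, 3)), rng.normal(2, 0.3, (50, 3))])
print(train_som(data).shape)  # (4, 4, 3): one weight vector per map node
```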
2017

Melbinger Paul

Person Recognition System for Construction Vehicles in Tunnelling and Mining

Master
As part of this master's thesis, a prototype of an assistance system for construction vehicles was developed and evaluated, designed to detect endangered persons in the construction site area. In preliminary investigations, selected sensor principles were analysed for use in person detection. A selection of optical camera and distance sensors provided data from the vehicle's surroundings. The focus of the work was on designing a suitable architecture to fuse all components and modules for person detection algorithms used in the assistance system. In the prototype setup, the human-machine interface was integrated as a live camera stream with overlaid warnings in an easy-to-understand and usable user interface. In a series of tests, the performance of the system was examined at various vehicle speeds. For combinations of the deployed sensors, the highest permissible speeds were determined at which the vehicle can still be brought to a standstill in order to avoid an accident. Test runs under conditions as realistic as possible showed that person detection can be performed in real time, but also that there is considerable room for improvement. Drivers are well supported by the system in situations with a high risk of accidents and are thus able to avoid them. In addition, the strengths and weaknesses of the person detection system were analysed, and detailed, valuable information was gained about work situations and processes, driver behaviour, individual components, and the system as a whole.
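The "highest permissible speed" idea can be illustrated with a back-of-the-envelope stopping-distance model: the vehicle must come to a standstill within the sensor's detection range d, so d = v·t_r + v²/(2a) is solved for v. The reaction time and deceleration below are assumed values for illustration, not measurements from the thesis.

```python
import math

def max_speed(detection_range_m, reaction_time_s=1.0, decel_mps2=3.0):
    """Largest speed (m/s) at which the vehicle still stops within range.
    Solves v^2/(2a) + v*t - d = 0 for the positive root."""
    t, a, d = reaction_time_s, decel_mps2, detection_range_m
    return a * (math.sqrt(t ** 2 + 2 * d / a) - t)

for d in (5, 10, 20):
    v = max_speed(d)
    print(f"detection range {d:2d} m -> max {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```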
2017

Köfler Armin

Verbesserung des Lagebewusstseins und der Maßnahmenergreifung bei der Sicherung von Großveranstaltungen

Master
During large-scale events, a command and control unit consisting of the leading members of the organisations involved must ensure the safety of visitors. The command staff continuously needs information in order to maintain awareness of the current situation and to take measures when necessary. Situational information is crucial for averting imminent dangers and resolving ongoing incidents. Once information has reached the staff, it must be distributed within it efficiently and without error. This allows a shared situational awareness to emerge that is equally and unambiguously available to all members. To support these tasks, a command support system was developed whose functions were determined following the principles of design case studies: through iterative prototype improvements, qualitative interviews with security personnel, and field studies at large-scale events. With domain experts, the use of ground- and air-based sensors for the fused preparation and presentation of the current situation regarding crowd distributions in a geographic information system (GIS) was discussed. For this purpose, the prototype was presented to them with a synthetic data set for evaluation. After observing the command unit's work processes during event security operations in order to identify weak points, the GIS system was geared towards the efficient provision of master data and the visualisation of situations for all active staff members. Identified weaknesses could be mitigated by supporting prototype functions, as the comparative re-enactment of observed incidents with the command support system showed in the concluding workshop.
2017

Falk Stefan

Supervised Aspect Category Detection in Sentiment Analysis for Opinionated Text

Master
The growth of user-generated data in the past decades has led to an increase in research conducted in the field of natural language processing (NLP). Neural networks have shown promising results in several language-related tasks such as sentiment detection (Socher et al. 2013) and opinion mining (Pang and Lee 2008), both of which have become hot topics with the emergence of social networks and platforms that allow users to write reviews and express opinions towards entities. Detecting the sentiment of short texts (Severyn and Moschitti 2015) can be particularly challenging, as missing context information may be encoded in just a few phrases. Building upon the information retrieved from sentiment detection, and combining it with information from aspect category detection systems, allows the determination of positive or negative opinions towards entities or particular aspects of them. Aspect category detection is the task of obtaining the targeted aspect of an opinionated expression; it is the attempt to find out what is being talked about or referred to. The objective of this thesis is to develop a system for aspect category detection, in the sense of NLP information retrieval, using neural networks. In particular, the requirements for the system follow the definition of Task 5 Slot 1 (Pontiki et al. 2016) for constrained systems of the 2016 Semantic Evaluation challenge.
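As a hedged stand-in for the neural system the thesis describes: aspect category detection can be framed as supervised text classification. The real SemEval-2016 Task 5 Slot 1 setting is multi-label; the simplified single-label sketch below uses a small scikit-learn pipeline (TF-IDF features into a multilayer perceptron) with invented restaurant-domain sentences.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented training data; real systems train on the SemEval corpora.
train_texts = ["The pasta was superb", "Staff were rude", "Great wine list",
               "We waited an hour for a table"]
train_labels = ["FOOD#QUALITY", "SERVICE#GENERAL", "DRINKS#QUALITY",
                "SERVICE#GENERAL"]

model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
model.fit(train_texts, train_labels)
print(model.predict(["The waiter ignored us"]))  # e.g. ['SERVICE#GENERAL']
```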
2017

Widnig Dominik

Evaluation of User Experience in a Location-Based Mobile Role-Playing Game

Master
Location-based games are currently more popular than ever with the general public. Games such as Geocaching, Ingress and Pokemon Go have created high demand in the app market and established themselves as a major category in the mobile gaming sector. Since location-based games rely on mobile sensors, battery life, cellular data connections and even environmental conditions, many problems can arise while playing and hence reduce user experience and player enjoyment. The aim of this thesis is to improve the gaming experience of location-based games, which use map information to place virtual content at appropriate physical locations, with the assistance of a user-centered design approach. Therefore, a game named Geo Heroes was designed and implemented in order to evaluate it with existing quantitative and qualitative methods from research. The game was assessed in an empirical study with nine participants, including a game-play session of about one hour. Participants were divided into an experimental and a control group in order to compare the implemented content placement algorithms. An already established questionnaire for traditional computer games, and one created by the author based on existing research in location-based games, were used to measure common factors in gaming experience. Additionally, participants sent log data with their current emotions during game-play after various interactions with game objects. Different outcome scenarios of interactions were considered to ensure a better analysis. Furthermore, an open group discussion was held to gather qualitative information from participants, to reveal still undiscovered issues and to corroborate the results of the quantitative methods. Results have shown that the questionnaire for location-based games is a useful tool to measure player enjoyment. In combination with the tracked emotions and a group interview, relevant information can be obtained in order to improve game design and mechanics.
2017

Traub Matthias

Anwendung von Konversationsmetriken zur automatisierten Orchestrierung von Video Konferenz Anzeigemodi: Zwei Vergleichsstudien

Master
In the last decade, video conferencing systems have become an essential part of modern communication. Initially predominantly a business application due to its high acquisition cost, video conferencing has made its way from the boardroom to the personal sector and even to hand-helds and mobiles. In addition to the basic combination of audio and video streams, there are many extra capabilities like onscreen drawing, file sharing, and facial recognition. Video conferencing enables real-time, synchronous communication independent of the participants’ location. Although technology has improved, video conferencing systems are still not considered to be as good as face-to-face meetings and therefore constitute a separate communication situation. One of the major problems of video conferences is that each participant has a different perception of the conversational situation and communication. The goal of this thesis is the evaluation of automated orchestration in the Vconect video conferencing system through two comparative studies. In the first study, two different view modes (tiled and full screen) were compared with regard to their impact on communication and system quality. The study was designed as a repeated measures study with the view mode as the independent measure. A previous study showed that certain view modes are more suitable for particular scenarios. The goal of this study was to see whether this hypothesis holds true in a slow turn-taking scenario. The study was performed with 16 participants split into 4 groups of 4. It showed no statistically significant preference for a particular view mode, but did reveal a tendency in preference towards the tiled view mode, and also revealed other problems with the system. The second comparative study investigated the impact of voice activity detection sensitivity (start delay). Three different degrees of sensitivity were compared within full-screen view mode. The thresholds for the three start delays were chosen at 300, 600, and 900 ms according to insights from previous evaluations and simulations. The study was designed as a repeated measures study with the start delay as the independent measure. It was performed with 40 participants divided into 10 groups of 4. The analysis of the subjective measures showed that the shortest start delay of 300 ms (highest sensitivity) was rated statistically significantly worse than the longer start delays (lower sensitivity) in three aspects. However, overall preference showed only a tendency towards the two longer start delays (lower degrees of sensitivity).
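The start-delay mechanism from the second study can be sketched as a simple rule: a speaker triggers an orchestration event (for example, a switch to full screen) only once voice activity has persisted for the configured minimum duration. The frame length, activity threshold and synthetic activity scores below are assumptions for illustration, not Vconect internals.

```python
def speaking_events(frames, start_delay_ms, frame_ms=20, threshold=0.5):
    """Yield frame indices at which a view switch would fire, given
    per-frame voice-activity scores in `frames`."""
    needed = start_delay_ms // frame_ms  # consecutive active frames required
    run = 0
    for i, score in enumerate(frames):
        run = run + 1 if score >= threshold else 0
        if run == needed:
            yield i

# A short burst (too short to trigger at 300 ms) followed by sustained speech.
frames = [0.8] * 10 + [0.1] * 5 + [0.9] * 60
for delay in (300, 600, 900):
    print(delay, "ms ->", list(speaking_events(frames, delay)))
```

A higher sensitivity (shorter delay) makes the switch fire earlier, which the study found was rated worse; the sketch makes that trade-off directly visible.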
2017

Draxler Fiona

Adaptive Writing Support – Suggesting Appropriate Tools based on Cognitive Processes

Master
Texts are of crucial importance for communicating and managing information. However, text composition is still a challenge for many people: in order to effectively convey their message, writers need skills in planning and structuring, linguistic ability, and also the ability to evaluate their own work. In this thesis, we look at how writers can be supported in all the tasks encompassed in the writing process. To this end, and in addition to literature research, we conducted an experiment to analyse the characteristics of the writing processes as well as difficulties writers typically encounter when they search for information, plan the structure of their text, translate their ideas to words, and review their writing. We formulate requirements for aiding these tasks and propose support possibilities, with a special focus on digital solutions. Issues with existing tools are that they generally support only one aspect and interrupt the writing task. This was our motivation for developing a prototype of a comprehensive text composition tool which supports writers in all stages of their task. We chose to implement it as a Google Docs add-on, which means that it can be integrated seamlessly into the Google Docs text editor. The add-on offers a number of features specifically tailored to each writing process. Finally, we performed a user study to evaluate the features and the workflow while using the add-on.
2017

Lusser Michael

Predictive Analytics zur Wartungsoptimierung in der elektrischen Energiewirtschaft

Master
The electrical energy industry is undergoing a transition. Both energy producers and grid operators are affected by the shift towards renewable energy. Higher costs for generation and transmission stand against regulated revenues. Maintenance costs are a considerable cost factor. This raises the question of whether predictive analytics in general, and predictive maintenance in particular, are an option for reducing these costs while maintaining or improving reliability. After a review of the technological, economic and legal framework conditions, a narrative scenario is created using scenario techniques. This scenario serves to stimulate experts from various areas of the electrical energy industry, who are then asked for their opinions. Even though legal concerns currently exist, there is agreement that predictive maintenance will come to the electrical energy industry. These changes are not limited to energy providers; suppliers, service providers and customers will also be affected.
2017

Lukas Sabine

Coordination support for firebrigade teams in the case of forest fire

Master

Master
In forest fire situations, the crisis management team often faces problems concerning coordination, the development of an operational strategy, and maintaining an overview during the operation. The goal of this work was a basic prototype demonstrating support options for the operator in the command centre. Usability was the focus during the development of this prototype. To improve usability, methods of User Centered Design (UCD) were applied throughout the software development process. In developing software for a small user group, it was found that, owing to the users' niche position, different methods must be applied than for a larger user group. For the final presentation of the prototype, an international expert workshop was chosen, at which the software was demonstrated and subsequently discussed with the experts. From these discussions it could be concluded that such software does not yet exist and is needed for many tasks of the operations staff. In general, it can be said that UCD methods form a good basis for the development of disaster management software, and that the further development of this software prototype is a good starting point for the development of a forest fire management system.
2017

Müller Andreas

Supporting online learning for Starcraft II

Master

Master
Question and answer (Q&A) systems are, and will remain, crucial in digital life. Popular Q&A systems have succeeded with text, images and markup language as input possibilities. While this is sufficient for most questions, I argue that it is not always the case for questions with a complex background. By implementing and evaluating a prototype of a domain-tailored Q&A tool, I want to tackle the problem that formulating complex questions in text only, and subsequently finding them, can be a hard task. Testing several non-text input possibilities, including parsing standardized documents to populate metadata automatically, and mixing exploratory and faceted search should lead to a more satisfying user experience when creating and searching questions. Choosing the StarCraft II community ensures many questions with a complex background belonging to one domain. The evaluation results show that the implemented Q&A system, in the form of a website, can hardly be compared to existing ones without large amounts of data. Regardless, users do see potential for the website to succeed within the community, which suggests that domain-tailored Q&A systems, where questions with metadata exist, can succeed in other fields of application as well.
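As a toy illustration of the metadata idea in this abstract, the sketch below filters questions by structured facets; the StarCraft II field names and values are invented, not taken from the thesis.

```python
# Faceted filtering: keep only the questions whose metadata matches every
# requested facet value, instead of searching free text alone.
questions = [
    {"id": 1, "race": "zerg", "matchup": "ZvT", "league": "gold"},
    {"id": 2, "race": "protoss", "matchup": "PvZ", "league": "master"},
    {"id": 3, "race": "zerg", "matchup": "ZvP", "league": "gold"},
]

def faceted_filter(items: list, **facets) -> list:
    """Return items matching all requested facet values."""
    return [q for q in items
            if all(q.get(k) == v for k, v in facets.items())]

print(faceted_filter(questions, race="zerg", league="gold"))  # ids 1 and 3
```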
2016

Vega Bayo Marta

Reference Recommendation for Scientific Articles

Master

Master
During the last decades, the amount of information available to researchers has increased severalfold, making searches more difficult; thus, information retrieval (IR) systems are needed. In this master's thesis, a tool has been developed to create a data set with metadata of scientific articles. The tool parses the articles of PubMed, extracts metadata from them and saves the metadata in a relational database. Once all the articles have been parsed, the tool generates three XML files with that metadata: Articles.xml, ExtendedArticles.xml and Citations.xml. The first file contains the title, authors and publication date of the parsed articles and of the articles referenced by them. The second one contains the abstract, keywords, body and reference list of the parsed articles. Finally, Citations.xml contains the citations found within the articles and their context. The tool has been used to parse 45,000 articles. After the parsing, the database contains 644,906 articles with their title, authors and publication date. The articles of the data set form a digraph in which the articles are the nodes and the references are the arcs. The in-degree of the network follows a power-law distribution: there is a small set of articles referenced very often, while most articles are rarely referenced. Two IR systems have been developed to search the data set: the title-based IR and the citation-based IR. The first compares the user's query to the titles of the articles, computes the Jaccard index as a similarity measure and ranks the articles according to their similarity. The second compares the query to the paragraphs where the citations were found. The analysis of both IR systems showed that the citation-based IR needed more execution time; nevertheless, its recommendations were much better, which proved that parsing the citations was worthwhile.
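The title-based ranking described above is easy to sketch; the following few lines are an illustration only, with invented example articles and plain word-set tokenization as simplifying assumptions.

```python
# Sketch of the title-based IR: rank articles by the Jaccard index
# between the word set of the query and the word set of each title.

def jaccard(a: set, b: set) -> float:
    """Jaccard index of two sets; 0.0 if both are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_by_title(query: str, articles: dict) -> list:
    """Return article ids sorted by similarity of their title to the query."""
    q = set(query.lower().split())
    scores = {aid: jaccard(q, set(title.lower().split()))
              for aid, title in articles.items()}
    return sorted(scores, key=scores.get, reverse=True)

articles = {1: "Reference recommendation for scientific articles",
            2: "Early classification of time series"}
print(rank_by_title("recommendation of references", articles))  # [1, 2]
```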
2016

Hirv Jaanika

Digital Transformation: Learning Practices and Organisational Change in a Regional VET Centre

Master

Master
2016

Herrera Timoteo

Development of an augmented reality supported positioning system for radiotherapy

Master

Master
2016

Teixeira dos Santos Tiago Filipe

Early Classification on Time Series Using Deep Learning

Master

Master
This thesis aims to shed light on the problem of early classification of time series by deriving the trade-off between classification accuracy and time series length for a number of different time series types and classification algorithms. Previous research on early classification of time series focused on keeping the classification accuracy of reduced time series roughly at the level of the complete ones; furthermore, that work does not employ cutting-edge approaches like Deep Learning. This work fills that research gap by computing trade-off curves of classification “earliness” vs. accuracy and by empirically comparing algorithm performance in that context, with a focus on the comparison of Deep Learning with classical approaches. Such early classification trade-off curves are calculated for univariate and multivariate time series and the following algorithms: 1-Nearest Neighbor search with both the Euclidean and Frobenius distance, 1-Nearest Neighbor search with forecasts from ARIMA and linear models, and Deep Learning. The results obtained indicate that early classification is feasible in all types of time series considered. The derived trade-off curves all share the common trait of decreasing slowly at first and featuring sharp drops as time series lengths become exceedingly short. Results showed that Deep Learning models were able to maintain higher classification accuracies for larger time series length reductions than other algorithms. However, their long run-times, coupled with the complexity of parameter configuration, imply that faster, albeit less accurate, baseline algorithms like 1-Nearest Neighbor search may still be a sensible choice on a case-by-case basis. This thesis draws its motivation from areas like predictive maintenance, where the early classification of multivariate time series data may boost the performance of early warning systems, for example in manufacturing processes.
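How such an earliness-vs-accuracy trade-off curve can be computed is sketched below for the 1-Nearest Neighbor baseline mentioned in the abstract; the synthetic two-class data and all parameter choices are invented for illustration, and only NumPy is assumed.

```python
# Earliness vs. accuracy: classify test series from a prefix of growing
# length with 1-NN (Euclidean distance) and record the accuracy per length.
import numpy as np

rng = np.random.default_rng(0)

def make_series(label: int, n: int = 50) -> np.ndarray:
    """Synthetic series: one sine phase per class, plus noise."""
    base = np.sin(np.linspace(0, 6, n)) * (1 if label else -1)
    return base + rng.normal(scale=0.5, size=n)

X_train = np.array([make_series(i % 2) for i in range(40)])
y_train = np.array([i % 2 for i in range(40)])
X_test = np.array([make_series(i % 2) for i in range(20)])
y_test = np.array([i % 2 for i in range(20)])

def nn_predict(prefix_len: int) -> np.ndarray:
    """1-NN prediction using only the first prefix_len points."""
    d = np.linalg.norm(X_test[:, None, :prefix_len]
                       - X_train[None, :, :prefix_len], axis=2)
    return y_train[d.argmin(axis=1)]

for frac in (0.1, 0.25, 0.5, 1.0):          # points on the trade-off curve
    n = max(1, int(frac * X_train.shape[1]))
    acc = (nn_predict(n) == y_test).mean()
    print(f"prefix {frac:4.0%}: accuracy {acc:.2f}")
```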
2016

Bassa Akim

GerIE: Open Information Extraction for German Texts

Master

Master
Open Information Extraction (OIE) targets domain- and relation-independent discovery of relations in text, scalable to the Web. Although German is a major European language, no research has been conducted in German OIE yet. In this work we fill this knowledge gap and present GerIE, the first German OIE system. As OIE has received increasing attention lately and various potent approaches have already been proposed, we surveyed to what extent these methods can be applied to the German language and which additional principles could be valuable in a new system. The most promising approach, hand-crafted rules working on dependency-parsed sentences, was implemented in GerIE. We also created two German OIE evaluation datasets, which showed that GerIE achieves at least 0.88 precision and recall on correctly parsed sentences, while errors made by the dependency parser can reduce precision to 0.54 and recall to 0.48.
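GerIE's actual rules are not reproduced in the abstract, but the general technique of hand-crafted rules over dependency-parsed sentences can be illustrated as follows. This is a minimal sketch, assuming spaCy with its German model de_core_news_sm; 'sb' and 'oa' are the TIGER-style dependency labels spaCy's German models use for subject and accusative object.

```python
# Toy dependency rule: for each verb, emit a (subject, verb, object) triple
# if it governs both a subject ('sb') and an accusative object ('oa').
import spacy

nlp = spacy.load("de_core_news_sm")

def extract_triples(text: str) -> list:
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [c for c in token.children if c.dep_ == "sb"]
            obj = [c for c in token.children if c.dep_ == "oa"]
            if subj and obj:
                triples.append((subj[0].text, token.text, obj[0].text))
    return triples

print(extract_triples("Der Student schreibt eine Masterarbeit."))
# expected: [('Student', 'schreibt', 'Masterarbeit')]
```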
2016

Bischofter Heimo

Vergleich der Leistungsfähigkeit von Graphen-Datenbanken für Informationsvernetzung anhand der Abbildbarkeit von Berechtigungskonzepten

Master

Master
Networked data and structures are attracting growing interest and are pushing established methods of data storage into the background. Graph databases offer a new approach to the challenges posed by managing large and highly connected data sets. This master's thesis evaluates the performance of graph databases compared to an established relational database. Performance is determined through benchmark tests on the processing of highly connected data, taking into account an implemented fine-grained permission concept. The theoretical part first covers the fundamentals of databases and graph theory, which provide the basis for assessing the functional scope and functionality of the graph databases selected for evaluation. The permission concepts described give an overview of different access models and of how access controls are implemented in the graph databases. Based on the information gained, a Java framework is implemented that makes it possible to test both the graph databases and the relational database under the implemented fine-grained permission concept. By executing suitable test runs, the performance of write and read operations can be determined. Write benchmarks are carried out for data sets of different sizes, and individually defined search queries over the different data sizes allow the read performance to be measured. It was shown that the relational database scales better than the graph databases when writing data: creating nodes and edges in a graph database is more expensive than creating a new table entry in a relational database. The evaluation of the search queries under the implemented access concept showed that graph databases scale considerably better than the relational database for large and highly connected data sets. The more connected the data, the more pronounced the JOIN problem of the relational database becomes.
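The JOIN problem mentioned at the end can be made concrete with a toy comparison in Python (not the thesis's Java framework): a k-hop query over an adjacency list only touches reachable nodes, while the equivalent chain of self-joins over an edge table rescans all edges at every hop.

```python
# Index-free adjacency (graph view) vs. repeated self-joins (relational view)
# for a k-hop neighborhood query over the same toy edge set.
from collections import defaultdict

edges = [(1, 2), (2, 3), (3, 4), (2, 4)]         # the "edge table"

adj = defaultdict(list)                           # the "graph storage"
for src, dst in edges:
    adj[src].append(dst)

def khop_graph(start: int, k: int) -> set:
    """Follow adjacency lists: work grows with the visited nodes only."""
    frontier = {start}
    for _ in range(k):
        frontier = {n for f in frontier for n in adj[f]}
    return frontier

def khop_joins(start: int, k: int) -> set:
    """Simulate k self-joins: every hop scans the whole edge table."""
    frontier = {start}
    for _ in range(k):
        frontier = {dst for (src, dst) in edges if src in frontier}
    return frontier

print(khop_graph(1, 2), khop_joins(1, 2))         # both print {3, 4}
```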
2016

Hasitschka Peter

Visualisierung und Analyse von Empfehlungs-Historien unter Einsatz von WebGL

Master

Master
Content-based recommender systems are commonly used to automatically provide context-based resource suggestions to users. This work introduces ECHO (Explorer of Collection HistOries), a visual tool supporting the visualization of a recommender system’s entire query history. It provides an interactive three-dimensional scene resembling the CoverFlow layout to browse through all collections in several levels of detail, compare collections, and find similarities in previous result sets. The user can analyze a single collection through an intuitive visual representation of the results and their metadata, which is embedded into the 3D scene. These visualizations give insights into the metadata distribution of a collection and support the application of faceted filters on the whole query history. Search results can be explored by the user in detail, organized in bookmark collections for later use, and may also be used in external tools such as editors. The ECHO implementation uses the WebGL technology for graphics-card acceleration, avoiding rendering performance issues and providing smooth, animated transitions.
2016

Bassa Kevin

Validation of Information: On-The-Fly Data Set Generation for Single Fact Validation

Master

Master
Information validation is the process of determining whether a certain piece of information is true or false. Existing research in this area focuses on specific domains but neglects cross-domain relations. This work attempts to fill this gap and examines how various domains deal with the validation of information, providing a big picture across multiple domains. To that end, we study how research areas, application domains and their definitions of related terms in the field of information validation are related to each other, and show that there is no uniform use of the key terms. In addition, we give an overview of existing fact-finding approaches, with a focus on the data sets used for evaluation. We show that even baseline methods already achieve very good results, and that more sophisticated methods often improve the results only when they are tailored to specific data sets. Finally, we present the first step towards a new dynamic approach to information validation, which generates a data set for existing fact-finding methods on the fly by utilizing web search engines and information extraction tools. We show that, with some limitations, it is possible to use existing fact-finding methods to validate facts without a preexisting data set. We generate four different data sets with this approach and use them to compare seven existing fact-finding methods to each other. We discover that the performance of the fact validation process depends strongly on the type of fact that has to be validated as well as on the quality of the information extraction tool used.
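The on-the-fly data set generation can be pictured as a small pipeline. In the sketch below, `search_snippets` is a hypothetical placeholder for a real search-engine client (the abstract names no specific one), and simple majority voting stands in for the surveyed fact-finding methods.

```python
# Hedged sketch: gather snippets about a candidate fact from a web search,
# then let the sources "vote" on whether the claimed object is supported.

def search_snippets(query: str) -> list:
    """Hypothetical stand-in for a web search API returning text snippets."""
    raise NotImplementedError("plug a real search-engine client in here")

def validate_fact(subject: str, predicate: str, obj: str) -> bool:
    """True if a majority of retrieved snippets mention the claimed object."""
    snippets = search_snippets(f'"{subject}" {predicate}')
    votes = sum(obj.lower() in s.lower() for s in snippets)
    return votes > len(snippets) / 2
```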
2015

Steinkellner Christof

Empirische Analyse von sozialen Netwerken von Informatikern

Master

Master
Twitter is a very popular social network among scientists, who use it to discuss a wide range of topics, promote new ideas, or present results of their current research. The experiments conducted in this work are based on a Twitter data set consisting of tweets by computer scientists whose research areas are known. This thesis can be roughly divided into four parts. First, it is described how the Twitter data set was created. Then, various statistics on this data set are presented: for example, most tweets were created during working hours, and the users vary considerably in how active they are. From the users' follower relationships, a network was built that demonstrably has small-world properties; moreover, the different research areas are visible in this network. The third part is devoted to the analysis of hashtag usage. It turned out that most hashtags are used only rarely. Viewed over the entire observation period, hashtag usage barely changes, but there are many short-term fluctuations. Since the users' research areas are known, the areas of the hashtags can also be determined, which allows hashtags to be divided into field-specific and general ones. The fourth part analyses how hashtags propagate through the Twitter network by means of so-called information flow trees. Based on these trees, it can be measured how well a user spreads and generates information. The hypothesis that these properties depend on the number of tweets and retweets and on the position in the social network was confirmed, although the relationship is strongly pronounced only in individual cases.
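The long-tail finding on hashtag usage can be checked on any tweet collection with a few lines; the tweets below are invented for illustration.

```python
# Count hashtag frequencies and report how many hashtags occur only once.
from collections import Counter
import re

tweets = ["Neues Paper zu #nlp und #ml",
          "#ml Workshop naechste Woche",
          "Kaffee #montag"]

counts = Counter(tag.lower()
                 for t in tweets
                 for tag in re.findall(r"#(\w+)", t))
rare = [tag for tag, n in counts.items() if n == 1]
print(counts.most_common())
print(f"{len(rare)} of {len(counts)} hashtags are used only once")
```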
2015

Eberhard Lukas

Predicting Trading Interactions in Trading, Online and Location-Based Social Networks

Master

Master
2015

Perez Alberto

English Wiktionary Parser & Lemmatizer

Master

Master

Wiktionary is a free dictionary that is part of the Wikimedia Foundation. The site contains translations, etymologies, synonyms and pronunciations of words in multiple languages; in this work we focus on English.

A syntactic analyser (parser) turns the entry text into other structures, which makes analysing and capturing nested entries easier.
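The abstract gives no parser details, but a first structuring step can be sketched as splitting the raw wikitext of an entry into its section headings; the example entry and the regular expression are illustrative assumptions.

```python
# Split a Wiktionary entry's wikitext into sections based on ==...== headings.
import re

wikitext = """==English==
===Etymology===
From Middle English ...
===Noun===
# A domesticated animal ...
"""

def sections(text: str) -> dict:
    """Map each heading title to the body text that follows it."""
    parts = re.split(r"^=+\s*(.*?)\s*=+\s*$", text, flags=re.M)
    # parts = [preamble, title1, body1, title2, body2, ...]
    return dict(zip(parts[1::2], (body.strip() for body in parts[2::2])))

print(list(sections(wikitext)))   # ['English', 'Etymology', 'Noun']
```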

2015

Steinkogler Michael

Verbesserung von Query Suggestions für seltene Queries auf facettierten Dokumenten.

Master

Master
The goal of this thesis is to improve query suggestions for rare queries on faceted documents. While there has been extensive work on query suggestions for single-facet documents, little is known about how to provide query suggestions in the context of faceted documents. The constraint to provide suggestions also for uncommon or even previously unseen queries (so-called rare queries) increases the difficulty of the problem, as the commonly used technique of mining query logs cannot easily be applied.

In this thesis it was further assumed that the user of the information retrieval system always searches for one specific document, leading to uniformly distributed queries. Under these constraints, the structure of the faceted documents was exploited to provide helpful query suggestions. In addition to the theoretical exploration of such improvements, a custom data structure was developed to provide interactive query suggestions efficiently. The developed query suggestion algorithms were evaluated on multiple document collections by comparing them to a baseline algorithm that reduces faceted documents to single-facet documents. The results are promising, as the final version of the new query suggestion algorithm consistently outperformed the baseline.

Motivation for and potential applications of this work can be found in call centres for customer support. For call centre employees it is crucial to quickly locate relevant customer information, which is available in structured form and can thus easily be transformed into faceted documents.
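A minimal sketch of the setting, not of the thesis's custom data structure: when query logs are unavailable, suggestions for a typed prefix can be drawn from the indexed facet values themselves. The documents and facets below are invented.

```python
# Suggest (facet, term) pairs whose term starts with the typed prefix,
# using an inverted index over the facet values of the documents.
from collections import defaultdict

docs = [{"name": "Maria Berger", "city": "Graz"},
        {"name": "Martin Auer", "city": "Wien"}]

index = defaultdict(set)                  # (facet, term) -> matching doc ids
for i, doc in enumerate(docs):
    for facet, value in doc.items():
        for term in value.lower().split():
            index[(facet, term)].add(i)

def suggest(prefix: str, limit: int = 5) -> list:
    """Return up to `limit` (facet, term) suggestions for the prefix."""
    prefix = prefix.lower()
    hits = [key for key in index if key[1].startswith(prefix)]
    return sorted(hits)[:limit]

print(suggest("mar"))   # [('name', 'maria'), ('name', 'martin')]
```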

2015

Parekodi Sathvik

A RESTful Web-based Expert Recommender Framework

Master

Master
2015

Rella Matthias

Bits And Pieces: Ein generisches Widget-Framework für die Sinngewinnung im World Wide Web (A Generic Widget Framework for Sensemaking on the World Wide Web)

Master
The term “sensemaking” refers to a universal concept investigated in various sciences, both within individual disciplines and across them. From the perspective of the information sciences, sensemaking occurs when a person has to deal with a large, perhaps overwhelming, and heterogeneous amount of information and make sense of it. This process, which is likely to occur in everyday life, and the sense made as its product are subject to ongoing research, especially given today's information deluge. The World Wide Web is the medium of today's vast and heterogeneous amount of information, and it pervades our everyday lives. Whether we need to perform a deep search for scientific literature, figure out which hotel to book when travelling, or simply keep track of our Web surfing, we engage in a kind of sensemaking on the Web. However, general-purpose user interfaces that can cope with the dynamic and heterogeneous nature of information on the Web are missing. This thesis examines the term sensemaking from various theoretical perspectives and reviews existing user interface approaches. It then develops a novel theoretical and technical framework for building user interfaces for sensemaking on the Web, which is finally evaluated in a user study and in expert interviews.
2015

Höffernig Martin

Started: 28.04.2009
Supervisor: Lindstaedt Stefanie (TUG)
Keywords: none
Full text not publicly available (TU)

Formalisierung der Semantic Constraints von MPEG-7 Profilen (Formalization of the Semantic Constraints of MPEG-7 Profiles)

Master
The amount of multimedia content being created is growing tremendously, as is the number of applications for processing, consuming, and sharing multimedia content. Being able to create and process metadata describing this content is an important prerequisite for a correct application workflow. The MPEG-7 standard enables the description of different types of multimedia content through standardized metadata descriptions. In practical use, MPEG-7 exhibits two major drawbacks: complexity and fuzziness. Complexity stems mainly from the comprehensiveness of MPEG-7, while fuzziness results from its syntax variability. The notion of MPEG-7 profiles was introduced to address and possibly solve these issues. A profile defines the usage and semantics of MPEG-7 tailored to a particular application domain; usage instructions and explanations, denoted as semantic constraints, can thus be expressed in English prose. However, these textual explanations leave room for misinterpretation, since they have no formal grounding. While the conformance of an MPEG-7 profile description can be checked on the syntactic level, the semantic constraints currently cannot be checked in an automated way. Without handling the semantic constraints, inconsistent MPEG-7 profile descriptions can be created or processed, leading to potential interoperability issues. This thesis therefore presents an approach for formalizing the semantic constraints of MPEG-7 profiles using ontologies and logical rules. Ontologies model the characteristics of the different profiles with respect to the semantic constraints, while validation rules detect and flag violations of these constraints. In a similar manner, profile-independent temporal semantic constraints are also formalized. The presented approach is the basis for a semantic validation service for MPEG-7 profile descriptions, called VAMP. VAMP verifies the conformance of a given MPEG-7 profile description with a selected MPEG-7 profile specification in terms of syntax and semantics. Three different profiles are integrated in VAMP, and the temporal semantic constraints are also considered. As a proof of concept, VAMP is implemented as a web application for human users and as a RESTful web service for software agents.
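To make the validation idea concrete: a semantic constraint can be thought of as a rule that flags descriptions violating it. The sketch below is only a loose, hypothetical illustration in Python over a dict-shaped description; VAMP itself formalizes constraints with ontologies and logical rules, and the field names here are invented.

# Minimal sketch: a semantic constraint as a validation rule over a
# parsed MPEG-7 description. Field names are hypothetical; VAMP itself
# works on ontologies and logical rules, not plain dicts.

def validate_segments(description):
    """Flag video segments violating the (hypothetical) constraint
    'every VideoSegment must declare a MediaTimePoint'."""
    violations = []
    for segment in description.get("VideoSegment", []):
        if "MediaTimePoint" not in segment:
            violations.append(
                f"Segment {segment.get('id', '?')}: missing MediaTimePoint"
            )
    return violations

example = {
    "VideoSegment": [
        {"id": "seg1", "MediaTimePoint": "T00:00:00"},
        {"id": "seg2"},  # violates the constraint
    ]
}

for v in validate_segments(example):
    print(v)  # -> Segment seg2: missing MediaTimePoint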
2015

Daum Martin

Started: 31.01.2013
Supervisor: Pammer-Schindler Viktoria (TUG); second supervisor: Simon Jörg Peter (Know-Center)
Keywords: indoor localization; smartphone; dead reckoning; lost items tracker; step detection
Full text not publicly available (TU)

Lokalisierung von verlorenen Gegenständen durch Dead Reckoning von Fußgängern auf Smartphones (Locating Lost Items via Pedestrian Dead Reckoning on Smartphones)

Master
Many people face the problem of misplaced personal items in their daily routine, especially when they are in a hurry, and often waste a lot of time searching for these items. Various gadgets and applications on the market try to help people find lost items, most often by providing an infrastructure that can locate them. This thesis presents a novel approach for finding lost items: helping people re-trace their movements throughout the day. Movements are logged by indoor localization based on mobile phone sensing; no external infrastructure is needed. The application is based on a step-based pedestrian dead reckoning system developed to collect real-time localization data. This data is used to draw a live visualization of the whole trace the user has covered, from which the user can retrieve the position of lost personal items after tagging them with simple speech commands. The results of a field experiment with twelve participants of different age and gender showed that the application could successfully visualize the route covered by the pedestrians and reveal the position of the placed items.
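For illustration, a step-based pedestrian dead reckoning loop can be sketched in a few lines, assuming a fixed step length, headings from a compass, and steps detected as rising threshold crossings of the acceleration magnitude; the thesis's actual detector and parameters are not reproduced here.

import math

STEP_LENGTH_M = 0.7   # assumed fixed stride length
THRESHOLD = 11.0      # m/s^2; a crossing above ~1 g signals a step

def dead_reckon(samples, start=(0.0, 0.0)):
    """samples: list of (accel_magnitude_m_s2, heading_rad) pairs.
    Returns the estimated trace as a list of (x, y) positions."""
    x, y = start
    trace = [(x, y)]
    above = False
    for accel, heading in samples:
        if accel > THRESHOLD and not above:   # rising edge = one step
            x += STEP_LENGTH_M * math.sin(heading)
            y += STEP_LENGTH_M * math.cos(heading)
            trace.append((x, y))
        above = accel > THRESHOLD
    return trace

# Three steps heading roughly north, then two heading east.
samples = [(12.0, 0.0), (9.0, 0.0)] * 3 + [(12.0, math.pi / 2), (9.0, math.pi / 2)] * 2
print(dead_reckon(samples))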
2015

Perndorfer Markus

Started: 01.02.2013
Supervisor: Pammer-Schindler Viktoria (TUG); second supervisor: Simon Jörg Peter (Know-Center)
Keywords: mobile sensing; data mining; social; interactions
Full text not publicly available (TU)

Soziale Interaktionen mittels Mobile Sensing erkennen (Detecting Social Interactions via Mobile Sensing)

Master
With this thesis we try to determine the feasibility of detecting face-to-face social interactions based on standard smartphone sensors such as Bluetooth, Global Positioning System (GPS) data, the microphone, or the magnetic field sensor. We try to detect the number of social interactions by leveraging Mobile Sensing on modern smartphones. Mobile Sensing is the use of smartphones as ubiquitous sensing devices to collect data. Our focus lies on the standard smartphone sensors provided by the Android Software Development Kit (SDK), as opposed to previous work, which mostly leverages only audio signal processing or Bluetooth data. To mine data and collect ground truth, we wrote an Android app that collects sensor data using the Funf Open Sensing Framework [1] and additionally allows users to label their social interactions as they take place. With the app we performed two user studies over the course of three days with three participants each. We collected the data and added meta-data for every user during an interview; this meta-data consists of semantic labels for location data and the distinction between private and business social interactions. We collected a total of 16M data points for the first group and 35M data points for the second group. Using the collected data and the ground truth labels provided by our participants, we then explored how time of day, audio data, calendar appointments, magnetic field values, Bluetooth data, and location data relate to the number of social interactions of a person. We performed this exploration by creating various visualizations of the data points and used time correlation to determine whether they influence social interaction behavior. We found that only calendar appointments show some correlation with social interactions and could be used in a detection algorithm to boost the accuracy of the result; the other data points showed no correlation during our exploratory evaluation. We also found that visualizing the interactions as a heatmap on a map is a visualization most participants find very interesting. Our participants also made clear that labeling all social interactions over the course of a day is a very tedious task. We recommend that further research include audio signal processing and a carefully designed study setup; this design has to specify what data needs to be sampled at what frequency and accuracy, and must provide further assistance to the user for labeling the data. We release the data mining app and the code used to analyze the data as open source under the MIT License.
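For context, the audio channel in such studies is usually summarized as an average sound pressure level per time window, computed from the RMS of the samples. The sketch below makes that standard computation explicit; the reference value, sample rate, and one-minute framing are assumptions, not the app's actual code.

import math

def spl_db(samples, ref=1.0):
    """Sound pressure level in dB relative to `ref` full scale,
    from the RMS of normalized audio samples in [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12) / ref)

def minute_levels(stream, rate=8000):
    """One average level per one-minute frame of a sample stream."""
    frame = rate * 60
    return [spl_db(stream[i:i + frame])
            for i in range(0, len(stream) - frame + 1, frame)]

# Half silence, half a 0.5-amplitude tone: RMS = sqrt(0.125) ~ 0.354.
one_second = [0.0] * 4000 + [0.5] * 4000
print(round(spl_db(one_second), 1))  # -> -9.0 (dBFS)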
2015

Strohmaier David

Started: 15.02.2015
Supervisor: Veas Eduardo Enrique (TUG); second supervisor: di Sciascio Maria Cecilia (Know-Center)
Keywords: Wikipedia, Visual Analytics, Automatic Quality Assessment, User-Generated Content
Full text not publicly available (TU)

Visual analytics for automatic quality assessment of user-generated content in Wikipedia

Master
Wikipedia has become a major source of information on the web. It consists of user-generated content and receives about 12 million edits/contributions per month. One of the keys to its success, the user-generated content, is also a hindrance to its growth and quality: contributions can be of poor quality because everyone, even anonymous users, can participate. The Wikipedia community therefore defined criteria for high-quality articles, based among other things on community review; such articles are called featured articles. However, reviewing all contributions and identifying featured articles is a long-winded process. In 2014, 269,000 new articles were created, yet only 602 peer reviews were performed and thus only 581 new featured article candidates were nominated; the number of new featured articles in 2014 was 298. Many non-featured articles thus remain unreviewed, because the amount of data is far too large to review all edits/contributions with human power alone. Related work has shown that it is possible to automatically measure the quality of Wikipedia articles in order to detect non-featured articles that would be likely to meet these high-quality standards. Yet, despite all these quality measures, it is difficult to identify what would improve an article. This master thesis therefore presents an interactive graphical tool for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) the Quality Analyzer, which allows for creating new quality metrics and comparing them with state-of-the-art ones, and ii) the Quality Assisted Editor, which shows which parts of an article should be improved in order to reach a higher overall article quality. Additionally, a case study (for the Quality Analyzer) and a user study (for the Quality Assisted Editor) were conducted. The case study mainly describes how domain experts used the Quality Analyzer to create quality metrics; furthermore, usability aspects and workload were analyzed. The user study for the Quality Assisted Editor was conducted with 24 participants, who had to perform tasks either with the Quality Assisted Editor or with a benchmark tool. Three aspects were examined: detecting (potential) featured and non-featured articles, the workload of the participants, and the usability of the Quality Assisted Editor.
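As a rough illustration of what a user-defined quality metric could look like in the spirit of the Quality Analyzer, the sketch below scores an article from a few surface features; the features, weights, and thresholds are entirely hypothetical and are not the metrics from the thesis.

def quality_score(article):
    """Hypothetical weighted quality metric over simple article features.
    `article` is a dict with raw text, reference count, and image count."""
    words = len(article["text"].split())
    refs_per_1k = article["references"] / max(words / 1000, 1e-9)
    features = {
        "length":       min(words / 5000, 1.0),       # saturates at 5k words
        "referencing":  min(refs_per_1k / 20, 1.0),   # 20 refs/1k words = max
        "illustration": min(article["images"] / 10, 1.0),
    }
    weights = {"length": 0.4, "referencing": 0.4, "illustration": 0.2}
    return sum(weights[f] * v for f, v in features.items())

print(quality_score({"text": "word " * 4000, "references": 60, "images": 4}))
# -> 0.70 on a 0..1 scale for this made-up article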
2014

Wertner Alfred

Started: not recorded
Supervisor: Pammer-Schindler Viktoria (TUG)
Keywords: none

Stress prediction for knowledge workers based on PC activity and noise level

Master
Knowledge workers are exposed to many influences that have the potential to interrupt work. These influences often have detrimental effects on the physical health and well-being of individuals, not only knowledge workers. Twelve knowledge workers took part in the experiment conducted for this thesis. The focus of the experiment was to analyse whether the sound level and computer interactions of knowledge workers can predict their self-reported stress levels. A software system was developed using sensors on knowledge workers' mobile and desktop devices. Records of PC activity contain information about foreground windows and computer idle times. Foreground window records include the timestamp when a window received focus, the duration the window was held in the foreground, the window title, and the unique number identifying the window. Computer idle time records contain the timestamp when the idle time began and its duration; computer idle time was recorded only after a minimum idle interval of one minute. Sound levels were recorded using a smartphone's microphone (Android), and the average sound pressure level of the audio samples was computed over a one-minute timeframe. Once initialized with an anonymous participant code, the sensors record PC activity and sound level and upload the records, enriched with the code, to a remote service. The service uses a key-value database with the code as key and the collection of records as value, and stores the records for each knowledge worker over a period of ten days. After this period, the preprocessing component of the system splits the records of PC activity and sound level into working days and computes measures approximating worktime fragmentation and noise. Foreground window records were used to compute the average time a window was held in the foreground and the average time an application was held in the foreground, where applications are sets of foreground window records sharing the same window title. Computer idle time records were used to compute the number of idle times between one and five minutes and the total duration of idle times longer than twenty minutes. From the sound pressure levels, the average level and the total duration of levels exceeding 60 decibels were computed. These measures were computed over each participant's working day at five different temporal resolutions. Additionally, stress levels were derived from midday and evening scales: participants recorded their stress levels twice per working day and entered them manually into the system, the first self-report close to the lunch break and the second at the end of the working day. Since participants sometimes forgot to enter self-assessed stress levels, the number of working days containing data of all types ranges between eight and ten. The preprocessing component stores the measures and stress levels used by the stress prediction analysis component. The correlation of the measures with the self-reported stress levels showed that predicting those stress levels is possible: the state of well-being (mood, calm) increased with the number of idle times between one and five minutes, in combination with a sound pressure level not exceeding 60 decibels.
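The final analysis step, correlating per-day measures with the self-reported scales, can be sketched as follows; the numbers are invented for illustration, and statistics.correlation (Pearson's r) requires Python 3.10 or newer.

# Minimal sketch: correlate per-day measures with self-reported well-being.
# Values are made up; the thesis's real measures span five temporal
# resolutions and are not reproduced here.
from statistics import correlation  # Pearson r, Python 3.10+

short_idles_per_day = [4, 7, 2, 9, 5, 8, 3, 6]   # idle times of 1-5 min
evening_wellbeing   = [3, 4, 2, 5, 3, 4, 2, 4]   # self-reported scale

r = correlation(short_idles_per_day, evening_wellbeing)
print(f"Pearson r = {r:.2f}")  # a positive r would match the reported trend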