Theses

Here you can find the theses of Know-Center employees.

2015

Strohmaier David

Visual analytics for automatic quality assessment of user-generated content in Wikipedia

Master's thesis
Supervisor: Veas Eduardo Enrique (TUG)
Co-supervisor: di Sciascio Maria Cecilia (Know-Center)
Keywords: Wikipedia, Visual Analytics, Automatic Quality Assessment, User-Generated Content
Full text not publicly available (TU)

Wikipedia has become a major source of information on the web. It consists of user-generated content and receives about 12 million edits/contributions per month. The user-generated content that is one of the keys to its success is also a hindrance to its growth and quality: contributions can be of poor quality because everyone, even anonymous users, can participate. The Wikipedia community therefore defined criteria for high-quality articles, based among other things on community review; articles meeting these criteria are called featured articles. However, reviewing all contributions and identifying featured articles is a long-winded process. In 2014, 269,000 new articles were created, yet only 602 peer reviews were performed and thus only 581 new featured article candidates were nominated. The number of new featured articles in 2014 was 298. Thus, many non-featured articles are yet to be reviewed, because the amount of data is far too large to review all edits/contributions with human power alone. Related work has shown that it is possible to automatically measure the quality of Wikipedia articles in order to detect non-featured articles that would likely meet these high-quality standards. Yet, despite all these quality measures, it is difficult to identify what would improve an article. This master's thesis therefore presents an interactive graphical tool for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) the Quality Analyzer, which allows for creating new quality metrics and comparing them with state-of-the-art ones; ii) the Quality Assisted Editor, which shows which parts of an article should be improved in order to reach a higher overall article quality. Additionally, a case study (for the Quality Analyzer) and an office user study (for the Quality Assisted Editor) were conducted. The case study mainly describes how domain experts used the Quality Analyzer to create quality metrics. Furthermore, usability aspects and workload were analyzed. The user study for the Quality Assisted Editor was conducted with 24 participants, who had to perform tasks either with the Quality Assisted Editor or with a benchmark tool. Three aspects were examined: detecting (potential) featured and non-featured articles, the workload of the participants, and the usability of the Quality Assisted Editor.
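
The automatic quality measurement that the abstract refers to can be illustrated with a toy metric over surface features. The features, weights, and thresholds below are invented for illustration; they are not the metrics developed in the thesis.

```python
def toy_quality_score(article: dict) -> float:
    """Score an article on a 0..1 scale from simple surface features.

    Purely illustrative: real quality models for Wikipedia combine many
    more signals (edit history, reviewer activity, text features).
    """
    words = len(article["text"].split())
    # Longer, well-referenced, illustrated articles tend to score higher
    # in surface-level quality models.
    length_score = min(words / 5000, 1.0)
    ref_score = min(article["references"] / 50, 1.0)
    img_score = min(article["images"] / 10, 1.0)
    return 0.5 * length_score + 0.4 * ref_score + 0.1 * img_score

article = {"text": "word " * 2500, "references": 25, "images": 5}
print(round(toy_quality_score(article), 2))  # 0.5
```

A ranking over such scores is what lets a tool surface non-featured articles that look like featured-article candidates.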
2015

Perndorfer Markus

Soziale Interaktionen mittels Mobile Sensing erkennen (Detecting social interactions by means of mobile sensing)

Master's thesis
Supervisor: Pammer-Schindler Viktoria (TUG)
Co-supervisor: Simon Jörg Peter (Know-Center)
Keywords: mobile sensing; data mining; social interactions
Full text not publicly available (TU)

With this thesis we try to determine the feasibility of detecting face-to-face social interactions based on standard smartphone sensors such as Bluetooth, Global Positioning System (GPS) data, the microphone, or the magnetic field sensor. We try to detect the number of social interactions by leveraging Mobile Sensing on modern smartphones. Mobile Sensing is the use of smartphones as ubiquitous sensing devices to collect data. Our focus lies on the standard smartphone sensors provided by the Android Software Development Kit (SDK), as opposed to previous work, which mostly leverages only audio signal processing or Bluetooth data. To mine data and collect ground truth data, we write an Android app that collects sensor data using the Funf Open Sensing Framework [1] and additionally allows users to label their social interactions as they take place. With the app we perform two user studies over the course of three days with three participants each. We collect the data and add additional metadata for every user during an interview. This metadata consists of semantic labels for location data and the distinction between private and business social interactions. We collected a total of 16M data points for the first group and 35M data points for the second group. Using the collected data and the ground truth labels collected by our participants, we then explore how time of day, audio data, calendar appointments, magnetic field values, Bluetooth data, and location data interact with the number of social interactions of a person. We perform this exploration by creating various visualizations of the data points and use time correlation to determine whether they influence social interaction behavior. We find that only calendar appointments show some correlation with social interactions and could be used in a detection algorithm to boost the accuracy of the result. The other data points show no correlation in our exploratory evaluation of the collected data. We also find that visualizing the interactions in the form of a heatmap on a map is a visualization most participants find very interesting. Our participants also made clear that labeling all social interactions over the course of a day is a very tedious task. We recommend that further research include audio signal processing and a carefully designed study setup. This design has to specify what data needs to be sampled at what frequency and accuracy, and must provide further assistance to the user for labeling the data. We release the data mining app and the code used to analyze the data as open source under the MIT License.
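
The time-correlation step the abstract describes (checking whether calendar appointments line up with labeled interactions) can be sketched as a simple overlap count. The data layout here, (start, end) appointment windows and interaction timestamps in minutes since midnight, is invented for illustration and is not the format used in the thesis.

```python
def overlap_count(appointments, interactions):
    """Count labeled interactions that fall inside a calendar appointment.

    A minimal sketch of time correlation between two event streams;
    times are minutes since midnight, purely illustrative.
    """
    hits = 0
    for t in interactions:
        # An interaction correlates with the calendar if any
        # appointment window contains its timestamp.
        if any(start <= t <= end for start, end in appointments):
            hits += 1
    return hits

appointments = [(540, 600), (780, 840)]  # 9:00-10:00 and 13:00-14:00
interactions = [545, 700, 800, 900]      # labeled interaction times
print(overlap_count(appointments, interactions))  # 2
```

A high overlap ratio relative to chance is the kind of signal that would justify feeding calendar data into a detection algorithm.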
2015

Tobitsch Markus

Projektfortschrittstracking durch Informationsmanagement (Project progress tracking through information management)

Bachelor's thesis
Supervisor: Lindstaedt Stefanie (TUG)
Co-supervisor: Sabol Vedran (TUG)
2015

Daum Martin

Lokalisierung von verlorenen Gegenständen durch Dead Reckoning von Fußgängern auf Smartphones (Locating lost items through pedestrian dead reckoning on smartphones)

Master's thesis
Supervisor: Pammer-Schindler Viktoria (TUG)
Co-supervisor: Simon Jörg Peter (Know-Center)
Keywords: indoor localization; smartphone; dead reckoning; lost items tracker; step detection
Full text not publicly available (TU)

Many people face the problem of misplaced personal items in their daily routine, especially when they are in a hurry, and often waste a lot of time searching for these items. There are different gadgets and applications available on the market that try to help people find lost items. Most often, help is given by creating an infrastructure that can locate lost items. This thesis presents a novel approach to finding lost items, namely by helping people re-trace their movements throughout the day. Movements are logged by indoor localization based on mobile phone sensing; an external infrastructure is not needed. The application is based on a step-based pedestrian dead reckoning system developed to collect real-time localization data. This data is used to draw a live visualization of the whole trace the user has covered, from which the user can retrieve the position of lost personal items after they were tagged using simple speech commands. The results of the field experiment, performed with twelve participants of different ages and genders, showed that the application could successfully visualize the covered route of the pedestrians and reveal the position of the placed items.
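
The core of step-based pedestrian dead reckoning, as named in the abstract, is integrating one displacement per detected step. The sketch below assumes a fixed step length and a known heading per step; real systems, including presumably the one in the thesis, estimate both from accelerometer and compass data.

```python
import math

def dead_reckon(step_headings_deg, step_length=0.7, start=(0.0, 0.0)):
    """Integrate per-step headings (degrees) into a 2-D trace.

    Minimal pedestrian dead reckoning sketch: each detected step
    advances the position by a fixed step length in the current
    heading (0 deg = north, 90 deg = east).
    """
    x, y = start
    trace = [(x, y)]
    for heading_deg in step_headings_deg:
        h = math.radians(heading_deg)
        x += step_length * math.sin(h)  # east component
        y += step_length * math.cos(h)  # north component
        trace.append((x, y))
    return trace

# Four steps north, then four steps east
trace = dead_reckon([0, 0, 0, 0, 90, 90, 90, 90])
print(trace[-1])  # final position approximately (2.8, 2.8)
```

The accumulated trace is exactly what a live visualization can draw, with tagged item positions pinned to points along it.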
2015

Höffernig Martin

Formalisierung der Semantic Constraints von MPEG-7 Profilen (Formalizing the semantic constraints of MPEG-7 profiles)

Master's thesis
Supervisor: Lindstaedt Stefanie (TUG)
Full text not publicly available (TU)

The amount of multimedia content being created is growing tremendously. In addition, the number of applications for processing, consuming, and sharing multimedia content is growing. Being able to create and process metadata describing this content is an important prerequisite to ensure a correct workflow of applications. The MPEG-7 standard enables the description of different types of multimedia content by creating standardized metadata descriptions. When using MPEG-7 in practice, two major drawbacks are identified, namely complexity and fuzziness. Complexity is mainly due to the comprehensiveness of MPEG-7, while fuzziness is a result of its syntax variability. The notion of MPEG-7 profiles was introduced in order to address and possibly solve these issues. A profile defines the usage and semantics of MPEG-7 tailored to a particular application domain. Thus, usage instructions and explanations, denoted as semantic constraints, can be expressed in English prose. However, these textual explanations leave room for potential misinterpretation, since they have no formal grounding. While checking the conformance of an MPEG-7 profile description is possible on a syntactic level, the semantic constraints currently cannot be checked in an automated way. Without a means to handle the semantic constraints, inconsistent MPEG-7 profile descriptions can be created or processed, leading to potential interoperability issues. This thesis therefore presents an approach for formalizing the semantic constraints of MPEG-7 profiles using ontologies and logical rules. Ontologies are used to model the characteristics of the different profiles with respect to the semantic constraints, while validation rules detect and flag violations of these constraints. In a similar manner, profile-independent temporal semantic constraints are also formalized. The presented approach is the basis for a semantic validation service for MPEG-7 profile descriptions, called VAMP. VAMP verifies the conformance of a given MPEG-7 profile description with a selected MPEG-7 profile specification in terms of syntax and semantics. Three different profiles are integrated in VAMP, and the temporal semantic constraints are also considered. As a proof of concept, VAMP is implemented as a web application for human users and as a RESTful web service for software agents.
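
Rule-based semantic validation of the kind VAMP automates can be sketched as running rules over a metadata description and collecting violations. The example rule and field names below are invented for illustration; they are not taken from an actual MPEG-7 profile, and VAMP itself works on ontologies rather than Python dictionaries.

```python
def validate(description: dict, rules) -> list:
    """Run each validation rule against a metadata description.

    Each rule returns a violation message or None; the result is the
    list of all violations. Sketch only, with invented field names.
    """
    return [msg for rule in rules if (msg := rule(description))]

def keyframe_needs_timestamp(desc):
    # Example semantic constraint: every keyframe in the description
    # must carry a timestamp.
    for kf in desc.get("keyframes", []):
        if "timestamp" not in kf:
            return "keyframe without timestamp"
    return None

desc = {"keyframes": [{"timestamp": 1.0}, {}]}
print(validate(desc, [keyframe_needs_timestamp]))
# ['keyframe without timestamp']
```

Syntactic conformance (schema validation) would pass such a description; it is exactly this semantic layer that needs the extra rules.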
2015

Rella Matthias

Bits And Pieces: Ein generisches Widget-Framework für die Sinngewinnung im World Wide Web (Bits And Pieces: a generic widget framework for sensemaking on the World Wide Web)

Master's thesis
Supervisor: Lindstaedt Stefanie (TUG)
Keywords: HCI, semantic Web, informal learning, Web technology, Big Data
Full text not publicly available (TU)

The term "sensemaking" refers to a universal concept investigated in various sciences, both within individual disciplines and across them. Briefly put, from the perspective of the information sciences, sensemaking occurs when a person has to deal with a huge, perhaps overwhelming, heterogeneous amount of information and make sense of it. This process, which is likely to occur in everyday life, and the sense made as its product are subject to constant research, especially under today's threat of information deluge. The World Wide Web is the medium for today's huge and heterogeneous amount of information, which pervades our everyday life. Whether we need to do a deep search for scientific literature, figure out which hotel to book when travelling, or simply keep track of our web surfing, we engage in a kind of sensemaking on the Web. However, general-purpose user interfaces capable of handling the dynamic and heterogeneous nature of information on the Web are missing. This thesis examines the term sensemaking from various theoretical perspectives and reviews existing user interface approaches. It then develops a novel theoretical and technical framework for building user interfaces for sensemaking on the Web, which is finally evaluated in a user study and in expert interviews.
2015

Steinkogler Michael

Master thesis, started 05.11.2013. Supervisor: Kern Roman (TUG).

Verbesserung von Query Suggestions für seltene Queries auf facettierten Dokumenten.

Master
The goal of this thesis is to improve query suggestions for rare queries on faceted documents. While there has been extensive work on query suggestions for single-facet documents, little is known about how to provide query suggestions in the context of faceted documents. The constraint of providing suggestions also for uncommon or even previously unseen queries (so-called rare queries) makes the problem harder, as the commonly used technique of mining query logs cannot easily be applied.

In this thesis it was further assumed that the user of the information retrieval system always searches for one specific document, leading to uniformly distributed queries. Under these constraints, the structure of the faceted documents was exploited to provide helpful query suggestions. In addition to a theoretical exploration of such improvements, a custom data structure was developed to provide interactive query suggestions efficiently. The developed query suggestion algorithms were evaluated on multiple document collections by comparing them to a baseline algorithm that reduces faceted documents to single-facet documents. The results are promising, as the final version of the new query suggestion algorithm consistently outperformed the baseline.

Motivation for and a potential application of this work can be found in call centers for customer support. For call center employees it is crucial to quickly locate relevant customer information, which is available in structured form and can thus easily be transformed into faceted documents.
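The abstract does not detail the thesis's actual algorithm or data structure, so the following sketch only illustrates the general idea of prefix-based suggestions over faceted documents; the class, method names, and sample call-center data are all hypothetical.

```python
# Illustrative sketch only: prefix-based query suggestions over faceted
# documents, using sorted per-facet term lists and binary search.
from bisect import bisect_left, insort

class FacetedSuggester:
    def __init__(self):
        self.terms = {}  # facet name -> sorted list of unique terms

    def index(self, doc):
        """Index one document, given as a mapping: facet name -> text."""
        for facet, text in doc.items():
            bucket = self.terms.setdefault(facet, [])
            for term in text.lower().split():
                i = bisect_left(bucket, term)
                if i == len(bucket) or bucket[i] != term:
                    insort(bucket, term)  # keep the bucket sorted, no duplicates

    def suggest(self, prefix, limit=5):
        """Return (facet, term) completions for a typed prefix."""
        prefix = prefix.lower()
        out = []
        for facet, bucket in self.terms.items():
            i = bisect_left(bucket, prefix)  # first term >= prefix
            while i < len(bucket) and bucket[i].startswith(prefix):
                out.append((facet, bucket[i]))
                i += 1
        return sorted(out)[:limit]

s = FacetedSuggester()
s.index({"product": "router cable", "issue": "connection drops"})
s.index({"product": "cable modem", "issue": "slow connection"})
print(s.suggest("ca"))  # [('product', 'cable')]
```

Because suggestions are generated from the indexed documents themselves rather than from query logs, this style of approach can also cover previously unseen (rare) queries, which is the constraint the abstract highlights.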

2015

Perez Alberto

Master thesis, started 14.10.2014. Supervisor: Kern Roman (Know-Center).

English Wiktionary Parser & Lemmatizer

Master

“Wiktionary” is a free dictionary operated by the Wikimedia Foundation. The site contains translations, etymologies, synonyms, and pronunciations of words in many languages; this work focuses on English.

A syntactic analyser (parser) transforms the entry text into other structures, which makes it easier to analyse and capture nested entries.
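The abstract does not show the parser itself; as a rough, hypothetical illustration of the idea, the sketch below splits a heavily simplified Wiktionary-style entry into sections keyed by their `===Heading===` lines. Real Wiktionary wikitext is far more complex than this.

```python
# Hypothetical sketch: parse a simplified Wiktionary-style entry into a
# dict of section heading -> content lines.
import re

def parse_entry(wikitext):
    sections = {}
    current = None
    for line in wikitext.splitlines():
        m = re.match(r"=+\s*(.+?)\s*=+$", line)  # e.g. "===Noun==="
        if m:
            current = m.group(1)
            sections[current] = []
        elif current and line.strip():
            sections[current].append(line.strip())
    return sections

entry = """==English==
===Noun===
dictionary (plural dictionaries)
===Synonyms===
wordbook"""
print(parse_entry(entry)["Synonyms"])  # ['wordbook']
```

Once entries are in such a structure, downstream steps such as lemmatization can look up the relevant sections directly instead of re-scanning raw wikitext.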

2015

Eberhard Lukas

Master thesis, started 01.06.2014. Co-supervisor: Trattner Christoph (Know-Center).

Predicting Trading Interactions in Trading, Online and Location-Based Social Networks

Master
2015

Greussing Lukas

Bachelor (Bakk) thesis, started 01.06.2014. Supervisor: Lindstaedt Stefanie (TUG); co-supervisor: Trattner Christoph (Know-Center).

The Social Question & Answer Tool: A prototype of a Help Seeking Tool within the European project of Learning Layers

Bakk
2015

Moesslang Dominik

Bachelor (Bakk) thesis, started 01.06.2014. Supervisor: Lindstaedt Stefanie (TUG); co-supervisor: Trattner Christoph (Know-Center).

KnowBrain: A Social Repository for Sharing Knowledge and Managing Learning Artifacts

Bakk
2015

Steinkellner Christof

Master thesis, started 06.03.2015. Supervisor: Lex Elisabeth (TUG). Keywords: Online social network analysis, Information diffusion, Science 2.0, Information cascades. Full text not publicly available (TU).

Empirische Analyse von sozialen Netwerken von Informatikern

Master
Twitter is a very popular social network among scientists. There they discuss a wide range of topics, promote new ideas, or present results of their current research. The experiments conducted in this work are based on a Twitter dataset consisting of tweets by computer scientists whose research areas are known. This thesis can be roughly divided into four parts: First, it describes how the Twitter dataset was created. Then, various statistics on the dataset are presented; for example, most tweets were created during working hours, and users vary considerably in how active they are. From the users' follower relations a network was built, which demonstrably has small-world properties. Moreover, the different research areas are also visible in this network. The third part of this work is devoted to the study of hashtag usage. It showed that most hashtags are used only rarely. Over the whole observation period, hashtag usage hardly changes, but there are many short-term fluctuations. Since the users' research areas are known, the areas of the hashtags can also be determined, and the hashtags can thus be divided into field-specific and general hashtags. The fourth part analyses the propagation of hashtags through the Twitter network by means of so-called information flow trees. Based on these trees, it can be measured how well a user spreads and generates information. The hypothesis that these properties depend on the number of tweets and retweets and on the user's position in the social network was confirmed; however, this relationship is strongly pronounced only in individual cases.
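The thesis's concrete analysis pipeline is not given in the abstract; the minimal sketch below only illustrates the kind of hashtag-usage statistics it describes (extracting hashtags from tweet texts, counting them, and separating rarely used from frequent ones). The tweets and the rarity threshold are invented for the example.

```python
# Illustrative sketch: count hashtag usage in a set of tweet texts and
# split hashtags into rarely used vs. frequent ones.
import re
from collections import Counter

def hashtag_counts(tweets):
    counts = Counter()
    for text in tweets:
        # match "#word" tokens, case-insensitively
        counts.update(tag.lower() for tag in re.findall(r"#(\w+)", text))
    return counts

tweets = [
    "New paper on #IR and #NLP!",
    "Great keynote about #NLP at the conference",
    "Slides for my #NLP talk are online #openscience",
]
counts = hashtag_counts(tweets)
rare = {tag for tag, c in counts.items() if c == 1}  # used only once
print(counts["nlp"], sorted(rare))  # 3 ['ir', 'openscience']
```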
2015

Parekodi Sathvik

Master thesis, started 01.01.2015.

A RESTful Web-based Expert Recommender Framework

Master