Academic Theses

Here you will find academic theses written by Know-Center staff members.

2018

Polz Hans Georg

Is Google’s Wisdom-Of-The-Crowd a Valid Approach to Discerning Truth in the Age of Fake News

Bakk
2018

Resch Sebastian

Implementation and Evaluation of a Bookmark and History Content Search Browser Add-on

Bakk
2018

Schaffer Robert

Evaluation of Vote/Veto Classifier

Bakk
Authorship identification techniques are used to determine whether a document or text was written by a specific author. This includes discovering the rightful author of a previously unseen text from a finite list of authors, as well as verifying whether a text was written by a specific author. As digital media grows more important every day, these techniques also need to be applied to shorter texts such as emails, newsgroup posts, social media entries and forum posts. The anonymity of the Internet in particular has made this an important task. The existing Vote/Veto framework evaluated in this thesis is a system for authorship identification. The evaluation covers experiments to find reasonable settings for the framework as well as tests to determine its accuracy and runtime. The same accuracy and runtime tests were carried out with a number of classifiers built into the existing software Weka to compare the results. All results were tabulated and compared against each other. In terms of accuracy, Vote/Veto mostly delivered better results than Weka's built-in classifiers, although it required longer runtimes and more memory. Some settings provided good accuracy with reasonable runtimes.
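
To make the classification task concrete, here is a minimal authorship-attribution sketch in Python. It uses scikit-learn with character n-gram features as a generic stand-in; it is not the Vote/Veto framework or Weka from the thesis, and the texts and author labels are invented.

```python
# Minimal authorship-attribution sketch (illustration only; not the
# Vote/Veto framework or the Weka classifiers evaluated in the thesis).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short texts with known authors.
texts = ["the quick brown fox jumps over the lazy dog",
         "to be or not to be, that is the question",
         "a quick brown dog runs past the lazy fox",
         "whether tis nobler in the mind to suffer"]
authors = ["A", "B", "A", "B"]

# Character n-grams are a common stylometric feature for short texts.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    MultinomialNB(),
)
model.fit(texts, authors)
print(model.predict(["the lazy fox jumps over the quick dog"]))  # e.g. ['A']
```
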
2018

Bruchmann Andreas

Privacy Protection via Pseudo Relevance Feedback

Bakk
2018

Schlacher Jan Peter

Neo4-js: Object-Graph Mapping with Typed JavaScript and Neo4j

Bakk
2018

Leitner Lorenz

Implementation and Evaluation of a Bookmark and History Content Search Browser Add-on

Bakk
2017

Frank Sarah

Automatic Generation of Regular Expressions

Bakk
This thesis deals with the creation of regular expressions from a list of inputs that the resulting expression should match. Since regular expressions match a pattern, they can be used to speed up work involving large amounts of data, under the assumption that the user knows some examples of the pattern to be matched. In the program discussed here, a regular expression is created iteratively, starting from a very rudimentary expression, with an adjustable threshold to mitigate the effect of having no negative matches as input. The result is the easy creation of a sufficiently well-working regular expression, assuming a representative collection of input strings, while requiring no negative examples from the user.
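
As a rough illustration of deriving a regular expression from positive examples only, the sketch below generalizes each character of a seed example into a character class and merges repeated classes. This is a much simpler scheme than the thesis's iterative, threshold-based refinement; the function names and examples are illustrative.

```python
# Sketch: build a regex from positive examples by generalizing character
# classes (far simpler than the thesis's iterative refinement).
import re

def char_class(c):
    if c.isdigit():
        return r"\d"
    if c.isalpha():
        return r"[a-zA-Z]"
    if c.isspace():
        return r"\s"
    return re.escape(c)

def generalize(examples):
    pattern, prev = "", None
    for c in examples[0]:
        cls = char_class(c)
        if cls == prev:
            continue                  # merge repeats into one quantified group
        pattern += cls + "+"
        prev = cls
    regex = re.compile("^" + pattern + "$")
    # A real system would refine further; here we just check all examples.
    assert all(regex.match(e) for e in examples), "pattern needs refinement"
    return regex

print(generalize(["AB-1234", "XY-98765"]).pattern)  # ^[a-zA-Z]+\-+\d+$
```
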
2017

Kuhs Stefan Claudio

DART: The Dataset Anonymization and Replication Tool

Bakk
Due to persistent issues concerning sensitive information when working with big data, we present a new approach to generating artificial data in the form of datasets. For this purpose, we specify the term dataset to represent a UNIX directory structure consisting of various files and folders. Especially in computer science, there exists a distinct need for data. Mostly, this data already exists but contains sensitive information, so such critical data must stay protected against third parties. This reservation of data leads to a lack of available data for open source developers as well as for researchers. We therefore developed a way to produce replicated datasets, given an origin dataset as input. Such replicated datasets represent the origin dataset as accurately as possible without leaking any sensitive information. We introduce the Dataset Anonymization and Replication Tool, short DART, a Python-based framework which allows the replication of datasets. Since we aim to encourage the data science community to participate in our work, we constructed DART as a framework with a high degree of adaptability and extensibility. We started with the analysis of datasets and various file and MIME types to find suitable properties which characterize datasets. We defined a broad range of properties, or characteristics, ranging from the number of files to file-specific characteristics like permissions. In the next step, we explored several mathematical and statistical approaches to replicate the selected characteristics, choosing to model them using relative frequency distributions (unigrams) as well as discrete and continuous random variables. Finally, we produced replicated datasets and analyzed their characteristics against those of the corresponding origin dataset; the comparison between origin and replicated datasets is based exclusively on the selected characteristics. The achieved results depend strongly on the origin dataset as well as on the characteristics of interest: origin datasets with a simple structure are more likely to deliver usable results, whereas large and complex origin datasets may be difficult to replicate sufficiently. Nevertheless, the results suggest that tools like DART can be used to provide artificial data for persistent use cases.
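
A toy sketch of the replication idea, restricted to a single characteristic (file size): measure the empirical size distribution of an origin directory tree and emit a synthetic replica sampled from it. The real DART models many more properties; the directory names here are hypothetical.

```python
# Toy sketch of the DART idea: measure one characteristic (file sizes)
# of an origin tree and emit a synthetic replica drawn from it.
import os, random

def measure(root):
    sizes = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            sizes.append(os.path.getsize(os.path.join(dirpath, name)))
    return sizes

def replicate(sizes, out_dir, n_files=None):
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n_files or len(sizes)):
        size = random.choice(sizes)       # sample the empirical distribution
        with open(os.path.join(out_dir, f"file_{i:04d}.bin"), "wb") as f:
            f.write(os.urandom(size))     # content carries no sensitive data

sizes = measure("origin_dataset")         # hypothetical input directory
replicate(sizes, "replicated_dataset")
```
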
2017

Kurzmann Lukas

Data Mining - Variables and Feature Selection with Greedy and Non Greedy Algorithm

Bakk
This paper compares variable and feature selection with greedy and non-greedy algorithms. For the greedy solution the ID3 algorithm [J. Quinlan, 1986] is used, which serves as a baseline. This algorithm is fast and provides good results for smaller datasets. However, if the dataset gets larger and the information we want to extract has to be more precise, several combinations should be checked, and a non-greedy solution is a possible way to achieve that goal: it tries every possible combination to obtain the optimal result. These results may contain combinations of variables; one variable on its own may provide no information about the dataset, but in combination with another variable it does. That is one reason why it is useful to check every combination. Besides its very good precision, the algorithm needs much higher computational time, at least Ω(n!); the more attributes a dataset has, the higher the computational complexity. The results have shown, even for smaller datasets, that the non-greedy algorithm finds more precise results, especially with regard to combinations of several attributes/variables. Taken together, if the dataset needs to be analysed more precisely and the hardware allows it, the non-greedy version of the algorithm is a tool which provides precise results, especially from a combinatorial point of view.
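
The sketch below contrasts the two strategies on a toy scoring function in which two attributes are only informative together: the greedy search adds one attribute per round and misses the interaction at subset size two, while the exhaustive search finds it. The scoring function is a stand-in for a real measure such as information gain; all names are illustrative.

```python
# Toy contrast of greedy vs. exhaustive feature selection.
from itertools import combinations

def score(features):                 # stand-in for e.g. information gain
    s = set(features)
    if {"x1", "x2"} <= s:
        return 1.0                   # the x1+x2 interaction is the true signal
    return 0.5 if "x3" in s else 0.1

attrs = ["x1", "x2", "x3"]

# Greedy (ID3-style): add the single best attribute each round.
chosen = []
for _ in attrs:
    best = max((a for a in attrs if a not in chosen),
               key=lambda a: score(chosen + [a]))
    chosen.append(best)
print("greedy order:", chosen)       # picks x3 first, misses x1+x2 early

# Non-greedy: evaluate every subset (exponential cost, as noted above).
best_subset = max((c for r in range(1, len(attrs) + 1)
                   for c in combinations(attrs, r)), key=score)
print("best subset:", best_subset)   # ('x1', 'x2')
```
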
2017

Suppan Johannes

Semantischer RESTful Web Service für die Visualisierung und Verwaltung von automotiven Entwicklungs-Tätigkeiten in einem Informations-Cockpit

Bakk
Product development starts with the product requirements. Once these are defined, solutions are created for the individual components, which then correspond to the entire product requirements. The process of developing and refining solution approaches runs through many iterations until a corresponding quality of the product requirements is achieved. This entire "knowledge process" is to be transferred into a knowledge management system. We therefore show ways to make the new information technologies of Web 2.0 usable for knowledge management in the automotive industry. The work is based on a research project of the Virtual Vehicle Competence Center, which includes a software prototype (the "information cockpit"). The "information cockpit" links both the product requirements and development tasks with the project organization, thus mapping a Product Data Management (PDM) as well as a Requirement Management (RQM) system. The networking has succeeded in uniting the individual systems, which represents a novelty in this area. By networking the product data, requirement data and project organization, the user is able to obtain a quick overview of different data in automotive development. As a result, management as well as design engineers can quickly use existing knowledge and share newly generated knowledge with others in an unconventional manner. At present only the visualization is implemented; the data to be used are made available as "Link-Nodes" from the data system. The goal is to transfer the demonstrator to the "information cockpit" application. The ontology PROTARES (PROject TAsks RESources) is used as a basis and includes the entire data schema. A semantic RESTful (Representational State Transfer) Web Service was designed and implemented accordingly. The data storage layer is a triple-store database. The "information cockpit" can be used to query the system, which displays the information to the user graphically and structurally. Through the use of these technologies it was possible to create a modular overall system architecture. In the near future, data management can be tackled: not just visualization but also changing the data. After that, user administration, access control, and so on can be considered.
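
A minimal sketch of the described pattern, a REST endpoint answering queries from a triple store, using Flask and rdflib as stand-ins. The data file, predicate URI and endpoint path are placeholders; the thesis's real PROTARES schema and service interface are not reproduced here.

```python
# Sketch: REST endpoint over a triple store (Flask + rdflib stand-ins).
from flask import Flask, jsonify
from rdflib import Graph

app = Flask(__name__)
g = Graph()
g.parse("protares_data.ttl", format="turtle")   # hypothetical data file

@app.route("/tasks")
def tasks():
    # Placeholder predicate; the real PROTARES terms are not reproduced.
    q = """SELECT ?task ?resource
           WHERE { ?task <http://example.org/protares#assignedTo> ?resource }"""
    return jsonify([{"task": str(t), "resource": str(r)} for t, r in g.query(q)])

if __name__ == "__main__":
    app.run()
```
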
2017

Valentan Stephan

How Design Patterns Impact Code Quality: A Controlled Experiment

Bakk
While design patterns are proposed as a standard way to achieve good software design, little research has been done on the actual impact of these strategies on code quality. Many books suggest that such methods increase flexibility and maintainability; however, they often lack any evidence. This bachelor thesis intends to empirically demonstrate that the use of design patterns actually improves code quality. To gather data about the code, two applications were implemented that are designed to meet the same requirements. While one application was developed following widespread guidelines and principles of object-oriented programming, the other was implemented without paying attention to software maintenance. After complying with the basic requirements, a number of additional features were implemented in two phases: first a new graphical user interface was supported, then a different data tier was added. The results show that the initial effort of implementing the program version following object-oriented programming guidelines is noticeably higher in terms of code lines and necessary files. However, during the implementation of additional features fewer files needed to be modified; during one phase transition considerably less code needed to be written while not performing worse in the other, and furthermore the cyclomatic complexity of the code increased less rapidly.
2017

Rebol Manuel

Automatic Classification of Business Intent on Social Platforms

Bakk
People spend hours on social media and similar web platforms each day, expressing many of their feelings and desires in the texts they post online. Data analysts constantly look for clever ways to make use of this information. The aim of this thesis is to first detect business intent in the different types of information users post on the internet. In a second step, the identified business intent is grouped into two classes, buyers and sellers, which supports the idea of linking the two groups. Machine learning algorithms are used for classification. All the data needed to train the classifiers is retrieved and preprocessed using a Python tool developed for this purpose. The data was taken from the web platforms Twitter and HolidayCheck. Results show that classification works accurately when focusing on a specific platform and domain: on Twitter 96% of test data is classified correctly, whereas on HolidayCheck the accuracy reaches 67%. For cross-platform multiclass classification, the scores drop to 50%. Although individual scores increase up to 95% for binary classification, the findings suggest that features need to be improved further in order to achieve acceptable accuracy for cross-platform multiclass classification. The challenge for future work is to fully link buyers and sellers automatically, which would create business opportunities without the parties needing to know about each other beforehand.
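
A minimal illustration of the buyer/seller classification step using scikit-learn. The training posts and labels are invented, and the thesis's actual features, preprocessing and data tooling are not reproduced.

```python
# Illustrative buyer/seller intent classifier (generic stand-in for the
# thesis's pipeline; training posts are made up).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["looking to buy a used road bike",
         "selling my road bike, barely used",
         "offering two concert tickets for sale",
         "want to purchase two concert tickets"]
labels = ["buyer", "seller", "seller", "buyer"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["I want to buy a laptop"]))   # e.g. ['buyer']
```
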
2017

Veigl Robert

Multiplatform Mobile App for Data Acquisition from External Sensors

Bakk
Mobile apps are becoming more and more important for companies, because apps are needed to sell or operate their products. To serve a wide range of customers, apps must be available for the most common platforms, at least Android and iOS. Considering Windows Phone as well, a company would need to provide three identical apps, one for each platform. As each platform comes with its own development tools, the apps must be implemented separately, which means development costs may rise by a factor of three in the worst case. The Qt framework promises multi-platform capability: an app needs to be implemented just once but still runs on several platforms. This bachelor's thesis aims to prove that by developing such a multi-platform app using the Qt framework. The app collects data from sensors connected to the mobile device and stores the retrieved data on the phone. For the proof, the supported platforms are limited to the most common ones, Android and iOS. Using this app to record data from a real-life scenario demonstrates its proper functioning.
2016

Fraz Koini Josef

Study on Health Trackers

Bakk
The rising distribution of compact devices with numerous sensors in the last decade has led to an increasing popularity of tracking fitness and health data and storing those data sets in apps and cloud environments for further evaluation. However, this massive collection of data is becoming more and more interesting for companies seeking to reduce costs and increase productivity, which may have problematic impacts on people's privacy in the future. Hence, the main research question of this bachelor's thesis is: "To what extent are people aware of the processing and protection of their personal health data concerning the utilisation of various health tracking solutions?" The thesis investigates the historical development of personal fitness and health tracking, gives an overview of current options for users, and presents potential problems and possible solutions regarding the use of health tracking technology. Furthermore, it outlines the societal impact and legal issues. The results of an online survey concerning the distribution and usage of health tracking solutions, as well as the participants' views on privacy with respect to data sharing with service and insurance providers, advertisers and employers, are presented. Given the participants' fierce opposition to various data-sharing scenarios, these results underline the necessity and importance of data protection.
2016

Suschnigg Josef

Mobile Unterstützung zur Reflexion der Übungspraxis bei Musikstudierenden

Bakk
A mobile application is being developed that supports music students in learning an instrument reflectively. The user should be able to determine their practice success through self-observation, in order to subsequently find practice strategies that optimize their practice routine. In the short term, the application provides the user with interfaces for the different action phases of a practice session (pre-actional, actional and post-actional). With the help of guiding questions, or questions formulated by the user, practice is organized, structured, self-reflected and evaluated. Ideally, the user can also follow their learning process on the basis of audio recordings. In the long term, all user entries can be retrieved again; they are displayed in journal form and can be evaluated for self-reflection or together with a teacher.

2016

Ivantstits Matthias

Quantitative & qualitative Market-Analysis

Bakk
The buzzword big data is ubiquitous and has much impact on our everyday lives and many businesses. Since the outset of the financial market, the aim has been to find explanatory factors that contribute to the development of stock prices, and big data is a chance to do so. Gathering a vast amount of data concerning the financial market, filtering and analysing it is, of course, tightly tied to predicting future stock prices. A lot of work with noticeable outcomes has already been done in this field of research. However, the question was raised whether it is possible to build a tool with a large quantity of companies and news indexed and a natural language processing tool suitable for everyday applications. The sentiment analysis tool utilised in this implementation is sensium.io. To achieve this goal two main modules were built. The first is responsible for constructing a filtered company index and for gathering detailed information about the companies, for example news, balance sheet figures and stock prices. The second is responsible for preprocessing and analysing the collected data. This includes filtering unwanted news, translating them, calculating the text polarity and predicting the price development based on these facts. Utilising these modules, the optimal period for buying and selling shares was found to be three days: buying shares on the day of the news publication and selling them three days later. Pursuant to this analysis the expected return is 0.07 percent a day, which might not seem much, but it would result in an annualised performance of 30.18 percent. The idea can also be applied in the opposite direction, telling the user when to sell his shares, which could help an investor find the ideal time to sell company shares.
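
A quick back-of-envelope check of the quoted figures, assuming daily compounding over 365 calendar days (the thesis's exact compounding convention is not stated, so the result only approximates the reported 30.18 percent):

```python
# Compound 0.07 % per day over a year (assumption: 365 calendar days).
daily = 0.0007
annual = (1 + daily) ** 365 - 1
print(f"{annual:.2%}")   # about 29 %, in line with the reported 30.18 %
```
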
2016

Toller Maximilian

Automated Season Length Detection in Time Series

Bakk
The in-depth analysis of time series has been a central topic of research in recent years. Many of the present methods for finding periodic patterns and features require the user to input the time series' season length. Today, there exist a few algorithms for automated season length approximation, yet many of them rely on simplifications such as data discretization. This thesis aims to develop an algorithm for season length detection that is more reliable than existing methods. The process developed in this thesis estimates a time series' season length by interpolating, filtering and detrending the data and then analyzing the distances between zeros in the directly corresponding autocorrelation function. This method was tested against the only comparable open source algorithm and outperformed it by passing 94 out of 125 tests, while the existing algorithm only passed 62 tests. The results do not necessarily suggest a superiority of the new autocorrelation-based method, but rather a supremacy of the new implementation. Further related studies might assess and compare the value of the theoretical concept.
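
A simplified version of the described pipeline: detrend the series, compute the autocorrelation function, and estimate the season length from the spacing of its zero crossings (consecutive zeros of the ACF of a periodic signal lie about half a period apart). This is a sketch of the general approach, not the thesis's implementation.

```python
# Sketch: season length from zero-crossing spacing of the ACF.
import numpy as np

def season_length(x):
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)          # remove linear trend
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation
    acf /= acf[0]
    zeros = np.where(np.diff(np.sign(acf)) != 0)[0]     # sign changes
    if len(zeros) < 2:
        return None
    # Consecutive zeros of a periodic ACF are about half a period apart.
    return int(round(2 * np.median(np.diff(zeros))))

t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.02 * t + 0.1 * np.random.randn(400)
print(season_length(series))   # approximately the true season length of 50
```
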
2016

Steinbauer Florian

German Sentiment Analysis on Facebook Posts

Bakk
Social media monitoring has become an important means for business analytics and trend detection, whether for comparing companies with each other or maintaining a healthy customer relationship. While sentiment analysis for English is very closely researched, not much work has been done on German data. In this work we (i) annotate ~700 posts from 15 corporate Facebook pages, (ii) evaluate existing approaches capable of processing German data against the annotated data set, and (iii) due to the insufficient results, train a two-step hierarchical classifier capable of predicting posts with an accuracy of 70%. The first, binary classifier decides whether the post is opinionated; if the outcome is not neutral, the second classifier predicts the polarity of the document. Furthermore, we apply the algorithm in two application scenarios in which German Facebook posts, in particular those of the fashion trade chain Peek&Cloppenburg and the Austrian railway operators OeBB and Westbahn, are analyzed.
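
The two-step structure can be sketched as follows; both stages use a generic scikit-learn pipeline with invented toy German posts, standing in for the thesis's actual classifiers and training data.

```python
# Two-step hierarchical sentiment sketch: stage 1 decides opinionated
# vs. neutral, stage 2 assigns polarity (toy data, generic models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subj_texts = ["tolles service, sehr zufrieden", "zug um 10 uhr abgefahren",
              "schrecklich, nie wieder", "neue kollektion ab montag"]
subj_labels = ["opinionated", "neutral", "opinionated", "neutral"]
pol_texts  = ["tolles service, sehr zufrieden", "schrecklich, nie wieder"]
pol_labels = ["positive", "negative"]

step1 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(subj_texts, subj_labels)
step2 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(pol_texts, pol_labels)

def classify(post):
    if step1.predict([post])[0] == "neutral":
        return "neutral"
    return step2.predict([post])[0]   # only opinionated posts get a polarity

print(classify("sehr zufrieden mit der westbahn"))
```
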
2015

Tobitsch Markus

Projektfortschrittstracking durch Informationsmanagement

Bakk
2015

Moesslang Dominik

KnowBrain: A Social Repository for Sharing Knowledge and Managing Learning Artifacts

Bakk
2015

Greussing Lukas

The Social Question & Answer Tool: A prototype of a Help Seeking Tool within the European project of Learning Layers

Bakk
2014

Prinz Martin

Mobile Sensordata to Support Stroke Rehabilitation

Bakk
2014

Keller Stephan

Praktische Anwendungen von BCI unter Android – Tic Tac Toe

Bakk
2014

Esmaeili Hossein

Evaluation of Big Data Solutions

Bakk
Conventional database solutions such as RDBMS were designed at a time when today's data growth was not conceivable. As this growth accelerated, particularly in recent years, companies tried to adapt their database solutions to the new requirements. The fact is, however, that classical database systems such as RDBMS are not suited for scaling. New technologies had to be created to deal with this problem more easily, and that is precisely the topic of this thesis. The new technologies designed for processing big data mostly belong to the main category NoSQL. This thesis discusses the challenges of handling large amounts of data and tries to clarify a boundary by which, for example, a company can tell whether it needs a NoSQL technology for its applications or whether an RDBMS would suffice. It also discusses which data models suit the various NoSQL technologies. The thesis concludes with a practical part in which three candidates from different NoSQL categories are evaluated against each other.
2014

Bischofter Heimo

Eine Prototypische Umsetzung von Enterprise Search für Engineeringdaten am Virtual Vehicle

Bakk
A large number of software vendors have addressed enterprise search and presented different Enterprise Search solutions with a broad range of functionality. To compare these solutions quickly and efficiently, the system architecture of each solution was modelled and represented using Fundamental Modeling Concepts (FMC). This makes it possible to get an overview of the individual solutions without wading through countless data sheets and whitepapers. The portfolio of compared Enterprise Search solutions ranges from market leaders such as Microsoft and Google, and the leading search technology company in Germany, Austria and Switzerland, IntraFind, to visionaries such as Coveo, Sinequa and Dassault Systèmes. Based on the information gained from the comparison, Microsoft SharePoint 2013 was selected for the prototypical implementation in a system laboratory. The decisive reason was the cost/benefit question: Microsoft is one of the few vendors that provide a free version for an entry-level or pilot solution. The Enterprise Search solution was installed on a virtual machine and, before full rollout at the Virtual Vehicle Research Center, tested by ten employees from two different work areas (information management and engineering) for usefulness and quality of the search results. There are hardly any studies on how search solutions are used in engineering, how engineers interact with such solutions and how satisfied they actually are with them. For this reason, a combination of thinking-aloud test and interview was used to evaluate the pilot search solution. Interviews were used to collect information about the test subjects, from which suitable search tasks were derived that each subject had to solve during the thinking-aloud test. Afterwards, the subjects were asked about the quality of the search results, the search experience and the usefulness of the enterprise search. It was shown that the employees have difficulties defining suitable keywords for the search; the more they knew about the information sought, the easier it was for them to define suitable keywords. The relevance of the search results was also assessed critically: the subjects felt that searching for the desired information via the search interface takes more time than their current search methods. It also turned out that metadata is of great importance for search, as it contains information that considerably eases finding information. Since the subjects must also cover their information needs without the enterprise search, their current search strategies were addressed as part of the evaluation. Based on the subjects' statements, requirements for Enterprise Search solutions could be derived and information collected. These requirements and this information provide important feedback for the IT department to support the rollout of the pilot project.
2014

Frey Matthias

Bakk
2012

Laufer Paul

A general, web-based Information Extraction System using GATE and the Stanford Parser

Bakk
2012

Hollerit Bernd

Detecting Commercial Intent in Twitter

Bakk
2012

Kralowetz Michael R.

Extraction and Evaluation of Facts from Tweets

Bakk
The goal of this work is to extract, evaluate and store information from tweets, thereby modelling an interface between the World Wide Web and the Semantic Web. Using the microblogging and social networking service Twitter, a data set of tweets is generated and searched for so-called facts as defined by us. This filtering is performed with regular expressions (regex). The facts found this way are annotated with specific meta-information and stored in a database, enabling machines to search the data intelligently and to establish logical connections between the data. Through the use of the Java programming language, the application is platform-independent. The thesis provides brief introductions to the Semantic Web, regex and Twitter, which are necessary for the application. Furthermore, the concept, the methods used, problems encountered and the results obtained are discussed.
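
A sketch of the regex-based extraction pipeline, written in Python for brevity (the thesis's application is implemented in Java): match a simple, illustrative "X is Y" pattern and store each hit with metadata, using SQLite as a stand-in database.

```python
# Sketch: regex fact extraction from tweets, stored with metadata.
import re, sqlite3, datetime

FACT = re.compile(r"(?P<subject>\w[\w ]*?) is (?P<object>\w[\w ]*)", re.I)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (subject TEXT, object TEXT, tweet TEXT, seen TEXT)")

tweets = ["Graz is the capital of Styria", "good morning everyone"]
for tweet in tweets:
    m = FACT.search(tweet)
    if m:  # store the fact together with its source and a timestamp
        con.execute("INSERT INTO facts VALUES (?, ?, ?, ?)",
                    (m["subject"], m["object"], tweet,
                     datetime.datetime.now().isoformat()))

print(con.execute("SELECT subject, object FROM facts").fetchall())
```
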
2011

Salbrechter Florian

Rule-based test data extraction for training machine learning algorithms

Bakk
2011

Baumgartner Philip

MicroConf – An Extension for Microblogging at Conferences

Bakk
2011

Bürbaumer Claus

Erfolgsfaktoren virtueller Communities

Bakk
2011

Wagner Mario

RFID-driven Information Delivery with RSS-Feeds

Bakk
2011

Presenhuber Martin

Plagiarism Analysis & Misuse Detection

Bakk
2011

Sommerauer Bettina

Entwicklung eines semantischen MediaWikis für die Lehre

Bakk
2011

Jobstmann Wilhelm

Entwicklung einer Toolbar zur kollaborativen Qualitätsbewertung und Qualitätssteigerung in MediaWiki

Bakk
A MediaWiki is a social web application that makes it easy for a group of people to collaboratively gather information, create text and keep it up to date. The most important functions of a MediaWiki are creating and editing articles, linking articles to enable navigation between them, and grouping articles into categories ([Barrett, 2009]). Under certain circumstances it is necessary to assess the quality of an article in a MediaWiki or to improve it. According to [Wang & Strong, 1996], poor data quality has a considerable social and economic impact. In this thesis, scientific literature dealing with the quality of articles and data was analysed and summarized. From the results of this analysis, and in cooperation with a company, features were formulated that allow the quality of MediaWiki articles to be assessed and, furthermore, support the users of a MediaWiki in improving article quality. Once the features were identified, a prototype of a toolbar with these features was developed in Adobe Flex, which can be integrated into a MediaWiki as an extension.
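
A toy version of feature-based quality scoring as motivated above. The actual features were formulated with an industry partner and are not public; the three criteria, thresholds and function names below are generic illustrations.

```python
# Toy feature-based article quality score (illustrative criteria only).
def quality_features(article_text, n_links, n_revisions):
    return {
        "length_ok":  len(article_text.split()) >= 150,  # enough substance
        "linked":     n_links >= 3,                      # navigable
        "maintained": n_revisions >= 5,                  # actively edited
    }

def quality_score(features):
    # Fraction of criteria met; a real tool would weight features.
    return sum(features.values()) / len(features)

feats = quality_features("word " * 200, n_links=4, n_revisions=2)
print(quality_score(feats))   # 2 of 3 criteria met -> about 0.67
```
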
2010

Singer Philipp

Synchronnutzung von Medien

Bakk
2010

Woehrister Ferdinand

Studying the Effects of Goal-Oriented Search

Bakk
2010

Lamprecht Daniel

Extracting Human Goals from Weblogs

Bakk
2010

Kohler Philip

Reflection Widget

Bakk
2010

Heher Stefan

Weblogs als Instrument der Unternehmenskommunikation

Bakk
2010

Suzic Bojan

Extraction of temporal units from news corpora

Bakk
2010

Hackhofer Fabian

Entwicklung eines Video Portals im Web

Bakk
2010

Kappaun Karl

Entwicklung eines Video Portals im Web

Bakk
2009

Lautischer Marco

Information Gathering durch Microformate

Bakk
2009

Chouhan Pulkit

Findr: Ein Framework Für Webbasierte Suche

Bakk
2009

Pölz Benjamin

Schema zur Kategorisierung von Web 2.0 Anwendungen in Unternehmen

Bakk
2009

Weitlaner Doris

Usability-Evaluierung von Visualisierungskomponenten zur temporal-thematischen Analyse von Textdokumentsätzen

Bakk
2009

Steinparz Sophie

Programmverhalten nachhaltig nach Drupal portieren -- Unterstützung des Prozesses "User generated Content" innerhalb einer bestehenden Community

Bakk
2009

Kober Michael

Bedarfsanalyse eines Web 2.0 Portals für IT-Jobnomaden

Bakk
2009

Mellacher Daniela

Erfolgsfaktoren virtueller Communities

Bakk
2008

Tiran Stefan Herbert

Ursprung von Inferenzen in Ontologien erkennen und bei Bedarf löschen – Interaktiver Ontologie Fragebogen

Bakk
2008

Niederl Daniel

Vergleich von Internet/DMS/CMS Technologien zur Unterstützung von geschäftsprozessorientiertem Wissensmanagement

Bakk
2008

Tiesenhausen Nikolaus

Services for Knowledge Management

Bakk
2008

Michaljuk Claudia

Wissensmanagement mit Wikis: Eine Bedarfserhebung in steirischen Unternehmen

Bakk
2008

Dietl Martin

Entwicklung eines Flash Rich Clients und Integration in eine Web 2.0 Plattform

Bakk
2008

Weilbuchner Michael

Userprofilverwaltung und interaktive Kommunikationsmöglichkeiten in Liferay

Bakk
2008

Eggenberger Georg

Userprofilverwaltung und interaktive Kommunikationsmöglichkeiten in Liferay

Bakk
2008

Bader Markus

Entwicklung eines Sequenz-Players zum Abspielen von eLearning-Inhalten

Bakk
2008

Plaschzug Patrick

Corporate Blogging – Anwendung von Blogs in Unternehmen

Bakk
2008

Krnjic Vesna

Usability Überlegungen für Multiple Coordinated Views

Bakk
2008

Wunder Stefan

Entwicklung eines Flash Rich Clients und Integration in eine Web 2.0 Plattform

Bakk
2007

Rechberger Andreas

Sensoren zur Kontexterkennung

Bakk
2006

Wagner Claudia

Semantische Modellierung des Journal of Computer Science

Bakk
2005

Zorn-Pauli Gabriele

Foko Wiki – semantisches Forschungskooperations-Wiki für die Styria Medien AG

Bakk