For various stakeholders such as operators, stewards, law enforcement, etc., capturing and presenting crowd flows and local densities on the grounds of a large-scale event is of great importance. To achieve this goal, a framework for multi-sensor data fusion is built, which feeds a model of the visitor population on a defined event site. The use of different types of sensors (Bluetooth scanners, counting sensors, video, and GSM cell information) yields person counts over different spatial extents and with different reliability. Once the reliability of each sensor has been determined, counts can be produced for the covered areas of the site. Overlapping areas are counted with higher accuracy by means of data fusion. To make statements about areas of the site that are not directly covered, a simple world model is employed, which draws its information from the counts of the monitored areas as well as from the modelled behaviour of event visitors.
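The abstract does not state the fusion rule; one standard choice for combining counts from sensors of different reliability over an overlapping area is inverse-variance weighting. Below is a minimal sketch under that assumption; all sensor names and numbers are invented for illustration.

```python
# Hedged sketch: fusing person counts from two sensors that cover the same
# area, weighting each estimate by the inverse of its assumed error variance.

def fuse_counts(counts, variances):
    """Inverse-variance weighted fusion of independent count estimates."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * c for w, c in zip(weights, counts)) / sum(weights)
    return fused, 1.0 / sum(weights)  # fused estimate and its variance

# Illustrative values: a Bluetooth scanner and a video sensor observe one zone.
bt_count, video_count = 410.0, 480.0        # extrapolated head counts
bt_var, video_var = 90.0 ** 2, 40.0 ** 2    # assumed error variances

count, var = fuse_counts([bt_count, video_count], [bt_var, video_var])
print(f"fused count: {count:.0f} +/- {var ** 0.5:.0f}")
```

The more reliable video count dominates the fused estimate, mirroring the point that overlapping areas can be counted more accurately than either sensor achieves alone.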

This thesis demonstrates the potential and benefits of unsupervised learning with Self-Organizing Maps for stress detection in laboratory and free-living environments. The general increase in the pace of life, in both the personal and the work environment, leads to an intensification of work, constant time pressure and pressure to excel. This can cause psychosocial problems and negative health outcomes. Providing personal information about one's stress level can counteract the adverse health effects of stress. Currently, the most common way to detect stress is by means of questionnaires. This is time-consuming, subjective and captures stress only at discrete moments in time. The literature has shown that in a laboratory environment physiological signals can be used to detect stress in a continuous and objective way. Advances in wearable technology now make it feasible to continuously monitor physiological signals in daily life, allowing stress detection in a free-living environment. Ambulant stress detection is associated with several challenges. Data acquisition with wearables is less accurate than with the sensors used in a controlled environment, and physical activity influences the physiological signals. Furthermore, validating stress detection with questionnaires provides an unreliable labelling of the data, as it is subjective and delayed. This thesis explores an unsupervised learning technique, the Self-Organizing Map (SOM), to avoid the use of subjective labels. The provided data set originated from stress-inducing experiments in a controlled environment and from ambulant data measured during daily-life activities. Blood volume pulse (BVP), skin temperature (ST), galvanic skin response (GSR), electromyogram (EMG), respiration, electrocardiogram (ECG) and acceleration were measured using both wearable and static devices. First, supervised learning with Random Decision Forests (RDF) was applied to the laboratory data to provide a gold standard for the unsupervised learning outcomes. A classification accuracy of 83.04% was reached using ECG and GSR features, and 76.89% using ECG features only. Then the feasibility of the SOM was tested on the laboratory data and compared a posteriori with the objective labels. Using a subset of ECG features, the classification accuracy was 76.42%. This is similar to supervised learning with ECG features, indicating that the SOM works in principle for stress detection. In the last phase of this thesis, the SOM was applied to the ambulant data. Training the SOM with ECG features from the ambulant data enabled clustering of the feature space. The clusters were well separated with large cohesion (average silhouette coefficient of 0.49). Moreover, the clusters were similar across different test persons and days. According to the literature, the center values of the features in each cluster can indicate stress and relax phases. By mapping test samples onto the trained and clustered SOM, stress predictions were made. Agreement with the subjective stress levels was, however, poor, with a root mean squared error (RMSE) of 0.50. It is suggested to further explore the use of Self-Organizing Maps, as the approach relies solely on the physiological data and excludes subjective labelling. Improvements can be made by applying multimodal feature sets, including for example GSR.
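For readers unfamiliar with SOMs, the following is a minimal, self-contained training sketch, not the thesis's implementation; the feature matrix is a random stand-in for the ECG features. After training, the map's nodes can be clustered and test samples assigned via their best-matching unit.

```python
import numpy as np

def train_som(data, rows=6, cols=6, epochs=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map; returns the weight grid."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.random((rows, cols, dim))
    grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3   # shrinking neighbourhood radius
        x = data[rng.integers(n)]
        # best-matching unit (BMU): node whose weights are closest to x
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # pull the BMU and its grid neighbours towards the sample
        grid_d2 = ((grid - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Illustrative use with random stand-ins for ECG-derived feature vectors.
features = np.random.rand(500, 8)
som = train_som(features)
```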

The Web is a central part of modern everyday life. Many people access it on a daily basis for a variety of reasons, such as to retrieve news, watch videos, engage in social networks, buy goods in online shops or simply to procrastinate. Yet, we are still uncertain about how humans navigate the Web and about the factors potentially influencing this process. To shed light on this topic, this thesis deals with modeling aspects of human navigation on the Web and the effects arising from manipulations of this process. Mainly, this work provides a solid theoretical framework for examining the potential effects of two different strategies aiming to guide visitors of a website. The framework builds upon the random surfer model, which the first part of this work shows to be a sufficiently accurate model of human navigation on the Web. In a next step, this thesis examines to what extent various click biases influence the typical whereabouts of the random surfer. Based on this analysis, this work demonstrates that exploiting common human cognitive biases has a high potential for manipulating the frequencies with which the random surfer visits certain webpages. However, besides taking advantage of these biases, there exist further possibilities to steer users who navigate a website. Specifically, simply inserting new links into a webpage opens up new routes for visitors to explore a website. To investigate which of the two guiding strategies bears the higher potential, this work applies both of them to the webgraphs of several websites and provides a detailed comparison of the emerging effects. The results presented in this thesis lead to actionable insights for website administrators and further broaden our understanding of how humans navigate the Web. Additionally, the presented model builds the foundation for further research in this field.
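The thesis's exact formulation is not given in the abstract; a minimal sketch of the random surfer's stationary visit probabilities (PageRank-style power iteration with uniform teleportation) might look like this:

```python
import numpy as np

def random_surfer(adj, damping=0.85, tol=1e-10):
    """Stationary visit probabilities of a random surfer on a webgraph.

    adj[i][j] = 1 if page i links to page j. With probability `damping`
    the surfer follows a uniformly chosen outlink; otherwise (or on a
    dangling page) it jumps to a random page.
    """
    a = np.asarray(adj, dtype=float)
    n = a.shape[0]
    out = a.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling pages teleport uniformly
    p = np.where(out > 0, a / np.where(out == 0, 1, out), 1.0 / n)
    pi = np.full(n, 1.0 / n)
    while True:
        nxt = damping * pi @ p + (1 - damping) / n
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt

# Tiny example graph; page 2 is a dangling page.
print(random_surfer([[0, 1, 1],
                     [1, 0, 1],
                     [0, 0, 0]]))
```

A click bias of the kind the thesis studies could be modelled by making the outlink choice non-uniform, while the second guiding strategy, inserting a new link, simply adds an entry to the adjacency matrix.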

People spend hours on social media and similar web platforms each day. They express many of their feelings and desires in the texts they post online. Data analysts constantly look for clever ways to make use of this information. The aim of this thesis is to first detect business intent in the different types of information users post on the internet. In a second step, the identified business intent is grouped into two classes: buyers and sellers. This supports the idea of linking the two groups. Machine learning algorithms are used for classification. All the data needed to train the classifiers is retrieved and preprocessed using a Python tool developed for this purpose. The data was taken from the web platforms Twitter and HolidayCheck. Results show that classification works accurately when focusing on a specific platform and domain. On Twitter, 96 % of the test data is classified correctly, whereas on HolidayCheck the accuracy reaches 67 %. For cross-platform multiclass classification, the scores drop to 50 %. Although individual scores increase to up to 95 % when performing binary classification, the findings suggest that the features need to be improved further in order to achieve acceptable accuracy for cross-platform multiclass classification. The challenge for future work is to fully link buyers and sellers automatically. This would create business opportunities without the parties needing to know about each other beforehand.
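The abstract does not detail the classifiers or features; a minimal sketch of one plausible setup (TF-IDF features with a linear classifier from scikit-learn; the posts and labels below are toy stand-ins, not the thesis's data) could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for labelled posts; the real data came from Twitter
# and HolidayCheck, and the real feature set is not described here.
posts = [
    "selling my barely used road bike, DM me",
    "looking to buy a cheap hotel room near the beach",
    "offering airport shuttle rides this weekend",
    "want to buy a used phone, any offers?",
]
labels = ["seller", "buyer", "seller", "buyer"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["I am selling two festival passes"]))  # -> ['seller']
```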

While design patterns are proposed as a standard way to achieve good software design, little research has been done on the actual impact of using these strategies on code quality. Many books suggest that such methods increase flexibility and maintainability; however, they often lack any evidence. This bachelor thesis intends to empirically demonstrate that the use of design patterns actually improves code quality. To gather data about the code, two applications were implemented that are designed to meet the same requirements. While one application was developed following widespread guidelines and principles of object-oriented programming, the other was implemented without paying attention to software maintenance concerns. After the basic requirements were met, a number of additional features were implemented in two phases: first, support for a new graphical user interface; then, a different data tier. The results show that the initial effort of implementing the program version following object-oriented programming guidelines is noticeably higher in terms of code lines and necessary files. However, during the implementation of additional features, fewer files needed to be modified; in one phase transition considerably less code had to be written while the other showed no disadvantage, and furthermore the cyclomatic complexity of the code increased less rapidly.
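The thesis's code is not shown in the abstract; as a hedged illustration of the kind of structure the guideline-following version presumably uses, the sketch below hides the data tier behind a small abstraction, so that adding a new tier requires no change to existing classes (all names are invented):

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Abstraction of the data tier; the rest of the app depends only on this."""
    @abstractmethod
    def save(self, record: dict) -> None: ...

class FileStore(DataStore):
    def __init__(self, path: str):
        self.path = path
    def save(self, record: dict) -> None:
        with open(self.path, "a") as f:
            f.write(repr(record) + "\n")

class MemoryStore(DataStore):
    """A data tier added later: no existing class has to be modified."""
    def __init__(self):
        self.records = []
    def save(self, record: dict) -> None:
        self.records.append(record)

def app_logic(store: DataStore) -> None:
    store.save({"user": "alice", "action": "login"})

app_logic(FileStore("log.txt"))
app_logic(MemoryStore())
```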

Product development starts with the product requirements. Once these are defined, solutions are created for the individual components, which together satisfy the overall product requirements. Solution approaches are proposed and refined over many iterations until the product requirements are met with adequate quality. This entire "knowledge process" is to be transferred into a knowledge management system. We therefore show ways to make the new information technologies of Web 2.0 usable for knowledge management in the automotive industry. The work is based on a research project of the Virtual Vehicle Competence Center, which includes a software prototype (the "information cockpit"). The "information cockpit" links both the product requirements and development tasks with the project organization. Thus both a Product Data Management (PDM) and a Requirements Management (RQM) system are mapped. This networking succeeds in uniting the individual systems, which is a novelty in this area. By networking product data, requirements data and the project organization, the user can quickly obtain an overview of different data in automotive development. As a result, both management and design engineers can quickly draw on existing knowledge and make newly generated knowledge available to others in a straightforward manner. At present, only the visualization is implemented. The data to be used are made available as "link nodes" from the data system. The goal is to transfer the demonstrator into the "information cockpit" application. The ontology PROTARES (PROject TAsks RESources) is used as a basis; it covers the entire data schema. A semantic, Representational State Transfer (REST)-based Web Service was designed and implemented accordingly. The data storage layer is a triple-store database. The "information cockpit" can be used to query the system, which presents the information to the user graphically and structurally. The use of these technologies made it possible to create a modular overall system architecture. In the near future, data management can be tackled: not just visualizing but also changing the data. After that, user administration, access control, and so on can be considered.
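The PROTARES schema itself is not given in the abstract; the following sketch shows how triples linking tasks and requirements might be stored and queried with Python's rdflib, using an entirely hypothetical miniature schema and made-up resource names:

```python
from rdflib import Graph, Namespace, Literal

# Hypothetical miniature of a PROTARES-style schema; the real ontology's
# property names are not given in the abstract.
EX = Namespace("http://example.org/protares#")
g = Graph()
g.add((EX.task42, EX.addressesRequirement, EX.req7))
g.add((EX.task42, EX.assignedTo, Literal("body-in-white team")))
g.add((EX.req7, EX.label, Literal("door closing force <= 45 N")))

# Query the kind of task-requirement link the "information cockpit" visualizes.
q = """
SELECT ?task ?req ?label WHERE {
    ?task <http://example.org/protares#addressesRequirement> ?req .
    ?req  <http://example.org/protares#label> ?label .
}
"""
for row in g.query(q):
    print(row.task, "->", row.req, row.label)
```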

The electric power industry is undergoing a transition. Both power producers and grid operators are affected by the shift towards renewable energy. Higher costs for generation and transmission face regulated revenues, and maintenance costs are a considerable cost factor. This raises the question of whether predictive analytics in general, and predictive maintenance in particular, are an option for reducing these costs while maintaining or improving reliability. After a review of the technological, economic and legal framework conditions, a narrative scenario is created using scenario techniques. This scenario serves to stimulate experts from different areas of the electric power industry, who are subsequently interviewed about their views. Even though legal concerns currently exist, there is consensus that predictive maintenance will arrive in the electric power industry. These changes will not be limited to power utilities: suppliers, service providers and customers will be affected as well.

Question and answer (Q&A) systems are and will remain crucial in digital life. Famous Q&A systems succeeded with text, images and markup language as input possibilities. While this is sufficient for most questions, I argue that it is not always sufficient for questions with a complex background. By implementing and evaluating a prototype of a domain-tailored Q&A tool, I want to tackle the problem that formulating complex questions in text only, and subsequently finding them, can be a hard task. Testing several non-text input possibilities, including parsing standardized documents to populate metadata automatically, and mixing exploratory and faceted search should lead to a more satisfying user experience when creating and searching questions. Choosing the StarCraft II community ensures many questions with a complex background belonging to one domain. The evaluation results show that the implemented Q&A system, in the form of a website, can hardly be compared to existing ones without large amounts of data. Regardless, users do see potential for the website to succeed within the community, which suggests that domain-tailored Q&A systems, where questions with metadata exist, can succeed in other fields of application as well.

During large-scale events, an operations control staff consisting of the leading members of the organizations involved must ensure the safety of the visitors. The staff continuously needs information in order to maintain awareness of the current situation and to take measures when required. Situational information is crucial for averting imminent dangers and resolving ongoing incidents. Once information has reached the staff, it must be distributed within it efficiently and without errors. This enables a shared situational awareness that is available unambiguously to all members alike. To support these tasks, a command and control support system was developed whose functions were determined according to the principles of design case studies: through iterative prototype improvements, qualitative interviews with security personnel, and field studies at large-scale events. The use of ground-based and airborne sensors for the fused processing and presentation of the current crowd-distribution situation in a geographic information system (GIS) was discussed with domain experts, who were given the prototype together with a synthetic dataset for evaluation. After observing the work processes of the operations control staff during event security operations to identify weak points, the GIS system was focused on the efficient provision of master data and the visualization of situations for all active staff members. Identified weaknesses could be mitigated by supporting prototype functions, as the comparative re-enactment of observed incidents with the command and control support system showed in the concluding workshop.

Social tagging systems enable users to collaboratively assign freely chosen keywords (i.e., tags) to resources (e.g., Web links). In order to support users in finding descriptive tags, tag recommendation algorithms have been proposed. One issue of current state-of-the-art tag recommendation algorithms is that they are often designed in a purely data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. A prominent example is the activation equation of the cognitive architecture ACT-R, which formalizes activation processes in human memory to determine if a specific memory unit (e.g., a word or tag) will be needed in a specific context. It is the aim of this thesis to investigate if a cognitive-inspired approach, which models activation processes in human memory, can improve tag recommendations. For this, the relation between activation processes in human memory and usage practices of tags is studied, which reveals that (i) past usage frequency, (ii) recency, and (iii) semantic context cues are important factors when people reuse tags. Based on this, a cognitive-inspired tag recommendation approach termed BLL_AC+MP_r is developed based on the activation equation of ACT-R. An extensive evaluation using six real-world folksonomy datasets shows that BLL_AC+MP_r outperforms current state-of-the-art tag recommendation algorithms with respect to various evaluation metrics. Finally, BLL_AC+MP_r is utilized for hashtag recommendations in Twitter to demonstrate its generalizability in related areas of tag-based recommender systems. The findings of this thesis demonstrate that activation processes in human memory can be utilized to improve not only social tag recommendations but also hashtag recommendations. This opens up a number of possible research strands for future work, such as the design of cognitive-inspired resource recommender systems.
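The base-level learning (BLL) part of ACT-R's activation equation is well documented: B_i = ln(Σ_j t_j^{-d}), where t_j is the time elapsed since the j-th past use of tag i and d is a power-law decay parameter (conventionally 0.5). A minimal sketch with illustrative timestamps follows; the thesis's full approach additionally combines this with semantic context cues and popular tags, which is omitted here:

```python
import math

def base_level_activation(timestamps, now, d=0.5):
    """ACT-R base-level learning: B_i = ln(sum_j (now - t_j)**-d)."""
    return math.log(sum((now - t) ** -d for t in timestamps))

# A tag used often and recently outranks one used equally often long ago.
now = 1000.0
frequent_recent = [990.0, 950.0, 900.0, 800.0]
frequent_old = [100.0, 80.0, 50.0, 10.0]
print(base_level_activation(frequent_recent, now))  # higher activation
print(base_level_activation(frequent_old, now))
```

This directly encodes the two factors the thesis identifies, frequency (the sum over past uses) and recency (the power-law decay).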

Location-based games are currently more popular than ever with the general public. Games such as Geocaching, Ingress and Pokemon Go have created high demand in the app market and established themselves as a major category in the mobile gaming sector. Since location-based games rely on mobile sensors, battery life, cellular data connections and even environmental conditions, many problems can arise while playing and hence reduce user experience and player enjoyment. The aim of this thesis is to improve the gaming experience of location-based games which use map information to place virtual content at appropriate physical locations, with the assistance of a user-centered design approach. To this end, a game named Geo Heroes was designed and implemented, and then evaluated with existing quantitative and qualitative research methods. The game was assessed in an empirical study with nine participants, including a game-play session of about one hour. Participants were divided into an experimental and a control group to analyse disparities in the implemented content placement algorithms. An established questionnaire for traditional computer games, and one created by the author based on existing research in location-based games, were used to measure common factors in gaming experience. Additionally, participants sent log data with their current emotions during game-play after various interactions with game objects. Different outcome scenarios of interactions were considered to allow a better analysis. Furthermore, an open group discussion was held to gather qualitative information from participants, to reveal as yet undiscovered issues, and to corroborate the results of the quantitative methods. Results have shown that the questionnaire for location-based games is a useful tool to measure player enjoyment. In combination with the tracked emotions and a group interview, relevant information can be obtained in order to improve game design and mechanics.

Texts are of crucial importance for communicating and managing information. However, text composition is still a challenge for many people: in order to effectively convey their message, writers need skills in planning and structuring, linguistic ability, and also the ability to evaluate their own work. In this thesis, we look at how writers can be supported in all the tasks encompassed in the writing process. To this end, and in addition to literature research, we conducted an experiment to analyse the characteristics of writing processes as well as the difficulties writers typically encounter when they search for information, plan the structure of their text, translate their ideas to words, and review their writing. We formulate requirements for aiding these tasks and propose support possibilities, with a special focus on digital solutions. The issues with existing tools are that they generally support only one aspect and interrupt the writing task. This was our motivation for developing a prototype of a comprehensive text composition tool which supports writers in all stages of their task. We chose to implement it as a Google Docs add-on, which means that it can be integrated seamlessly into the Google Docs text editor. The add-on offers a number of features specifically tailored to each phase of the writing process. Finally, we performed a user study to evaluate the features and the workflow while using the add-on.

This thesis develops a tool to collaboratively explore a collection of EEG signals and identify events. Certain data require events to be tagged in a post-hoc process. Current state-of-the-art tools used in research allow a single user to manually label events or artifacts in signal data. Although automatic methods can be applied, they usually have a precision below 80% and require subsequent manual labelling steps. We propose a tool to collaboratively label data. It allows several users to work together in identifying events/artifacts in the signal space. This tool offers several advantages, from saving time by splitting up work between users to obtaining a consensus between experts on the occurrence of events. This thesis describes the collaborative aspects of labelling events in signal data.

As part of this master's thesis, a prototype of an assistance system for construction vehicles for detecting endangered persons in the construction site area was developed and evaluated. In preliminary investigations, selected sensor principles were analysed for their suitability for person detection. A selection of camera-based and distance sensors provided data from the vehicle's surroundings. The focus of the work lay on the design of a suitable architecture to fuse all components and person-detection algorithm modules used in the assistance system. In the prototypical setup, the human-machine interface was integrated in the form of a live camera stream, with overlaid warnings, in an easy-to-understand and usable user interface. In a series of tests, the performance of the system was examined at different vehicle speeds. For combinations of the deployed sensors, maximum permissible speeds were determined at which the vehicle can still be brought to a standstill to avoid an accident. Test runs under conditions as realistic as possible showed that person detection can be performed in real time, but also that there is much room for improvement. Drivers are well supported by the system in situations with a high risk of accidents and are thus able to avoid them. In addition, the strengths and weaknesses of the person-detection system were analysed, and detailed and important information was gained about work situations and processes, driver behaviour, individual components and the system as a whole.

Mobile apps are becoming more and more important for companies, because apps are needed to sell or operate their products. To serve a wide range of customers, apps must be available for the most common platforms, at least Android and iOS. Considering Windows Phone as well, a company would need to provide three identical apps, one for each platform. As each platform comes with its own tools for app development, the apps must be implemented separately; in the worst case, development costs may triple. The Qt framework promises multi-platform capability: an app needs to be implemented just once but still runs on several platforms. This bachelor's thesis aims to prove that by developing such a multi-platform app using the Qt framework. The app shall be able to collect data from sensors connected to the mobile device and store the retrieved data on the phone. For this proof, the supported platforms are limited to the most common ones, Android and iOS. Using the app to record data from a real-life scenario demonstrates its proper functioning.

In forest fire situations, the crisis management staff often faces problems regarding coordination, the development of an operational strategy, and maintaining an overview during the operation. The goal of this work was a basic prototype demonstrating support possibilities for the operator in the operations control centre. Usability was the primary focus in developing this prototype. To improve usability, methods of user-centered design (UCD) were applied during the software development process. In developing software for a small user group, it was found that, owing to the users' niche position, different methods must be applied than for a larger user group. For the final presentation of the prototype, an international expert workshop was chosen, at which the software was demonstrated and then discussed with the experts. From the discussions it could be concluded that such software does not yet exist and is needed for many tasks of the operations staff. In general, it can be said that UCD methods form a good basis for the development of disaster management software, and that the further development of this software prototype is a good starting point for building a forest fire management system.

This thesis deals with the creation of regular expressions from a list of input strings that the resulting expression should match. Since regular expressions match a pattern, they can be used to speed up work involving large amounts of data, under the assumption that the user knows some examples of the pattern to be matched. In the program discussed herein, a regular expression is created iteratively by refining a very rudimentary initial regular expression, with an adjustable threshold to mitigate the effect of not having any negative matches as input. The result is the easy creation of a sufficiently well-working regular expression, assuming a representative collection of input strings, while requiring no negative examples from the user.
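The abstract does not spell out the algorithm; as a hedged illustration of example-driven regex creation (a cruder scheme than the thesis's threshold-based iterative refinement), one can generalize each positive example into character-class runs and take the union of the resulting patterns:

```python
import re

def generalize(example):
    """Map one example to a crude pattern: digit runs -> \\d+,
    letter runs -> [A-Za-z]+, everything else escaped literally."""
    out = []
    for m in re.finditer(r"\d+|[A-Za-z]+|.", example):
        tok = m.group()
        if tok.isdigit():
            out.append(r"\d+")
        elif tok.isalpha():
            out.append(r"[A-Za-z]+")
        else:
            out.append(re.escape(tok))
    return "".join(out)

def infer_regex(examples):
    patterns = {generalize(e) for e in examples}
    regex = "|".join(sorted(patterns))  # union of generalized examples
    assert all(re.fullmatch(regex, e) for e in examples)
    return regex

print(infer_regex(["user-42", "user-7", "admin-301"]))
```

Without negative examples, such a pattern easily over-generalizes; the thesis's adjustable threshold addresses exactly this trade-off.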

In recent years, various recommendation algorithms have been proposed to support learners in technology-enhanced learning environments. Such algorithms have proven to be quite effective in big-data learning settings (massive open online courses), yet successful applications in other informal and formal learning settings are rare. Common challenges include data sparsity, the lack of sufficiently flexible learner and domain models, and the difficulty of including pedagogical goals in recommendation strategies. Computational models of human cognition and learning are, in principle, well positioned to help meet these challenges, yet the effectiveness of cognitive models in educational recommender systems remains poorly understood to date. This thesis contributes to this strand of research by investigating i) two cognitive learner models (CbKST and SUSTAIN) for resource recommendations that cope with sparse user data by following theory-driven, top-down approaches, and ii) two tag recommendation strategies based on models of human cognition (BLL and MINERVA2) that support the creation of learning content metadata. The results of four online and offline experiments in different learning contexts indicate that a recommendation approach based on the CbKST, a well-founded structural model of knowledge representation, can improve the users' perceived learning experience in formal learning settings. In informal settings, SUSTAIN, a human category learning model, is shown to succeed in representing dynamic, interest-based learning interactions and to improve Collaborative Filtering for resource recommendations. The investigation of the two proposed tag recommender strategies underlined their ability to generate accurate suggestions (BLL) and, in collaborative settings, their potential to promote the development of a shared vocabulary (MINERVA2). This thesis shows that the application of computational models of human cognition holds promise for the design of recommender mechanisms and, at the same time, for gaining a deeper understanding of interaction dynamics in virtual learning systems.

Due to persistent issues concerning sensitive information when working with big data, we present a new approach for generating artificial data in the form of datasets. For this purpose, we use the term dataset to denote a UNIX directory structure consisting of various files and folders. Especially in computer science there is a distinct need for data. Often this data already exists but contains sensitive information, so such critical data must stay protected from third parties. This withholding of data leads to a lack of available data for open-source developers as well as for researchers. We therefore devised a way to produce replicated datasets, given an origin dataset as input. Such replicated datasets represent the origin dataset as accurately as possible without leaking any sensitive information. We introduce the Dataset Anonymization and Replication Tool, DART for short, a Python-based framework which allows the replication of datasets. Since we aim to encourage the data science community to participate in our work, we built DART as a framework with a high degree of adaptability and extensibility. We started with the analysis of datasets and various file and MIME types to find suitable properties which characterize datasets. We defined a broad range of properties, or characteristics, ranging from the number of files to file-specific characteristics such as permissions. In the next step, we explored several mathematical and statistical approaches to replicate the selected characteristics, choosing to model them using relative frequency distributions (unigrams) as well as discrete and continuous random variables. Finally, we produced replicated datasets and compared the replicated characteristics against those of the corresponding origin dataset; this comparison is based exclusively on the selected characteristics. The achieved results depend strongly on the origin dataset as well as on the characteristics of interest. Origin datasets with a simple structure are more likely to deliver usable results, whereas large and complex origin datasets may be harder to replicate sufficiently well. Nevertheless, the results suggest that tools like DART can be used to provide artificial data for persistent use cases.
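DART's full characteristic set (permissions, MIME types, etc.) is beyond a snippet, but the core idea can be sketched for a single characteristic: replicate file sizes by sampling from the origin dataset's empirical distribution and filling the replicas with random bytes. Paths are hypothetical; requires Python 3.9+ for randbytes:

```python
import os
import random

def file_size_distribution(root):
    """Collect the empirical file-size characteristic of an origin dataset."""
    return [os.path.getsize(os.path.join(dirpath, name))
            for dirpath, _, names in os.walk(root) for name in names]

def replicate(sizes, out_dir, n_files, seed=0):
    """Write n_files dummy files whose sizes are drawn from the empirical
    distribution; contents are random bytes, so nothing sensitive leaks."""
    rng = random.Random(seed)
    os.makedirs(out_dir, exist_ok=True)
    for i, size in enumerate(rng.choices(sizes, k=n_files)):
        with open(os.path.join(out_dir, f"file_{i:05d}.bin"), "wb") as f:
            f.write(rng.randbytes(size))

sizes = file_size_distribution("origin_dataset")  # hypothetical input path
replicate(sizes, "replicated_dataset", n_files=len(sizes))
```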

This paper compares variable and feature selection with greedy and non-greedy algorithms. For the greedy solution, the ID3 algorithm [J. Quinlan, 1986] is used, which serves as a baseline. This algorithm is fast and provides good results for smaller datasets. However, if the dataset gets larger and the information we want to extract from it has to be more precise, several combinations should be checked; a non-greedy solution is a possible way to achieve that goal. This approach tries every possible combination to obtain the optimal result. These results may contain combinations of variables: one variable on its own may provide no information about the dataset, yet in combination with another variable it does. That is one reason why it is useful to check every combination. Besides its very good precision, the algorithm needs far more computational time, at least Ω(n!). The more attributes a dataset has, the higher the computational complexity. The results have shown, even for smaller datasets, that the non-greedy algorithm finds more precise results, especially with regard to combinations of several attributes/variables. Taken together, if the dataset needs to be analysed more precisely and the hardware allows it, the non-greedy version of the algorithm is a tool which provides precise results, especially from a combinatorial point of view.
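A small self-contained sketch makes the paper's point concrete: in an XOR-style dataset each variable alone has zero information gain, so a greedy ID3-style choice misses the pair of variables that an exhaustive subset search finds (the data and helper below are illustrative, not the paper's code):

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feats):
    """Information gain of splitting on the joint values of `feats`."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(tuple(row[f] for f in feats), []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# XOR: neither feature is informative alone, but the pair is decisive.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

print([info_gain(rows, labels, [f]) for f in range(2)])  # [0.0, 0.0]
best = max(combinations(range(2), 2), key=lambda fs: info_gain(rows, labels, fs))
print(best, info_gain(rows, labels, best))               # (0, 1) 1.0
```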