Berger Katharina, Rusch Magdalena, Pohlmann Antonia, Popowicz Martin, Geiger Bernhard, Gursch Heimo, Schöggl Josef-Peter, Baumgartner Rupert J.
2023
Digital product passports (DPPs) are an emerging technology and are considered enablers of sustainable and circular value chains, as they support sustainable product management (SPM) by gathering and containing product life cycle data. However, some life cycle data are considered sensitive by stakeholders, resulting in a reluctance to share such data. This contribution provides a concept illustrating how data science and machine learning approaches enable electric vehicle battery (EVB) value chain stakeholders to carry out confidentiality-preserving data exchange via a DPP. This, in turn, can help overcome data sharing reluctance and consequently facilitate sustainability data management on a DPP for an EVB. The concept development comprised a literature review to identify data needs for sustainable EVB management, data management challenges, and potential data science approaches for data management support. Furthermore, three explorative focus group workshops and follow-up consultations with data scientists were conducted to discuss the identified data science approaches. This work complements the emerging literature on digitalization and SPM by exploring the specific potential of data science and machine learning approaches to enable sustainability data management and reduce data sharing reluctance. Furthermore, the concept has practical relevance, as it may provide practitioners with new impulses regarding DPP development and implementation.
Edtmayer Hermann, Brandl Daniel, Mach Thomas, Schlager Elke, Gursch Heimo, Lugmair Maximilian, Hochenauer Christoph
2023
Increasing demands on indoor comfort in buildings and urgently needed energy efficiency measures require optimised HVAC systems in buildings. To achieve this, more extensive and accurate input data are required. This is difficult or impossible to accomplish with physical sensors. Virtual sensors, in turn, can provide these data; however, current virtual sensors are either too slow or too inaccurate to do so. The aim of our research was to develop a novel digital-twin workflow providing fast and accurate virtual sensors to solve this problem. To achieve a short calculation time and accurate virtual measurement results, we coupled a fast building energy simulation with an accurate computational fluid dynamics (CFD) simulation. We used measurement data from a test facility as boundary conditions for the models and managed the coupling workflow with a customised simulation and data management interface. The corresponding simulation results were extracted for the defined virtual sensors and validated with measurement data from the test facility. In summary, the results showed that the total computation time of the coupled simulation was less than 10 min, compared to 20 h for the corresponding CFD models. At the same time, the accuracy of the simulation over five consecutive days was a mean absolute error of 0.35 K for the indoor air temperature and 1.2 % for the relative humidity. This shows that the novel coupled digital-twin workflow for virtual sensors is fast and accurate enough to optimise HVAC control systems in buildings.
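A minimal sketch of how the reported validation figures could be computed, assuming the virtual-sensor outputs and the test-facility reference measurements are available as time-aligned NumPy arrays (variable names and values are illustrative, not taken from the paper):

```python
import numpy as np

def mean_absolute_error(predicted, measured):
    """Mean absolute error between virtual-sensor output and reference measurement."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs(predicted - measured)))

# Illustrative placeholder arrays; in the project these would be the coupled
# BES/CFD virtual-sensor time series and the measurements over five days.
t_air_virtual = np.array([21.2, 21.5, 22.0, 22.3])   # °C, virtual sensor
t_air_measured = np.array([21.0, 21.3, 22.4, 22.1])  # °C, reference sensor
rh_virtual = np.array([41.0, 42.5, 44.0, 43.0])      # % relative humidity
rh_measured = np.array([40.2, 43.0, 45.5, 42.4])

print("MAE air temperature [K]:", mean_absolute_error(t_air_virtual, t_air_measured))
print("MAE relative humidity [%]:", mean_absolute_error(rh_virtual, rh_measured))
```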
Gursch Heimo, Körner Stefan, Thaler Franz, Waltner Georg, Ganster Harald, Rinnhofer Alfred, Oberwinkler Christian, Meisenbichler Reinhard, Bischof Horst, Kern Roman
2022
Refuse separation and sorting is currently done by recycling plants that are manually optimised for a fixed refuse composition. Since the refuse composition constantly changes, these plants either deliver suboptimal sorting performance or require constant monitoring and adjustments by the plant operators. Image recognition offers the possibility to continuously monitor the refuse composition on the conveyor belts in a sorting facility. When information about the refuse composition is combined with parameters and measurements of the sorting machinery, the sorting performance of a plant can be continuously monitored, problems detected, optimisations suggested, and trends predicted. This article describes solutions for multispectral and 3D image capturing of refuse streams and evaluates the performance of image segmentation models. The image segmentation models are trained with synthetic training data to reduce the manual labelling effort and thus the cost of introducing image recognition. Furthermore, an outlook on combining image recognition data with parameters and measurements of the sorting machinery in a joint time series analysis is provided.
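As an illustration of how a segmentation result can be turned into the refuse-composition signal described above, the following sketch computes per-category area fractions from a predicted segmentation mask; the class IDs and category names are hypothetical and not taken from the project:

```python
import numpy as np

# Hypothetical class labels of the segmentation model.
CLASSES = {0: "background", 1: "plastic", 2: "paper", 3: "metal", 4: "residual"}

def composition_from_mask(mask: np.ndarray) -> dict:
    """Return the fraction of the belt area covered by each refuse category."""
    total = mask.size
    return {name: float(np.sum(mask == cls)) / total for cls, name in CLASSES.items()}

# Example: a predicted mask for one conveyor-belt frame (random here for illustration).
mask = np.random.randint(0, len(CLASSES), size=(512, 1024))
print(composition_from_mask(mask))
```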
Reichel Robert, Gursch Heimo, Kröll Mark
2022
The trend in healthcare of moving from paper-based records to digital forms lays the foundation for the electronic processing of health data. This article describes the technical basics for the semantic preparation and analysis of textual content in the medical domain. The special characteristics of medical texts make the extraction and aggregation of relevant information more challenging than in other application areas. In addition, there is a need for specialised methods, particularly in the area of anonymisation and pseudonymisation of personal data. Nevertheless, the use of computational linguistics methods in combination with advancing digitalisation holds enormous potential to support healthcare staff.
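A purely illustrative sketch of the pseudonymisation idea mentioned above: direct identifiers in a medical text are replaced by stable placeholders so that mentions of the same person can still be linked downstream. The regular expressions and identifier types are simplified assumptions and not the methods described in the article:

```python
import hashlib
import re

def pseudonymise(text: str, names: list[str]) -> str:
    """Replace known patient names and simple date patterns with stable pseudonyms."""
    for name in names:
        token = "PATIENT_" + hashlib.sha256(name.encode()).hexdigest()[:8]
        text = re.sub(re.escape(name), token, text)
    # Very simplified date pattern (DD.MM.YYYY) replaced by a generic placeholder.
    text = re.sub(r"\b\d{2}\.\d{2}\.\d{4}\b", "[DATE]", text)
    return text

print(pseudonymise("Max Mustermann was admitted on 03.05.2021.", ["Max Mustermann"]))
```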
Gashi Milot, Gursch Heimo, Hinterbichler Hannes, Pichler Stefan, Lindstaedt Stefanie, Thalmann Stefan
2022
Predictive Maintenance (PdM) is one of the most important applications of advanced data science in Industry 4.0, aiming to facilitate manufacturing processes. To build PdM models, sufficient data, such as condition monitoring and maintenance data of the industrial application, are required. However, collecting maintenance data is complex and challenging, as it requires human involvement and expertise. Due to time constraints, motivating workers to provide comprehensive labeled data is very challenging, and thus maintenance data are mostly incomplete or even completely missing. In addition, many condition monitoring data-sets exist, but only very few small labeled maintenance data-sets can be found. Hence, our proposed solution can provide additional labels and offer new research possibilities for these data-sets. To address this challenge, we introduce MEDEP, a novel maintenance event detection framework based on the Pruned Exact Linear Time (PELT) approach, which promises a low false-positive (FP) rate and high accuracy in general. MEDEP can help to automatically detect performed maintenance events from deviations in the condition monitoring data. A heuristic method is proposed as an extension to the PELT approach, consisting of the following two steps: (1) a mean threshold for multivariate time series and (2) a distribution threshold analysis based on the complexity-invariant metric. We validate and compare MEDEP on the Microsoft Azure Predictive Maintenance data-set and on data from a real-world use case in the welding industry. The proposed approach showed superior performance, with an FP rate of around 10% on average and high sensitivity and accuracy.
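A minimal sketch of the change-point detection step underlying such a framework, using the off-the-shelf PELT implementation in the ruptures library on a univariate condition-monitoring signal. The penalty value, cost model, and synthetic data are illustrative assumptions; the paper's heuristic extensions for multivariate series and the complexity-invariant metric are not reproduced here:

```python
import numpy as np
import ruptures as rpt

# Synthetic condition-monitoring signal with a level shift, standing in for the
# deviation that a maintenance event would leave in the data.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(2.0, 0.5, 200)])

# PELT with an RBF cost model; the penalty controls how many change points are found.
algo = rpt.Pelt(model="rbf", min_size=20).fit(signal)
change_points = algo.predict(pen=10)

print("Detected change points (sample indices):", change_points)
```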
Gursch Heimo, Pramhas Martin, Knopper Bernhard, Brandl Daniel, Gratzl Markus, Schlager Elke, Kern Roman
2021
In the COMFORT project (Comfort Orientated and Management Focused Operation of Room condiTions), the comfort of office rooms is investigated with simulations and data-driven methods. While the data-driven methods rely on measurement data, the simulation requires extensive descriptions of the office rooms, which largely overlap with the information captured in the Building Information Model (BIM). Despite great progress in recent years, the integration of BIM and simulation is not yet fully automated. Using the case study of adding a storey to an office building of Thomas Lorenz ZT GmbH, the transfer of BIM data to Building Energy Simulation (BES) and Computational Fluid Dynamics (CFD) simulations is examined. For the investigated building, the entire planning process was carried out based on the BIM. Submission planning, tender planning for all trades including quantity take-off, and execution drawings such as construction, formwork, and reinforcement plans could be derived from the model, and the building services model could be linked to the architectural and structural planning models at an early stage. Starting from the BIM, the required data could be handed over to the BES in the IFC format. However, the software used could not yet perform an automatic transfer, so manual post-processing of the rooms was necessary. For the CFD simulation, only selected rooms were considered, because the additional effort for the transfer in the STEP format is still very high under normal BIM workflows. The free air volume must be modelled separately in the BIM and certain geometric boundary conditions must be fulfilled. Likewise, information on heat sources and furniture must be available at a very high level of planning detail. The exchange of boundary conditions at the interfaces between air and envelope still had to be done manually. In terms of their informative value, the BES and CFD simulation results are to be regarded as identical to those from conventional, manually created simulation models. An automatic transfer of parameter values currently still fails due to the lack of interpretability and assignability in the simulation software. In the future, the establishment of IFC 4 and additional Industry Foundation Classes (IFC) parameters should make it easier to store the required data in the model in a structured way. Particular attention should be paid to the integration of room book data into BIM, since this information is of great use not only for simulation. These information integrations are not limited to a one-time transfer but aim at an integration that automatically propagates changes between BIM, simulation, and related areas.
Gursch Heimo, Ganster Harald, Rinnhofer Alfred, Waltner Georg, Payer Christian, Oberwinkler Christian, Meisenbichler Reinhard, Kern Roman
2021
Refuse sorting is a key technology to increase the recycling rate and reduce the growth of landfills worldwide. The project KI-Waste combines image recognition with time series analysis to monitor and optimise processes in sorting facilities. The image recognition captures the refuse category distribution and particle size of the refuse streams in the sorting facility. The time series analysis focuses on insights derived from machine parameters and sensor values. The combination of results from the image recognition and the time series analysis creates a new holistic view of the complete sorting process and the performance of a sorting facility. This is the basis for comprehensive monitoring, data-driven optimisations, and performance evaluations supporting workers in sorting facilities. Digital solutions allowing the workers to monitor the sorting process remotely are very desirable, since the working conditions in sorting facilities are potentially harmful due to dust, bacteria, and fungal spores. Furthermore, the introduction of objective sorting performance measures enables workers to make informed decisions to improve the sorting parameters and react more quickly to changes in the refuse composition. This work describes the ideas and objectives of the KI-Waste project, summarises the techniques and approaches used in KI-Waste, presents preliminary findings, and closes with an outlook on future work.
Gursch Heimo, Schlager Elke, Feichtinger Gerald, Brandl Daniel
2020
The comfort humans perceive in rooms depends on many influencing factors and is currently only poorly recorded and maintained. This is due to circumstances such as the subjective nature of perceived comfort and the lack of sensors or data processing infrastructure. Project COMFORT (Comfort Orientated and Management Focused Operation of Room condiTions) researches the modelling of the perceived thermal comfort of humans in office rooms. This begins with extensive, long-term measurements taken in a laboratory test chamber and in real-world office rooms. Data is collected from the installed building services engineering systems, from high-accuracy reference measurement equipment, and from weather services describing the outside conditions. All data is stored in a specially developed central Data Management System (DMS), creating the basis for all research and studies in project COMFORT. The collected data is the key enabler for the creation of soft sensors describing comfort-relevant indices such as the predicted mean vote (PMV), the predicted percentage of dissatisfied (PPD), and the operative temperature (OT). Two different approaches are pursued, complementing and extending each other in the realisation of soft sensors. Firstly, a purely data-driven modelling approach generates models for soft sensors by learning the relations between explanatory and target variables in the collected data. Secondly, simulation-based soft sensors are derived from Building Energy Simulation (BES) and Computational Fluid Dynamics (CFD) simulations. The first result of the data-driven analysis is a solar Radiation Modelling (RM) component, capable of splitting global radiation into its direct horizontal and diffuse components. This is needed since only global radiation data is available for the investigated locations, but the global radiation must be divided into direct and diffuse radiation due to the large differences in their thermal impact on buildings. The current BES and CFD simulations provide soft sensors for comfort-relevant indices as their results, which will be complemented by data-driven soft sensors in the remainder of the project.
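As an illustration of the radiation-splitting step described above, the following sketch applies the widely used Erbs correlation to estimate the diffuse fraction of measured global horizontal radiation from the clearness index. This is a standard decomposition model shown under stated assumptions, not necessarily the model developed in the project:

```python
import numpy as np

SOLAR_CONSTANT = 1367.0  # W/m², approximate extraterrestrial irradiance

def erbs_diffuse_fraction(ghi: float, zenith_deg: float) -> float:
    """Estimate the diffuse fraction of global horizontal irradiance (Erbs et al., 1982)."""
    cos_z = np.cos(np.radians(zenith_deg))
    if cos_z <= 0:  # sun below the horizon: all radiation treated as diffuse
        return 1.0
    kt = ghi / (SOLAR_CONSTANT * cos_z)  # clearness index
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

ghi = 600.0    # W/m², measured global horizontal irradiance
zenith = 40.0  # degrees, solar zenith angle
diffuse = erbs_diffuse_fraction(ghi, zenith) * ghi
direct_horizontal = ghi - diffuse
print(f"diffuse: {diffuse:.1f} W/m², direct horizontal: {direct_horizontal:.1f} W/m²")
```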
Feichtinger Gerald, Gursch Heimo, Schlager Elke, Brandl Daniel, Gratzl Markus
2020
Stanisavljevic Darko, Cemernek David, Gursch Heimo, Urak Günter, Lechner Gernot
2019
Additive manufacturing is becoming an increasingly important production technology, mainly driven by the ability to realise extremely complex structures using multiple materials but without assembly or excessive waste. Nevertheless, like any high-precision technology, additive manufacturing is sensitive to interferences during the manufacturing process. These interferences – like vibrations – might lead to deviations in product quality, manifesting for instance in a reduced product lifetime or application issues. This study targets the issue of detecting such interferences during a manufacturing process in an exemplary experimental setup. Collecting data with current sensor technology directly on a 3D printer enables a quantitative detection of interferences. The evaluation provides insights into the effectiveness of the realised application-oriented setup, the effort required for equipping a manufacturing system with sensors, and the effort for acquiring and processing the data. These insights are of practical utility for organisations dealing with additive manufacturing: the chosen approach for detecting interferences shows promising results, reaching interference detection rates of up to 100% depending on the applied data processing configuration.
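A minimal sketch of how interference such as vibration could be detected quantitatively from accelerometer data captured on the printer, using a simple spectral-energy threshold. The sampling rate, frequency band, and threshold are illustrative assumptions and not the study's actual data processing configuration:

```python
import numpy as np

def vibration_detected(accel: np.ndarray, fs: float, band=(40.0, 120.0), threshold=5.0) -> bool:
    """Flag an interference if the spectral energy in the given band exceeds a threshold."""
    spectrum = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = np.sum(spectrum[in_band] ** 2) / len(accel)
    return band_energy > threshold

# Synthetic accelerometer trace: baseline noise plus an injected 80 Hz vibration.
fs = 1000.0  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
accel = 0.05 * np.random.randn(len(t)) + 0.5 * np.sin(2 * np.pi * 80.0 * t)
print("Interference detected:", vibration_detected(accel, fs))
```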
Schlager Elke, Gursch Heimo, Feichtinger Gerald
2019
Poster presenting the final implementation of the "Data Management System" at the Know-Center for the COMFORT project.
Feichtinger Gerald, Gursch Heimo
2019
Poster - general project overview
Monsberger Michael, Koppelhuber Daniela, Sabol Vedran, Gursch Heimo, Spataru Adrian, Prentner Oliver
2019
A lot of research is currently focused on studying user behavior indirectly by analyzing sensor data. However, only little attention has been given to the systematic acquisition of immediate user feedback to study user behavior in buildings. In this paper, we present a novel user feedback system which allows building users to provide feedback on the perceived sense of personal comfort in a room. To this end, a dedicated easy-to-use mobile app has been developed; it is complemented by a supporting infrastructure, including a web page for an at-a-glance overview. The obtained user feedback is compared with sensor data to assess whether building services (e.g., heating, ventilation and air-conditioning systems) are operated in accordance with user requirements. This serves as a basis to develop algorithms capable of optimizing building operation by providing recommendations to facility management staff or by automatic adjustment of operating points of building services. In this paper, we present the basic concept of the novel feedback system for building users and first results from an initial test phase. The results show that building users utilize the developed app to provide both positive and negative feedback on room conditions. They also show that it is possible to identify rooms with non-ideal operating conditions and that reasonable measures to improve building operation can be derived from the gathered information. The results highlight the potential of the proposed system.
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Catchphrases such as “Industry 4.0” and “smart manufacturing” reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaboration across existing department and process boundaries. Converting the process input, internal, and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management, which is formalised through various approaches, e.g., Six Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be improved continuously.
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie, Kern Roman, Lex Elisabeth
2019
Thalmann Stefan, Gursch Heimo, Suschnigg Josef, Gashi Milot, Ennsbrunner Helmut, Fuchs Anna Katharina, Schreck Tobias, Mutlu Belgin, Mangler Jürgen, Huemer Christian, Lindstaedt Stefanie
2019
Current trends in manufacturing lead to more intelligent products, produced in global supply chains in shorter cycles, taking more and more complex requirements into account. To manage this increasing complexity, cognitive decision support systems building on data analytics approaches and focusing on the product life cycle stages seem a promising approach. Together with two high-tech companies from Austria (world market leaders in their domains), we are approaching this challenge and jointly developing cognitive decision support systems for three real-world industrial use cases. Within this position paper, we introduce our understanding of cognitive decision support and present three industrial use cases, focusing on the requirements for cognitive decision support. Finally, we describe our preliminary solution approach for each use case and our next steps.
Gursch Heimo, Silva Nelson, Reiterer Bernhard, Paletta Lucas, Bernauer Patrick, Fuchs Martin, Veas Eduardo Enrique, Kern Roman
2018
The project Flexible Intralogistics for Future Factories (FlexIFF) investigates human-robot collaboration in intralogistics teams in the manufacturing industry, which form a cyber-physical system consisting of human workers, mobile manipulators, manufacturing machinery, and manufacturing information systems. The workers use Virtual Reality (VR) and Augmented Reality (AR) devices to interact with the robots and machinery. The right information at the right time is key to making this collaboration successful. Hence, task scheduling for mobile manipulators and human workers must be closely linked with the enterprise’s information systems, offering all actors on the shop floor a common view of the current manufacturing status. FlexIFF will provide useful, well-tested, and sophisticated solutions for cyber-physical systems in intralogistics, with humans and robots making the most of their strengths, working collaboratively and helping each other.
Neuhold Robert, Gursch Heimo, Cik Michael
2018
Data collection on motorways for traffic management operations is traditionally based on local measurement points and camera monitoring systems. This work looks into social media as an additional data source for the Austrian motorway operator ASFINAG. A data-driven system called Driver's Dashboard was developed to collect incident descriptions from social media sources (Facebook, RSS feeds), to filter relevant messages, and to fuse them with local traffic data. All collected texts were analysed for concepts describing road situations, linking the texts from the web and social media with traffic messages and traffic data. Due to the Austrian characteristics of social media use and road transportation, very few messages are available compared to other studies. 3,586 messages were collected within a five-week period. 7.1% of these messages were automatically annotated as traffic relevant by the system. An evaluation of these traffic-relevant messages showed that 22% of them were actually relevant for the motorway operator. Furthermore, the messages relevant for the motorway operator were analysed in more detail to identify correlations between message text and traffic data characteristics. A correlation of message text and traffic data was found in nine of eleven messages by comparing the speed profiles and traffic state data with the message text.
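A simplified sketch of the relevance-filtering idea: incoming social media messages are flagged as potentially traffic relevant when they mention road-situation concepts. The keyword list is an illustrative stand-in for the concept extraction actually used in Driver's Dashboard:

```python
# Hypothetical, simplified keyword filter standing in for the concept extraction.
TRAFFIC_CONCEPTS = {"stau", "unfall", "sperre", "baustelle", "accident", "congestion", "closure"}

def is_traffic_relevant(message: str) -> bool:
    """Return True if the message mentions at least one road-situation concept."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not TRAFFIC_CONCEPTS.isdisjoint(words)

messages = [
    "Unfall auf der A2 bei Graz, langer Stau in Richtung Wien.",
    "Great concert tonight in the city!",
]
for msg in messages:
    print(is_traffic_relevant(msg), "-", msg)
```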
Cemernek David, Gursch Heimo, Kern Roman
2017
The catchphrase “Industry 4.0” is widely regarded as a methodology for succeeding in modern manufacturing. This paper provides an overview of the history, technologies, and concepts of Industry 4.0. One of the biggest challenges to implementing the Industry 4.0 paradigms in manufacturing is the heterogeneity of system landscapes and the integration of data from various sources, such as different suppliers and different data formats. These issues have been addressed in the semiconductor industry since the early 1980s, and some solutions have become well-established standards. Hence, the semiconductor industry can provide guidelines for a transition towards Industry 4.0 in other manufacturing domains. In this work, the methodologies of Industry 4.0, cyber-physical systems, and Big Data processes are discussed. Based on a thorough literature review and experiences from the semiconductor industry, we offer implementation recommendations for Industry 4.0, using the manufacturing process of an electronics manufacturer as an example.
Gursch Heimo, Cemernek David, Kern Roman
2017
In manufacturing environments today, automated machinery works alongside human workers. In many cases computers and humans oversee different aspects of the same manufacturing steps, sub-processes, and processes. This paper identifies and describes four feedback loops in manufacturing and organises them in terms of their time horizon and degree of automation versus human involvement. The data flow in the feedback loops is further characterised by features commonly associated with Big Data. Velocity, volume, variety, and veracity are used to establish, describe and compare differences in the data flows.
Traub Matthias, Gursch Heimo, Lex Elisabeth, Kern Roman
2017
New business opportunities in the digital economy arise when datasets describing a problem, data services solving the said problem, and the required expertise and infrastructure come together. For most real-world problems, finding the right data sources, services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform that brings datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender system included in DMA analyses all offerings to derive suggestions for collaboration between them, such as which dataset could best be processed by which data service. These suggestions should help the customers of DMA to identify new collaborations reaching beyond traditional industry boundaries and to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system, matching different offers to set up data value chains that solve problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial or research domain, overcoming traditional domain boundaries.
Gursch Heimo, Körner Stefan, Krasser Hannes, Kern Roman
2016
Painting a modern car involves applying many coats during a highly complex and automated process. The individual coats not only serve a decorative purpose but are also crucial for protection from damage due to environmental influences, such as rust. For an optimal paint job, many parameters have to be optimised simultaneously. A forecasting model was created that predicts the paint flaw probability for a given set of process parameters, helping production managers modify the process parameters to achieve an optimal result. The mathematical model was based on historical process and quality observations. Production managers who are not familiar with the mathematical concept of the model can use it via an intuitive Web-based Graphical User Interface (Web-GUI). The Web-GUI offers production managers the ability to test process parameters and forecast the expected quality. The model can be used for optimising the process parameters in terms of quality and costs.
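An illustrative sketch of how such a forecasting model could look, fitting a logistic regression on historical process parameters and observed flaw outcomes and returning a flaw probability for new parameter settings. The parameter names and data are hypothetical; the exact model used in the work is not disclosed here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical observations: [booth_temperature_C, humidity_%, paint_viscosity_s]
X = np.array([
    [22.0, 55.0, 95.0],
    [24.5, 60.0, 102.0],
    [21.0, 48.0, 90.0],
    [26.0, 70.0, 110.0],
    [23.0, 52.0, 98.0],
    [27.5, 75.0, 115.0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = paint flaw observed

model = LogisticRegression().fit(X, y)

# Forecast the flaw probability for a candidate set of process parameters.
candidate = np.array([[23.5, 58.0, 100.0]])
print("Predicted flaw probability:", model.predict_proba(candidate)[0, 1])
```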
Gursch Heimo, Kern Roman
2016
Many different sensing, recording, and transmitting platforms are offered on today’s market for Internet of Things (IoT) applications. But taking and transmitting measurements is just one part of a complete system; long-term storage and processing of the recorded sensor values are also vital for IoT applications. Big Data technologies provide a rich variety of processing capabilities to analyse the recorded measurements. In this paper, an architecture for recording, searching, and analysing sensor measurements is proposed. This architecture combines existing IoT and Big Data technologies to bridge the gap between recording, transmission, and persistence of raw sensor data on one side, and the analysis of data on Hadoop clusters on the other side. The proposed framework emphasises scalability and persistence of measurements as well as easy access to the data from a variety of different data analytics tools. To achieve this, a distributed architecture is designed offering three different views on the recorded sensor readouts. The proposed architecture is not targeted at one specific use case, but provides a platform for a large number of different services.
Gursch Heimo, Ziak Hermann, Kröll Mark, Kern Roman
2016
Modern knowledge workers need to interact with a large number of different knowledge sources with restricted or public access. Knowledge workers are thus burdened with the need to familiarise themselves with and query each source separately. The EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project aims at developing a recommender system providing relevant and novel content to its users. Based on the user’s work context, the EEXCESS system can either automatically recommend useful content or support users by providing a single user interface for a variety of knowledge sources. In the design process of the EEXCESS system, recommendation quality, scalability, and security were the three most important criteria. This paper investigates the scalability achieved by the federated design of the EEXCESS recommender system. This means that content in the different sources is not replicated; instead, each source manages its content individually. Recommendations are generated based on the context describing the knowledge worker’s information need. Each source offers result candidates, which are merged and re-ranked into a single result list. This merging is done in a vector representation space to achieve high recommendation quality. To ensure security, user credentials can be set individually by each user for each source. Hence, access to the sources can be granted and revoked for each user and source individually. The scalable architecture of the EEXCESS system handles up to 100 requests querying up to 10 sources in parallel without notable performance deterioration. The re-ranking and merging of results have a smaller influence on the system's responsiveness than the average source response rates. The EEXCESS recommender system offers a common entry point for knowledge workers to a variety of different sources with only marginally longer response times than the individual sources on their own. Hence, familiarisation with individual sources and their query languages is not necessary.
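A minimal sketch of the merging and re-ranking step: result candidates returned by several sources are embedded in a common vector space and re-ranked by cosine similarity to the query context. The TF-IDF representation, sources, and texts are illustrative choices, not the EEXCESS system's actual representation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Result candidates as returned by three hypothetical federated sources.
candidates = [
    ("source_a", "Baroque painting collection of the museum"),
    ("source_b", "Open dataset on European cultural heritage sites"),
    ("source_c", "Tutorial on neural machine translation"),
]
query_context = "cultural heritage resources for an article on baroque art"

texts = [query_context] + [text for _, text in candidates]
vectors = TfidfVectorizer().fit_transform(texts)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

# Merge into a single list, re-ranked by similarity to the query context.
merged = sorted(zip(scores, candidates), key=lambda item: item[0], reverse=True)
for score, (source, text) in merged:
    print(f"{score:.2f}  [{source}]  {text}")
```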
Mutlu Belgin, Sabol Vedran, Gursch Heimo, Kern Roman
2016
Graphical interfaces and interactive visualisations are typical mediators between human users and data analytics systems. HCI researchers and developers have to be able to understand both human needs and back-end data analytics. Participants of our tutorial will learn how visualisation and interface design can be combined with data analytics to provide better visualisations. In the first of three parts, the participants will learn about visualisations and how to appropriately select them. In the second part, restrictions and opportunities associated with different data analytics systems will be discussed. In the final part, the participants will have the opportunity to develop visualisations and interface designs under given scenarios of data and system settings.
Gursch Heimo, Wuttei Andreas, Gangloff Theresa
2016
Highly optimised assembly lines are commonly used in various manufacturing domains, such as electronics, microchips, vehicles, and electric appliances. In the last decades, manufacturers have installed software systems to control and optimise their shop floor processes. Machine Learning can enhance those systems by providing new insights derived from the previously captured data. This paper provides an overview of Machine Learning fields and an introduction to manufacturing management systems. These are followed by a discussion of research projects in the field of applying Machine Learning solutions for condition monitoring, process control, scheduling, and predictive maintenance.
Horn Christopher, Gursch Heimo, Kern Roman, Cik Michael
2016
Models describing human travel patterns are indispensable for planning and operating road, rail, and public transportation networks. For most kinds of analyses in the field of transportation planning, origin-destination (OD) matrices are needed, which specify the travel demands between the origin and destination zones in the network. The preparation of OD matrices is traditionally a time-consuming and cumbersome task. The presented system, QZTool, reduces the necessary effort, as it is capable of generating OD matrices automatically. These matrices are produced starting from floating phone data (FPD) as raw input. This raw input is processed by a Hadoop-based big data system. A graphical user interface allows for easy usage and hides the complexity from the operator. For evaluation, we compare an FPD-based OD matrix to an OD matrix created by a traffic demand model. Results show that both matrices agree to a high degree, indicating that FPD-based OD matrices can be used to create new OD matrices, or to validate or amend existing ones.
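As an illustration of the final aggregation step, the following sketch builds an origin-destination matrix from individual trip records that have already been mapped to traffic zones. The zone names and trips are made up, and the Hadoop-based processing of the raw floating phone data is not shown:

```python
import numpy as np

zones = ["Graz", "Leoben", "Bruck"]  # illustrative traffic zones
zone_index = {zone: i for i, zone in enumerate(zones)}

# Trips already derived from floating phone data: (origin zone, destination zone).
trips = [("Graz", "Leoben"), ("Graz", "Bruck"), ("Leoben", "Graz"), ("Graz", "Leoben")]

# Count trips per origin-destination pair to obtain the travel demand matrix.
od_matrix = np.zeros((len(zones), len(zones)), dtype=int)
for origin, destination in trips:
    od_matrix[zone_index[origin], zone_index[destination]] += 1

print(zones)
print(od_matrix)
```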
Gursch Heimo, Ziak Hermann, Kern Roman
2015
The objective of the EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project is to develop a system that can automatically recommend helpful and novel content to knowledge workers. The EEXCESS system can be integrated into existing software user interfaces as plugins, which extract topics and suggest relevant material automatically. This recommendation process simplifies the information gathering of knowledge workers. Recommendations can also be triggered manually via web frontends. EEXCESS hides the potentially large number of knowledge sources by providing content suggestions semi- or fully automatically. Hence, users only have to be able to use the EEXCESS system and not each source individually. For each user, relevant sources can be set or auto-selected individually. EEXCESS offers open interfaces, making it easy to connect additional sources and user program plugins.