The assistant for the individual care of elderly people. Emma offers support in various areas of older people's lives in order to preserve their independence in their own homes. Safety and the best possible service are the top priorities. Emma organizes everyday life and creates quality time and security for the whole family.
Short presentation of the Know-Center's portfolio, based on a use case from the everyday life of a person in the future
Products fueled by data and machine learning can be a powerful way to solve users' needs. The opportunity extends far beyond the tech giants: companies of a range of sizes and across sectors are investing in their own data-powered products and business models. But the data component adds an extra layer of complexity. To tackle the challenge, companies should emphasize cross-functional collaboration, evaluate and prioritize data product opportunities with an eye to the long term, and start simple.
The possibilities and impossibilities of AI support
The possibilities and impossibilities of AI support
The possibilities and impossibilities of AI support
What's the connection between a chatbot and AI? The keynote gives an overview of the current state of AI and discusses research directions and future challenges of AI.
Panel discussion with representatives from industry, public services and research about the impact of AI on the Austrian economy
Field studies as evaluation method for socio-technical interventions in Technology-Enhanced Learning. Much research in TEL is design work, i.e., the research team designs an intervention that is intended to support learning. This intervention needs to be evaluated to show the extent to which this goal has been reached, and to gain the additional insights that are sought. Field studies are one main type of evaluation. They are challenging to set up, and in case of a bad study design they cannot easily be repeated due to the effort and cost of running a field study.

The goals of this lecture and workshop are:
- To provide a blueprint for field studies as evaluation method for socio-technical interventions in technology-enhanced learning
- To present a hierarchical principle of evaluating learning interventions, based on Kirkpatrick & Kirkpatrick: usage/observable activities, learning, impact on task/work performance, impact on the organization (in workplace learning; applicable to settings in which individual learning impacts a wider social entity)
- To have each student individually plan, in rough lines, a field study for their own PhD
- To discuss these plans with peers and the lecturer, as well as other senior researchers who may be present, i.e., students will get feedback on their own plan

The blueprint for field studies is to evaluate along a hierarchy of research questions/evaluation levels: First, one assesses the observable (learning) activities that are carried out, in particular how and whether participants adhered to the prescribed intervention; this helps understand the success of the intervention and makes it possible to identify problems. Second, one assesses concrete learning outcomes, i.e., the insights that are generated. Third, one assesses a change in behaviour, and fourth a change in performance. In parallel, a mix of qualitative and quantitative methods should be used: on the one hand, this allows statistical comparison (pre/post; between groups); on the other hand, one can gain in-depth explanatory insights.
(This is joint work with Rana Ali Amjad from Technical University of Munich.) The information bottleneck theory of neural networks has received a lot of attention in both machine learning and information theory. At the heart of this theory is the assumption that a good classifier creates representations that are minimal sufficient statistics, i.e., they share only as much mutual information with the input features as is necessary to correctly identify the class label. Indeed, it has been claimed that information-theoretic compression is a possible cause of generalization performance and a consequence of learning the weights using stochastic gradient descent. On the one hand, the claims set forth by this theory have been heavily disputed based on conflicting empirical evidence: there exist classes of invertible neural networks with state-of-the-art generalization performance; the compression phase also appears in full-batch learning; information-theoretic compression is an artifact of using a saturating activation function. On the other hand, several authors report that training neural networks using a cost function derived from the information bottleneck principle leads to representations that have desirable properties and yields improved operational capabilities, such as generalization performance and adversarial robustness.

In this work we provide yet another perspective on the information bottleneck theory of neural networks. With a focus on training deterministic (i.e., non-Bayesian) neural networks, we show that the information bottleneck framework suffers from two important shortcomings: First, for continuously distributed input features, the information-theoretic compression term is infinite for almost every choice of network weights, making this term problematic during optimization. The second and more important issue is that the information bottleneck functional is invariant under bijective transforms of the representation. Optimizing a neural network w.r.t. this functional thus yields representations that are informative about the class label, but that may still fail to satisfy desirable properties, such as allowing the use of simple decision functions or being robust against small perturbations of the input features. We show that there exist remedies for these shortcomings: including a decision rule or softmax layer, making the network stochastic by adding noise, or replacing the terms in the information bottleneck functional with more well-behaved quantities. We conclude by showing that the successes reported about training neural networks using the information bottleneck framework can be attributed to exactly these remedies.
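For reference, a minimal statement of the information bottleneck functional discussed above, in standard notation (input X, class label Y, learned representation T; the trade-off parameter β balances compression against preserved label information):

```latex
% Information bottleneck functional, minimized over encoders p(t|x):
\mathcal{L}_{\mathrm{IB}} = I(X;T) \;-\; \beta\, I(T;Y), \qquad \beta > 0.
% Invariance under a bijection g of the representation (the second shortcoming above):
I(X; g(T)) = I(X;T), \qquad I(g(T); Y) = I(T;Y).
```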
Adaptive reflection guidance can be understood as a specific kind of intelligent tutoring system, namely as a kind of intelligent mentoring system, as envisaged by Dimitrova (2006). These systems do not encode domain knowledge and learning strategies to a very fine-granular degree, but support the learner in developing the capability to learn in a self-directed manner, and to learn about a particular learning domain. In this lecture and demo, I will show a concrete modular in-app reflection guidance framework, and its instantiation in different research prototypes (Fessl et al., 2017). I will also discuss how such a system relates to the very wide fields of intelligent tutoring systems and adaptive and context-aware systems in general, inheriting open challenges from each of these fields. In particular, it connects to promising fields of future TEL research in finding the sweet spot between human and artificial intelligence.
Panel discussion with representatives from industry, public services and research about the application of AI in tourism (I stepped in for Wolfgang Kienreich)
The talk examines the status quo of AI and attempts to bring the topic down to earth. It covers the following points: (i) the history of AI, (ii) what AI can do today, and (iii) future developments of AI from a research perspective.
Combination of talk and workshop: generating ideas for data-driven use cases to improve public transport
Combination of talk and workshop: generating ideas for data-driven use cases to improve public transport
Combination of talk and workshop: generating ideas for data-driven use cases to improve public transport
"Quergedacht" ist eine Veranstaltungsreihe, bei der MitarbeiterInnen der Energie Graz innovative Themen und aktuelle Fragestellungen nähergebracht werden. Bei dem Vortrag wurde das Thema Künstliche Intelligenz präsentiert, wobei versucht wurde aufzuzeigen wo wir heute mit der – datengetriebenen – KI stehen (d.h. was KI heute kann und was nicht) und wohin die Reise aus Forschungssicht geht.
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
Agile, global enterprises need accurate and readily available information about customers, markets and competitors to formulate strategic decisions. We apply our expertise in collecting and processing information from open and closed sources to support key strategic functions such as technology observation, business intelligence and patent analysis. We provide design and implementation of innovative search solutions and intelligent dashboards that visually capture and present relevant information and support the data-driven decision-making process. Automated intelligence is now more than ever an important part of the future of data analytics; we therefore apply multiple techniques for automating data processing and analysis using the latest machine learning and artificial intelligence algorithms. We present several successful use cases of our strategic intelligence partnerships and future directions.
- Successful use cases for Competitive Intelligence
- Approaches to automation of information processing to gain competitive insights
- Future directions and the importance of AI and Deep Learning
Give you an overview of Competitive Intelligence. Introduce a framework for innovation from Uberbrands. Present Know-Center competencies and case studies. Discuss your challenges and collaboration possibilities.
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words because they develop strategies that enable them to read faster. This paper describes a study investigating recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of word exposure. Our study shows that, with exposure to words, participants significantly improved on recognition of the exposed words. The findings suggest that such a word exposure technique could be used during the training of novice users to boost recognition of a particular dictionary of words.
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in the recognition of both individual letters and words. To avoid such issues, a two-step optimisation method for the symbol encoding is proposed and validated in a second user study, in which eight participants used the optimised encoding with a seven-vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Previous research has demonstrated the feasibility of conveying vibrotactile encoded information efficiently using wearable devices. Users can understand vibrotactile encoded symbols and complex messages combining such symbols. Such wearable devices can find applicability in many multitasking use cases. Nevertheless, for multitasking, the perception and comprehension of vibrotactile information would need to be less attention-demanding and not interfere with other parallel tasks. We present a user study which investigates whether high-speed vibrotactile encoded messages can be perceived in the background while performing other concurrent attention-demanding primary tasks. The vibrotactile messages used in the study were limited to symbols representing letters of the English alphabet. We observed that users could very accurately comprehend such vibrotactile encoded messages in the background, and other parallel tasks did not affect users' performance. Additionally, the comprehension of such messages did not affect the performance of the concurrent primary task either. Our results promote the use of vibrotactile information transmission to facilitate multitasking.
Hand in hand with technologies such as machine learning and neural networks, artificial intelligence applications have become the preferred solution for dealing with large amounts of data. In this talk I propose to review how interactive systems are designed, and how artificial intelligence mechanisms that draw on data from human-machine interaction work. The topic will be presented with examples from network security and from new wearable computing systems. In the area of network security, intrusion detection systems are used which must be trained with data that describe hostile behaviour in terms of observable network characteristics. A human expert is in charge of associating malicious behaviour with certain network characteristics. We will present a system that learns the detection model in near real time as the expert labels data, while communicating the results of its predictions back to the expert. In the case of wearable computing, we will present machine learning applications for adapting tactile feedback mechanisms.
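A minimal sketch of the near-real-time labelling loop described above, assuming scikit-learn's incrementally trainable SGDClassifier; the feature extraction (next_flow) and the expert (expert_label) are hypothetical stand-ins:

```python
# Interactive intrusion-detection loop: the expert labels network flows one at
# a time, the model updates incrementally, and its predictions on new flows
# are shown back to the expert.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()        # linear model trained incrementally via partial_fit
classes = np.array([0, 1])     # 0 = benign, 1 = hostile

def next_flow():
    # Hypothetical stand-in for feature extraction from observed network traffic.
    return rng.normal(size=(1, 8))

def expert_label(x):
    # Hypothetical stand-in for the human expert's judgement.
    return int(x.sum() > 0)

for step in range(500):
    x = next_flow()
    if step > 0:
        # Feed the model's current prediction back to the expert.
        pred = model.predict(x)[0]
    y = expert_label(x)
    model.partial_fit(x, [y], classes=classes)  # near-real-time model update
```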
The LSZ CIO-Kongress is an IT event in Austria that fosters the direct exchange of information and experience among CIOs and managers from business departments, as well as between CIOs/CxOs and experts from vendor and service companies. The more than 450 participants are invited to contribute their expertise in the working groups and in the numerous other discussion rounds, following the Open Space principle.
Held on behalf of S. Lindstaedt; with contributions from G. Pirker (LEC GmbH)
In this moderated theme-island session, the topic of AI and its possible applications is examined and discussed in more detail: "What can AI already do today?", "Where are the limits of AI?", "How do I approach an AI project?", "What could a data-driven AI use case look like?"
The targeted procurement of information remains a fundamental challenge: employees want to receive the most current and complete information for accomplishing their tasks! The goal must therefore be to capture and interlink existing and new (process and expert) knowledge so that it can be offered proactively within the appropriate processes or tasks, thereby, for example, significantly reducing processing time while simultaneously raising processing quality and compliance.
Discussion points:
· Brief overview/summary of data management infrastructures and governance in industry
· How prepared/interested are companies for/in the European Open Science Cloud (EOSC)
· Relevance of FAIR principles for industry
· Challenges & possible solutions: university & industry cooperation in terms of sharing (research) data

Participants:
Prof. Stefanie Lindstaedt, CEO, Know-Center & Head of Institute of Interactive Systems and Data Science, TUG (Chair)
Ass.Prof. Viktoria Pammer-Schindler, Data-Driven Business, TUG & Know-Center
Christof Wolf-Brenner, Big Data Consultant, Know-Center
Andre Perchthaler, Director, Global Transportation Solutions, NXP Semiconductors
Dr. Josiane Xavier Parreira, Siemens Corporate Technology
Uncovering hidden suppliers and their complex relationships across the entire supply chain is difficult. Unexpected disruptions, e.g. earthquakes, volcanic eruptions, bankruptcies or nuclear disasters, have a huge impact on major supply chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely impact the delivery of a product. Therefore, it is crucial to constantly monitor for problems with both direct and indirect suppliers.
Agile, global enterprises need accurate and readily available information about customers, markets and competitors to formulate strategic decisions. We apply our expertise in collecting and processing information from open and closed sources to support key strategic functions such as technology observation, business intelligence and patent analysis. We provide design and implementation of innovative search solutions and intelligent dashboards that visually capture and present relevant information and support the data-driven decision-making process. Automated intelligence is now more than ever an important part of the future of data analytics; we therefore apply multiple techniques for automating data processing and analysis using the latest machine learning and artificial intelligence algorithms. We present several successful use cases of our strategic intelligence partnerships and future directions.
Uncovering hidden suppliers and their complex relationships across the entire supply chain is difficult. Unexpected disruptions, e.g. earthquakes, volcanic eruptions, bankruptcies or nuclear disasters, have a huge impact on major supply chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely impact the delivery of a product. Therefore, it is crucial to constantly monitor for problems with both direct and indirect suppliers.
Uncovering hidden suppliers and their complex relationships across the entire supply chain is difficult. Unexpected disruptions, e.g. earthquakes, volcanic eruptions, bankruptcies or nuclear disasters, have a huge impact on major supply chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely impact the delivery of a product. Therefore, it is crucial to constantly monitor for problems with both direct and indirect suppliers.
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods solve the problem best and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: (i) studying how RL methods learn to drive a racing car and (ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open-source handcrafted bots but also generalize to unknown tracks.
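A minimal sketch of the DDPG actor-critic setup for continuous racing control, assuming a telemetry vector as state and continuous actions (e.g., steering, throttle/brake); the dimensions and network sizes are illustrative, not those of the paper. Requires PyTorch.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 29, 3  # hypothetical telemetry/action sizes

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),  # continuous actions in [-1, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),  # Q(s, a)
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())

@torch.no_grad()
def soft_update(target, source, tau=0.005):
    # Polyak averaging of target network weights, as in DDPG.
    for tp, sp in zip(target.parameters(), source.parameters()):
        tp.mul_(1 - tau).add_(tau * sp)

# One critic update on a (hypothetical, random) replay batch:
s = torch.randn(64, STATE_DIM); a = torch.randn(64, ACTION_DIM)
r = torch.randn(64, 1); s2 = torch.randn(64, STATE_DIM)
with torch.no_grad():
    y = r + 0.99 * target_critic(s2, target_actor(s2))  # TD target
loss = nn.functional.mse_loss(critic(s, a), y)
```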
In this talk we showed how the Data Value Check, a systematic approach and methodology developed by the Know-Center to generate data-driven use cases, helped our partner Reval Austria to identify AI-based opportunities for their software product. One of these use cases was finally implemented in Reval's software and released as a new feature.
Three core areas in data-driven business: data-driven business models, knowledge management, technology-enhanced learning.
Digitalising apprenticeship training: learning guidance, chatbots and learning analytics.
Use case: an online learning platform for apprentices.
Research opportunities (the target group is under-researched):
1. Computer usage & ICT self-efficacy
2. Communities of practice, identities as learners
Reflection guidance technologies:
3. Rebo, the reflection guidance chatbot
This keynote speech discusses the role of AI at the intersection of the "data world" and the "knowledge world". It emphasizes the challenges of current data-driven technologies and AI, and how difficult it is to bring the two aspects, (i) extracting knowledge from data and (ii) using knowledge (domain know-how) to analyze data, together in an efficient and beneficial way.
Identifying and realising value within data helps organisations to unlock unused business potential. With the amount of generated data increasing by the minute, it is essential to identify the most promising use cases. The Know-Center offers the Data Value Check, a guided process designed to help companies on their journey to becoming data-driven. Oliver Pimas from Know-Center GmbH will present a keynote about identifying and evaluating promising data-driven use cases.
Contrastive loss terms allow learning good representations in deep learning without labels, based on the data alone. These representations can later be used, for example, to train classifiers with a greatly (about 100 times) reduced need for labels. For example, the NT-Xent loss learns good representations in an unsupervised manner via data augmentation and negative sampling. We show how to use this loss to train a network, present choices for data augmentation, discuss where it is and is not useful, and present some use cases the presenters have worked on with this loss term.
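A minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss mentioned above, in SimCLR-style form: two augmented views per input, each view's positive is its counterpart, and all other views in the batch serve as negatives. Requires PyTorch; batch size and embedding dimension are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) projections of two augmented views of the same N inputs."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.size(0)
    # Mask self-similarity so a view is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is its counterpart view: i+N for i<N, i-N otherwise.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage: z1, z2 would come from an encoder applied to two augmentations of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```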
Declarative machine learning (ML) aims to simplify the development and usage of large-scale ML algorithms. In SystemML, data scientists specify ML algorithms in a high-level language with R-like syntax and the system automatically generates hybrid execution plans that combine single-node, in-memory operations and distributed operations on Spark. In a first part, we motivate declarative ML and provide an up-to-date overview of SystemML including its APIs for different deployments. Since it was rarely mentioned before, we specifically discuss a programmatic API for low-latency scoring and its usage in containerized and data-parallel environments. In a second part, we then discuss selected research results for large-scale ML, specifically, compressed linear algebra (CLA) and automatic operator fusion. CLA aims to fit larger datasets into available memory by applying lightweight database compression schemes to matrices and executing linear algebra operations directly on the compressed representations. In contrast, automatic operator fusion aims at avoiding materialized intermediates and unnecessary scans, as well as sparsity exploitation by optimizing fusion plans and generating code for these custom fused operators. Together, CLA and automatic operator fusion achieve significant end-to-end improvements as they address orthogonal bottlenecks of large-scale ML algorithms.
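To illustrate the high-level specification, a toy sketch of invoking an R-like DML script through SystemML's Python MLContext API (as of Apache SystemML 1.x; assumes a running SparkSession `spark` and the `systemml` package; the script itself is a made-up example, not one of SystemML's shipped algorithms):

```python
import numpy as np
from systemml import MLContext, dml

ml = MLContext(spark)
# R-like DML: column means of the input matrix; SystemML compiles this
# into a hybrid plan of local and distributed operations.
script = dml("""
    means = colMeans(X)
""").input(X=np.array([[1.0, 2.0], [3.0, 4.0]])).output("means")
means = ml.execute(script).get("means").toNumPy()
print(means)
```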
Machine learning (ML) applications profoundly transform our private lives and many domains such as health care, finance, transportation, media, logistics, production, and information technology itself. As motivation and background, we will first share lessons learned from building Apache SystemML for declarative, large-scale ML. SystemML compiles R-like scripts into hybrid runtime plans of local, in-memory operations on CPUs and GPUs, as well as distributed operations on data-parallel frameworks like Spark. This high-level specification simplifies the development of ML algorithms, but lacks support for important tasks of the end-to-end data science lifecycle and for users with different expertise. Setting out to overcome these limitations, we introduce SystemDS, a new open-source ML system that aims to support the end-to-end data science lifecycle from data integration, cleaning, and feature engineering, over efficient local, distributed, and federated ML model training, to deployment and serving. In this talk, we will present the preliminary system architecture including the language abstractions and underlying data model, as well as selected features such as fine-grained lineage tracing and its exploitation for model versioning, reusing intermediates, and debugging model training runs.
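A conceptual illustration (hypothetical; not the SystemDS API) of how fine-grained lineage tracing can be exploited to reuse intermediates: each operation records a lineage key, and a result is served from the cache when the same lineage item reappears. For simplicity, this sketch keys on the operation and its input contents; real lineage tracing keys on the operation DAG instead.

```python
import numpy as np

cache = {}  # lineage key -> materialized intermediate

def lineage_key(op, *args):
    # A lineage item: the operation name plus a fingerprint of its inputs.
    parts = [op]
    for a in args:
        parts.append(a.tobytes() if isinstance(a, np.ndarray) else repr(a))
    return hash(tuple(parts))

def traced(op, fn, *args):
    key = lineage_key(op, *args)
    if key not in cache:
        cache[key] = fn(*args)  # compute once; identical lineage reuses the result
    return cache[key]

X = np.random.rand(100, 10)
A = traced("tsmm", lambda M: M.T @ M, X)  # computes X'X
B = traced("tsmm", lambda M: M.T @ M, X)  # served from the lineage cache
assert A is B
```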