DDAI - Data Driven Artificial Intelligence

The goal of our COMET module ‘DDAI – Privacy Preserving Data Driven Artificial Intelligence’ is to develop secure, verifiable and explainable AI that simultaneously protects privacy.

Project Kick-off & Press Conference

Our research and results within the DDAI module should drastically lower the entry barrier for companies and individuals that want to use privacy-preserving AI for data analysis in order to secure a competitive advantage. The module covers all stages of the data processing pipeline, from verifying data sources to cryptographic methods for secure data processing, and offers AI users a better, more comprehensible basis for decision-making.

“Our work will contribute significantly to acceptance and trust in AI.”

Stefanie Lindstaedt

CEO

Data-Driven AI

The new COMET module of the Austrian Research Promotion Agency (FFG) makes a significant contribution to development and innovation. Its aim is to establish forward-looking research topics and build up new areas of strength: through research at the highest level, new fields are opened up that go far beyond the current state of the art.

The module, which is endowed with 4 million euros, will run for 4 years. The official project kick-off including a press conference took place on 10 February 2020 in Graz.

“With the COMET module on Artificial Intelligence, the focus on AI can be specifically expanded at Know-Center in Graz.”

Henrietta Egerth and Klaus Pseiner, Managing Directors of the Austrian Research Promotion Agency FFG


Research areas

1. Privacy-oriented AI algorithms

We develop secure AI methods that do not reveal sensitive information and allow the analysis of encrypted data. This enables, for example, secure cloud computing: AI models can be exchanged more easily with customers and suppliers, and private and public databases can be combined without security risks. We attach great importance to data protection and are working on a new generation of secure, confidentiality-preserving AI algorithms.
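
As a rough illustration of what "analysis of encrypted data" can mean in practice, the sketch below uses an additively homomorphic scheme (Paillier) so that an untrusted cloud can aggregate values it can never read. It assumes the open-source python-paillier package (phe); the salary-aggregation scenario is hypothetical and not DDAI's actual method.

```python
# Minimal sketch of additively homomorphic encryption, one cryptographic
# building block behind analysing encrypted data. Assumes the
# python-paillier package (pip install phe); illustrative only.
from phe import paillier

# Data owner: generate a keypair and encrypt sensitive values.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
salaries = [52_000, 61_500, 48_200]
encrypted = [public_key.encrypt(s) for s in salaries]

# Untrusted cloud: computes on ciphertexts only. It never sees the
# plaintexts, yet can produce an encrypted sum and a scaled mean.
encrypted_sum = sum(encrypted[1:], encrypted[0])
encrypted_mean = encrypted_sum * (1 / len(salaries))

# Data owner: decrypt the aggregate results locally.
print(private_key.decrypt(encrypted_sum))   # 161700
print(private_key.decrypt(encrypted_mean))  # ~53900.0
```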

2. Explainable AI for analysts

AI solutions should not be a black box for their users. We are working on making the decisions of the algorithms used more understandable for analysts without disclosing confidential data. This is intended to strengthen trust in AI decisions, to visualize models better, and to make the corresponding results more explainable.
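
One widely used analyst-facing technique, shown here purely as an illustration and not as the module's own method, is permutation feature importance: it tells an analyst which inputs a trained model actually relies on, without exposing the training data itself. The sketch assumes scikit-learn and its bundled breast-cancer dataset.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Illustrative only -- DDAI's explainability methods are a research topic.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```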

3. Explainable AI for users

The interaction of end users with AI is becoming increasingly important, and we are researching how to improve this interaction in the future. For example, we will improve the explainability of personalized recommender systems and develop new learning paradigms for machine learning to better train and empower employees and users in dealing with AI.
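
A minimal, hypothetical example of such an explainable recommendation: item-based collaborative filtering on a toy ratings matrix, where the already-liked item most similar to the recommendation doubles as the "because you liked ..." explanation. All data and item names below are made up for illustration.

```python
# Minimal sketch of an explainable item-based recommender (toy data).
import numpy as np

items = ["Film A", "Film B", "Film C", "Film D"]
# Rows = users, columns = items; 1 = liked, 0 = not seen.
ratings = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

user = ratings[0]             # this user liked Film A and Film B
scores = sim @ user           # score items by similarity to liked ones
scores[user == 1] = -np.inf   # do not re-recommend already-liked items
best = int(np.argmax(scores))

# The explanation: the liked item most similar to the recommendation.
liked = np.where(user == 1)[0]
reason = liked[np.argmax(sim[best, liked])]
print(f"Recommended: {items[best]} -- because you liked {items[reason]}")
```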

Research topics

  • Cryptography and Cryptanalysis
  • Side-Channel Attacks
  • Machine Learning and Deep Learning
  • System Architecture
  • Homomorphic Encryption
  • Recommender Systems
  • (Social) Data Science
  • Data and Information Visualization
  • Human-Computer Interaction
  • Data-Driven Business

Scientific Partners

TU Graz institutes: IAIK (Prof. Christian Rechberger) and ISDS (Prof. Stefanie Lindstaedt)

KU Leuven (Cryptography, Prof. Nigel Smart)

University of Passau (Machine Learning, Prof. Michael Granitzer)

University of Twente (Explainable AI, Asst. Prof. Christin Seifert)

Industry Partners