Study level

  • PhD
  • Master of Philosophy
  • Honours
  • Vacation research experience scheme

Faculty/School

Faculty of Science

School of Information Systems

Topic status

We're looking for students to study this topic.

Supervisors

Associate Professor Chun Ouyang
Position: Associate Professor
Division / Faculty: Faculty of Science

Adjunct Associate Professor Catarina Pinto Moreira
Position: Adjunct Associate Professor
Division / Faculty: Faculty of Science

Overview

Existing machine learning-based intelligent systems are autonomous and opaque (often described as “black-box” systems). This opacity has led to a lack of trust in AI adoption and, consequently, a widening gap between machines and humans.

In 2018, the European Union’s General Data Protection Regulation (GDPR) came into effect. It introduces a right to explanation, entitling human individuals to obtain “meaningful explanations of the logic involved” when a decision is made by an automated system. To comply, an intelligent system needs to be transparent and is expected to provide human-understandable explanations.

In November 2019, the Australian Government established Australia’s Artificial Intelligence (AI) Ethics Framework to guide businesses and governments in responsibly designing, developing and implementing AI systems. According to the AI ethics principles set out in the Framework, an intelligent system is expected to uphold human-centred values and support fairness, reliability, transparency and explainability.

Research activities

The research activities can be scoped to suit different types of research student projects (including VRES, Honours, MPhil and PhD). Students will conduct these activities mainly at QUT and may also have the chance to collaborate with researchers from our external academic partners.

The research aims to build human-centric intelligent systems by (i) devising new algorithms and techniques to design robust, reliable and trustworthy systems underpinned by machine intelligence, (ii) developing new theories and methods to support transparency and explainability, and (iii) incorporating human behaviour and input into the design and development of intelligent systems.

For more information about this research project, check out the world-leading research initiative on eXplainable Analytics for Machine Intelligence (XAMI) at https://www.xami-lab.org/
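
To make the notion of explainability concrete, below is a minimal sketch of one standard post-hoc explanation technique, permutation feature importance, applied to a trained black-box classifier. It uses only stock scikit-learn calls; the random-forest model and the built-in dataset are illustrative assumptions, not part of the project itself.

  # Permutation feature importance: shuffle each input feature in turn and
  # measure the drop in held-out accuracy; larger drops indicate features
  # the black-box model relies on more heavily. Illustrative example only.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance
  from sklearn.model_selection import train_test_split

  X, y = load_breast_cancer(return_X_y=True, as_frame=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Train an opaque ("black-box") model.
  model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

  # Explain it post hoc: n_repeats shuffles per feature give stable estimates.
  result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                  random_state=0)

  # Report the five features the model depends on most.
  ranked = sorted(zip(X.columns, result.importances_mean,
                      result.importances_std),
                  key=lambda t: t[1], reverse=True)
  for name, mean, std in ranked[:5]:
      print(f"{name}: {mean:.3f} +/- {std:.3f}")

A ranking like this is a simple, human-understandable account of which inputs drive a model’s decisions, a baseline that research aims (ii) and (iii) above seek to extend with richer, human-centred explanation methods.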

Outcomes

  • New algorithms and techniques to design robust, reliable, and trustworthy systems underpinned by machine intelligence.
  • New theories and methods to support transparency and explainability.
  • New approaches to incorporating human behaviour and input in the design and development of intelligent systems.

Skills and experience

  • Knowledge of data mining and machine/deep learning.
  • Knowledge of human-computer interaction (optional).
  • Problem-solving and logical thinking capabilities.
  • Programming skills in Python.
  • Academic writing skills.

Scholarships

You may be eligible to apply for a research scholarship.

Explore our research scholarships

Contact

Contact the supervisor for more information.