FEIT Research Project Database

Interpretable AI


Project Leader: Saman Halgamuge
Collaborators: Damith Senanayake, Wei Wang
Sponsors: Australian Research Council
Primary Contact: Saman Halgamuge (saman@unimelb.edu.au)
Keywords: artificial intelligence; autonomous systems; deep learning; machine learning; optimisation
Disciplines: Mechanical Engineering
Domains:

To reduce the development cost of Deep Neural Networks (DNNs) and to democratise their use, it has been proposed to automate DNN design, giving rise to the emerging field of automated machine learning (Auto-ML). Existing Auto-ML methods attempt to optimise every step of the data analysis pipeline, including data preparation, feature engineering, model generation, training, and evaluation. Among them, Neural Architecture Search (NAS) methods explicitly find DNN architectures for a given supervised learning task by encoding each candidate architecture as a solution in a search space and treating architecture design as an optimisation problem. Growing neural network architectures, instead of ‘searching for the best’, has been our alternative strategy for this problem.

Interpreting such automatically designed DNNs is of significant benefit to many applications, as it allows existing domain and scientific knowledge to be integrated with the knowledge extracted from data. Explainable AI (XAI) is the emerging research area most strongly connected to the level of interpretation we envisage.
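To make the "encode an architecture in a search space, then optimise" formulation above concrete, the following minimal Python sketch encodes each candidate architecture as a set of discrete choices and optimises over them with random search. The search space, the evaluate() proxy, and random search itself are illustrative assumptions for exposition only, not the project's actual method (which grows architectures rather than searching over fixed encodings).

import random

# Hypothetical discrete search space: each candidate architecture is a
# combination of one choice per dimension (depth, width, activation).
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "layer_width": [64, 128, 256],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture(rng):
    """Encode a candidate architecture as one choice per dimension."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(architecture):
    """Placeholder fitness: a real NAS method would train the candidate
    DNN and return its validation performance. A toy score keeps this
    sketch self-contained and runnable."""
    return architecture["num_layers"] * architecture["layer_width"]

def random_search(budget=20, seed=0):
    """Treat architecture design as optimisation: sample `budget`
    candidates from the search space and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = sample_architecture(rng)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    best, score = random_search()
    print("Best candidate:", best, "score:", score)

In practice the evaluation step dominates the cost, since each candidate must be trained; this is one motivation for alternatives such as growing architectures incrementally rather than repeatedly evaluating independent candidates.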

A PhD candidate is expected to have exceptional analytical skills and to have completed an undergraduate degree with a first-class average in Engineering, Computer Science or Mathematics from a reputable university.

If you cannot explain the actions of AI, you do not understand it!