MSE Research Project Database

Adversarial Machine Learning


Project Leader: Ben Rubinstein
Staff: Christopher Leckie, Tansu Alpcan, Sarah Erfani, Yi Han
Collaborators: Olivier De Vel (DSTG), Paul Montague (DSTG), Tamas Abraham (DSTG), Richard Nock (Data61), Aditya Menon (Data61), Jun Zhang (Deakin), Bo Li (Berkeley), Doug Tygar (Berkeley), Anthony Joseph (Berkeley), Blaine Nelson (Google)
Sponsors: Defence Science and Technology Group, Data61/CSIRO, Australian Research Council, Future of Life Institute
Primary Contact: Ben Rubinstein (benjamin.rubinstein@unimelb.edu.au)
Keywords: artificial intelligence; autonomous systems; computer security; machine learning; optimisation
Disciplines: Computing and Information Systems
Domains: Networks and data in society

The very adaptability that lets machine learning algorithms deliver utility in consumer electronics, web services, and clinical medicine also exposes them to tampering by malicious attackers. Sitting at the intersection of machine learning, computer security, and game theory, adversarial machine learning explores attacks on learning algorithms that seek to influence learned models or predictions, for example by poisoning training data, as well as defences against such tampering. Key applications include cybersecurity and trusted autonomous systems, as well as robustness more broadly. Recent examples of ideas coming out of the adversarial machine learning literature include generative adversarial networks. This project spans the spectrum from fundamental theoretical understanding of learning in adversarial settings to practical applications in computer vision, networks, cyber operations, and more.
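
As a concrete illustration of the poisoning threat model mentioned above, the following minimal sketch (an illustrative assumption, not code from this project) flips the labels of a random fraction of training points and measures how a logistic regression model's test accuracy degrades. It uses scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task (stand-in for a real dataset).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(y, fraction, rng):
    """Label-flipping poisoning attack: flip a random fraction of labels."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Train on increasingly poisoned data; evaluate on clean test data.
for fraction in [0.0, 0.1, 0.2, 0.4]:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poison fraction {fraction:.1f}: "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```

Even this crude label-flipping attack degrades clean-data accuracy as the poisoning fraction grows; detecting or bounding exactly this kind of influence is what poisoning defences aim to achieve.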

Further information: http://bipr.net

Figure: Adversarial machine learning considers the robustness of machine learning algorithms to tampering.
Figure: An attack trajectory.