MSE Research Project Database

Situated learning for collaboration across language barriers


Project Leader: Greg Wadley
Staff: Trevor Cohn
Collaborators: Steven Bird (Charles Darwin University)
Primary Contact: Greg Wadley (greg.wadley@unimelb.edu.au)
Keywords: computational linguistics; interaction design; machine learning; mobile phones
Disciplines: Computing and Information Systems
Domains:

Background

People working in development are often deployed to remote locations where they work alongside locals who speak an unwritten minority language. Outsiders and locals share know-how and pick up phrases in each other's languages, performing a kind of situated learning of language and culture. This situation is found across the world: in developing countries, in border zones, and in Indigenous communities.

Aim

This project will investigate new methods for cross-lingual collaboration, drawing on the state of the art in speech processing, deep learning, and interaction design. The research will be evaluated in terms of the effectiveness of the interaction, the acquisition of language, and the quantity of language data collected. The ultimate goal is to contribute to the grand challenge of sustaining the world's linguistic diversity.

PhD candidates

We’re looking for outstanding and highly motivated PhD candidates with personal experience of linguistic diversity. Successful applicants will divide their time between Charles Darwin University and the University of Melbourne, undertaking a PhD at CDU under joint supervision by researchers at Melbourne. Candidates will be supported to conduct fieldwork in a remote Indigenous community. Indigenous candidates are strongly encouraged to apply. Candidates are expected to have a degree in computer science or a related discipline, advanced programming skills, and a background in one or more of machine learning, speech processing, and interaction design.

The deadline for scholarship applications is 31 October 2018.

Prior work

The project will build on previous work in the following areas:

  • mobile platforms for collecting spoken language data [6, 7]
  • respeaking as a technique for improving the value of recordings made ‘in the wild’ and as an alternative to traditional transcription practices [14, 16]
  • machine learning of structure in phrase-aligned bilingual speech recordings [2, 3, 4, 8, 9, 10, 11] (see the sketch after this list)
  • participatory design of mobile technologies for working with minority languages [5]
  • designing with Indigenous communities [12, 13, 15]
  • managing multilingual databases of text, speech and images [1].
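To make the first three strands concrete, the sketch below shows one plausible way to represent a phrase-aligned bilingual recording: a source recording segmented into phrases, each optionally paired with a careful respeaking and an oral translation. It is a minimal Python illustration; the class names, fields, and file paths are assumptions for exposition, not the schema of Aikuma or any other existing tool.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Phrase:
        """One phrase of source speech, with optional enrichments."""
        start: float                             # offset into the source recording (seconds)
        end: float                               # end offset (seconds)
        respeaking_path: Optional[str] = None    # careful re-utterance of the phrase [14, 16]
        translation_path: Optional[str] = None   # oral translation into a contact language
        transcript: Optional[str] = None         # written transcription, added later if at all

    @dataclass
    class Recording:
        """A field recording made 'in the wild', segmented into phrases."""
        audio_path: str                          # path to the original recording
        language: str                            # ISO 639-3 code of the source language
        speaker: str                             # anonymised speaker identifier
        phrases: List[Phrase] = field(default_factory=list)

    # Hypothetical usage: one phrase with respoken and translated versions.
    rec = Recording(audio_path="story_001.wav", language="xxx", speaker="S01")
    rec.phrases.append(Phrase(start=0.0, end=4.2,
                              respeaking_path="story_001_p1_respoken.wav",
                              translation_path="story_001_p1_translation.wav"))

Pairs of respoken and translated audio segments stored this way are roughly the kind of phrase-aligned bilingual input consumed by the alignment and word-discovery models of [2, 3, 4, 10, 11].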

Some recent indicative PhD theses include:

  • Computer Supported Collaborative Language Documentation (Florian Hanke, 2017)
  • Automatic Understanding of Unwritten Languages (Oliver Adams, 2018)
  • Collecter, Transcrire, Analyser : quand la Machine Assiste le Linguiste dans son Travail de Terrain [Collecting, Transcribing, Analysing: When the Machine Assists the Linguist in Fieldwork] (Elodie Gauthier, 2018)
  • Enriching Endangered Language Resources using Translations (Antonios Anastasopoulos, in prep)
  • Digital Tool Deployment for Language Documentation (Mat Bettinson, in prep)
  • Bayesian and Neural Modeling for Multi-Level and Crosslingual Alignment (Pierre Godard, in prep).

References

[1] Abney and Bird. The Human Language Project: building a universal corpus of the world’s languages. Proc ACL 2010.

[2] Adams, Neubig, et al. Learning a translation model from word lattices. Interspeech 2016.

[3] Anastasopoulos, Bansal, et al. Spoken term discovery for language documentation using translations. Proc Speech-Centric NLP, 2017.

[4] Anastasopoulos and Chiang. A case study on using speech-to-translation alignments for language documentation. Workshop on Computational Methods for Endangered Languages, 2017.

[5] Bird. Designing mobile applications for endangered languages. Oxford Handbook of Endangered Languages, 2018.

[6] Bird, Hanke, et al. Aikuma: A mobile app for collaborative language documentation. Workshop on Computational Methods for Endangered Languages. ACL, 2014.

[7] Blachon, Gauthier, et al. Parallel speech collection for under-resourced language studies using the Lig-Aikuma mobile device app. Workshop on Spoken Language Technologies for Under-resourced languages, 2016.

[8] Do, Chen, et al. Multitask learning for phone recognition of under-resourced languages using mismatched transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018.

[9] Dunbar, Cao, et al. The Zero Resource Speech Challenge 2017. In Automatic Speech Recognition and Understanding, 2017.

[10] Duong, Anastasopoulos, et al. An attentional model for speech translation without transcription. NAACL 2016.

[11] Godard, Adda, et al. Preliminary experiments on unsupervised word discovery in Mboshi. Interspeech 2016.

[12] Irani, Vertesi, et al. Postcolonial computing: a lens on design and development. Proc CHI 2010.

[13] Lawrence, Bird, et al. #thismymob: Digital land rights and reconnecting Indigenous communities. ARC Discovery Indigenous, 2017-19.

[14] Liberman, Yuan, et al. Using multiple versions of speech input in phone recognition. ICASSP, 2013.

[15] Winschiers-Theophilus, Chivuno-Kuria, et al. Being participated: a community approach. Proc Participatory Design Conference, 2010.

[16] Woodbury. Defining documentary linguistics. Language Documentation and Description, 2003.

Image: Translating stories told in Kun-barlang, a language spoken on Goulburn Island with 20 speakers remaining.