Sign language AI to focus on real Deaf conversations, not just interpreter data
A £3.5 million UK-Japan research project will transform sign language AI by training systems on real conversations between Deaf people rather than on interpreted signing.
The five-year collaboration, led by the University of Surrey's Professor Richard Bowden, aims to develop human-centred artificial intelligence and augmented reality systems for real-time translation across British Sign Language, Japanese Sign Language, English and Japanese.
Understanding Multilingual Communication Spaces (UMCS) will focus on natural conversational data from Deaf signers. The research team will capture how people really communicate, including turn-taking, backchannels, repair strategies and shared visual attention.
Richard Bowden, Professor of Computer Vision and Machine Learning at the Centre for Vision, Speech and Signal Processing at the University of Surrey, said:
"Most AI research on sign language has used video of interpreters signing to cameras. We know that's not how Deaf people naturally communicate. What excites me about this project is that we're working with authentic conversations between Deaf signers. That will give us much richer insight into how people really interact – and help us build AI systems that reflect that complexity."
UMCS brings together an international team of Deaf and hearing researchers from the UK and Japan.
Industrial partners for the project include Signapse Ltd (UK), a University of Surrey start-up, and NHK Enterprises (Japan); both will contribute expertise in translation technologies and sign avatar systems to support real-world deployment.
Professor Mayumi Bono said:
"Sign language corpora have been built to capture natural Deaf-to-Deaf interaction, yet they remain largely unused in AI research because today's AI systems demand large-scale, text-linked data. As the field moves from 'corpus to dataset', researchers are calling for an inclusive science that bridges linguistics and AI while centring on the lived realities and linguistic intuitions of Deaf signers."
The project is jointly funded by UK Research and Innovation (UKRI) through the Engineering and Physical Sciences Research Council (EPSRC) and the Japan Science and Technology Agency (JST). UMCS is part of the Japan–UK Joint Call for Collaborations in Advancing Human-Centred AI.
The total combined investment from both funders is approximately £3.5 million (¥700 million), supporting researcher exchanges, data collection, AI model development and community co-design from 2026 to 2031.
Regions: Europe, United Kingdom, Asia, Japan
Keywords: Applied science, Artificial Intelligence, Computing, Humanities, Linguistics