Department of Computational Linguistics: Language, Technology and Accessibility


Language, Technology and Accessibility

Welcome to the webpage of our chair!

Our chair deals with language-based assistive technologies and digital accessibility. Our focus is on basic and application-oriented research.
    
We subscribe to a broad definition of language and communication, in line with the UN Convention on the Rights of Persons with Disabilities (UN CRPD); as such, we deal with spoken language (text and speech), sign language, simplified language, Braille, pictographs, etc.

We combine language and communication with technology and accessibility in two ways:

  1. We develop language-based technologies, most often relying on deep learning (artificial intelligence) approaches.
  2. We investigate the reception of these technologies among users, e.g., through comprehension studies.

Our technologies focus on the contexts of hearing impairments, visual impairments, cognitive impairments, and language disorders.

The group is headed by Prof. Dr. Sarah Ebling.

Hallmarks of our group


[Image: Visualization of the process of translating text to sign language poses]

Multimodality

We deal with assistive technologies and aspects of digital accessibility across modalities, e.g., with different production modalities (manual and non-manual components) in sign languages (Allwood, 2009), with text and video as part of automatic translation of audio descriptions, with text and images as part of automatic text simplification, or with text and pictographs.

[Image: Illustration of two hands shaking]

Multidisciplinarity

We combine methods and techniques from the disciplines of language technology, linguistics, computer science (including computer vision), and special education. We collaborate with researchers from these and other disciplines, such as ethics, psychology, rehabilitation sciences, and media and communication sciences.

News