Project Objectives
This project proposes to incorporate visual synthesis technologies that generate articulatory motions in synchrony with synthetic speech. These technologies will be used to extend an existing computer-aided pronunciation training (CAPT) system by generating audiovisual corrective feedback that facilitates pronunciation training. Speech segments corresponding to detected mispronunciations will be synthesized with hyper-articulation, where the degree of hyper-articulation should balance clearer visualization against excessive distortion. We will build on our existing CAPT platform (referred to as the Enunciate system), which is primarily audio-based and has been made accessible across the CUHK campus.
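To make the visualization/distortion trade-off concrete, one possible realization is to scale articulatory parameter trajectories away from a neutral posture and cap the resulting deviation. The sketch below is purely illustrative: the function and parameter names (hyper_articulate, alpha, max_dev) and all values are assumptions for exposition, not part of the Enunciate system or the proposed synthesis technology.

```python
# Illustrative sketch only: hyper-articulation by scaling articulatory
# trajectories away from a neutral posture. All names and values here
# are hypothetical, not taken from the proposed system.
import numpy as np

def hyper_articulate(trajectory, neutral, alpha=1.4, max_dev=1.0):
    """Exaggerate an articulatory trajectory.

    trajectory: (frames, params) array of articulatory parameters
                (e.g. lip aperture, jaw opening) over time.
    neutral:    (params,) neutral (rest) configuration.
    alpha:      exaggeration factor; alpha > 1 hyper-articulates.
    max_dev:    cap on deviation from neutral, limiting distortion.
    """
    deviation = trajectory - neutral              # movement away from rest
    scaled = alpha * deviation                    # exaggerate the movement
    clipped = np.clip(scaled, -max_dev, max_dev)  # avoid excessive distortion
    return neutral + clipped

# Example: exaggerate two articulatory parameters over a short segment.
neutral = np.array([0.2, 0.1])
segment = np.array([[0.2, 0.1], [0.5, 0.4], [0.7, 0.6], [0.3, 0.2]])
print(hyper_articulate(segment, neutral, alpha=1.5))
```

In such a scheme, alpha would control how strongly the articulation is exaggerated for the learner, while max_dev would guard against the excessive distortion mentioned above.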
Key deliverables expected and timelines for completion
Acoustic-phonetic analysis of Chinese-accented English speech (Jan to Dec 2013)
Development of Text-to-Visual Speech Synthesis Technology (Jan to Dec 2014)
Integration with the Enunciate System and testing with users (Jan to Dec 2015)
Arrangements for evaluation of project deliverables/outcomes
The system will be made accessible across the CUHK campus, and system logs will be recorded to inform further development. We will also liaise with the English Language Teaching Unit and the Independent Learning Centre to arrange in-class trials.
Means for disseminating project deliverables/outcomes
We will list the system in repositories of electronic learning resources at CUHK. We will also submit papers to relevant journals.