ISCA Archive Interspeech 2007

Automatic large-scale oral language proficiency assessment

Febe de Wet, Christa van der Walt, Thomas Niesler

We describe first results obtained during the development of an automatic system for the assessment of spoken English proficiency of university students. The ultimate aim of this system is to allow fast, consistent and objective assessment of oral proficiency for the purpose of placing students in courses appropriate to their language skills. Rate of speech (ROS) was chosen as an indicator of fluency for a number of oral language exercises. In a test involving 106 student subjects, the assessments of five human raters are compared with evaluations based on automatically derived ROS scores. It is found that, although the ROS is estimated accurately, the correlation between human assessments and the ROS scores varies between 0.5 and 0.6. However, the results also indicate that only two of the five human raters were consistent in their appraisals, and that there was only mild inter-rater agreement.
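As a rough illustration of the comparison described above (not the authors' implementation), the sketch below assumes ROS is defined as phones uttered per second of detected speech, obtainable from a forced alignment, and correlates the resulting scores with human ratings using a plain Pearson correlation. All function names and numbers are hypothetical.

# Minimal sketch, assuming ROS = phones per second of (non-silence) speech.
# This is an illustrative assumption; the paper may define ROS differently.
import numpy as np

def rate_of_speech(num_phones: int, speech_duration_s: float) -> float:
    """ROS score for one utterance: phones per second of speech."""
    return num_phones / speech_duration_s

def pearson_correlation(x, y) -> float:
    """Pearson correlation between automatic scores and human ratings."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

if __name__ == "__main__":
    # Hypothetical data: phone counts and speech durations for a few
    # utterances, plus the mean human rating for the same utterances.
    ros_scores = [rate_of_speech(n, d) for n, d in [(52, 6.1), (40, 6.5), (61, 5.8)]]
    human_ratings = [3.5, 2.0, 4.0]
    print("correlation:", pearson_correlation(ros_scores, human_ratings))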


doi: 10.21437/Interspeech.2007-90

Cite as: de Wet, F., van der Walt, C., Niesler, T. (2007) Automatic large-scale oral language proficiency assessment. Proc. Interspeech 2007, 218-221, doi: 10.21437/Interspeech.2007-90

@inproceedings{wet07_interspeech,
  author={Febe de Wet and Christa van der Walt and Thomas Niesler},
  title={{Automatic large-scale oral language proficiency assessment}},
  year=2007,
  booktitle={Proc. Interspeech 2007},
  pages={218--221},
  doi={10.21437/Interspeech.2007-90}
}