Thwaites, Peter [UCL]
Vandeweerd, Nathan
Paquot, Magali [UCL]
Recent studies of proficiency measurement and reporting practices in applied linguistics have revealed the widespread use of unsatisfactory practices, such as relying on proxy measures of proficiency in place of explicit tests. Learner corpus research is one area particularly affected by this problem: few learner corpora contain reliable, valid evaluations of text proficiency. This has led to calls for the development of new L2 writing proficiency measures for use in research contexts. Answering this call, a recent study by Paquot et al. (2022) generated assessments of learner corpus texts using a community-driven approach in which judges recruited from the linguistic community evaluated texts through comparative judgement. Although the approach produced reliable assessments, its practical use is limited because linguists are not always available to contribute to data collection. This paper therefore explores an alternative approach in which judges are recruited through a crowdsourcing platform. We find that assessments generated in this way can reach near-identical levels of reliability and concurrent validity to those produced by members of the linguistic community.
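
Comparative judgement scores are typically derived by fitting a Bradley-Terry-style model to judges' pairwise decisions; the minimal Python sketch below illustrates that general idea using invented judgement data. It is an assumption-based illustration, not the analysis pipeline reported in the paper.

import numpy as np
from scipy.optimize import minimize

# Hypothetical pairwise judgements: each tuple (winner, loser) records that a
# judge preferred text `winner` over text `loser` in one comparison.
judgements = [(0, 1), (0, 2), (1, 2), (2, 1), (0, 1)]
n_texts = 3

def neg_log_likelihood(theta):
    # Bradley-Terry model: P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j))
    nll = 0.0
    for winner, loser in judgements:
        nll -= theta[winner] - np.log(np.exp(theta[winner]) + np.exp(theta[loser]))
    return nll

# The likelihood is unchanged if a constant is added to every theta, so the
# estimates are centred after fitting to fix the scale.
fit = minimize(neg_log_likelihood, np.zeros(n_texts), method="BFGS")
scores = fit.x - fit.x.mean()
print(scores)  # one estimated quality score per text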


Bibliographic reference: Thwaites, Peter; Vandeweerd, Nathan; Paquot, Magali. Crowdsourced comparative judgement for evaluating learner texts: How reliable are judges recruited from an online crowdsourcing platform? In: Applied Linguistics, Vol. forthcoming, no. forth, p. forth (2024)
Permanent URL: http://hdl.handle.net/2078.1/290245