Banner for the CCERBAL Research Forum, showing the CCERBAL logo, a photo of Angel Arias, and the ILOB logo in the lower right corner.

Abstract (in English only)

Applied linguistics relies on various constructs (e.g., motivation, language aptitude, resilience, teacher cognition) to study language learning issues and obtain valuable information about context, language learners, and teachers to ultimately improve teaching practices and learning experiences. Theories that underlie the constructs of motivation (Tremblay & Gardner, 1995), resilience (Wang, 2021), language aptitude (Skehan, 2016; Wen et al., 2016), teacher cognition (Borg, 2010), and language proficiency (Bachman, 2007; Bachman & Palmer, 2010) have informed the development of questionnaires, surveys, and tests that operationalize such constructs. With the exception of a handful of high-stakes language tests, it is common in our field to develop questionnaires, surveys, and other data collection instruments without subjecting them to thorough analysis and quality control (i.e., validation) prior to use. Although applied linguistics is considered a collaborative discipline that works on language-based problems within and between fields (Farsano et al., 2021), validation approaches are not readily embraced to gather the evidence required to support score-based interpretations and uses of these instruments. Current and dominant validation approaches include the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA] & National Council on Measurement in Education [NCME], 2014) and argument-based validation (Bachman & Palmer, 2010; Chapelle, 2020; Kane, 2013). However, these approaches are rather complex and exclusionary (i.e., accessible mainly to scholars in assessment-centred communities) and require considerable resources to apply. This talk outlines the challenges associated with current validation frameworks, discussing implementation difficulties and potential solutions that could enhance uptake across applied linguistics. This talk will be delivered in French and English.


Angel Arias, PhD

Assistant Professor in the School of Linguistics and Language Studies at Carleton University

Angel Arias holds a PhD in Educational Measurement from the Université de Montréal, a master's degree in Applied Linguistics and Discourse Studies, and a bachelor's degree in Education and Modern Languages from Universidad Dominicana Organización y Métodos (O&M), Dominican Republic. His research focuses on the application of psychometric models and mixed-methods approaches in language testing and assessment to evaluate validity evidence for test score meaning and the justification of test use in high-stakes and classroom contexts. He has served as an external consultant for the Quebec Ministry of Education and as Chair of the Test Validity Research and Evaluation special interest group of the American Educational Research Association (AERA).

Accessibility
If you require accommodation, please contact the event host as soon as possible.
Date and time
March 25, 2022
12:00 p.m. to 1:15 p.m.
Format and location
Virtual
Pavillon Hamelin (MHN)
Language
English, French
Audience
International candidates, General public
Organized by
Institut des langues officielles et du bilinguisme, CCERBAL