Lei YANG
Department | Department of Chinese Studies, Faculty of Foreign Studies, Kyoto University of Foreign Studies |
Position | Associate Professor |
Date | 2024/08/20 |
Presentation Theme | Developing and validating a writing rating scale for Japanese Chinese as a second language (CSL) learners |
Conference | The 10th Annual International Conference of the Asian Association for Language Assessment (Shanghai International Studies University, China) |
Organizer | Asian Association for Language Assessment |
Conference Type | International |
Presentation Type | Oral presentation (General) |
Contribution Type | Collaborative |
Country | China |
Venue | 中国上海外国語大学 |
Conference Dates | 2024/08/20–2024/08/21 |
Presenters | Sichang Gao, Lei Yang, Kousuke Yoshino, Takashi Ueya |
Details | CSL writing courses are mandatory for undergraduate Chinese majors at Japanese universities. Although labeled as writing classes, these courses consist mainly of exercises such as Japanese-to-Chinese translation and sentence construction, which stray from the core purpose of a writing course. Only a few intermediate and advanced courses assign topic-focused writing that pushes learners to develop genuine writing proficiency. Because students in these courses come from diverse backgrounds, including native Chinese speakers who have lived in Japan for extended periods and heritage learners, the wide range of proficiency levels makes writing instruction all the more challenging.
Moreover, there is a dearth of Chinese writing rating scales tailored to native Japanese speakers, who already have a strong command of Chinese characters. Existing scales designed for Chinese learners from other linguistic backgrounds do not adequately serve Japanese learners. As a result, classroom rating scales are largely improvised, drawn up by individual teachers according to the needs and abilities of their students. A well-developed rating scale would serve as a valuable reference for setting learning objectives and guiding instruction. The present study collected ratings and descriptors on learners' writing from Japanese teachers of Chinese, synthesizing the raters' scoring comments into a pool of descriptors. The descriptors were analyzed, organized, and classified with the KH Coder software to develop a pilot version of the rating scale, and the reliability and validity of this pilot version were then examined with the Many-Facets Rasch Model (MFRM). The study proceeded in three steps:
1. Establishing a descriptor pool: Without imposing a unified rating standard, raters comprehensively assessed the writing of Japanese Chinese-major students and supplied corresponding scoring descriptors. Ratings and descriptors from five Chinese teachers on 35 student essays (18 narrative and 17 argumentative) were collected.
2. Classifying the descriptors: Analysis revealed three dimensions: structure, content, and expression. Different genres also elicited distinct descriptors; for instance, argumentative essays scored well when students supported their arguments with examples, whereas narrative essays were judged on the provision of characteristic details. (A minimal clustering sketch follows after this list.)
3. Assessing reliability and validity: Four additional Chinese teachers used the two genre-specific scales to evaluate the 35 essays, and the validity of the descriptors and score categories was examined with the MFRM (its standard form is shown below). The analysis indicated that the writing descriptors for both genres fit the model and that the score categories functioned reasonably. |
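For illustration only: the abstract states that KH Coder (a GUI tool) was used to classify the descriptors, so the sketch below is not the authors' procedure. It is a minimal Python example of comparable descriptor clustering with scikit-learn; the descriptor strings, the TF-IDF representation, and the three-cluster choice (mirroring the three reported dimensions) are all assumptions.

```python
# Hypothetical sketch of descriptor clustering; the study used KH Coder,
# not this code. The descriptor strings below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

descriptors = [
    "paragraphs are clearly organized",            # structure-like
    "the essay has a clear opening and close",     # structure-like
    "arguments are backed by concrete examples",   # content-like
    "narrative includes characteristic details",   # content-like
    "vocabulary is used accurately",               # expression-like
    "sentences are grammatically well-formed",     # expression-like
]

# Represent each descriptor as a TF-IDF vector of its words.
X = TfidfVectorizer().fit_transform(descriptors).toarray()

# Group the descriptors into three clusters, echoing the three
# reported dimensions (structure, content, expression).
labels = AgglomerativeClustering(
    n_clusters=3, metric="cosine", linkage="average"
).fit_predict(X)

for text, label in zip(descriptors, labels):
    print(label, text)
```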
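For reference, a standard formulation of the Many-Facets Rasch Model (Linacre, 1989) is shown below. The abstract does not state the exact parameterization the authors fit, so the facets here (examinee, criterion, rater, category threshold) are the conventional ones, not necessarily those of the study.

```latex
% Standard MFRM formulation with conventional facets; the study's exact
% parameterization is not given in the abstract.
\[
  \log\frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k
\]
% B_n : ability of examinee n
% D_i : difficulty of criterion/descriptor i
% C_j : severity of rater j
% F_k : threshold between score categories k-1 and k
```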