Lei YANG (楊 蕾)
Affiliation: Kyoto University of Foreign Studies, Faculty of Foreign Studies, Department of Chinese Studies
Position: Associate Professor
Date of presentation: 2024/08/20
Title: Developing and validating a writing rating scale for Japanese Chinese as a second language (CSL) learners
Conference: The 10th Annual International Conference of the Asian Association for Language Assessment (Shanghai International Studies University, China)
Organizer: Asian Association for Language Assessment
Conference type: International conference
Format: Oral presentation (general)
Solo/joint classification: Joint
Country: People's Republic of China
Venue: Shanghai International Studies University, China
Dates: 2024/08/20–2024/08/21
Presenters and co-presenters: Sichang Gao, Lei Yang, Kousuke Yoshino, Takashi Ueya
Abstract: CSL writing courses are mandatory for undergraduate Chinese majors at Japanese universities. Although labeled as writing classes, these courses consist mainly of exercises such as Japanese-Chinese translation and sentence construction, which stray from the core purpose of a writing course. Only a few intermediate and advanced courses assign topic-focused writing tasks that push learners to develop their writing proficiency. Because students in these courses come from diverse backgrounds, including native Chinese speakers who have lived in Japan for extended periods and heritage learners, the wide range of proficiency levels makes writing instruction especially challenging.
Moreover, there is a dearth of Chinese writing rating scales tailored specifically to native speakers of Japanese, who already bring strong character-writing skills to Chinese. Existing rating scales developed for Chinese learners from other linguistic backgrounds do not adequately meet the needs of Japanese learners of Chinese. Consequently, classroom rating scales are largely improvised, formulated by individual teachers around the specific needs and abilities of their students. A well-developed, comprehensive rating scale would serve as a valuable reference for establishing learning objectives and guiding teaching.
The present study collected ratings and descriptors from Japanese teachers of Chinese on learners' writing, synthesizing the raters' scoring opinions during the rating process into a pool of descriptors. The descriptors were then analyzed, organized, and classified with the KH Coder software to produce a pilot version of the rating scale, whose reliability and validity were examined with the Many-Facets Rasch Model (MFRM). The procedure involved three steps:
1. Establishing a descriptor pool: Without a unified rating standard, raters comprehensively assessed the writing of Japanese Chinese-major students and supplied the descriptors behind their scores. Ratings and descriptors were collected from five Chinese teachers on 35 student essays (18 narrative and 17 argumentative).
2. Classifying the descriptors: Descriptor analysis revealed three dimensions (structure, content, and expression), and the two essay genres elicited distinct descriptors. For instance, argumentative essays scored well when students provided examples to support their arguments, whereas narrative essays were evaluated on the provision of characteristic detail.
3. Assessing reliability and validity: Four additional Chinese teachers used the two genre-specific scales to evaluate the 35 essays, and the MFRM was used to examine the descriptors and score categories. The writing descriptors for both genres fit the model, and the score categories functioned reasonably.
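For context, the abstract does not spell out the exact facet structure used in step 3; a common three-facet MFRM formulation (examinee, descriptor, rater), consistent with this study's design, models the log-odds of essay n receiving score category k rather than k−1 on descriptor i from rater j as

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is the examinee's writing ability, D_i the difficulty of descriptor i, C_j the severity of rater j, and F_k the threshold of score category k. Under this reading, "fitting the model" means the observed ratings are consistent with the probabilities these parameters imply, and "reasonable score categories" means the thresholds F_k advance monotonically across the scale.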