How can a diagnostic test determine "fossilization"?
Today I read *Why do we produce ESL students with fossilized errors and what can we do about it?* and had a few thoughts.
- When I first read "fossilized error" (the term in quotation marks), I supposed that Jagasia does not really accept the term but uses it anyway, given that others have come to define it a certain way in the past. Later, however, the term appears without quotation marks, which would suggest that it is an accepted term. Which is it?
- The thesis of this piece seems to have less to do with "fossilization" and more to do with 1) feedback and assessment, 2) differentiated instruction (DI), and 3) potential issues with the placement test in terms of validity, reliability, and/or bias. So, answering the question in the title would have more to do with feedback, DI, and diagnostic-test validity, reliability, and freedom from bias than with some notion of "fossilization". These issues were not covered in the piece.
- Regarding point #2, in the absence of any real example, it is hard to support the idea that "fossilization" is less likely just because a student has a tutor (e.g., on italki, Verbling, etc.). To understand "fossilization" is to observe a language learner longitudinally, not just the individual spaces where learning (or lack thereof) takes place. Another way to state this is that observations (to understand "fossilization") need to be made through a diachronic rather than a synchronic lens.
- The problem that this piece sets out to address is not clear. Jagasia (2016) states, "Almost every student who sits our placement test possesses a significant amount of ingrained or 'fossilized' errors" (para. 2). Again, how can a single diagnostic exam (synchronically) measure fossilization, presumably before the fact? Perhaps the assumption is that if a language learner is taking a more advanced class but is still making lower-level errors, this automatically means that "fossilization" has occurred. Intuitively, one can see the weakness of this argument when considering other possibilities: 1) learners acquire the language at different rates, 2) learners had little-to-no exposure to a grammatical structure, 3) learners had little-to-no practice moving understandings from short-term to long-term memory, 4) the error could have been a "slip of the tongue" or a mistake that the learner actually understands but carelessly overlooked, 5) personal circumstances interfered with concentration during the test, 6) the learner is simply a poor test taker, 7) poor alignment between the diagnostic test and instruction, 8) poor alignment between the diagnostic test and assessment, 9) poor alignment between the curriculum and the diagnostic test, etc.
Setting aside the term "fossilization" for a moment, how students make mistakes (whether repeatedly or in isolation) cannot be understood entirely from a single diagnostic test. It seems as if, for Jagasia (2016), the diagnostic test is detecting issues of instruction or assessment. Understanding how students make mistakes requires observations over time of how feedback is given and received, and it requires test designers to recognize the integrity of the instrument and its purpose.