Points to consider from yesterday's Google+ post…
What's the argument for capturing formative assessment?
Summative assessment attempts to measure learning; formative assessment improves learning. Improving how teachers learn should be a larger part of the evaluative (judgmental) process. Too often, a teacher's evaluation is reduced to a numeric representation used to form judgments about an individual's knowledge, skill, and disposition. Instead, teacher evaluations should rely on iterative and reciprocal interactions (among all stakeholders) based on qualitative and quantitative data, where assessment, instruction, and support cycle through more fluidly. We need more than a single snapshot as evidence of one's competence; we need an entire photo album.
If we look at the USA's educational history, which has employed mostly summative evaluation systems, we see great success in the past.
Is the USA successful because it relied mostly on summative evaluation systems? Can we claim great success when college entrance exam scores have shown virtually no increase, even as the way we communicate has been transformed by technologies developed over the last 40 years?
Don't businesses and other institutions seek entrants with “21st-century skills” such as critical thinking, collaboration, creativity, and communication?
I question whether relying on summative assessments (as an end measurement) allows one to make accurate inferences about the level of “21st-century skills” a learner has.
What proof do we have that formative-heavy evaluation is the right direction? Pedagogy isn't an exact field of study. Are we sure? Or are we taking educated guesses based on environmental situations and observations?
Since pedagogy and learning are not exact fields of study, we're better off referring to the current literature on formative assessment and the notion that individuals learn in different ways…it's out there. Formative assessment does not rely on direct cause-and-effect relationships between teaching and learning, but rather provides the means for greater interaction around the learning process. Sometimes it looks not only at the act of learning (via a myriad of evidence), but also at the act of becoming. If properly aligned, formative assessment can impact standardized reporting (transformative assessment).
What is the right mix? How do we back it up as sound?
Asking what's the right mix is like asking what's the best way to do it. It's not about practices or programs, it's about people. There is no magic formula. It's about having honest, open, and ongoing discussions where the learner links content, context, and conduits together in a meaningful and relevant way.
Gates is one of the biggest innovators of all time … could his foundation be wrong?
I'm questioning the relevance of the report. And I would question anyone who claimed they knew how others learn best. Research-based learning principles are fine as long as they are discussed within a local context.
…isn't most summative evaluation really just an end result, a snapshot, of a mostly ongoing formative process that leads to the numerical grading process?
No. Evaluating one's learning requires the collection of qualitative, quantitative, and relational data in parallel (not serial), which together provide a cycle of planning, implementing, and reflecting on and in practice. This needs to be done at the classroom, school, school-district, state, and federal levels simultaneously, where reports like those generated by the Gates Foundation become just one small piece of the puzzle.
For example, at ITESM, teachers are heavily evaluated on the numerical advancement of students between their beginning TOEFL scores and their ending TOEFL scores for the period. While this is indeed summative evaluation, it doesn't explicitly highlight the formative assessment that took place in between; that isn't captured by the system at all the way summative evaluation is (numbers are numbers, after all, nice and concrete). But that doesn't mean that formative assessment and learning haven't happened. They have. Most formative record-keeping is done informally by the teacher, some of which really can't be documented.
Why not evaluate English language teachers by collecting a mass of evidence: TOEFL scores, eportfolios, OER projects, openly shared experiences in online communities, workshops, conferences, most significant change stories, etc.? We should make formative assessment explicit and then judge it alongside summative forms of assessment. I would personally place more emphasis on formative than summative assessments, making formative assessment more formal. Ideally, it's entirely about formative assessment: if quantitative reporting were done in a more timely fashion, it too could be treated as formative assessment (i.e., dynamic assessment). When quantitative reports are issued months, perhaps years, after the actual event, one has to question their relevance. It's like trying to get over a cold by performing an autopsy. I'd prefer to take care of one's health by taking preventative measures and taking one's temperature periodically.
So even though the Gates Foundation is focusing on the numerical end point, isn't it really an indication of how good teachers' formative evaluation skills are in getting to that point?
No. It could mean that formative assessment is working extremely well but is not being reflected in the report, or it could mean that formative assessment has no impact on the change process. Thinking in terms of order: produce the quantitative information first, then immediately follow it up with a lot of formative assessment. Then continue formative assessments with periodic quantitative reporting (take the temperature). Formative assessment needs to be based on past experience (problems) and should be included in the evaluation process. What's important is that there is not a lot of lag time between the quantitative and qualitative reporting; it should be days or weeks instead of months or years.
We spend a lot of teacher-training time on formative stuff. Either a teacher gets it or does not get it (this has been my observation and experience as a teacher trainer). And I think that mostly gets reflected in the end result, which should also include some formative project work too. So even though something looks and smells like summative, my feeling is that, to a great extent, the summative is a leading indicator of a teacher's formative learning and evaluation skills.
My experience has been different. I see that teachers understand things by degree; I hardly ever classified it as teachers get it or they don't. The same goes for students, come to think of it. Teachers come from different perspectives, have different experiences applying their knowledge, have different levels of empathy or feeling about their craft, etc. So the learning process is taking them from where they currently are to a new “place”. To measure the degree of understanding teachers have, different types of timely data are required: quantitative, qualitative, and relational. I would argue that the longer it takes to produce a quantitative report and then act on it, the less formative it becomes. Another risk quantitative reports carry in terms of their formativeness is that the results can be too general. I have a hard time accepting the report from the Gates Foundation as being formative. But a school-generated quantitative report certainly has the potential to be formative.
I kind of want to believe that Dan Pink got it RIGHT. Which teachers do you think will do better with students nowadays: those with left-brained or right-brained predominances? Perhaps this is key to the measurable end result? How do a teacher's cognitive processes lend themselves to productive formative assessment and, ultimately, summative assessment?
I don't view people as being left- or right-brained; they're all “full-brained” to me. I don't consider learning styles or other personal attributes in isolation. I watch to see how teachers adapt to their environment by trying to facilitate particular networked topologies that link information, context, and delivery. Adapting to one's environment is not only cognitive, but also physical/material and affective. Reporting procedures need to add value to the process of adaptation.