Today, we’re releasing the prototype of the new Socratic Brain instructional platform. Register, sign in, and check it out!
If you’ve never actually read it, you should take a look at this paper by Kirschner, Sweller, and Clark. There’s a lot there to get you thinking. I imagine that I will write more than one post about it. And I can imagine that there’s some disagreement with their conclusions about the efficacy of what they call Constructivist educational methods. I don’t care to wade into that, but I think there are some independently valid take-aways.
A student’s prior knowledge determines what they can learn from an experience. Not exactly news when stated as such, but the research on working memory they refer to puts a finer point on the dynamic. If a student must recruit all of their resources to solve a problem, then they won’t have any cognitive resources left to process the result. In other words, they won’t actually learn from it.
Now, I’m not convinced that is the same as not getting value from the experience. The emotional reward of figuring out a complex problem has its own value. I remember one student struggling her way to an epiphany recently, earning a lot of well-deserved praise for effort and respect from her peers for her accomplishment. I think there’s value to the experience itself, too – see below. But if you accept K,S&C’s premise that learning is characterized by changes in long-term memory, then an under-prepared student may not be able to learn from a cognitively demanding task. I think I’ve seen this in the classroom.
I’ve seen students collect data and create elaborate analyses in the form of eloquent whiteboards, then fail to articulate the meaning of their work. And the more scaffolding I provide for the analysis, the less the student understands the connection between their first-hand experience and the analysis they present. It seems like in either case – too little scaffolding or too much – the connections that were made in the process of executing the work don’t last, and the student has a fuzzy sense of the whole process. This is a big deal – I invest a huge percentage of our precious class time in investigations and discussions.
One potentially valid argument would be that the exercise of the data collection, the first hand experience, is the prior knowledge that the student can then tap into to reduce cognitive load in the future. “Remember the car on the incline? It’s like that.” I can see that, potentially. That’s the idea of a paradigm lab, as I understand it, in the first place – creating that cognitive anchor experience.
So, here are my thoughts on managing cognitive load in my modeling classroom.
For a paradigm lab, we’ve got to create the appropriate level of challenge for each student. Too much load, and they can’t learn. Too little, and the experience won’t be rich enough to justify the time invested. Of the two concerns, the former is more pressing, so I think my default on paradigm labs will be a lot of scaffolding in the analysis. However, I could serve the best-prepared students better with a version that has less scaffolding. I’m imagining a pre-test of crucial knowledge, or of some skill deployed in the analysis. Demonstrate mastery, and I pull the scaffolding. While those who need it spend some pre-lab time in review of the skill, those who don’t need the help get busy moving forward with the investigation.
Once we get to the deployment phase, it’s a similar split. Pre-test, and differentiate those who have the skill mostly figured out from those who need help. Students in the first group (who arrived with the necessary experience) are individually assigned key problems from the skill worksheet to whiteboard and present. These students earn level one through their presentation. Students in the second group (who don’t have the skill figured out) are assigned the whole worksheet. While students in the first group create whiteboards of the key problems, the teacher (me) can present solutions to the more fundamental problems on the worksheets to the second group. Then, students from the first group present to students of the second group.
I think this gives each group the appropriate level of challenge, the right cognitive load. It reduces the volume of tangible evidence the first group needs to produce beyond the pre-test, asking them instead to articulate their understanding as discussion leaders. And vice versa: those whose cognitive capacity will be sufficiently stretched just following along are given the opportunity to simply follow. As a teacher, my workload is reduced in that I have less to check from the first group, and less to show to the second.
Now I need to figure out the logistics of pulling this off effectively…
It’s the last day of classes before semester exams begin, and I started the day in conversation with a colleague. She and I were talking about the seasonal nature of teaching, the predictable ups and downs of the school year. So, I thought I’d send myself a little card to open on day one of school next year…
I’ve been thinking of practice for a quiz like this lately. As we work on example problems from our handouts, much of the discussion is at a higher level than the questions on the quiz. For instance, check out example 6 on this one:
The questions on the Level 2 quiz don’t have forces on an incline, so the whole discussion that comes out of that example is above the level of the quiz. But I think it’s good for two reasons:
1.) The “bat” is now unusually easy to “swing”. 😉
2.) We’ve been preparing for the Top Level task simultaneously with our quiz preparation.
What do y’all think?
I’ve had a lot of students looking for help during exam week, and they fall into two camps: the ones who come for lunch or after-school tutoring before the exam day, and the ones who want help on exam day. The first group is fine. I want them preparing for their top level tasks, and if they need assistance, that’s what I’m here for. But on exam day, if a student still needs help it’s kinda too late. While my teacher impulse is to always want to help, I’m finding that I need to respect the deadline that we’ve got. On the exam day, either you know it or you don’t.
The question then is what to do with students who get the concept after the deadline. My current thought is to leave the grade on the report card (we have 4 per year) even if they complete the top level task later and earn their level 3. I’ve become more comfortable with the grade on the report card reflecting, in part, work ethic and meeting deadlines. While I still want the grade to be an accurate reflection of the student’s level of Physics or Algebra knowledge and skill, I also want them to understand that improving this level is mostly about effort. If they want to improve, they can’t keep waiting around for answers to come; they’ve got to put in the work. #thestruggle
What does everyone else do about this? Keep the report card grade or change it to reflect progress? Where’s the line?
So I’ve had some success using rubrics with my unit one top level task in algebra. In this example you can see how the student used the rubric to complete his work. The best part of using the rubric, I think, is that when a student doesn’t reach the top level for a learning target, they can see exactly why they didn’t reach it and how to do so.
A few years ago, when I began SBG and tried to explain it to students and parents, I listed a few guiding principles to help them understand the difference between what they were likely used to and the system that I was going to use. In the meantime, I’ve made many changes, but I don’t feel as though my basic beliefs about assessment – those principles – have changed significantly. In the current incarnation of my grading system, I’m wondering if I’ve gotten better or fallen away from one of those core principles.
One principle I worked from was the notion that “once is not enough”. It’s a pretty common sense feeling, and gets to the heart of what mastery means. Could anyone be convinced that someone has reached a level we could call mastery on the basis of a single assessment? As I am fond of telling my students on the first day of class, how will you know when I have learned your name? Will today alone convince you? Or will you really be convinced when I call you by the right name a few days from now, when you have come back to class wearing a different shirt and sitting in a different seat, and I still call you by name? Most of us agree on this point, but it’s difficult to track many targets in an academic setting.
The last two years, my approach to assessment relied on a battery of short quizzes. For a student to earn an A on a given learning target, they would need to answer 100% of the questions on the quiz correctly, and then repeat that performance on another day. It wouldn’t be the same quiz, but if you’d mastered the material it would feel pretty much the same to you, and the results would be the same. If you got lucky, or faked it once successfully, that second quiz would probably reveal you. Once is not enough.
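To make the policy concrete, here’s a toy sketch in Python. The function name and data shape are my own invention for illustration – this isn’t real gradebook software – but the rule it encodes is the “Two 100s” policy as described: two perfect scores, on two different days.

```python
from datetime import date

def earned_a(quiz_results):
    """Hypothetical check for the 'Two 100s' policy: an A on a
    learning target requires two perfect quiz scores on different days.

    quiz_results: list of (date, percent_score) tuples for one target.
    """
    # Collect only the days on which the student scored 100%.
    perfect_days = {d for d, score in quiz_results if score == 100}
    # One lucky day isn't enough; the performance must repeat.
    return len(perfect_days) >= 2

# Two 100s on the same day don't count as a repeat performance...
assert not earned_a([(date(2014, 1, 6), 100), (date(2014, 1, 6), 100)])
# ...but a 100 repeated on another day does.
assert earned_a([(date(2014, 1, 6), 100), (date(2014, 1, 9), 100)])
```

The set of dates (rather than a count of scores) is what captures “on another day”: repeats within a single sitting collapse to one entry.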
On the up side, that system worked – students who earned an A really had mastered the content. Students pushed to get their “Two 100s”, and their results on final exams showed that they really retained the knowledge and skills that they had been quizzed on. Students reported to me that their exam results generally did not require much, if any, additional studying on those topics that they had already earned As on. Success, right? Sadly, all was not well.
They HATED it. No part of my Physics class was universally reviled like the “Two 100s” policy. Survey after survey pointed to this as THE thing they didn’t like about Physics. It’s not that I’m a softie who needs my students to love everything about my class – but if I could achieve my goals as an educator without so much negative emotion… that would surely be better.
More significantly – and ironically – I felt as though I wasn’t pushing my students far enough. My many-quizzes strategy had made SBG work in my classroom, but I could see my students’ understanding was getting compartmentalized. Skills that they could deploy flawlessly in one context were unavailable to them in a new context. This year’s system, the “three-level sandwich”, seems to be addressing the issue of compartmentalization directly.
Students work to attain higher levels in each learning target. Level 1 in a target is earned through collaborative classwork and an interview – the bottom line is that the student must show their work, and be able to articulate something they have learned. Level 2 is earned with a 100 on a target-specific quiz – the same quizzes I used in the past (homework has a role in here, too… a discussion for another time). Level 3 can only be earned through what is referred to as a “top-level task”, a complex task that requires strategic and/or critical thinking. Students earn level 3 in a target by demonstrating the successful deployment of the skills identified in the target, in the context of the task.
So, that’s our sandwich: Level 1 is collaborative work, with face to face assessment. Level 2 is mechanical, concrete operational, computer assessed. Level 3 is higher-order, hand-written, with a teacher-student feedback loop. The instant feedback of the computer is sandwiched by informal and formal teacher assessment.
Maybe I should call those top-level tasks meta-targets. The learning targets my students quiz on are just pieces of these larger puzzles. When we designed our curriculum this year, we did Marzano proud – we started from those tasks, decided what pieces the students needed to learn to be able to do them successfully, and made those elements our learning targets.
Build the skills, and piece by piece assemble the framework for addressing the top level task. Show me that you can interpret a velocity-time graph on a quiz – that’s level 2. Then, show me how you use that understanding (along with the calculation and analysis of relevant forces in the scenario) to validate claims regarding Newton’s Second Law… for that, I award level 3.
And so I ask myself – if my student can do that, is once enough?