Response to “Alive in the Swamp”

In broad strokes, the argument of Alive in the Swamp (Michael Fullan and Katelyn Donnelly, 2013) is that for technology in education to be transformational, that is, to change the way learning works, big changes in technology, pedagogy, and the system itself all need to happen.

They start off by describing the current system roughly like this: school is boring and nobody wants to be there, yet at the same time the opportunities for learning through technology have never been greater and still aren't being realized within formal education.

Technology, pedagogy, and system change are identified as the three big forces in the educational revolution, and the authors argue that only by combining all three can we successfully move progress forward. While technology races ahead faster than ever, pedagogy and system change are doing little to keep up with it. I actually agree quite a lot with this claim: technology growth increasingly seems to be getting away from us (it's hard for anyone to keep up), so it would be even harder for a large, complicated organism like the education system to keep pace.

The authors’ four criteria that any new learning system must meet are these:

  • Irresistible engagement for students and teachers
  • Elegantly easy to use and adapt
  • 24/7 access to technology
  • Steeped in real life problems

The authors go on to talk about subjects like the roles of teachers in the classroom (as facilitators and activators), technology placement, and the weaknesses and strengths of pedagogy in order to flesh out the details of what really needs to change the most within the system.

In the next section, the authors describe the Index they've created, which breaks pedagogy, technology, and system change down into granular components and lays out a grading scheme for each. The Index itself is an evaluative tool that can be applied to educational technologies to determine their usefulness within an educational system.

The subcomponents of pedagogy are the clarity and quality of intended outcomes, the underlying theory and practice of how learning is delivered, and the quality of the assessment platform. The subcomponents of system change are support for implementing new strategies and technologies (since there are so many new advancements to choose from), the financial value of those advancements, and the potential for system change at scale. As for technology, the subcomponents are the quality of the user experience and model design, ease of adaptation, and comprehensiveness and integration. Each component gets several paragraphs of in-depth detail on what constitutes “off-track” through “good” practice, along with some specifics on the intended meaning. I won't go through all those details here because they're laid out in the document itself.

In the end, the research found that more often than not, pedagogy and system change are the weak links in the triangle, and that when moving “forward,” it's easier to develop technology on its own than to develop it alongside the other two variables. The authors recommend leading with pedagogy and having technology and system change follow in its wake, and they offer some ideas on how this might be accomplished.


Quite honestly, I feel this document was a needlessly difficult read. I understand that the authors were trying to incorporate much of the actual science and psychology to make it applicable, but I also feel a plainer-language version would go over much better.

I do agree with what they're getting at, though. Even to a casual observer, it's fairly obvious that technology leads pedagogy and system change. That's how the world works now: each new decade, technological progress moves forward even more quickly than in the decades before. Talk to any two teachers and you'll hear vastly different opinions on technology and its use and viability in the classroom. The fact of the matter is that we're getting new technologies faster than we can use them effectively in education, and to many teachers there's no readily apparent reason to start doing something new when what they've always done is working fine. Simply keeping up to date with these new technologies is another issue very much alive in this discussion.

I'd agree that North American school systems need an overhaul, and not just in terms of how technology is used within the curriculum. Part of the reason teachers may struggle to incorporate the technology available to them is that they feel the curriculum is inflexible, or that they are trapped in traditional methods.

Overall, I agree with the paper that we need to lead with pedagogy rather than technology, but this is hardly just an educational mindset that needs to change: it's a societal one. We're proverbially being led around by the nose by whatever is shiny and new on the market, which just proves that technology is leading everywhere. It's all part of a big feedback loop that changes to the school system could help to break, but we're still a long way off from that.


More on Assessment

This entry builds off my last post, where I talked about unit tests in my PS2 practicum.

I've done more thinking and reflecting on what exactly happened in one unit test in particular where the students did… not very well: asking myself what I might have missed, what the students may have missed, and what I could do to address it. As much as we're told not to teach to the test, I have to wonder if maybe I didn't teach to the test enough. I ruminated on this especially, since my philosophy is that students shouldn't be given nasty surprises on a big test (though I feel some new situations, to see whether they can apply their knowledge and understanding, are appropriate), and I made sure the students had ample practice with what they needed to know and be able to do before giving them the unit test. Most students did well on the quizzes and practice problems I handed out, so why weren't they doing well on the unit test?

It was actually another friend from the U of L who suggested that maybe they didn't know how to tie it all together. All along, they'd been doing the unit in discrete packets and chunks (which I still made sure to tie together so they weren't stand-alone), but whenever they did a quiz or assignment, they knew that this one quiz was specifically on force and acceleration (for example), so they'd need to use those particular equations for that particular sheet.

But when it came time to identify which situation called for which equation, they fell apart. All of a sudden it was no longer as simple as take-this-equation-and-apply-it; now they needed to figure out which equation to use first, and then work out how to use it.

It was after I got this suggestion that it all started making sense, and I started kicking myself a bit. After all, I've done a full physics degree, so of course I should have recognized that building the problem from the ground up is something important to focus on.

I'm still not sure of the best way to address this, but what I think I'll do is design quizzes and worksheets that mix up which equations are needed (for physics and math, anyway) to challenge students to figure out the problem before they can solve it. I'm also toying with the idea of giving practice final tests before the real ones, so students have a better idea of what to expect when it counts towards grades. I'm not sure how much I like that, though, because as I said before, I don't want to simply teach to a test. So it becomes a balancing act between telling students that THIS is what you need to know, and presenting new situations in which to apply their knowledge.

There's also the question of what to do with students after a not-so-great summative test. I know there were a few students who seemed to be doing well in class and then bombed the summative, but it's a question of whether they blanked, had a bad day, or just plain didn't know the material. If I weren't leaving my practicum immediately, the first step I would have taken would be to ask those students about it and hear their thoughts. After that, I might make plans with them to address any issues that arose, or even rearrange seating so they'd be in a more learning-conducive situation with fewer distractions from chatty friends. (Of course, I don't want to completely isolate them socially either.)

All in all, this whole experience has only increased my interest in the psychology of assessment, which I really hope to pursue in graduate studies later in life.