So today I was looking at this discussion about this article, and in the course of it someone asked about this study, which she described as having "debunked reading and math software."
Now, five years of my career was spent making reading and math software for schools and para-schools. Really good reading and math software. Software we worked extremely hard on. How hard? In the project plans, I budgeted forty-five minutes for each multiple-choice question. Why so much time? Well, each individual incorrect answer choice was designed to tease out a specific misunderstanding of the topic at hand. Further, each incorrect answer choice was specifically remediated in the wrong-answer feedback, without giving away the correct answer. After being written (a tough job in its own right), the question text had to be tagged, coded, compiled, and tested. Forty-five minutes each.
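To make that concrete, here's a minimal, hypothetical sketch of what such a misconception-targeted item looks like as data. The field names and the fraction example are mine, invented for illustration; our actual DOS-era tagging format looked nothing like Python.

```python
# Hypothetical sketch of a misconception-targeted multiple-choice item.
# Field names and the example content are illustrative only.

from dataclasses import dataclass

@dataclass
class Choice:
    text: str
    correct: bool
    misconception: str  # the specific misunderstanding this distractor probes
    feedback: str       # remediation for this choice, without giving away the key

item_stem = "Which fraction is equivalent to 2/4?"
item_choices = [
    Choice("1/2", True,  "", "Correct."),
    Choice("2/8", False, "doubled the denominator instead of simplifying",
           "Look at what happens to the denominator when you simplify."),
    Choice("4/2", False, "inverted the fraction",
           "Check which number belongs on top and which on the bottom."),
    Choice("2/2", False, "read 'equivalent' as 'same numerator'",
           "Compare the value of each fraction, not just the numbers on top."),
]

def respond(choices, picked):
    """Return the feedback targeted at whichever choice the learner picked."""
    return choices[picked].feedback
```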
We also developed an umbrella-sort mechanism using the magic of Regular Expression text-string comparison to do a reasonable job of analyzing free-entry text responses, going far beyond the typical exact-match of text-entry items. (Did I mention this was done on DOS on a 386 CPU, not using semantic cloud computing or neural networks?)
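The idea, roughly, was an ordered cascade of patterns that sorts a free-text answer into a category and hands back category-specific feedback. Here's a minimal modern sketch, in Python rather than the original DOS code, with made-up patterns and categories:

```python
# Minimal sketch of the "umbrella sort" idea: instead of an exact string match,
# an ordered cascade of regular expressions sorts a free-entry response into a
# category, and each category carries its own feedback. Patterns and categories
# are invented for illustration.

import re

UMBRELLA = [  # first matching pattern wins, so order matters
    ("correct",        re.compile(r"^\s*(one[-\s]?half|1\s*/\s*2|0?\.5)\s*$", re.I)),
    ("unsimplified",   re.compile(r"^\s*2\s*/\s*4\s*$")),
    ("inverted",       re.compile(r"^\s*4\s*/\s*2\s*$")),
    ("not_a_fraction", re.compile(r"^[a-z\s]+$", re.I)),
]

FEEDBACK = {
    "correct":        "Right: 2/4 simplifies to 1/2.",
    "unsimplified":   "That's equal, but can you write it in lowest terms?",
    "inverted":       "Check which number belongs on top.",
    "not_a_fraction": "Try answering with a fraction or a decimal.",
    "unrecognized":   "I couldn't read that. Try writing it as a fraction.",
}

def classify(response: str) -> str:
    """Return the first category whose pattern matches the learner's response."""
    for category, pattern in UMBRELLA:
        if pattern.search(response):
            return category
    return "unrecognized"

print(FEEDBACK[classify("  .5 ")])  # -> Right: 2/4 simplifies to 1/2.
```

Even a crude cascade like that accepts far more legitimate answers than a literal string compare, which was the whole point.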
As I said, really good educational software, not just PDFs of worksheets or arcade-game drill-n-kill exercises.
So when I read that some study had supposedly "debunked reading and math software," my hackles stirred enough to send me to read the executive summary of the study. I read a lot of educational research. Not counting the journals I read trying to keep up in my field, I'm a reviewer for an international journal of education technology, and over the past several years I've reviewed more than fifty articles submitted for publication. So I think I'm at least competent to read a piece of research and tell whether it's any good.
The ED study is pretty good, though it has some major limitations, which the authors themselves note. It certainly does not "debunk" educational software.
Here's what I wrote in reply:
"Debunking" is rather a strong, and IMO inappropriate word. At worst, the survey reports no significant difference in learning outcomes. That's not necessarily a bad thing. As it happens I also have open on my desktop the site http://nosignificantdiffernce.org , which provides a meta-analysis of hundreds of comparative-media studies. The bottom line is that comparative media studies *usually* report no significant difference in outcomes.
And why should that be surprising? If Medium A and Medium B are both *designed to help learners achieve the same learning objectives*, we should *expect* to see no significant difference.
That said, the ED study reports a good deal of trouble in data collection. There was a serious lack of continuity from year one to year two - over 70% of the teachers dropped out of the study. There were no classroom observations in year two. The survey team administered their own tests where the districts did not, and it is not immediately clear whether the software that was evaluated was aligned to those tests, or whether the instruction given to the control group was tailored to the test.
In other words, is the software taking a hit because it didn't teach something that was on the test? Many of these software packages are highly modularized and can be adapted to fit state or local standards. If the software wasn't set up to teach the content that was going to be on the test (assuming it could have been), it's hardly the fault of the software developers.
In addition, the study authors issue some strong caveats about the limits of their own research. The summary notes: "Characteristics of districts and schools that volunteered to implement the products differ, and these differences may relate to product effects in important ways."
It concludes, "Products in the study also were implemented in a specific set of districts and schools, and other districts and schools may have different experiences with the products. The findings should be viewed as one element within a larger set of research studies that have explored the effectiveness of software products."
If the study authors themselves issue such caveats, it's a little over the top to call it "debunking." Just because it's not a magic bullet doesn't mean it's of no value.
Successful implementation of learning technology does not seek to replace the teacher (except in situations where there is no teacher to replace). Rather, it seeks to free up the teacher by assuming the role of content provider. This enables the teacher to do what a machine cannot: to connect with the student as a person, to coach and encourage, and when necessary to admonish and correct (can we even do that anymore?).
The commenter to whom I had responded thanked me for my response and replied that she had gotten her information from a comment on a post on the liberal multi-author blog Huffington Post. I followed the link and found her reference in the comments section, which was filled with vitriolic partisan ignorance that is beyond my ability or desire to attempt to remediate.
I really feel sorry for people who are filled with fear and hatred for ideas that are different from their own. Can we not disagree agreeably?
1 comment:
It is sad, isn't it? One thing that really gets me is folks who know nothing about data analysis or software design taking these results seriously without regard for good or poor educational software...they just lump them all in there together.
And your reaction is why I avoid the far-left and far-right blogs. There's so much vitriol there. I'm interested in people who can respectfully disagree, which is why I enjoy reading what you write and our conversations.