Small commercial carrier JetBlue jumped on the "Flying CEO" zeitgeist with a series of clever YouTube ads that poke fun at the image of CEOs who don't want to mingle with the hoi polloi. You can see them here, along with the unamused reaction of one self-described flying CEO.
The alphabet groups that promote business aviation seem to be taking a dim view of the ads. That's an entirely understandable, perfectly predictable response.
And it's dead wrong.
That defensive reaction perpetuates the popular image of CEOs as whiny rich men interested only in their money and their toys, with the business aviation industry cast as the chief toymaker.
Might I suggest something like the following:
Scene - outdoors, general aviation airport. Middle-aged man in suit (carrying jacket, tie loosened) is walking across tarmac to an airplane, speaking to camera.
"Hi. I'm a corporate CEO. That means I make decisions every day that affect the people who work for my company. If I make a bad decision, they suffer the consequences. So, I try to make smart decisions - and wasting time is not a smart decision."
*looks directly at camera*
"That's why I don't fly on JetBlue."
*gestures at aircraft behind him*
"This little airplane gets me into more than five thousand small airports across the country. That's where my customers and suppliers are - nowhere close to the big airline hubs. And when I need to get someplace right away, I don't have to wait around for the next available flight. As a matter of fact, a customer in another state called me early this morning with an urgent problem. If I had to fly commercial, I'd be lucky to get there before noon, and really lucky to get home in time to put my kids to bed tonight. Most of my day would be wasted at the airport. But with this business tool, I can see my customer this morning, and be back in the office this afternoon. That's a smart decision."
*boards steps, looks back at camera, shrugs*
"I get to keep my shoes on, too."
Hat tip: Benet Wilson, via Twitter
Wednesday, March 11, 2009
Links at LinkedIn leave me a little sad
I recently realized that the professional social-networking site LinkedIn has discussion threads.
So today I was looking at this discussion about this article, and in the course of it someone asked about this study, which she described as having "debunked reading and math software."
Now, I spent five years of my career making reading and math software for schools and para-schools. Really good reading and math software. Software we worked extremely hard on. How hard? In the project plans, I budgeted forty-five minutes for each multiple-choice question. Why so much time? Well, each incorrect answer choice was designed to tease out a specific misunderstanding of the topic at hand. Further, each incorrect answer choice was specifically remediated in the wrong-answer feedback, without giving away the correct answer. After being written (a tough job in its own right), the question text had to be tagged, coded, compiled, and tested. Forty-five minutes each.
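To make that concrete, here's a minimal sketch of the structure in modern Python. Everything in it — the question, the distractors, the feedback — is invented for illustration; the original was compiled DOS code, not Python.

# Hypothetical reconstruction of the distractor design: each wrong
# answer choice targets one specific misconception, and its feedback
# remediates that misconception without revealing the correct answer.

QUESTION = {
    "stem": "What is 3/4 + 1/8?",
    "answer": "a",
    "choices": {
        "a": ("7/8", None),  # correct
        "b": ("4/12", "Did you add the numerators and the denominators? "
                      "Find a common denominator first."),
        "c": ("4/8", "It looks like you added the numerators without "
                     "converting 3/4 to eighths first."),
        "d": ("1/2", "Check your conversion: 3/4 is 6/8, not 3/8."),
    },
}

def feedback_for(choice: str) -> str:
    """Return feedback for a learner's answer choice."""
    _text, remediation = QUESTION["choices"][choice]
    if choice == QUESTION["answer"]:
        return "Correct!"
    return remediation

print(feedback_for("b"))  # remediates the "add straight across" misconception

Writing four of those per question — each one a plausible wrong turn, each with feedback that nudges without telling — is where the forty-five minutes went.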
We also developed an umbrella-sort mechanism using the magic of regular-expression text matching to do a reasonable job of analyzing free-entry text responses, going far beyond the typical exact-match handling of text-entry items. (Did I mention this was done under DOS on a 386 CPU, not with semantic cloud computing or neural networks?)
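Here's the flavor of that umbrella sort, again as an invented Python sketch rather than the original DOS implementation: patterns are tried from most specific to most general, and the broadest "umbrella" bucket catches whatever nothing else matched.

import re

# Hypothetical sketch of umbrella-sorting a free-entry response
# (say, to "Who was the first U.S. president?"): try patterns from
# most specific to most general, so a response can be credited,
# partially credited, or remediated even when it isn't an exact
# character-for-character match.
RULES = [
    (r"^\s*g(eorge)?\.?\s+washington\s*$", "correct"),
    (r"\bwashington\b", "partial"),            # right surname, sloppy form
    (r"\b(adams|jefferson|madison)\b", "wrong_president"),
]

def classify(response: str) -> str:
    """Return the category of the first matching pattern."""
    for pattern, category in RULES:
        if re.search(pattern, response, re.IGNORECASE):
            return category
    return "unrecognized"  # the umbrella: everything else

assert classify("George Washington") == "correct"
assert classify("g. washington") == "correct"
assert classify("Washington was first") == "partial"
assert classify("John Adams") == "wrong_president"

The ordering is the whole trick: the exact-answer patterns get first crack, and each fallback bucket can carry its own targeted feedback.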
As I said, really good educational software, not just PDFs of worksheets or arcade-game drill-n-kill exercises.
So when I read that some study had supposedly "debunked reading and math software," my hackles stirred enough to send me to read the study's executive summary. I read a lot of educational research. Not counting the journals I read trying to keep up in my field, I'm a reviewer for an international journal of education technology, and over the past several years I've reviewed more than fifty articles submitted for publication. So I think I'm at least competent to read a piece of research and tell whether it's any good.
The ED (U.S. Department of Education) study is pretty good, though it has some major limitations, which the authors themselves note. It certainly does not "debunk" educational software.
Here's what I wrote in reply:
"Debunking" is rather a strong, and IMO inappropriate word. At worst, the survey reports no significant difference in learning outcomes. That's not necessarily a bad thing. As it happens I also have open on my desktop the site http://nosignificantdiffernce.org , which provides a meta-analysis of hundreds of comparative-media studies. The bottom line is that comparative media studies *usually* report no significant difference in outcomes.
And why should that be surprising? If Medium A and Medium B are both *designed to help learners achieve the same learning objectives*, we should *expect* to see no significant difference.
That said, the ED study reports a good deal of trouble in data collection. There was a serious lack of continuity from year one to year two - over 70% of the teachers dropped out of the study. There were no classroom observations in year two. The study team administered their own tests where the districts did not, and it is not immediately clear whether the software being evaluated was aligned to those tests, or whether the instruction given to the control group was tailored to the test.
In other words, is the software taking a hit because it didn't teach something that was on the test? Many of these software packages are highly modularized and can be adapted to fit state or local standards. If the software wasn't set up to teach the content that was going to be on the test (assuming it could have been), it's hardly the fault of the software developers.
In addition, the study authors issue some strong caveats about the limits of their own research. The summary notes: "Characteristics of districts and schools that volunteered to implement the products differ, and these differences may relate to product effects in important ways."
It concludes, "Products in the study also were implemented in a specific set of districts and schools, and other districts and schools may have different experiences with the products. The findings should be viewed as one element within a larger set of research studies that have explored the effectiveness of software products."
If the study authors themselves issue such caveats, it's a little over the top to call it "debunking." Just because it's not a magic bullet doesn't mean it's of no value.
Successful implementation of learning technology does not seek to replace the teacher (except in situations where there is no teacher to replace). Rather, it seeks to free the teacher by assuming the role of content provider. This enables the teacher to do what a machine cannot: connect with the student as a person, coach and encourage, and, when necessary, admonish and correct (can we even do that anymore?).
The commenter to whom I had responded thanked me and explained that she had gotten her information from a comment on a post at the liberal multi-author blog Huffington Post. I followed the link and found her reference in the comments section, which was filled with vitriolic partisan ignorance beyond my ability or desire to remediate.
I really feel sorry for people who are filled with fear and hatred for ideas that are different from their own. Can we not disagree agreeably?