One academic the course has introduced
me to is Roger C. Schank. Roger is an AI lecturer who later specialised in learning. Schank's big idea is that of learning
by doing. It's a simple and
compelling idea – he claims traditional classrooms don't work
because they are far too theoretical and divorced from any real
“need” to learn.
There is certainly
a lot of truth in this. A book of drills (whether it be arithmetic,
grammar, or rote historical facts) does little to demonstrate either
why the information is important or the contexts to which it is
relevant.
The
title of this post is lifted straight from a
report Schank wrote in 1995 for his university. It's a huge
piece of writing – almost thirty thousand words long – and to be
perfectly honest with you, I didn't read it to the end. But why
would I? It's called a “technical report”, but in truth it's
little more than an oversized opinion piece. There's no technical
data: he does not appeal to research, he does not appeal to figures,
he just makes unsupported statements. What is most telling is that
there are only five citations in his bibliography, and four of these are
to himself.
As he argues
without evidence, I feel perfectly entitled to dismantle his argument
without citations. Besides, he's supposed to be an academic, and I'm
just a blogger!
First up, Schank fails to address in the first 25% of his essay (which is approximately what I read) the biggest concern that has always been raised against the idea of whole-subject learning, learn-by-doing, or whatever label you choose to attach to the idea: lack of breadth. (A quick skim suggests that it's not addressed further down either, so if it is, it's clearly not given the prominence it deserves. Besides, as it is the single most important concern of most critics, you need to address it early or you lose our attention.) A single task will only lead to a single solution, with some limited exploration of alternative strategies. It teaches students how to cope with a situation where they lack knowledge, rather than minimising those situations by providing them with knowledge.
There's been research into various aspects of this. I've seen reports claiming to show that schoolchildren given a whole-subject education end up with a much narrower range of knowledge. This should be pretty obvious, I would have thought... so maybe I'm just displaying confirmation bias and reading the stuff that supports my view.
Of course, there
was the study that showed that doctors trained by case-study rather
than theory performed better in diagnosing patients, but as I recall
it, this was tempered by the fact that their diagnoses took a lot of
time and money, because they tended to follow a systematic approach
of testing for progressively less common problems or symptoms. A
doctor trained on a theory-based course was more likely to formulate
a reasonable first hypothesis and start the testing process somewhere
in the middle. The conclusions we can take from this are mixed. You
can claim that the traditionally-trained doctor is better at
diagnosing on the grounds that he can do it with less testing; or you
can claim that only the end result matters, and the
case-study-trained doctor is better. You can argue that minimising
mistakes is the ultimate goal, or you can argue that the time taken
in avoiding mistakes is too great in that it delays treatment for
other patients.
So,
anyway... Schank does nothing to convince me that it is possible to
cover the breadth of a traditional course in learn-by-doing, but
there is a video on Schank's
site of a course he designed at Carnegie-Mellon about ten years
ago, and it alludes to what I think is the only real rebuttal to this criticism. One of the senior course staff and one or two of the students
talk about the idea of forgetting everything that's been taught in a
traditional course. If challenged, would that be the basis of
Schank's response? The logic certainly appeals: if you're not really
learning anything in a traditional academic course, the breadth of
what is covered is irrelevant.
But is anything
ever truly forgotten? I've recently gone back into coding after a
long hiatus – even when I worked in IT, I never had any serious
programming to do. But when I come up against a problem, I find
myself thinking back to mostly-forgotten bits of theory from my CS
degree days, and looking them up on the net. Tree-traversal
algorithms, curried functions, delayed evaluation... But if I had
never encountered these ideas before, how would I even know to look
for them?
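To make those half-remembered ideas concrete, here's a minimal sketch (my own illustration, nothing from Schank or my degree notes) of two of them in Python: a lazy, generator-based tree traversal, which is delayed evaluation in action, and partial application, a close cousin of currying:

```python
from dataclasses import dataclass
from functools import partial
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def in_order(node: Optional[Node]):
    """Lazily yield values left-root-right; nothing is computed
    until something actually iterates over the generator."""
    if node is not None:
        yield from in_order(node.left)
        yield node.value
        yield from in_order(node.right)

tree = Node(2, Node(1), Node(3))
print(list(in_order(tree)))  # [1, 2, 3]

def power(base: int, exponent: int) -> int:
    return base ** exponent

# Partial application: fix one argument, get a new function back.
square = partial(power, exponent=2)
print(square(7))  # 49
```

The point stands either way: I could look these up precisely because I knew they existed.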
This is not a merely theoretical problem. I'm not usually one to complain about “ivory
tower academics”, but goddamn it, Schank's brought this on himself.
And I quote:
"One of the places where real life learning takes place is in the
workplace, "on the job." The reason for this seems simple
enough. Humans are natural learners. They learn from everything they
do. When they watch television, they learn about the day's events.
When they take a trip, they learn about how to get where they are
going and what it is like to be there. This constant learning also
takes place as one works. If you want an employee to learn his job,
then, it stands to reason that the best way is to simply let him do
his job. Motivation is not a problem in such situations since
employees know that if they don't learn to do their job well, they
won't keep it for long.
Most employees are interested in learning to do their jobs better. One reason for this is, of course,
potential monetary rewards. But the real reason is much deeper than
that. If you do something often enough, you get better at it --
simple and obvious. When people really care about what they are
doing, they may even learn how to do their jobs better than anyone
had hoped. They themselves wonder how to improve their own
performance. They innovate. Since mistakes are often quite jarring to
someone who cares about what they are doing, people naturally work
hard to avoid them. No one likes to fail. It is basic to human nature
to try to do better and this means attempting to explain one's
failures well enough so that they can be remedied. This
self-correcting behavior can only take place when one has been made
aware of one's mistakes and when one cares enough to improve. If an
employee understands and believes that an error has been made, he
will work hard to correct it, and will want to be trained to do
better, if proper rewards are in place for a job well done."
Many graduates, particularly CS grads, will immediately be able to identify with me when I say this isn't true. When you leave university, you typically know how to do things correctly, but once you get into the real world, “correctly” is too time-consuming, too expensive. (Except in safety-critical roles, such as avionics or military systems.) In the short term, this is OK – pragmatically, that's the way it's got to be.
In the longer term, though, things start to go haywire. We get so habituated to our way of doing things that we soon learn to identify it as the “right way” of doing things. Someone comes along with a new way, a better idea (probably a recent grad), and we dismiss the idea as wishful thinking.
In computing, more than in any other field, this problem is easily apparent. I remember suggesting a very simple change to a database system and being told “you can't do that with computers”. A) You can do pretty much anything with computers. B) I was talking about something that is built into the database software we were using!!! Yes, a standard feature of the software, and the team's top “expert” didn't know about it; and because he was the expert and didn't know about it, it was as though it didn't exist.
More generally,
the problem becomes visible when a programmer switches languages. A
C programmer who learns Python normally ends up writing his Python
code using the features that most resemble those of C. The unique
features of Python are designed to overcome specific problems and
difficulties with a lower-level language such as C, but to the expert
C coder, these aren't really “problems”, because he long ago
rationalised them away and learned to cope with them. He doesn't
realise there is a “problem”, so he doesn't have any reason to go
looking for a solution. Even within a single language, some people always do
things the “hard way” because they've simply never thought to
look for an “easy way”.
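Here's a hedged sketch of the kind of thing I mean (both versions are my own illustrations, not taken from anyone's real codebase). The first loop is exactly how you'd write it in C; the second uses the feature Python provides precisely to remove that bookkeeping – which you'll never reach for if the bookkeeping never felt like a problem:

```python
words = ["ant", "bee", "cat"]

# C-accented Python: manual index bookkeeping, a straight
# transliteration of the loop a C programmer already knows.
lengths = []
i = 0
while i < len(words):
    lengths.append(len(words[i]))
    i += 1

# Idiomatic Python: a list comprehension expresses the same
# thing with no index to manage.
lengths = [len(w) for w in words]

print(lengths)  # [3, 3, 3]
```

Both are correct; the first just carries C's problems into a language that has already solved them.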
So Schank is being
hopelessly naïve. The key feature of expertise is automaticity –
experts have internalised enough that they don't have to think about
what they're doing. They close their minds to learning, because
there's more value in doing a job suboptimally but quickly than in
doing it optimally but slowly. People need to develop breadth before
they become experts – before they become “set in their ways”.
Now to answer the question that started this article: what do we learn when we learn by doing?
We learn to be
adequate, not brilliant. We learn to get by, not to excel. We learn
to stop thinking. We learn, paradoxically, to stop learning.