It was really cold in my flat this morning, so I grabbed a pile of essays to mark and headed for the university, and reliable central heating. Well, in the end I didn't make it into the uni, stopping outside to enjoy some truly incredible weather.
One thing that I've noticed time and again is that adjective-noun word order seems pretty difficult for French people, with many only being able to cope with "plain" adjectives. Stick in an adjective formed from a past or present participle and they revert to the French order. (eg "land unoccupied" instead of "unoccupied land".)
Now at first I thought this might be a verb/adjective confusion, but I'm starting to think otherwise.
In theory, students should learn with normal adjectives as a base case and then generalise from there. However it looks as though they haven't actually generalised it - I believe a great many of my students have, at some point earlier in their schooling, internalised what to me is the "rule" as though it's an exception.
What could cause this problem? An overly restrictive set of adjectives at the start, probably.
It isn't difficult to introduce -ed and -ing adjectives early on -- "tired dogs" and "boring books" and the like -- but that has to be expanded upon, and opened up to the rarer participial adjectives. Yes, rarer adjectives. Even though these words aren't in the Swadesh lists or similar, that doesn't matter. They're rare, and they're regular, and these two facts are intrinsically linked.
In most languages, the most common words are irregular, so it's no great leap of logic to see how a learner could falsely assume the rule is an exception if they're only dealing with a small set of common words....
Edit: I feel a funny sense of déjà vu....
27 February 2013
26 February 2013
New evidence of in-utero language development.
A step away from the education side of things now, and back towards language.
I just saw an intriguing article on the BBC News website, reporting on a paper recently published by a French research team. It's long been believed that babies start to learn to discriminate sounds while in the womb, given that the ear is fully developed by 23 weeks. Babies' ability to discriminate sounds can only be measured by brain scans (because you can't ask them if they can hear the difference), but you can't do a scan while the baby is still in the womb (cos you've got to stick a wee cap on their heads). So the researchers took premature newborns and carried out the usual tests for phoneme discrimination, and they found something. Not a lot, but it definitely supports the claims that language learning starts before birth.
21 February 2013
What do we learn when we learn by doing?
The Georgia Tech MOOC on online education was starting to look very interesting before it suddenly folded. On one hand, it covered a lot of interesting theoretical pedagogy. On the other, the practical pedagogy of the course itself seemed sub-optimal. I only got one week in before they closed up, though, so I can't really say all that much about it.
One academic the course has introduced me to is Roger C. Schank. Schank is an AI lecturer who later specialised in learning. His big idea is learning by doing. It's a simple and compelling idea – he claims traditional classrooms don't work because they are far too theoretical and divorced from any real “need” to learn.
There is certainly a lot of truth in this. A book of drills (whether it be arithmetic, grammar, or rote historical facts) does little to demonstrate either why the information is important or the contexts to which it is relevant.
The title of this post is lifted straight from a report Schank wrote in 1995 for his university. It's a huge piece of writing – almost thirty thousand words long – and to be perfectly honest with you, I didn't read it to the end. But why would I? It's called a “technical report”, but in truth it's little more than an oversized opinion piece. There's no technical data: he does not appeal to research, he does not appeal to figures, he just makes unsupported statements. What is most telling is that there are only 5 citations in his bibliography, and four of these are to himself.
As he argues without evidence, I feel perfectly entitled to dismantle his argument without citations. Besides, he's supposed to be an academic, and I'm just a blogger!
First up, Schank fails to address in the first 25% of his essay (which is approximately what I read) the biggest concern that has always been raised against the idea of whole-subject learning, learning by doing, or whatever label you choose to attach to the idea: lack of breadth. (A quick skim suggests that it's not addressed further down either, and if it is, it's clearly not given the prominence it deserves. Besides, as it is the single most important concern of most critics, you need to address it early or you lose our attention.) A single task will only lead to a single solution, with some limited exploration of alternative strategies. It teaches the students how to cope with a situation where they lack knowledge, rather than minimising those situations by providing them with knowledge.
There's been research into various aspects of this. I've seen reports claiming to prove schoolchildren given a whole-subject education have a much narrower set of knowledge. This should be pretty obvious, I would have thought... so maybe I'm just displaying confirmation bias and reading the stuff that supports my view.
Of course, there was the study that showed that doctors trained by case-study rather than theory performed better in diagnosing patients, but as I recall it, this was tempered by the fact that their diagnoses took a lot of time and money, because they tended to follow a systematic approach of testing for increasingly less common problems or symptoms. A doctor trained on a theory-based course was more likely to formulate a reasonable first hypothesis and start the testing process somewhere in the middle. The conclusions we can take from this are mixed. You can claim that the traditionally-trained doctor is better at diagnosing on the grounds that he can do it with less testing; or you can claim that only the end result matters, and the case-study-trained doctor is better. You can argue that minimising mistakes is the ultimate goal, or you can argue that the time taken in avoiding mistakes is too great in that it delays treatment for other patients.
So, anyway... Schank does nothing to convince me that it is possible to cover the breadth of a traditional course in learn-by-doing, but there is a video on Schank's site of a course he designed at Carnegie Mellon about ten years ago, and it alludes to what I think is the only real argument in his favour. One of the senior course staff and one or two of the students talk about the idea of forgetting everything that's been taught in a traditional course. If challenged, would that be the basis of Schank's response? The logic certainly appeals: if you're not really learning anything in a traditional academic course, the breadth of what is covered is irrelevant.
But is anything ever truly forgotten? I've recently gone back into coding after a long hiatus – even when I worked in IT, I never had any serious programming to do. But when I come up against a problem, I find myself thinking back to mostly-forgotten bits of theory from my CS degree days, and looking them up on the net. Tree-traversal algorithms, curried functions, delayed evaluation... But if I had never encountered these ideas before, how would I even know to look for them?
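To make that concrete, here's a minimal sketch of two of those ideas in Python – my own invented illustration, not anything from Schank or from my old coursework:

```python
from functools import partial
from itertools import islice

# Currying, approximated here with partial application: fix some
# arguments of a function now, supply the rest later.
def linear(a, b, x):
    return a * x + b

double_plus_one = partial(linear, 2, 1)  # a=2 and b=1 are baked in
print(double_plus_one(10))  # prints 21

# Delayed (lazy) evaluation: a generator computes values only on
# demand, so a conceptually infinite sequence costs nothing until
# you actually consume part of it.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # prints [0, 1, 2, 3, 4]
```

And that's exactly the point: nothing in day-to-day coding would ever force you to discover these techniques if you hadn't at least brushed past them in a course.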
This is not a mere theoretical problem. I'm not usually one to complain about “ivory tower academics”, but goddamn it, Schank's brought this on himself. And I quote:
"One of the places where real life learning takes place is in the
workplace, "on the job." The reason for this seems simple
enough. Humans are natural learners. They learn from everything they
do. When they watch television, they learn about the day's events.
When they take a trip, they learn about how to get where they are
going and what it is like to be there. This constant learning also
takes place as one works. If you want an employee to learn his job,
then, it stands to reason that the best way is to simply let him do
his job. Motivation is not a problem in such situations since
employees know that if they don't learn to do their job well, they
won't keep it for long.
Most employees are interested in
learning to their jobs better. One reason for this is, of course,
potential monetary rewards. But the real reason is much deeper than
that. If you do something often enough, you get better at it --
simple and obvious. When people really care about what they are
doing, they may even learn how to do their jobs better than anyone
had hoped. They themselves wonder how to improve their own
performance. They innovate. Since mistakes are often quite jarring to
someone who cares about what they are doing, people naturally work
hard to avoid them. No one likes to fail. It is basic to human nature
to try to do better and this means attempting to explain one's
failures well enough so that they can be remedied. This
self-correcting behavior can only take place when one has been made
aware of one's mistakes and when one cares enough to improve. If an
employee understands and believes that an error has been made, he
will work hard to correct it, and will want to be trained to do
better, if proper rewards are in place for a job well done. "
Many graduates, particularly CS grads, will immediately be able to identify with me when I say this isn't true. When you leave university, you typically know how to do things correctly, but once you get into the real world, “correctly” is too time-consuming, too expensive. (Except in safety-critical roles, such as avionics or military systems.) In the short term, this is OK – pragmatically, that's the way it's got to be.
In the longer term, though, things start to go haywire. We get so habituated to our way of doing things that we soon learn to identify it as the “right way” of doing things. Someone comes along with a new way, a better idea (probably a recent grad) and we dismiss the idea as wishful thinking.
In computers more than any other field, this problem is easily apparent. I remember suggesting a very simple change to a database system and being told “you can't do that with computers”. A) You can do pretty much anything with computers. B) I was talking about something that is built into the database software we were using!!! Yes, a standard feature of the software, and the team's top “expert” didn't know about it; and because he was the expert and didn't know about it, it was as though it didn't exist.
More generally, the problem becomes visible when a programmer switches languages. A C programmer who learns Python normally ends up writing his Python code using the features that most resemble those of C. The unique features of Python are designed to overcome specific problems and difficulties with a lower-level language such as C, but to the expert C coder, these aren't really “problems”, because he long ago rationalised them away and learned to cope with them. He doesn't realise there is a “problem”, so he doesn't have any reason to go looking for a solution. Even within languages, some people always do things the “hard way” because they've simply never thought to look for an “easy way”.
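As a sketch of what I mean – an invented example, assuming nothing beyond standard Python:

```python
words = ["spam", "egg", "chips"]

# C-style Python: manual index book-keeping, exactly as you'd
# walk an array in C with a counter variable.
shouted = []
i = 0
while i < len(words):
    shouted.append(words[i].upper())
    i += 1

# Idiomatic Python: the loop counter and the book-keeping vanish.
shouted = [w.upper() for w in words]
```

Both versions work, and that's precisely the trouble: the first one never fails, so its author never feels the “problem” that the second one solves.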
So Schank is being hopelessly naïve. The key feature of expertise is automaticity – experts have internalised enough that they don't have to think about what they're doing. They close their minds to learning, because there's more value in doing a job suboptimally but quickly than in doing it optimally but slowly. People need to develop breadth before they become experts – before they become “set in their ways”.
Now to answer the question that started this article: what do we learn when we learn by doing?
We learn to be adequate, not brilliant. We learn to get by, not to excel. We learn to stop thinking. We learn, paradoxically, to stop learning.
17 February 2013
Horses for courses...
Perhaps my previous pontifications on the word “course” were unconvincing, but I still think the word “course” can account for a lot of the problems in modern education.
Last year, I took a year of full-time study. My last exam was on a Friday, and the next day I found myself on a busy train home. I ended up sitting at a table with a couple of guys, and it turned out one of them was a retired tutor from a technical college. In Scotland, “college” has two meanings – either part of a higher education university, or more commonly a further education institute focusing on vocational studies.
He was working with trades – very practical, hands-on stuff like welding. He was already working there when the system of “modules” came in. A module, as the word implies, is a self-contained unit of learning, and it's a concept that is all too familiar in all areas of education. Well, this old guy described it as a disaster. The material he was working with, he said, just couldn't be broken down that way; everything was interconnected.
But that wasn't unique to his field. Less than a fortnight before, I was in a meeting with a couple of the staff from the course I'd taken discussing my feedback, and this was precisely my complaint there – modularisation.
I was studying the second year of a four-year degree scheme, and my initial concern was that I would be lacking some of the prerequisite knowledge from the first year modules. In particular, I was concerned that the module on short-form fiction would rely on literary analysis skills and terminology from the first year. But there was none of that. It was a true module – it was self-contained. No prerequisite knowledge. But that meant that (in my opinion) it sorely lacked depth, and at the end of it, I did not feel that I was ready for degree-level literary analysis courses, as would be expected of me if I continued to the third year. So in turn I concluded that the courses on offer could not be up to the standard I would expect of a third year course.
Of course, I do have unforgivingly high standards, in no small part down to the excellent quality of education I received for my first degree (at Edinburgh University).
But still, I felt that my dissatisfaction arose not because the university I was studying at was less prestigious than Edinburgh, and not because the teaching staff weren't up to scratch. No, the teaching staff were all well-educated and highly motivated, and had the benefit of an exceptionally good teacher:student ratio.
The biggest problem was simply this word “module”, this idea that skills and knowledge can be compartmentalised and packaged neatly – and this institution had fully implemented a completely modularised syllabus.
Which brings me back to the word “course”, and in particular the three ways the University of Edinburgh used it.
In my first year, I did three “courses”: Computer Science, Maths and Artificial Intelligence.
Each of these was composed of two “half-courses”, which would be the equivalent of a “module”. But the very fact that it was a “half” course made it clear that it was not a stand-alone “unit” of education, it was part of an ongoing learning path that extended through the year.
The third use of “course” was in the term “degree course”. My “degree course” was Computer Science, and everything I was taught in first and second year was a predetermined path that prepared me for the advanced study of third and fourth year. I had a much freer choice of courses in third year, and it would be tempting to call them “modules” here, but even as we made our third year choices we were all thinking about our final year, as each individual course opened up certain possibilities in the fourth year: our third year choice restricted the path – the course – our fourth year would follow.
So while it is true that not every university can be Edinburgh University, it's still useful to look at what Edinburgh does and see if it's worth copying.

Some universities pride themselves on having a wide choice of modules on offer. At Edinburgh, I had no choice whatsoever in first or second year. The university identified what I needed to know to prepare for my chosen degree, and they made sure I learned it well. (OK, there was stuff taught that I never really “needed”, but that's by-the-by.) And because we all learned the same things, when it came to third year, the lecturers all knew how much we knew, and they could build on it.

This took us further and deeper into every subject than we could have otherwise gone.
That's why the word “course” is so important. That's why we've got to know what it means, and what it doesn't mean. The loss of the word “course” has travelled hand in hand with the loss of a controlled progression in learning.
The Connectivist school of thought blames traditional schooling for students' poor ability to deal with content at higher levels of abstraction, whereas I believe it is the modularisation of the syllabus, a fairly recent development, that has caused this problem. If each semester we start on a new separate module, one that stands alone and refuses to take as given the learning outcomes of specific prerequisite courses, then we are never, as students, taught to integrate our knowledge, and we are never given the opportunity to apply that knowledge at a higher level of abstraction.
This is not a small problem, specific to certain institutions – it's a growing problem, and it is becoming written into more and more of the documents that define our education systems. Under the agreements that created the Bologna Process, students should now be free to move between institutions all over Europe, and in my English classes, I have several students who have moved from other French universities and even one who has transferred in from an Eastern European university to complete her studies in France.
The end result is a constant shallowing of our knowledge, and a sad, slow end to a once-great education system...
15 February 2013
If a MOOC isn't a course...?
After writing the post on the word “course”, I started pontificating on what a connectivist MOOC is if it's not a “course”.
So I started rewatching Cormier's video What is a MOOC. Cormier says:
After all, as he says, “it has facilitators, course materials, it has a start and an end date, it has participants”.
OK so far...
It's also:
…good so far....
And it is, “most importantly”:
Cue much philosophising on my part. Is a course an “event”? It doesn't feel like one. Is that because most courses are suboptimal, and therefore unexciting? I could accept that as a valid argument, but I personally have enjoyed many courses that didn't feel like “events” to me, so I was unconvinced.
It's an event “where people can get together and work and talk about it [a topic] in a structured way.” It certainly doesn't sound like a course.
And so it was that one morning earlier this week, I found myself lying in bed, hoping to get another half-hour nap in, with this image in my head:
I could also picture the structure of the change.mooc.ca course, which appears to consist of one guest speaker per week, and free discussion.
An event...
...where you choose what material to engage with...
...comprised of a series of guest speakers...
...where there are no assignments...
...and you network with like-minded people...
Eureka! A cMOOC is a conference.
Online conferences have been tried before – live video streaming of a series of scheduled speakers and open “seminars” on text chat – but they've never proved popular.
This, I would argue, is because they attempted to replicate the conference format lock, stock and barrel. But the conference format is a compromise – most importantly, conferences are squeezed into a short space of time because of logistics. You couldn't spread a conference over a couple of months, because people would have to travel back and forth, and it would get very expensive very quickly. This means that the seminar sessions are short and limited – you can't research your response to a talk in the toilet break between the speech itself and the seminar on the topic.
What we have in the Cormier MOOC model is the conference with those time restrictions stripped away. The talks are available wherever you are, without travel, so you can take your time to discuss and research your reactions to them. You're no longer forced to choose between two equally interesting talks just because the scheduler put them on at the same time.
So the MOOC as a “distributed conference” is far better in many respects than a physical conference. It's a good thing; it just isn't a course. And if conferences are now easier and cheaper to run, we may be able to replace some courses with conferences, but they're still not courses.
Do I feel stupid for rubbishing MOOCs previously, now that I've realised this?
No. My biggest problem developing as a teacher is that people keep suggesting techniques, methods, tools and strategies that “work well” in the classroom, but they never tell me when or why to apply them.
The MOOC guys came up with a great idea, but not only did they fail to give a clear picture of what it is, what it does, or what it replaces; they claim it's something it's not, that it does something it doesn't, and that it replaces something it is completely different from.
Labels: conference, Cormier, course, definitions, MOOC, tell me why
13 February 2013
Putting the cart before the course.
After getting into a discussion with Debbie Morrison on her blog Online Learning Insights, I came to realise that we were misunderstanding each other over the vagueness of the definition of what a MOOC really is. I was going to write a post about this, when I realised that in order to do that, I would first have to consider what the meaning of the word “course” is.
This train of thought had actually been idling in the station for a while, ever since I watched a video of a talk given by Roger Schank to staff at the World Bank. Something Schank said didn't ring true – he described the word “course” as implying a race, winners and losers. I wasn't happy with this interpretation, but it wasn't until I got into the discussion with Debbie that I realised why.
A course is not a race, although many races take place on courses. No, a course is a route, a path. A river finds its “course” to the sea.
Many in Western education look at the Eastern tradition with some sense of awe. We are told that in the East a teacher isn't a “teacher”, but “one who has walked the path”, and a student isn't a “student” but a seeker of knowledge. But the question is: do the Easterners know this? The origin of our word “course” shows that our system is built on the same philosophy, but we're not aware of this. Maybe they are blind to the meaning of their words, just as we are blind to the meaning of ours.
Now where does that leave us on the meaning of MOOC?
Well, in a rather pedantic sense, the MOOC as proposed by the people who coined the term is not a “course” at all, because there is no set path whatsoever, which is in fact part of the point of connectivist learning theory – the learning experience (for want of a better term) is driven and steered by the students, with each student finding their own path through the information presented. Cormier says in his video introduction to the idea of a MOOC that there is “no right way to do the course, no single path.” If course is synonymous with path, this is a paradox.
Is it useful to define this as “not a course”? You may not agree with me, but I think it's not only useful, but perhaps even essential.
Connectivism is just the latest combatant in an on-going ideological war between whole-subject and basic-skills teaching. Why do I call this an ideological war, and not a pedagogical one? Because neither side really has much of an argument or evidence behind it.
In the real world, basic skills and whole-subject teaching are two ends of a spectrum of teaching styles, and most teachers use different parts of the spectrum at different times.
In general, education follows a progression that starts with the aim of teaching certain well-defined basic skills, then we start to use them in more complex environments. As we progress through our education, we build levels of abstraction over those basic skills.
The question is whether we can learn those basic skills and abstract skills at the same time. I personally believe that we can, but that it is less efficient. If you put the cart before the horse, the horse can push it, but it would be more efficient pulling it behind.
But perhaps the reason that the MOOC as proposed by Dave Cormier doesn't really fit the term “course” is because what it aims to replace isn't really a “course” either. Cormier mentions “lifelong learning”, but “lifelong learning” is a term that is in itself ambiguous.
There are two sides to lifelong learning – there's what we'd traditionally call “adult education”, which can be stereotyped as evening classes offering high school or university-level classes to adults who dropped out of the education system at a young age; then there's continuing professional development (CPD) for people who are qualified and working in a degree-educated field. (Well, not two sides, per se, as there's a whole spectrum in between, but never mind.)
I really think that what Cormier did was create something that was far more orientated towards that highest level of abstraction: the qualified, experienced practitioner who already had a well-developed framework for understanding the material presented, and as I said in a previous post, at that stage most professional development takes place in seminars, not in strict “courses”.
The whole concept of informal learning is now firmly entrenched in the Scottish teaching system. Teachers are set targets of CPD points to acquire through the year. Many of these are given through traditional in-service training days and seminars, but teachers are expected to top these up with other things through personal initiative. There is a large catalogue of activities that qualify as optional points, even down to watching a television documentary on a subject related to your teaching field.
That informal online learning is effective for people for whom informal offline learning is already known to be effective should not be a surprise.
One of the biggest influences on my thinking about education was Michel Thomas. Thomas studied psychology, and he wanted to study the learning process. He reportedly chose to teach languages for a very simple reason: languages provide the best opportunity to work with a student with zero starting knowledge of the subject being taught. Eliminating the variable of prior knowledge made reaching conclusions about the effectiveness of teaching easier, he reasoned.
The style of teaching he developed is demonstrated in the courses he recorded before his death for Hodder. (He recorded courses in Spanish, French, German and Italian. The other languages released in his name are very different indeed.) They are all examples of very tight control by the teacher, offering a well-defined, clearly sign-posted learning path. As the courses he produced are live recordings with genuine students, you can observe him diverge from his preferred path as a reaction to things said by the student (for example, when one student makes a mistake conjugating a verb in Spanish and accidentally says an imperative, Thomas is forced to introduce the imperative early, because he doesn't want to tell the student that he's wrong), so there's a fair degree of flexibility, but we can see that there is a definite “course” there.
The amount that the students on the recordings pick up even in just the first two hours is quite extraordinary – I've never seen anyone else achieve similar results.
But he's working with absolute beginners, and beginners need a path to follow; they need a course. It would be a mistake, I believe, to try to use the term “course” to describe an undirected, pathless learning experience, as this leads to the conclusion that such a learning experience is a replacement for a genuine guided course, when I think it's really something that can only be done once the student has finished with courses.
12 February 2013
A letter to the Scottish Parliament...
The Scottish Parliament has a consultation open on teaching languages in the primary school, with a call for views open until this Friday, the 15th. I've responded, and as is my wont, my response is rather long and heavily opinionated. Time was I wouldn't have bothered, assuming I'd just be ignored, but I figure there's no harm in trying, so I dedicated a few hours to drafting the letter that I've included below, for readers' reference.
I am currently working as an English teacher in a French university, and have long-term plans to be a teacher of languages when I return to Scotland.
I feel the final question in the call for comments is a dangerous question, in that it inadvertently presupposes a particular policy:
- The role of languages in economic development – what languages should children be learning to benefit their future careers, and to help Scotland flourish economically?
Specifically, the question presupposes:
- that the languages should be chosen for reasons of utility, and
- that there should be a restricted list of languages.
On the first point, it is very difficult to predict what foreign languages, if any, will rise to prominence in the next twenty years; and regardless, current evidence shows that the only single language linked to international economic success is English. Attempting to find a utility measure for languages at the school level is therefore a distraction from the main goal of improving language provision.
On the second point, we risk robbing ourselves of the best possible resources we have to hand. The population of teachers in Scotland is very broad and varied, and while most are Scots-born monolinguals, there are also immigrants from various countries, as well as the second generation of immigrant families who still retain their ancestral languages in the home. The secondary syllabus has space for Community Languages in addition to Modern Foreign Languages, but the list here is very heavily restricted and the option is not available in a great number of schools.
If we look at the case of Ireland, the policy of universal Irish language teaching is widely regarded as a failure. Teachers with no real command of the language cannot teach effectively, and teachers who have no love of the language cannot motivate and enthuse the students.
In order to teach effectively at the primary level, we need teachers who are comfortable and confident in the language they are teaching, and who are teaching out of choice rather than obligation. The level of competence required cannot be achieved with schemes such as Gaelic for Learners in the Primary School. (I would comment that I have heard good feedback from several primary teachers involved in the GLPS scheme, but that at present, teachers in the scheme are a self-selecting minority, all of whom are personally motivated to work with the language. There is no reason to believe that the scheme would continue to be successful if it was imposed on unwilling or unmotivated teachers.)
My belief, therefore, is that language teaching should be encouraged and invested in, but that the choice of language should be entirely at the discretion of the teacher. If the teacher speaks Afrikaans or Aymara, Tongan or Tibetan, the teacher should be free to use that in the class, and not forced to struggle through on pidgin French or Spanish simply because the syllabus demands it.
Opportunities to build language competence in teachers
Around a year ago, I wrote to the ministry suggesting that a special effort should be made in encouraging newly-qualified teachers to take a year abroad in order to reduce the number of teachers completing their probationary year and finding themselves without work. As the provision of languages at primary level has been a goal for many years, this would be a definite career advantage to any candidates taking up such an offer.
In particular, I would draw the committee's attention to the situation in France. I am currently working as a “lecteur”. This is a junior teaching post for graduates with a minimum of 4 years of university education, hence any BEd(hons) or PGDE graduate would fulfill the entry criteria.
The problem faced by most universities is that most candidates for these posts have no knowledge or experience of teaching, so the experience for the students varies considerably from year to year.
It is therefore extremely likely that the proposal of a system which would provide a reliable supply of fully-trained teachers with at least one year of classroom experience would be welcomed by the French universities, even despite differences in class age groups.
I would propose a scheme of the following structure:
- Fully qualified and registered teachers are recruited in Scotland for posts in French universities. [1]
- An intensive summer course in teaching English, equivalent to the Cambridge CELTA or Trinity TESOL, is provided for successful candidates. [2]
- Intensive French lessons would be given to each candidate prior to the beginning of the academic year. [3]
- The yearly salary would be provided by the host institution. [4]
- Ongoing language tuition should be made available by the host institution throughout the year. [5]
I would propose the following division of costs:
- Scottish Government to pay for English-teaching tuition.
- French Government to provide intensive French tuition.
- Host universities cover the costs of ongoing tuition.
- Recruitment costs to be split between Scottish and French governments.
I believe the costs to each body are more than compensated for by the benefits.
The cost to the Scottish Government would likely be no more than the cost of providing equivalent language training alone directly. However, the experience of a year living and working in the language, at no additional cost to the Scottish public purse, would be invaluable to the teachers and hence to the Scottish education system.
France, for its part, would be taking steps to fulfill its current policies on language competence, which stipulate that language study is obligatory in almost all higher education, again at no great cost.
The only increase in cost to the host universities would be the marginal cost of ongoing language tuition, which many already offer, and which is a marginal cost against the guarantee of a well-trained, highly motivated classroom teacher, and therefore better pass rates for their students. [6]
The transition from primary to secondary
Having a truly open choice of languages at primary might seem an invitation to disaster at the secondary level, but I would argue the opposite.
Teaching at university level, my students all come from very different backgrounds, hence different high schools. The level of English varies from student to student, and the material they have learned is a result of what has been taught at their various schools. Even though they have had nominally the same education, in practice they are very different. This makes my job extremely difficult, and reduces my value to each individual student.
If the Scottish Government or local councils were to mandate that feeder primaries taught the same languages as used in the secondaries, this would reproduce the issues I currently face in all our secondary schools, as it is inevitable that different primaries would achieve different results with their pupils.
If given the choice between teaching Spanish at high school to a mixed group who had been learning Spanish since P1 at different primary schools, or a mixed group who had learned completely different languages since P1, I would choose the latter. Their previous exposure to language learning would aid them considerably in picking up Spanish, and their shared level of Spanish knowledge (zero) would mean that tasks could be designed and selected that are suitable for all. This is vitally important, because if the tasks are not suitable for all students, it is in practical terms impossible for the teacher to motivate the class, and so there is a real risk that progress in language would be halted at the transition to secondary.
I would therefore suggest that the introduction of a third language at P5 might prove to be counterproductive, and that leaving the third language until secondary would avoid difficult-to-manage mixed-level classes.
In summary, I believe that the choice of language in the primary school should be independent of the local secondary provision and based directly on individual teachers' skills and competencies, and that the best time to introduce a third language is at entry to secondary school, to avoid the situation where classes suffer due to extreme differences in previous knowledge and ability. I believe that we can also leverage the worldwide demand for high quality English teaching to help our primary teachers gain the language competence required to make primary language teaching successful.
Regards,
Níall P. Tracey
[1] I suggest this mainly for teachers after their probationary year, but there's no reason this couldn't also be offered as a sabbatical scheme to continuing teachers.
[2] A significant part of the cost of current CELTA and Trinity TESOL certificate courses is the accreditation by Cambridge and Trinity respectively. The certification is of much reduced relevance to people already holding a university-level teaching qualification, and as candidates on my proposed scheme are assumed not to be looking for further English-teaching work, putting them on an accredited course would be inappropriate. However, this scheme could make use of the excellent installed base of CELTA and TESOL trainers already in Scotland.
In the longer term it may even provide the basis for a new Scottish qualification in the teaching of English to young adults.
[3] The most practical option would be to host these centrally in a French university and have the teachers attend the course en route to their final host university.
[4] French universities are part of the public sector, so employees are considered public functionaries. Salaries are non-negotiable and set by statute.
[5] This language tuition could again be French and/or a local minority language that is taught in the university, eg Basque or Breton.
[6] Most language assessment is now performed centrally, under a French government scheme called CLES. This is intended to ensure that all universities provide equivalent standards of language tuition.
09 February 2013
Introductions...
An introduction that can only be understood after reading the whole chapter is no introduction at all.
That seems obvious, right? But how many times have you read the introduction to a chapter in a technical book and found yourself completely flummoxed by it, only to come back to it later and agree it makes perfect sense? Far too often, I'd wager, even if you're not conscious of it. But now that I've said it, you'll start to notice it more and more often....
This has always been a problem, but the much-acclaimed "democratising" effects of the internet are also "amateurising", and more and more people are attempting to write articles and books on technical topics without any real grounding in the principles of technical writing. Introductions are one of the first things to suffer, and it is becoming all too common to see an introduction that talks about something in completely opaque terms, leaving the reader more confused than if there had been no introduction at all. The writer very often acknowledges this, commenting that you shouldn't worry if you don't understand it.
My golden rule is simple:
Don't tell me something until you're ready to explain it properly. If I don't understand what you've said, you'd have been just as well saying nothing at all.
There are several factors that contribute to this problem, and some are more complex than others, so I'm going to focus on one problem alone just now: thematisation of new information confuses the reader.
What's thematisation of new information? I hear you cry. It's something I've just done to demonstrate the problem. I've been presenting a relatively coherent argument that you can follow without effort, even if you disagree with it. But then suddenly I threw in something your brain wasn't really ready to process, and you don't know what I'm talking about (unless you *do* know what I'm talking about, in which case just make believe for a while).
The "theme" of a clause or a sentence is (put simply) the first actual "thing" that the reader encounters (ie a noun phrase).
Consider the difference between:
Last week, John went to Paris.
and
John went to Paris last week.
In the first, we are talking about "last week", our theme, whereas in the second, we are talking about John. It's only a slight difference in emphasis, but it must be pretty important, otherwise we wouldn't go to the effort of maintaining the two different possible structures.
In the clause "thematisation of new information confuses the reader", the "theme" is "thematisation of new information".
I've started the clause with something most readers do not understand, and crucially I have done nothing to prepare them for it.
The conceptual gulf here is that the writer is fixated on the name of what they're trying to teach as a defining feature, rather than accepting that the name is arbitrary and effectively meaningless, and what we need to focus on is the concept and meaning.
Imagine you want to teach the idea of a biscuit tin. You've got a choice between:
"A metal container used to store biscuits is called a biscuit tin."and
"A biscuit tin is a metal container used to store biscuits."
The latter form assumes that the reader has encountered the term "biscuit tin" and is actively seeking its meaning, but the former introduces the concept before sticking a label on it. It assumes we understand "metal container", "store" and "biscuits", but so does the other one. The first one, therefore, acts as a better introduction; the second is a dictionary entry, not copy for an instructional book.
A challenge for the reader: pick up a grammar book, any grammar book, and turn to the section on the subjunctive. How many times does the word "subjunctive" appear before the reader gets any real idea of the concept? And I'm not including "implies uncertainty or doubt", because that really isn't the meaning of the subjunctive in a great many languages. (eg in Spanish creo que... implies uncertainty, but it doesn't take the subjunctive.)
In my experience all the trickiest grammar points are provided with no adequate introduction in the vast majority of cases. When an introduction includes words to the effect of "don't worry if this is unclear, it will be explained later," I find myself wanting to ask the author why he didn't just leave it until he was ready to explain it properly.
Now I feel that my last sentence needs clarification: by explaining it properly I don't mean explaining it fully, because, of course, that's not what an introduction is supposed to do. An introduction can't and shouldn't give the full detail, but it should leave you with a rough notion of what the concept is, which can be refined as it goes.
Before I started properly on Scottish Gaelic, I read a great little book: Scottish Gaelic: a brief introduction (now republished as An Introduction to Scottish Gaelic). The book was an introduction. Each chapter was merely an introduction. The book didn't attempt to teach you anything in any depth, but to provide a broad overview of the concepts in the language. It's hard to say exactly how much of an effect it had on my learning, but I certainly didn't feel like I struggled with many of the concepts when I finally came round to actually learning how to use them "in anger".
I've read several internet language learners say they like to use grammar books this way -- read through without "learning" per se, just to get a feel for the overall "shape" of the language before starting to study in earnest, so it's a shame there aren't more books written in this style.
The introduction author has two great enemies.
The first is precision. No-one wants to write things that are wrong, and any simple explanation is prone to being cut down by pedantry, which ends up leading many to start with a dry technical description, which doesn't really help the reader to understand.
The second enemy is the contents page.
No, seriously.
Think about it: the textbook has never managed to supplant lectures as the main source of study, despite being cheaper than face-to-face lessons and far more readily available. And what do all textbooks have in common? Contents pages.
What... you expected a more in-depth argument than that? Correlation isn't good enough for you? Well then, try this on for causation:
What do you get on a contents page? Quite often a single noun phrase. "The nominal group." "The subjunctive mood." "Hydrostatic load." "Currying functions." Before you even turn to the first page of the chapter you're bombarded with arbitrary, undefined terms. All new information, all theme; nothing that actually means anything.
I believe that this predisposes people to write bad introductions, because it makes them act as if the reader is looking for a definition of a given term, when often nothing could be further from the truth: the term given is merely a label of convenience to summarise a body of knowledge that the writer hopes to impart to the reader.
But now the word's up there in bold type and there's no getting away from it. It could be ignored, but wouldn't you feel a little silly actively avoiding the word used in the title of the chapter for the first page or two? The title has poisoned the chapter.
OK, I'm exaggerating grossly here. Not all books have bad introductions. Some books have meaningful chapter titles rather than simply throwing the jargon in there. But there's something about the format that encourages thinking this way, and it's something that bears conscious consideration....
That seems obvious, right? But how many times have you read the introduction to a chapter in a technical book and found yourself completely flummoxed by it, only to come back to it later and agree it makes perfect sense? Far too often, I'd wager, even if you're not conscious of it. But now that I've said it, you'll start to notice it more and more often....
This has always been a problem, but the much-acclaimed "democratising" effects of the internet are also "amateurising", and more and more people are attempting to write articles and books on technical topics without any real grounding in the principles of technical writing. Introdutions are one of the first things to suffer, and it is becoming all too common to see an introduction that talks about something in completely opaque terms, leaving the reader more confused than if there had been no introduction at all. The writer very often acknowledges this, commenting not to worry if you don't
understand it.
My golden rule is simple:
Don't tell me something until you're ready to explain it properly. If I don't understand what you've said, you'd have been just as well saying nothing at all.
There are several factors that contribute to this problem, and some are more complex than others, so I'm going to focus on one problem alone just now: thematisation of new information confuses the reader.
What's thematisation of new information? I hear you cry. It's something I've just done to demonstrate the problem. I've been presenting a relatively coherent argument that you can follow without effort, even if you disagree with it. But then suddenly I threw in something your brain wasn't really ready to process, and you don't know what I'm talking about (unless you *do* know what I'm talking about, in which case just make believe for a while).
The "theme" of a clause or a sentence is (put simply) the first actual "thing" that the reader encounters (ie a noun phrase).
Consider the difference between:
Last week, John went to Paris.
and
John went to Paris last week.
In the first, we are talking about "last week", our theme, whereas in the second, we are talking about John. It's only a slight difference in emphasis, but it must be pretty important, otherwise we wouldn't go to the effort of maintaining the two different possible structures.
In the clause "thematisation of new information confuses the reader", the "theme" is "thematisation of new information".
I've started the clause with something most readers do not understand, and crucially I have done nothing to prepare them for it.
The conceptual gulf here is that the writer is fixated on the name of what they're trying to teach as a defining feature, rather than accepting that the name is arbitrary and effectively meaningless, and what we need to focus on is the concept and meaning.
Imagine you want to teach the idea of a biscuit tin. You've got a choice between:
"A metal container used to store biscuits is called a biscuit tin."and
"A biscuit tin is a metal container used to store biscuits."
The latter form assumes that the reader has encountered the term "biscuit tin" and is actively seeking its meaning, but the former introduces it the concept before sticking a label on it. It assumes we understand "metal container", "store" and "biscuits", but so does the other one. The first one, therefore, acts as a better introduction; the second is a dictionary entry, not copy for an instructional book.
A challenge for the reader: pick up a grammar book, any grammar book, and turn to the section on the subjunctive. How many times does the word "subjunctive" appear before the reader gets any real idea of the concept? And I'm not including "implies uncertainty or doubt", because that really isn't the meaning subjunctive in a great many languages. (EG in Spanish creo que... implies uncertainty, but it doesn't take the subjunctive.)
In my experience all the trickiest grammar points are provided with no adequate introduction in the vast majority of cases. When an introduction includes words to the effect of "don't worry if this is unclear, it will be explained later," I find myself wanting to ask the author why he didn't just leave it until he was ready to explain it properly.
Now I feel that my last sentence needs clarification: by explaining it properly I don't mean explaining it fully, because, of course, that's not what an introduction is supposed to do. An introduction can't and shouldn't give the full detail, but it should leave you with a rough notion of what the concept is, which can be refined as it goes.
Before I started properly on Scottish Gaelic, I read a great little book: Scottish Gaelic: a brief introduction (now republished as An Introduction to Scottish Gaelic). The book was an introduction. Each chapter was merely an introduction. The book didn't attempt to teach you anything in any depth, but to provide a broad overview of the concepts in the language. It's hard to say exactly how much of an effect it had on my learning, but I certainly didn't feel like I struggled with many of the concepts when I finally came round to actually learning how to use them "in anger".
I've seen several internet language learners say they like to use grammar books this way -- reading through without "learning" per se, just to get a feel for the overall "shape" of the language before starting to study in earnest -- so it's a shame there aren't more books written in this style.
The introduction author has two great enemies.
The first is precision. No-one wants to write things that are wrong, and any simple explanation is prone to being cut down by pedantry, so many authors start with a dry technical description instead -- which doesn't really help the reader to understand.
The second enemy is the contents page.
No, seriously.
Think about it: the textbook has never managed to supplant lectures as the main source of study, despite being cheaper than face-to-face lessons and far more readily available. And what do all textbooks have in common? Contents pages.
What... you expected a more in-depth argument than that? Correlation isn't good enough for you? Well then, try this on for causation:
What do you get on a contents page? Quite often a single noun phrase. "The nominal group." "The subjunctive mood." "Hydrostatic load." "Currying functions." Before you even turn to the first page of the chapter you're bombarded with arbitrary, undefined terms. All new information, all theme; nothing that actually means anything.
I believe that this predisposes people to write bad introductions, because it makes them act as if the reader is looking for a definition of a given term, when often nothing could be further from the truth: the term given is merely a label of convenience to summarise a body of knowledge that the writer hopes to impart to the reader.
But now the word's up there in bold type and there's no getting away from it. It could be ignored, but wouldn't you feel a little silly actively avoiding the word used in the title of the chapter for the first page or two? The title has poisoned the chapter.
OK, I'm exaggerating grossly here. Not all books have bad introductions. Some books have meaningful chapter titles rather than simply throwing the jargon in there. But there's something about the format that encourages thinking this way, and it's something that bears conscious consideration....
08 February 2013
The misunderstandings in MOOCs
I had another article lined up and ready to go, and then I started reading about MOOCs on-line, and in particular a lot of the reaction to the total collapse of Coursera's MOOC Fundamentals of Online Education (#foemooc), one of the biggest and most revealing ironies in history.
One article quoted a law professor as follows:
“Part of what Coursera’s gotten right is that it makes more sense to build your user base first and then figure out later how to monetize it, than to worry too much at the beginning about how to monetize it,” said Edward Rock, a law professor serving as the University of Pennsylvania’s senior adviser on open course initiatives.
Is he mad, or just plain ignorant? It's this sort of thinking that leads people to compare the MOOC boom to the internet bubble at the turn of the century. After all, that bubble was based on companies building massive user bases with no idea of how to monetise them.
We are currently experiencing a similar bubble in Web 2.0. So-called investors* have ploughed money into companies like Facebook and Instagram, only to see stock prices suffer when the market suddenly realises that they have no business plan.
[* The modern stockmarket makes a mockery of the term "investor". "Investing" is supposed to mean putting money in so that a company can grow, but most high-profile IPOs, including Facebook and the like, are simply a change of ownership. The money is for the previous owner, not the business.]
Why do we want to replicate that model at a cost to our valued educational institutions?
I've commented on that side of things before, noting Udacity's move toward corporate sponsorship: MOOCs as adverts for a given vendor.
This MOOC-as-sponsorship model might even work to some extent for the universities too -- the University of Edinburgh's Coursera MOOC E-Learning and Digital Cultures (#edcmooc) is based on a module taken from one of their masters programmes, and it's possible that the increased exposure it gives the course among potential students will get back the cost of putting the MOOC version together and on-line. The risk, though, is that if too much material is available for free, the adverts will basically kill the market for the product.
One article in particular caught my attention. Over at the blog online learning insights, blogger and instructional designer Debbie Morrison wrote a post about three takeaways from the collapse of the Coursera Fundamentals of Online Education course.
Her first "takeaway" is, in my ever so humble opinion, completely and utterly wrong. Totally. Completely. Utterly. Let's take a look:
1) The instructional model is shifting to be student-centric, away from an institution or instructor-focused model.
In a massive, open and online course with thousands of students, the instructor must relinquish control of the student learning process. The instructor-focused model is counter intuitive to the idea of a MOOC; in the MOOC model the student directs and drives his or her learning. The pedagogy used for traditional courses is not applicable to a course on a massive scale. With the Web as the classroom platform, students learn by making connections with various ‘nodes’ of content [not all provided by the instructor] on the Web, they aggregate content, and create knowledge that is assessed not by the instructor, but by peers or self. This pedagogy builds upon the constructivist theory, and more recently a theory developed by Downes and Siemens, the connectivist learning model.
Wrong... how so?
Debbie Morrison clearly knows a thing or two about building online courses; unlike me, she's actually written them. But when she claims that a tight teacher-led course is "counter intuitive to the idea of a MOOC", she's projecting her views onto a reality that is distinctly different. The MOOC has its origins in the fields of Computer Science and Artificial Intelligence, and the original MOOCs were very much instructor-led, with very specific, well-defined tasks for the student to follow.
Because the tasks were very tightly controlled, marking could be automated, and hundreds of thousands of students could be individually assessed by machine.
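Just to illustrate how trivially that scales -- and this is my own toy Python sketch, not anything taken from a real MOOC platform; the answer key and submissions are invented -- marking a constrained task is a few lines of code:

    # A toy answer key and submissions, invented purely for illustration.
    ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

    def mark(submission):
        """Score one student's multiple-choice answers against the key."""
        right = sum(1 for q, a in ANSWER_KEY.items() if submission.get(q) == a)
        return right / len(ANSWER_KEY)

    submissions = [{"q1": "b", "q2": "d", "q3": "c"},
                   {"q1": "b", "q2": "d", "q3": "a"}]
    # Marking 150,000 of these is just a longer list -- no human time needed.
    print([round(mark(s), 2) for s in submissions])

The loop costs the same human effort for 150 students as for 150,000, which is the whole point.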
Morrison is mistakenly applying the standards for her own niche, the small-to-medium online course, to a very different animal.
Morrison's courses are comparable to a seminar-based course in more traditional education. It's long been accepted that seminar-based courses are better than lecture-based ones, but that they're more work for the teacher, because the uncontrolled variables lead to a massive space of potential outcomes and directions for the class.
Online education for small classes is a good realisation of the seminar-based course, because the online medium removes some of the time pressures on the instructor in terms of organisation, logistics and delivery; time which can be devoted instead to tailored group and individual feedback.
But when you've got one instructor and one teaching assistant for a class of 41,000 students (FOE) or one instructor for a class of 150,000 (claimed by Ed Tech Magazine for a Udacity course), then that's out the window. A teacher cannot be a "guide" or a "facilitator" to a group that big. A teacher cannot even have a concept of the individual students as human beings (we're well beyond Dunbar's Number here).
A massive course cannot be student-centred because there are simply too many students.
And let's go back to the seminar-style class for a minute. Seminar-based classes are far more common in taught postgrad courses than in undergrad courses, and while you might encounter them in undergrad courses, you'll only see them in degree-year modules, or in very specialised, low-intake degree schemes. As I said before, seminar-based courses are more work, but more specifically, they take a lot more time to grade. An open-ended course leads to open-ended assessment. If you have a class of 400 students, marking essays is an endless task, and it is very difficult to mark them all fairly and equally; but give them a structured exam, and marking is quick, easy and completely objective.
But more than that, the standard system provides a gradual shift in abstraction. Perhaps this is a happy accident rather than by design, but as we progress through our degree scheme, we should be building up a toolkit of useful skills and concepts that can later be applied elsewhere. As that toolkit expands, we are slowly given greater freedom to apply and think.
My opinion is, and always has been, that the MOOC must be seen as a vehicle for "basic skills" courses. Many within the educational establishment reject basic skills teaching in favour of "whole topic" teaching, but I personally believe that they've done so on the wrong grounds.
If you were to tell me that basic skills teaching is categorically bad, I would disagree with you. Basic skills are prerequisite to advanced skills.
But if instead you were to tell me that teaching basic skills properly and exhaustively is good in theory, but unfortunately not practical given class sizes and constraints on classroom time, I think I would be able to accept your point.
And this is where online can really flourish: we can take the basic skills load out of the classroom. Have the students work on the basic skills individually, using constrained tasks that a computer can assess. These types of tasks can be completed much more quickly without all the usual classroom faff of handing out sheets, explaining the task, checking up on students, going over the answers, addressing individual errors with the whole group and so on. The volume of work that is assessed or otherwise receives feedback can be increased, as the teacher's time is no longer a limiting factor.
Then the students can go into the class with the basic skills they need to address larger, more abstract, more challenging tasks. Lessons can be more rewarding for both the student and the teacher.
Debbie Morrison proposes turning a course with a five-figure class roll into a student-centred, open-ended seminar course, but that is precisely what the FOE MOOC attempted, and it was a total train wreck. Debbie's "takeaway" from the incident isn't asking us to learn from their mistakes, but to repeat them.
06 February 2013
A corpus in the class...
This week I got my first opportunity to try out corpus linguistics in the classroom. It was a class of technically-oriented students (computers and media), and it was a small enough group that I could get them all in front of a computer for a bit, so I thought I'd give it a go.
(See below for a brief summary of what corpus linguistics actually is if you don't already know.)
I didn't think to check beforehand whether they understood the concept of regular expressions (a computing term, not a linguistic one). Not a major mistake, as it turned out: they hadn't been taught regular expressions in their courses, so I ended up teaching a little bit about them. There's nothing wrong with teaching a bit of computing in an English class, as long as you're teaching it through English, after all!
I hadn't prepared enough good examples guaranteed to pull out anything interesting, as I wanted to work with whatever the students suggested. Why? Well, the whole point of corpus linguistics is that it's full of things you don't expect, and wouldn't be able to guess. With the first class, this resulted in findings so dull that I can't even remember what words we used. But in the second, someone said "amazing" (which may well have been a sarcastic reaction to my geeky enthusiasm for corpus linguistics!) and I searched for it in the British National Corpus. As I was looking at the computer screen and reading out a few of the words appearing around "amazing", I spotted a pattern: beach... bar... hotel... wait! The word "amazing" appears very frequently in adverts for package holidays. You learn something new every day.
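Incidentally, that sort of collocate-spotting is easy to automate, regular expressions and all. Here's a toy Python sketch of the idea -- my own code, nothing to do with the BNC's actual tools, and the sample sentence is invented:

    import re
    from collections import Counter

    def collocates(text, node, window=4):
        """Count the words appearing within `window` words of `node`."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for i, tok in enumerate(tokens):
            if tok == node:
                counts.update(tokens[max(0, i - window):i])   # left context
                counts.update(tokens[i + 1:i + window + 1])   # right context
        return counts

    sample = "an amazing beach bar next to an amazing hotel with amazing views"
    print(collocates(sample, "amazing").most_common(5))

Run over a real corpus rather than one invented sentence, the top of that list is where the beach... bar... hotel pattern jumps out.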
So the spontaneous examples from the class are definitely a good thing, but next time I'll have a list of other examples that show interesting results.
Overall, though, I felt the lesson went really well for a first attempt. I focused on two tasks, the first of which was shamelessly ripped off from the first assessed task I carried out with a corpus back at uni: checking the frequency of occurrence of must, have to, and 've got to in English, then drilling down to see differences in register. The second task was far more freeform and exploratory, asking them to look for common phrasal verbs. It was far more open-ended than I would usually set, and I was quite unsure of myself setting it. It worked well with the first group and not so well with the second. Basically, there wasn't enough support to kick off the phrasal verb task. I should have given them a more gradual introduction: starting with a specific phrasal verb, then asking them to find phrasal verbs with a specific verb, then verbs that go with a particular particle, and then leaving them to explore openly for the last 20 minutes or so.
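For the curious, that first frequency task boils down to something like the following Python sketch. The two one-line "corpora" here are obviously invented stand-ins for real spoken and academic data:

    import re

    def per_thousand(pattern, text):
        """Rough hits per thousand words for a regular expression."""
        words = len(re.findall(r"\S+", text))
        hits = len(re.findall(pattern, text, flags=re.IGNORECASE))
        return 1000.0 * hits / max(1, words)

    # Stand-ins for real register-specific corpora.
    corpora = {"spoken": "you've got to see it, and you really have to go",
               "academic": "researchers must account for sampling error"}

    for register, text in corpora.items():
        for form in (r"\bmust\b", r"\bhave to\b", r"'ve got to"):
            print(register, form, round(per_thousand(form, text), 1))

With proper corpora behind it, the same comparison shows the register differences the students were digging for.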
But, yeah... if I ever find myself in front of a class that sort of thing would appeal to, and where the facilities are available, I'll give it another go.
What is corpus linguistics?
"Corpus linguistics" is the analysis of a large body (corpus) of texts using computers. It allows us to search for patterns in language statistically, rather than relying on our intuition or simply trusting the grammar book. You use a piece of software called a concordancer to extract the information, and there's a great concordancer free on Brigham Young University's website, with access to several different English-language corpora, as well as Spanish and Portuguese.
05 February 2013
Groupwork in action...?
So a couple of days ago I commented on the idea of peer instruction, and noted the claim that peer explanation works because a student who has just learned something is often able to explain it to a peer. I compared this with my rant against groupwork from a year or so ago.
Now that I had a context for it, I was feeling more open to groupwork, and when I ended up in a room with fewer computers than I expected and an internet-based lesson plan, I got the chance to test my new confidence in it.
And yes, the first group seemed to get something out of it. One of the students with the best English was sitting with one of the guys with a pretty basic level, and was bringing him along nicely. Most of the pairs were working cooperatively and discovering stuff. Success!... I thought.
The second group weren't as good. Every pair seemed to have one person working and the other doing nothing. Or one person checking their email, or playing a game, or browsing the net.
But I can't say that this was due to the "groupwork" thing, because there was a rather major difference between the two groups: most of the second group bring a laptop with them, and so they weren't working on the uni desktop machines. Was it the seating arrangements, the screen size or maybe the lack of mice (everyone was using the touchpad) that made the whole thing seem more "solo" to them? Or was it simply that "my laptop" is "my territory"?
This is the group I'm most likely to experiment with in terms of teamwork, though, as they seem to be a fairly tight group and work on lots of activities together in other classes. It will be interesting to challenge my own preconceptions.
03 February 2013
MOOCs? Bah humbug!
Yesterday I got an email from the Georgia Tech online education Coursera MOOC explaining why it was important that people signed up for groups. I felt it was slightly patronising to start with, but particularly so because I received it a while after I'd clicked "Assign Me A Group". The reason I hadn't signed up for any groups was simple: the course opened on Monday and the group signup had crashed before I had a chance to look at it, and I didn't have time to look again until Saturday, because I had a very hectic working week (28 and a half student contact hours, plus planning, marking and reporting). So I'd been quite annoyed to log in on Saturday morning hoping to do my week's MOOC work, only to find that I couldn't do the homework because I wasn't in a group.
The course has now been postponed indefinitely, presumably because there were too many people in certain groups and too many people without groups. Overall, their approach seems massively shortsighted -- the original problem with group signups was that they'd done it with a Google Docs spreadsheet with wide open permissions, meaning it could all too easily get mucked up. A spreadsheet can't handle that sort of concurrency anyway.
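The fix isn't exotic, either: any system that does the "is there space?" check and the signup itself inside a single transaction avoids the problem. Here's a rough Python-and-SQLite sketch of the idea; the table and the 20-person limit are my own inventions, purely for illustration:

    import sqlite3

    # isolation_level=None puts the connection in autocommit mode, so we
    # control the transaction boundaries ourselves.
    db = sqlite3.connect("groups.db", isolation_level=None)
    db.execute("CREATE TABLE IF NOT EXISTS signups (student TEXT, grp INTEGER)")

    def assign(student, grp, limit=20):
        """Add a student to a group unless it's already full."""
        db.execute("BEGIN IMMEDIATE")  # take the write lock up front
        try:
            (n,) = db.execute("SELECT COUNT(*) FROM signups WHERE grp = ?",
                              (grp,)).fetchone()
            if n >= limit:
                db.execute("ROLLBACK")
                return False
            db.execute("INSERT INTO signups VALUES (?, ?)", (student, grp))
            db.execute("COMMIT")
            return True
        except Exception:
            db.execute("ROLLBACK")
            raise

Two simultaneous signups can't both see the same count and both squeeze in, because the second one waits for the first to finish. A shared spreadsheet gives you none of that.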
Then there were the links to the videos, which had been uploaded at way too high a resolution and took too long to download.
Overall, for a course specifically teaching online education, the organisers appear to have had a spectacularly poor grasp of the technological side of things.
I signed up for another MOOC on online education, again with Coursera, but from the University of Edinburgh: Elearning and Digital Cultures. This also opened on Monday, and I didn't get a chance to look at it until Saturday, when I saw that the instructions for the week included watching a bunch of short films before tuning into a live webcast with the tutor on Friday. They didn't give enough notice, and it really misses the point of global online education if you have to watch at a certain time, even though the segment is not interactive.
So I'm getting pretty cynical about MOOCs and the woolly thinking and poor planning surrounding them. Thank heavens, then, for Edinburgh University's Introduction to Philosophy, featuring a scruffy-looking bloke in a poorly fitting jumper talking to a camera. It's finally something I feel I can relate to. (And to think I only signed up because I saw the Edinburgh logo, and there were no other Edinburgh courses at the time!)
02 February 2013
Peer Instruction
Well that was serendipitous. Just yesterday I linked to an old post of mine called The Myth Of Groupwork. My argument against groupwork was that students don't know how to teach each other, and that even if they did, the nature of the task doesn't present teaching as the goal, so instead of working together to learn, the students work together to fill in the gaps on the sheet. A "task-focused" approach or, as I compared it to in that post, a "pub-quiz" approach to a question sheet.
But today I sat down to watch some videos from a Coursera MOOC on online education, provided by Georgia Tech (Georgia the US state, not Georgia the country). Now I'm not too impressed with the course in a lot of respects so far, but it is full of very good information that I will definitely learn from. (A fuller review may come later, when I've made a bigger dent in my workload here.)
I've just paused a video midflow to write this because the course instructor has just started talking about an idea called Peer Instruction. Apparently it was "discovered" in 1990 by a guy who "discovered" that lectures "didn't work". Peer-led learning is nothing new really, and everybody knows that lectures alone don't work. But leaving that aside...
Eric Mazur, the man credited in the video with this "discovery", did make a useful observation, even if others probably made it before him: students who had successfully learned a new concept in class were often better able than the teacher to explain it to a peer (one who had been in the same lesson but hadn't understood), because the student had just gone through the process of reasoning out the problem.
There is one intrinsic flaw in this reasoning: a good teacher should be capable of giving a better explanation than a non-expert peer, so Mazur's discovery was in effect nothing more than discovering that he wasn't a particularly good teacher. Which is a pretty good explanation of why this teaching idea was "discovered" by one of the leading lights in optical physics (>groan<... sorry) rather than by a pedagogy or education professional.
This is OK... that's how universities work, and that's why learner independence is so important in a traditional university: lecturers are subject experts, not education experts.
But it's when I compare this idea of Peer Instruction to my observations of groupwork that things start to get interesting, because the incident referred to in my previous post wasn't about something we'd just learned. It was a grammar class with a mixture of natives, long-term learners and recent learners, so, re-evaluating it from the perspective of Peer Instruction, an essential element was missing: the people who understood the concepts we were being tested on had known them long before the class started, so they didn't have the recent experience of working out the answer that successful peer instruction is based on.
One of my philosophies (and a frequent undercurrent in my posts here) is that it's safer to assume a teaching technique is bad than good until you understand how it works and when, where and why it's appropriate. Most groupwork is justified by the overly simplistic notion of "learning from your peers", but the idea of "learning from peers who have recently learned the concept" is massively more useful.
Now that I better understand the why and when of groupwork, I'm far less negative about it, but that doesn't mean I'll suddenly take it up wholeheartedly. In my situation, this is all entirely academic: my classes are at such a mixed level that the central idea of peer instruction fails -- the students who understand the concept generally understood it (at least in part) before the lesson, so they do not have the recent experience of working out how the language point works....
01 February 2013
Physician, heal thyself!
I have many criticisms of teaching and/or teachers, and this sometimes leads me to wind up other teachers. Not here, though, because I've been far too focused on my own teaching!
My criticisms aren't intended to be taken too personally, though. They're not personal failings, they're just mistakes that are pretty much endemic in teaching, so it's not really that surprising that I fall into the trap of making a lot of these mistakes myself.
So this post is a reflection on one of the errors I constantly find myself slipping into, and that I need to be on constant guard against.
A year ago I wrote about a personal experience as a learner, in a post entitled The Myth of Groupwork. My criticism of the groupwork was that it was too "task-focused", in that the students and the teacher were looking to fill in all the answers on the sheet, not to "learn" per se. The results of groupwork tasks give the teacher no insight into the level of knowledge of any individual student, and there's not normally any way of verifying that any of the lower-achieving students have actually learnt anything from the experience.
My error isn't in assigning groupwork; it's that same problem of being "task-focused". Sometimes I make the mistake of looking at the result of "the class", rather than verifying that every individual has learned from the experience.
Which is not to say that a classroom teacher can check every student's work for every task, every time. No, that's a logistical impossibility.
What I'm asking of myself is to stay mindful of any compromises I have to make between pedagogy and logistics. I'm asking myself to remember at the end of the class that I do not and cannot know whether that lesson has been a success... or at least not until exam marking day.
There's a very human tendency to start to accept the things you do as being intrinsically and unquestionably correct, even while retaining a superficial recognition that it is in fact a compromise. Once you do this, you stop trying to improve, and that's something that I never want to do. Realism forces me to accept the compromise, but optimism tells me that if I keep my eyes open, I'll find ways to slowly improve and compromise less.
What I've always aimed to do when criticising pedagogy and methodology is raise the questions and increase the readers' mindfulness of their own actions and motivations, but sometimes I've fallen into the trap of criticising more bluntly. That can be taken far too personally, and doesn't help anyone.
So I suppose that I have to be more mindful not only in the class, but when discussing classes too....