29 October 2010

The unspoken value of student feedback

I once read a cracking quote from a famous actor, but I can't remember who it was or exactly what he said.  The gist of it was that he felt the worst review he could get was one that commented on how good the acting was.

Why?  The job of the actor is to make the play seem real, to the point where even though the audience know they are in a theatre, the acting should draw their attention away from reality, only allowing them to return to it when the curtain falls or the actors look out to the audience.

Whether it's on the stage or on TV -- or even in a book -- I'm sure we've all had the experience of feeling our attention drawn out of the room we're in, and into the story.

Now, through most of my education, nobody asked me what I found most useful as a pupil or a student, but once or twice they did, and I noticed something remarkable:  I couldn't remember everything we'd done.  OK, so maybe that's not so remarkable, but what I mean is that I could remember some activities clearly, and others barely at all.  Still not remarkable?  OK, let's clear this up.

I started listening to other students' feedback.  They only mentioned a fraction of what we'd done in the class -- 10 or 15 minutes of the hour.  Where had the rest gone?

This feedback focused on things that we didn't like (things that bored us, basically) and things that we enjoyed or thought helped us learn.  But the things we said we enjoyed were, by definition, the things we had noticed.  Learning should be a natural experience.  When we are learning at our most effective, we should be so engrossed in it that we don't consciously think about what we're doing.

Surely, then, the most important part of the class is not what the students claim was effective, but what the students do not comment on at all.  The best feedback for an activity may actually be no feedback at all, paradoxically, because feedback is the sign that the student has noticed the teaching, just as the critic has noticed the acting.

The Curriculum for Excellence, the latest initiative in Scottish public education, aims to promote kids' responsibility for their own learning.  Kids are expected to reflect on their own learning, and teachers to act on pupil feedback.

But the danger is that in asking students to reflect on their own learning, we are making them focus consciously on the process, and preventing them from fully engaging in the actual learning itself.
And worse, if teachers build their classes based on student feedback, they will be building classes based on the least effective components of their teaching.

So if I'm right, Scottish education may have just shot itself in the foot... with a bazooka.
For once, I'd really like to be proven wrong.

26 October 2010

I'm a bit behind the times, as this was announced over a fortnight ago, but...

"A new draft resolution for endangered languages was launched on July 8th in the European Parliament at the meeting of the Intergroup for Traditional Minorities, National Communities and Languages by the Corsican MEP François Alfonsi (EFA/Greens)."
[http://www.gipuzkoaeuskara.net/albisteak/1286180399 also available in Basque, Spanish and French.]

The historic problem, according to the article, is that while there have been European funds available to support minority languages, truly endangered languages have been shut out because they don't have the infrastructure in place that would allow them to apply for those funds.

Shifting the goal-posts or levelling the playing field -- whichever way you look at it, this will give more languages a fair go at preparing themselves for the future.

25 October 2010

Why English is a poor international language.

English is now the international language of trade and commerce, but it's not fit for purpose.  That's not to say any other language genuinely is either.  For all the spelling quirks, inconsistent borrowings and weird pronunciations in English, the most important problem, to my mind, is the result of the natural evolution of language.

Language evolved to be spoken, for face-to-face communication.  It's only modern technology that has allowed remote communications (written and by telephone) to really take off.

Languages take advantage of the face-to-face medium implicitly.  We have three grammatical "persons".  The first is who is speaking, the second is who is being spoken to, the third is anybody else -- absolutely anybody else.

To me, this is one of the concrete physical underpinnings of language, which is not as abstract as some would like to think.

Ramachandran and Hubbard put forward the case for language as a synaesthetic phenomenon.  Even if this is overstating the case, their theory relies on the proximity of the auditory parts of the brain to the parts involved in physical movement.  Signers have often held that they are not "reading" or "writing" when they engage in a sign-language conversation, but speaking, and it has long been accepted that sign languages are genuine languages, not mere abstract codes.  Academics other than Ramachandran and Hubbard have argued that language was a series of gestures, but that they just happened to be gestures of the mouth.  Ramachandran and Hubbard merely suggest a mechanism that would allow us to perceive these gestures in the absence of visual data.

But I'm at risk of digressing here, as this theory is something I find absolutely fascinating.

The 1st, 2nd, 3rd distinction is not just about people, but also more generally about location.  Many languages have 3 words where English only has "here" and "there".  If you think about it, "here" is "where I am".  "There" is merely anywhere that I am not.  However, in Gaelic, "an seo" is where I am, "an sin" is where you are, and "an siud" is where neither of us are -- a "third place", effectively.  In older English we had "here", "there" and "yonder", and we still have remnants of this distinction in the phrase "this, that and the other".

This is where the physicality comes in.  When we talk face to face, I can point to "you" and "me" unambiguously, but third parties would be a vague wave off to one side.  Now, because you can see me, and I can see you, we know lots about who's speaking, not least of which is gender.  Very few languages encode gender in their 1st and 2nd person pronouns because it's not information that's particularly useful there.  But in the 3rd person, it helps a great deal, because it helps us categorise and reduce the number of potential candidates.  It lets us talk about 2 people without confusing them, if they happen to be of different sexes -- so he says, and she says, and he says...

But now on the internet, with text-based communications and screen-names that are often not real names and give no clues about gender, what are we to do?  In French, it's possible that someone would give themselves away by using a gender-specific adjective, but these are vanishingly rare in English.  So when I refer to someone else's comments, I often end up arbitrarily ascribing a gender to them.  And it's normally male, which often winds up women.

This alone can be sorted by using a truly ungendered language (Quechua as the new language of the internet, anyone?), but there's another problem that passes a lot of people by: in text-based conversations, "you" is also prone to misinterpretation.

Think about it.  I can't see you.  You can't see me.  Am I really talking to you?  Perhaps I'm talking to someone else, and calling him (or her!) "you".  But you think I'm talking to you, because you're seeing that word "you" and there's no reason to think it's meant for someone else.  In the physical world, you would hear me saying "you" and you would see whether I was looking at you as I said it or not.  On the internet there is no physical relationship, no "pointing", so the boundary between 2nd and 3rd person has been completely broken.  This so often leads to confusion and unnecessary offence, not just on the text-based side of the internet, but in conference calls too.

I've been on phone conferences where someone's asked a question to "you", and no-one has answered because they all think it's someone else who is being addressed.  In language classes, a teacher will start by asking "how are you?" to the whole class, and everyone will answer in turn, but on an internet tutorial, the latest joiner appears to assume that the question was addressed to him only, and starts a conversation.

A true "remote" language would have to have a radically different structure, and perhaps people would reject it.  What would it be?

Maybe it would just be a matter of collapsing second and third into one.  This is already how many Indo-European languages handle politeness.  Even in English, we are somewhat familiar with the idea of addressing the 2nd person in the 3rd person, even if only in posh restaurants and in period drama.  "Does sir want to see the menu?"  "Is sir ready to order?"

Another option would be to maintain the 1st, 2nd, 3rd distinction, but (and this takes a bit of getting your head round) remove the "you" from the 2nd person singular and require that the person's name is used instead.  "Do John want to see the menu?" "Are John ready to order?"

But what would have happened to language if humans had originally evolved in a conference call environment?  This really messes with your head.

I suspect we would have had a 3 person distinction -- 1: me, 2: anyone on the call, 3: anyone not on the call -- but augmented by some manner of direct address vs reference in the 2nd, so that I can ask a question to a person in the call and refer to something someone else in the call said without ambiguity.  So that's actually 4 persons.  I think.  My head hurts.

22 October 2010

Learning styles, teaching strategies

Anyone who has talked to me in person about learning will be aware that I don't believe in "learning styles", the idea that people have their own personal optimal way of learning.

Now I'm not saying that you shouldn't treat people differently in a classroom, because there are individual differences.  All I'm saying is that these differences are differences in past experiences, not fundamental differences in their ways of thinking.  If a new subject relies on knowledge that the student lacks, these holes must be filled before the new subject can be learned.

If we look at physics, a student that is not strong mathematically is going to have a lot of bother, because physics is built on maths.  No physics teacher is going to accept that the student is a "non-mathematical learner" and teach physics in a non-mathematical way -- this would be nonsensical and impossible.

Discussing learning styles with language learners and teachers is interesting.  I have heard and read many, many learners claiming that they need the written word because they are "visual learners".  Now most professional educators (people with degrees, not one-month teaching certificates) would decry that as a misinterpretation of what learning styles are all about, but I haven't had a solid explanation of how to apply learning style theory in a classroom.  As Einstein said, if you can't explain it clearly, you don't understand it well enough.  This leaves us with the idea that no-one understands learning styles, yet it is now part of the fabric of the modern teaching system.

So what is there instead?  I've always held that we should talk about learning strategies, that the learner should be encouraged to develop strategies appropriate to the problem at hand rather than believing he has a fixed "style" that the task must be distorted to fit.  This latter notion has always struck me as being like trying to drive screws into wood with a hammer.  It takes more time and effort, and the end result may look OK from the outside, but underneath, the structure is weaker.

But I've re-evaluated and decided that while the end-goal is learning strategies, the only thing we can really focus on is teaching strategies.  The appropriate strategy can only be known by someone familiar with the desired end-state knowledge and the learner's initial knowledge.  The course given to the student must define the optimal strategy and walk the learner through it, because it is only by doing that we learn, so it follows that in employing a strategy, we learn it.  We then hope that the learner will recognise in future where to apply the strategy he has now learnt.

Anyway, I was very glad when I was recently directed to the journal article Learning Styles: Concepts and Evidence.  The article is a very specific type of review of the published literature on learning styles: it looked at the methodology behind the experiments supporting (or otherwise) learning style theory, to see if they really proved anything.  The findings were pretty damning: the only experiments that held up to examination showed that learning styles had no discernible effect on the optimal strategy for teaching a subject.

The authors were very careful to say that this does not disprove learning style theory, but they stated quite clearly that their professional opinion was that the focus on learning styles in modern education is distracting attention and money from finding the optimal approach for the subject itself, and that there is no place for learning styles in modern education.

And they're quite right.  Even if learning styles do exist, we still don't know what they really are, and we certainly don't know how to account for them, so discussion of learning styles is the equivalent of the old clichéd philosophical debate on how many angels can dance on the head of a pin.

15 October 2010

On-line language learning -- new solutions or new problems?

Way back in 2001, Wilfried Decoo gave what I consider one of the most important lectures in the history of language learning.  The transcript has disappeared from his university site, but is still available on archive.org.  The lecture was entitled "On the mortality of language learning methods", and was a brief history of the predominant language learning methods of the 19th and 20th centuries.

Decoo said "Of all disciplines, language learning is one that is the most ignorant of its own past." He notes that of all fields in academia, language learning is unique in this regard.  Science, mathematics, literature, law... in every other field of study, the history of the subject is an integral part of the student's workload.  In fact, he goes on to say that if there is any mention of past methods during teacher training, it is usually only to say how wrong it was and to show why the new method is better.

But the picture Decoo paints is of history repeating itself.  Of "instant experts", ignorant of the past, making authoritative statements and declaring that they have discovered a new, better way to teach language, when almost invariably they're saying the same thing as thousands before them.

How will language learning progress if teachers continue to make the same mistakes, generation after generation?  Even then, things haven't progressed quite as Decoo predicted.

Almost 10 years on, the Communicative Approach is alive and well, thanks to the behemoth of the English teaching industry.  The Communicative Approach's core strength is that a native speaker needs minimal training to teach his language using it, making it the only way to fill the demand.  People involved in TEFL tend to believe and repeat the hype uncritically, and it is widely accepted as a progressive and modern approach, despite being recognised as limited and outmoded in the 1990s.

Decoo also predicted that our methods now would be led by the internet, but we're only now at the stage where internet language programs are truly becoming the mainstream.  Sites like LiveMocha offer various free and fee-paying courses, modelled loosely on the Rosetta Stone software.  Rosetta Stone itself is moving onto the net.  A new generation of electronic learning software like Hot Potatoes is giving teachers the tools to produce their own tasks quickly and easily.

But as Decoo said in 2001, the method is being led by the medium.  He said that "The irony of Internet as the new panacea is that it has less functionality compared to a well-designed CD-rom for language learning."  This is no longer true -- the internet can now do almost anything a CD-ROM can do, but in terms of access time, it is an awful lot slower.  Comparing the Rosetta Stone online demo with a demo CD of the same package, or comparing LiveMocha with anything else shows the experience to be less immediate, and I find those little delays let my brain cool off and switch off.  I get bored or impatient or both.

Decoo says that all courses pick one feature as their key selling point, and in this case that selling point is interaction with native speakers.  LiveMocha has amateur marking (which is often excellent, but equally often of little or no value), and both it and Rosetta Stone push the idea of social networking and language exchange heavily.  But this interaction is not integrated with the course design, and the courses themselves do not equip the learner with sufficient language to engage in meaningful interaction.  Talking to native speakers is an add-on, a sideshow; yet it is used as a keyword, a catchphrase, the hook to draw you in.  LiveMocha bolsters this with the supremely arrogant soundbite "Livemocha brings language learning out of the stone age".  Nice.


Hot Potatoes has a different problem.  It is a toolbox to allow teachers to make a limited set of learning tasks.  Where's the harm in that?  The moment something becomes easier, it will be done more often.  One of the tasks in Hot Potatoes is the gap-fill, which I discussed in an earlier post.  There is no guidance on when each type of task is appropriate, so there is a very real danger that tasks will be designed around the tools available rather than around educational goals.  It is a case of, as Decoo puts it, the medium making the method.

Computers offer up infinite options, yet somehow they seem to limit us more than they enable us.  It is an interesting paradox.

11 October 2010

Passive vs Active (cont) -- Assimil.

In my previous post, active vs passive, I mentioned the Assimil series of books.

I was listening to one of their courses in the bath this morning and it strikes me that they are on the right track here -- in some languages.

Each lesson in the Assimil method teaches by dialogues, and you are expected to listen to each dialogue multiple times.  This is backed up by a translation exercise that, in the course I was listening to (Le Catalan), was actually quite effective in rearranging the language taught into a new and fairly different form, but one that is still within the reader's comprehension.  Unfortunately the other book I've tried (Iniciación al Euskera) presents tasks that are impossible given the previously taught knowledge.

With all the listening, reading and relistening of the main dialogue, you're possibly only getting 5-10% of your time exposed to new comprehensible material.  The study I quoted said that even a 50-50 split of active vs passive was as effective as 100% active study.  Does this suggest that Assimil is not optimally efficient or that the fundamental differences between music and language are enough that this doesn't apply?

08 October 2010

Passive vs Active

I read an interesting article on Wired.com recently, relating to a paper in the Journal of Neuroscience.  The study looked at a small set of musical skills.  Common wisdom is that you can only learn music by performing it, which is analogous to learning language by production.  Previous studies support this, and prove that you can't learn to play music just by listening, which is analogous to language learning by practising receptive skills.

The study noted that previous research had focussed on an all-or-nothing approach -- all practice or no practice -- and instead looked at the effects of passive perception in addition to practice, and the authors claim that, timed right, it's just as effective to spend 50% of your time listening to what you've just learned.  They are, of course, talking about fairly early learners, as professional-level musicians can show remarkable skills in learning and playing new music.

This makes a great deal of sense to me -- I have always said that I can only understand something that I can say (or, in the case of language I've forgotten, could have said previously).  This doesn't mean that I ever would say it, but that it is possible in my internal model of the language.  This, I feel, is analogous to the example of musicians.

A beginner in either field has a small set of "devices" to employ (specific notes and practised combinations thereof vs specific words and practised combinations of them), and neither is going to be able to relate directly to any material that goes beyond that set.  As the learner develops in both fields, the set of devices grows, and the learner will be able to generalise to all material made up of items within their arsenal of devices.  Thus the musician who masters one style of music will be able to play any piece in that style by ear, because it is a recombination of musical devices he knows, but will not be able to do the same for a piece in a radically different style.  The language learner with high fluency and proficiency in his new language will be able to understand lots of individual sentences that he has never heard before but are composed of language he knows, but if someone says an idiomatic phrase he has never learned, he obviously can't understand it -- are you hip to my jive, man?

If this research proves to hold true for language learning as it does for musical skills, then the language teaching industry is missing a trick by producing materials that students can't understand in their entirety.  Perhaps instead students would be better off listening to things that they have just been taught to say, even if they can only say them shakily.  Some lessons in the Pimsleur method do this, repeating at the end of the lesson the dialogue that was used to teach that lesson.  Assimil teaches you from a dialogue, and then asks you to listen to the dialogue again once you've learned it all, without reading any notes, so that you understand it in its entirety.  This may be a lazy way to do it, because it doesn't really require true understanding, but many people are happy with it.  It's curious, then, that neither Teach Yourself nor Colloquial, the two biggest book-shelf brands available in the UK, gives the learner any instruction to relisten to each lesson's dialogues at the end of the lesson.  It seems like a pretty quick win -- at present the dialogue is only listened to before the presentation of the associated language points, when the learner will most likely understand almost none of it.

Personally I would like to see the post-learning listening be a new and unheard piece, but built up of the elements used earlier in the lesson.  This is easier said than done, though.  If you start from a situational point of view (Chapter 1: Greetings, Chapter 2: Meet the family etc), then you are going to be recycling the same phrases and it will be an almost identical dialogue to the one used in the lesson.  If you go from a grammar-led course, on the other hand, it's going to be difficult to find anything interesting to say, and you're likely to end up with something that sounds quite contrived.

It's a difficult one.

(NB.  The Journal of Neuroscience isn't available in my university library, so I've only read the abstract.)

01 October 2010

Sorry -- I just noticed that comments were only being allowed from people with Google accounts.  Must have been something I did to cut down on spam.  I've opened things up for now, but I'll have to keep an eye on whether I start getting spammed or not...

An important argument against the "target language only" environment.

I have never been a fan of the "target language only" environment.  I can give 101 sophisticated arguments about why I hate it, but instead I'm going to stick to a very simple one.

The other night, I was sitting in a pub, talking to a Spanish guy.  I was advising him to watch TV serieses (I'll explain this in another post soon) and I was comparing serieses and films to novels and short stories.  "Tecks," says he.  "What?" says I.  The word he was after was "texts".  He genuinely believed that a short story was "a text" and that my "short story" was an explanation, not the word.

He's not the only learner of English who uses the word "text" too much.

Why does this happen?  It's what I call Teacherese.

When we go into the target-language-only classroom, we're immediately restricted in our choice of words by what the students already know.  If we need them to know a new word, we have to teach it to them.  If we want to get onto the "good stuff" in the language, we need to avoid being distracted by classroom talk.

So what do we do about all the "articles", "stories", "poems", "letters", "extracts" and "excerpts" from various things?  We use one word to describe them: "texts".  By using the same word throughout, we make life easier for the students.

Unfortunately, this is a word that is only really used this way in technical circles -- most particularly in linguistic study.  I have had the same argument with several teachers on this, and the usual justification is that it is real English (true) and that it is appropriate to the context (in this case a language course -- also true).

However, if we accept that we learn by example and by frequency of exposure, then it follows that we are distorting the word's meaning by making it too familiar to the learner.  As for the context, this is something of a red herring, as we are now assuming that an ab initio language learner will have enough awareness of register variation to determine that the language in question is register-specific.  But this isn't the end of it, because register analysis and awareness can only arise out of exposure to multiple registers.  For many learners, the classroom is not merely a context, it is the entire world in which they interact with the target language.

The words and phrases we use as teachers have an undeniable and unignorable effect on our students' internal model of the language.  When we talk about "texts", we skew their model away from the native one.  When we say "say again", something never heard outside of language classes, we radically undermine their understanding of grammar.  When we use Latinate verbs instead of phrasal verbs or when we use the future simple instead of "going to", we are not making life easier for our students, we are teaching them incorrect English.

As Einstein said, "Everything should be made as simple as possible, but not one bit simpler."  If as simple as possible in the target language isn't simple enough, don't use the target language.