Showing posts with label language acquisition.

27 March 2013

Suitability of MOOCs - H817 Activity 12

The OU free MOOC Open Education set the following question as activity 12:
Before we examine MOOCs in more detail, briefly consider if the MOOC approach could be adopted in your own area of education or training. Post your thoughts in your blog and then read and comment on your peers’ postings.
Now, just which field should I address?  Computer science or language learning?  How about both?

And for now, I'll restrict myself to the type of MOOC proposed by Cormier, Siemens etc, the "connectivist" MOOC.

So I'll answer "yes" and "yes" and "no" and "no".

One of the pieces of material supporting this activity was a video interview with the aforementioned Messrs Cormier and Siemens.



What really jumped out at me was that a little over a minute into it, George Siemens basically says that the system emerged from how they were running online conferences.  Sound familiar?  Well, a few weeks ago I came to the conclusion that the MOOC had far more in common with a conference than a "course".

So it's utterly trivial to answer whether the MOOC has a place in any given field: if there are conferences in that field, a conference-type MOOC can work.

So that's "yes" and "yes".  Now onto "no" and "no".

I'll start with a quote from Isaac Asimov that I picked up from somewhere in the last week while working through blog posts on MOOCs:
“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'”
This could have been written for Web 2.0.  (No further explanation needed.)

But in the MOOC setting, it's particularly salient.  The whole idea of connectivism is to learn from each other... but we're not experts.  Everything I've read or heard from Cormier or Siemens to date seems to mention but quickly gloss over the fact that their MOOCs have focused on educational technology, a field with many informed practitioners, but no confirmed experts.  In fact, one of the papers mentioned in the disastrous Fundamentals of Online Education Coursera module described online education as being "at the buzzword stage", a thin euphemism for the fact that it's all opinion and no "knowledge".  And that's the space that conferences have always occupied: the point where we're sitting on the boundaries of the state of the art, where informed practitioners of roughly equal knowledge try to contemplate and push those boundaries.

But when there is an expert, why should we rely on the knowledge of peers, who may in fact turn out to be wrong?

Nowhere can this be more clear-cut than in the computer field (or at least "the discrete mathematics field", of which CS is a subset).

At the level of programming, there can be no subjective discussion about the best way of carrying out a given operation, because the methods can be empirically measured.  We can measure execution time, we can measure memory usage, we can measure accuracy of results.  We get a definite right and wrong answer.  Yes, we can devise collaborative experiments where we pool our resources and share our data to find out what those right and wrong answers are, and in computer science courses we often do, but that serves not to teach the answer, but to teach the process of evaluating the efficiency of an algorithm or piece of software.
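That point about empirical measurement can be sketched in a few lines of Python.  This is a toy illustration, not anything from a real syllabus: two hypothetical implementations of the same operation, checked for correctness and then timed, so the numbers, not opinions, decide between them.

```python
import timeit

# Two ways of computing the same thing: the sum of the squares
# of the first 10,000 non-negative integers.
def sum_loop():
    total = 0
    for i in range(10_000):
        total += i * i
    return total

def sum_builtin():
    return sum(i * i for i in range(10_000))

# Correctness is a definite right-or-wrong question.
assert sum_loop() == sum_builtin()

# Execution time is empirically measurable; no discussion required.
loop_time = timeit.timeit(sum_loop, number=100)
builtin_time = timeit.timeit(sum_builtin, number=100)
print(f"loop: {loop_time:.4f}s, builtin: {builtin_time:.4f}s")
```

The exercise teaches the process of evaluation, not the answer itself: which version wins may vary by machine and interpreter, but the method of deciding is the same everywhere.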

We do not generate more knowledge of how the computer works by discussing, only of how we work with it.

So there's my first "no", but this is not really specific to computer science, because in any undergraduate field, you teach/learn mostly the stable, established knowledge of the field.  Very little in an undergraduate syllabus is really open to much subjectivity in terms of knowledge, and in arts degrees, the subjectivity is restricted pretty much to the application of established knowledge.

Everyone discussing MOOCs at the moment seems to be talking about "HE" (higher education -- ie. universities) and not acknowledging that fundamental split between undergrad and postgrad.

So, having stated that no undergraduate material can follow a connectivist approach, is it still worth saying anything about language specifically?

I think so.

Because language learning, more than any other field of education, can be scuppered by overenthusiastic learners -- the biggest obstacle in any language course is the presence of learners: how can I learn a language by hanging around with a bunch of people who don't speak the language?  And yet, for most of us these courses are vital if we are ever to learn a language.

And I myself have benefited greatly from informal networks of learners offering mutual support, so why not a MOOC?  Because the informal networks I have benefited from are of vastly different levels, so there's always been someone with some level of "expertise" above you.   But once you formalise into a "course", you're suddenly encouraging a group without that differentiation; a group of roughly equivalent level.  An overly confident error pushed by one participant can become part of the group's idiolect -- a mythological rule that through the application of collective ignorance crowds out the genuine rule.  Without sufficient expert oversight, how is this ever to be corrected?

A language MOOC would most likely be of far less use than either traditional classes or existing informal methods....

14 April 2012

Ooooh... it looks like I was wrong....


For years, I've been toeing the line that non-native languages never become like native languages, and that there are two language processing mechanisms in the brain: one for the infant learner, and one for the adult learner.  This was science's best guess based on the data they had to hand: victims of "selective aphasia".  These people had suffered brain damage that had affected their abilities in languages, but not equally across languages.  The observed pattern was that all native languages would be affected more or less equally, and all non-native languages would be affected more or less equally, but the two would often be differently affected.
Well, via an article in the New York Times, I came across this paper at PLoS ONE that says otherwise.  Apparently they've been able to track the brainwaves of learners and find that there are similarities between the brainwaves of a proficient learner and patterns typical of the native.  This I find really cool.  (I wish I could afford a brainscanner so that I could measure my proficiency in terms of native-like brainwaves!!)

I'm concerned, though, that one thing they say will be overinterpreted:
Interestingly, both before and after the delay the implicitly trained group showed more native-like processing than the explicitly trained group, indicating that type of training also affects the attainment of native-like processing in the brain.
There are many people who advocate immersion from day one, and this would appear to be proof of the efficacy of the approach.  However, we are dealing with a very simple language here -- 13 words and a handful of extra inflectional morphemes, leading to a sum total of 1404 possible sentences in the language.  Even if the grammar's very different from English, there's still a very small number of "decision points" to be considered, so it is much easier to divine the meaning from the context -- this is something that just doesn't match the experience with real language.

They do, of course, admit as much in the body of the paper:
it may be that the results reported here are due to the limited size of the artificial language
Furthermore, the study says:
In the explicit training condition, participants were provided with 13.5 minutes of input of a type similar to that found in traditional grammar-focused classroom settings. Auditorily-presented metalinguistic explanations structured around word categories (e.g., nouns, verbs) were presented along with meaningful Brocanto2 phrases and sentences (which were also auditorily-presented, together with visually-presented corresponding game board configurations).
It's a shame the paper doesn't include the script.  My concern here is with the mention of "metalinguistic explanations".  Explicit instruction does not necessarily mean a lot of "metalinguistic" explanations -- yes, it will always require some, but often what starts out as an explanation overcomplicates things with unnecessary jargon and overly conscious processing.  Finely tuned explicit input can be very clear to the learner even if it looks messy on paper, and poorly tuned explicit input can be nigh-on impenetrable to the learner even if it looks neat on paper.

Which is why I object when they claim that:
A second possibility is that at end of training the explicit group's dependence on explicit, declarative memory-based knowledge resulted in the inhibition of the learning or use of procedural knowledge (see above), thus precluding anterior negativities. On this view, explicit training actually prevents, or at least slows, the development of native-like processing.
They cannot legitimately state that explicit training in particular prevents anything -- only that the training they provided did.  (Of course, if I were to see their scripts, I'd definitely find fault with them, but that could always be me rationalising away anything that disagrees with me -- classic confirmation bias.  We're all guilty of it at times.)

Anyway, despite their statements about the difference between the explicit and implicit instruction, the study does state towards the end:
the implication of declarative and procedural memory in this study is consistent with the predictions made by the declarative/procedural model for second language acquisition. This neurocognitive model posits that during L2 learning grammar initially depends largely on declarative memory, but that gradually aspects of grammar are increasingly learned and processed in procedural memory.
I am heartened to see that they haven't out-and-out supported the hardline distinction between "learning" and "acquisition" that Krashen and his ilk propose.

So it looks like I may have been wrong about the neurological differences between native and adult-learned languages, and I might still be proven wrong about the benefits of explicit vs implicit instruction, but this is not the study that proves that one.  If and when one comes along, I'll rethink my position.  Until then, I'll stick with what I currently believe.

24 March 2012

Children don't know how to repeat.

I've written about it before - it's my firm and considered opinion that the idea of trying to learn "like a child" is contrary to all the evidence and completely counterscientific.

It's something that I used to believe in, although I thought it would only work in an intensive situation, and that trying to force it into a short format for night classes or as part of a high school curriculum was doomed to failure.  But that was before I studied language at university.  The magic of immersion is pretty alluring, right up until you see the cold hard facts behind it.
Now don't get me wrong, it's impossible to set up a large scale double-blind study on child-rearing, so most of the facts are more tepid and slightly crumbly than cold and hard, but they're maybe the best we're going to get.

What research there has been is pretty much ad hoc - all we really have is a series of case studies performed by (mostly female) academic linguists as they bring their own children up.  The main finding, as my university teachers told me, was that children cannot be corrected.  They showed us several transcriptions of attempted corrections by the mums, but the most common occurrence was for the children to repeat the same thing they'd just said, "error" and all.  Don't believe me?  Have a look at this video of a parent from YouTube...

Why can't this kid say banana?  Because he hasn't learnt the word yet.  Simple.  But surely he should be able to repeat it when he hears it...?

Well no, because he hasn't learnt the fundamental components of the word banana.  A word is not a fundamental unit -- it is composed of syllables built and combined within certain constraints that vary from language to language.  "Banana" happens to be a very unusual word in English, so the rules that govern its construction won't usually be learned until fairly late on.  The traditional conclusion is that a child cannot repeat something that they wouldn't produce spontaneously; that the child develops a "theory" of language which is constantly refined by input, and that every utterance is realised in accordance with this theoretical grammar.

But people kept telling me that I didn't know what I was talking about, because I don't have kids myself.  They have kids, and they corrected them.  Why does this perception persist?

Well, have a look at this talk by MIT researcher Deb Roy.  He's a machine intelligence researcher rather than a linguist, but trying to mimic natural language is a major theme in machine intelligence.


At 4:20 onwards we get to hear Deb's son slowly progress from "gaga" to "water".  Interestingly, you can see a period of instability - he doesn't switch immediately to "water" and seems to revert to "gaga" for a while.  If there is a perception among parents that children can be corrected, it may well be because there is a zone where the child's grammar accepts both possibilities, and at this point the child presumably can be corrected, because he can spontaneously use both the incorrect and correct forms.

He goes on to point out the patterns of parent-child interaction:
And what we found was this curious phenomena, that caregiver speech would systematically dip to a minimum, making language as simple as possible, and then slowly ascend back up in complexity. And the amazing thing was that bounce, that dip, lined up almost precisely with when each word was born -- word after word, systematically. So it appears that all three primary caregivers -- myself, my wife and our nanny -- were systematically and, I would think, subconsciously restructuring our language to meet him at the birth of a word and bring him gently into more complex language.
So the natural teaching process appears to occur as and when the child is ready for the language feature in question, without the caregiver ever knowing they're doing it.

The parent's perception of correction can most likely be explained thus:
The parent attempts correction frequently.
The child generally rejects the correction.
BUT
The child accepts the correction a rare few times, when they're in the unstable zone between the incorrect and correct forms.
These few successful instances are more significant to the parent than the many unsuccessful instances, biasing the parent's memories.

Either way, children learn whether we consciously try to teach them or not.
But children will not be able to repeat "what is your name" until they have learnt question word order, possession, "what" and "name".  An adult, however, will be able to parrot them without any access to the underlying concepts whatsoever.  A child can only learn these concepts and structures through exposure as this is our only "interface" with the infant brain, but we can get direct access to an adult brain through the language the adult already has.  It is pretty trivial to demonstrate that any successful adult learner does indeed think about a new language in the abstract, regardless of the medium of instruction: just ask them a question.

The adult learner attempting to "learn like a child" will be relying on higher-order reasoning, but the immersive environment does little to prepare the material for a higher-order approach. Conscious instruction, with native language input, gives the opportunity to do it right.

(Which is not to say that it guarantees to do it right, but that's a matter of methodology....)

29 December 2011

Counterintuitive, perhaps, but sometimes it's easier to start with the harder material...


In general, whenever we teach or learn something new, we start with the easy stuff then build on to the more difficult stuff.  But this isn't always a good idea, because sometimes the easy stuff causes us to be stuck in a "good enough" situation.

When I started learning the harmonica, I learned to play with a "pucker technique", ie I covered the holes with my lips.  The alternative technique of "tongue blocking" (self-descriptive, really) was just too difficult for me as a learner.  So for a long, long time, the pucker was "good enough" and tongue blocking was too difficult for not enough reward.  It limited my technique for a good number of years, and now that I can do it, I wish I'd learnt it years ago.

The same block of effort vs reward happens in all spheres of learning.  If you learn something easy, but of limited utility, it's far too easy to just continue along doing the same old thing, and it's far too difficult to learn something new, so you stagnate.  Harmonicas, singing, swimming, skiing, mathematics, computer programming; there's always the temptation to just hack about with what you've got rather than learn a new and appropriate technique.

This problem, unsurprisingly, rears its ugly head all too often in language learning, but with language it has an altogether insidious form: the "like your native language" form.  If you've got a choice of forms, one is going to be more like your native language than the other, and this is therefore easier to learn.  Obviously, this form is going to be "good enough", and the immediate reward to the learner for learning the more difficult form (ie different from the native language) isn't enough to justify the effort.  However, in the long term, the learner who seeks mastery is going to need that form in order to understand language encountered in the real world.

The problem gets worse, though, when you're talking about dialectal forms.

Here's an example.  Continuous tenses in the Celtic languages traditionally use a noun as the head verbal element (known as the verbal noun or verb-noun).  I am at creation [of] blog post, as it were.  Because it's a noun, the concept of a "direct object" is quite alien, and instead genitives are used to tie the "object" to the verbal noun.  In the case of object pronouns, they use possessives.  I am at its creation instead of *I am at creation [of] it.  Note that the object therefore switches sides from after to before the verbal noun.

Now in Welsh, the verbal noun has become identical to the verb root, and is losing its identity as a noun.  This has led to a duplication of the object pronoun, once as a possessive, once as a plain pronoun -- effectively I am at its creation [of] it.  This really isn't a stable state, as very few languages would tolerate this sort of redundancy, and the likely end-state is that the possessive gets lost, and the more English-like form (I am at creation [of] it) will win out.  In fact, there are many speakers who already talk this way.

But for the learner, learning this newer form at the beginning is a false efficiency.  There are plenty of places where the old form is still current, so unless the learner knows for certain that they'll be spending their time in an area with the newer form, they're going to need the conservative form anyway.  To a learner who knows the conservative form, adapting to the newer form is trivially easy, but for someone who knows only the newer form, the conservative form is really quite difficult to grasp.

So teaching simple forms early risks restricting the learner's long-term potential.  While you want to make life simple for yourself or your students, make sure you're not doing them or yourself a disservice.

20 May 2011

"Say what sounds right."

Bad advice has an annoying habit of sounding like good advice, and this little phrase really is something of a wolf in sheep's clothing.  It's definitely appealing when someone points out that that's what we do in our own language.

I wouldn't argue that my end-goal isn't to be able to simply say what sounds right, but I just can't see how that end-goal affects my learning path: nothing will ever "sound right" until I've learnt it, so how can I learn by what "sounds right"?

The consequences of this are not insubstantial, because if I've not learned it yet, what's going to sound right to me?  What sounds right is something that I have learned, but this will be out of context.

Take for example the verb "start" in English.
You can "start something".
You can "start doing something".
You can "start with something".
You can "start by doing something".

If you learn only two of these, then only those two will "sound right".

Saying "what sounds right" traps you into what you know and stops you expanding your language.  What you need to do is stop and think, and use the appropriate form, even if it isn't familiar enough to "sound right" yet.

Here's another example.

French has a feature called "liaison" -- certain final consonants are silent but reappear when followed by a vowel, but only if the two words are tied together syntactically.

In the word "vous", the S is normally silent, but in the phrase "vous allez" (you go), it has a /z/ sound.
Now, the past participle of to go is "allé", which is pronounced identically to "allez", so when you ask "êtes-vous allé" (have you gone), if you go by what "sounds right", you might pronounce that /z/.  But in this question "vous" and "allé" are not syntactically bound together and liaison should not occur.
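The liaison rule described above can be modelled as a tiny decision function.  This is a toy sketch in Python -- the word list and the `liaison` function are my own hypothetical illustration, not a serious model of French phonology -- but it captures the point: the /z/ depends on syntactic binding, not just on what the ear hears.

```python
# Toy model: liaison fires only when the first word carries a latent
# final consonant, the next word begins with a vowel, AND the two
# words are syntactically bound together.
LATENT = {"vous": "z", "les": "z", "petit": "t"}  # word -> latent consonant
VOWELS = set("aeiouéè")

def liaison(word1, word2, bound):
    """Return the liaison consonant to pronounce, or None."""
    if not bound:
        return None                      # "êtes-vous allé": no liaison
    if word1 not in LATENT:
        return None
    if word2[0].lower() not in VOWELS:
        return None
    return LATENT[word1]

print(liaison("vous", "allez", bound=True))    # "vous allez" -> z
print(liaison("vous", "allé", bound=False))    # "êtes-vous allé" -> None
```

Note that "allez" and "allé" sound identical, so the third argument -- the syntax -- is doing all the work; "what sounds right" has no way to distinguish the two cases.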

The idea of "what sounds right" reaches a very messy conclusion in Scottish Gaelic.  A single syllable consisting of a schwa before a noun can be one of three things: "the", "his" or "her".  "His" always causes initial lenition (soft mutation of the first consonant) of the following noun; "her" never does.  As "the", this form can occur before masculine and feminine nouns in certain cases, and it causes initial lenition only in some of those cases and only with certain initial letters.

Many teachers suggest that you learn noun gender by "what sounds right", by agreement with the article, but your ear will be exposed to the various case-inflected forms and possessives, so what sounds right might not be "the boy" at all, but "her boy" or "his boy".

13 May 2011

Gatekrashen

One of the most popular figures in language methodology is a certain Stephen Krashen.  People love him.  People quote him.  People even refer to what he does as "research", but Krashen himself has been involved in very little research, and most of the papers he writes cite papers which refer to other papers.  And many of the papers he cites were written by him.

Krashen's view of teaching is massively oversimplistic, and unfortunately that appeals to people.

I've had a pop at Krashen in the past (in the latter part of my post Expository vs Naturalistic language), and I'm not the only one.

The journalist Jill Stewart got stuck into him over a dozen years ago about bilingual education in her Los Angeles Times article Krashen Burn.  In it, she attacked his views on the education of Spanish speakers in the USA as being not only contrary to the evidence from teaching practice, but also diametrically opposed to the principles he professes for adult second language acquisition, even though he suggests adults should learn "like children".

In fact, Krashen's theories are so all-pervasive that Timothy Mason, when working as an English teacher trainer in the Université de Versailles St Quentin, dedicated most of a semester-long degree-level course to deconstructing Krashen's claims and rebutting them with references to real research and other academic opinions.  And he's now put the lectures on-line for all to enjoy. [Edit 2015-07-30: the pages appear to have disappeared from the site, but are available via archive.org's wayback machine.]

As I said, Krashen's theories are popular because their apparent simplicity appeals to both the teacher and the learner.  But more insidiously, Krashen claims there is really no such thing as "learning" a language, saying instead that you "acquire" it.  Now logically, if there's no such thing as "learning", then there can be no such thing as "teaching".  This is particularly appealing for the teacher, because if there's no such thing as "teaching", there is no such thing as "bad teaching"; instead, we have the idea that the teacher gives the student the opportunity to acquire the language, and can't really be blamed if the student doesn't take it.

(Actually, can we still call the language learner a "student"...?  Or even a "language learner"...?  If there is no learning, there is no study, so surely the appropriate word is "acquirer"?  This may seem like a simple game of semantics, but my understanding of the word "student" is someone who actually works at learning.  By continuing to use the term "student", Krashen's followers risk unconsciously passing all the blame for failed learning to the students.)

I understand all this, yet I am still baffled as to why the gaping flaws in Krashen's reasoning are so hard to point out to people.

Krashen says we don't learn by production, we learn by listening and understanding.  Yet it is self-evident that this simply isn't enough in the real world.  Think of any immigrants you know.  All over the UK we have had wave after wave of immigration, particularly since the second world war.  There are loads of people who have lived here since the 70s who still haven't "acquired" English to a native-like level, with continued native-language interference.

As Timothy Mason points out, choosing not to correct mistakes "may be seen as a perfectly rational judgement on the part of the learner, who decides that any further investment in perfecting his grasp of the L2 will not pay sufficient dividends in added communicative and social power."

But it's more than this.  Certain errors cost more to fix at a later date than others.  I addressed one of these a few weeks ago: the falling together of phonemes.

Using the same example as in my previous post, we can predict that a French-speaking beginner in English will have difficulty distinguishing T from unvoiced TH, and D from voiced TH.  We can intervene and make them see the distinction, but only through production.  There is simply too much redundancy in language to be able to force the student to need to discriminate.  Word pairs such as "this" and "diss" differ so much in usage that discriminating the phonemes is very rarely going to be required to comprehend the sentence.

And as I said in the earlier post, my French friend can pronounce all the phonemes of English (with a bit of an accent) and he can even hear them when he listens for them, but they are missing from his model of the language, and in order to learn to pronounce every word correctly, he would have to relearn all his vocabulary.  He can function perfectly well in English, so learning correctly would certainly not pay "sufficient dividends" compared to the time he would have to invest, so it is "perfectly rational" that he doesn't.

The same goes for many people's grammar.  Little errors like missing the word "to" from "need to" rarely confuse anyone, and so there is no impetus for correction.

Current thinking is that it is sufficient to focus on survival language and "getting the message across" without any concern for accuracy.  Accuracy, they tell us, is an advanced skill for advanced students, and isn't worth the effort for most people, who just want to have a nice holiday.

But accuracy must be an early focus or it will never be achieved.  Grammatical and phonological accuracy is cheap and easy to start with, and gets more difficult and expensive the later it's left. 

So we have to teach, and we have to teach accuracy, or the student will never achieve it.