29 April 2013

Rhizomatic non-learning

I've just watched a video on "rhizomatic learning", set as part of the OU MOOC Open Education.



The "course" proceeded to ask four questions, and suggest that we answer then on our blogs.
  1. Were you convinced by rhizomatic learning as an approach?
  2. Could you imagine implementing rhizomatic learning?
  3. How might rhizomatic learning differ from current approaches?
  4. What issues would arise in implementing rhizomatic learning?
So here are my answers:

Were you convinced by rhizomatic learning as an approach?

No.  One of the most important rules of knowledge and understanding is that if you can't explain something, you don't understand it.  Cormier singularly failed to explain anything about what rhizomatic learning actually means.  He explained the roots of the analogy, but failed to explain how that maps on to pedagogy.  It is hard enough to be convinced of something that you don't understand, and it's harder still to be convinced of something that isn't understood by its leading proponents.

Could you imagine implementing rhizomatic learning?

No.  Until I actually know what it is, I have no way of picturing such a process. All I know is that it "deals with uncertainty", but all good teachers already do that.  I already try to give my students strategies for dealing with unknown language, including guiding them in determining which words are important and which can safely be ignored without losing the main thrust of the sentence.  If that's "rhizomatic learning", then the term is pretty trivial and meaningless, because it's already common practice.  If that's not rhizomatic learning, then rhizomatic learning is unnecessary.

How might rhizomatic learning differ from current approaches?

Rhizomatic learning appears to be fundamentally very similar to other recent approaches in that it builds a complex and intriguing narrative to capture the imagination and build a following, but it gives no concrete, reproducible guidelines or anything approaching "information".

What issues would arise in implementing rhizomatic learning?

Simple: you'd have to figure out what the hell they were talking about before you started.


Don't get me wrong: the philosophical notion of the rhizome is a very useful conceptual tool when analysing large bodies of data, and it gives an interesting way of looking at learning schemata, but the thing is that the rhizome is an attempt to explain the underlying conceptual structure of information, and not a model of the learning process.  It can and should inform the teaching process, and it does provide a philosophical counterpoint to a hardline belief in a single "correct" order of teaching, but this alone does not justify it as a direct model of the learning process.

In fact, Cormier doesn't even seem to talk about the central point of the rhizome paradigm: the interconnectedness of knowledge.  Instead he veers back off into networked learning, and instead of the "rhizome" representing the culture connecting various visible phenomena, it's something connecting people as nodes of information.

And this leads us to the biggest contradiction in the connectivist pedagogical ideology: Cormier talks briefly about the qualities of MOOCs (and by this he means "connectivist learning") and he talks about self-organised groups learning from each other ("the community is the syllabus").  But we naturally self-organise into groups with shared interests and philosophies.  To use an extreme example, people who believe that the Earth is flat are more likely to be members of the Flat Earth Society than members of their local astronomical society.  Their network therefore contains information that is objectively and scientifically verifiably incorrect, and the network reinforces the belief of all members.

If a course were to be written that brought together a bunch of educators who were predisposed to believe in the untested, unverifiable and barely defined theories of a bunch of educational ideologues, would we not similarly find that the "truth" within their network would be very different from the "truth" in the global network, or indeed the actual truth (as much as there is one) in the peer-reviewed studies published in scientific journals...?

13 April 2013

The myth of "no training required".

I was just watching a "slidecast" by Martin Weller, through the H817 MOOC.  I stopped.  Why? Because any time my computer did something else, I jumped out of my skin.  There was a reason for this: the volume on the recording was so low that I could only hear it with a pair of headphones on and the volume turned up to 11, and any beep, bloop or bing that my computer made was deafening.

As a side effect of having such low volume, there was a lot of hiss.

My laughter was pretty ironic when, at around 3:30, he claimed that this sort of thing needed "minimal tech skill" and suggested that you don't need training.  This is one of the pervasive myths of the internet age: "intuitive", "natural and easy", "no training required"; no matter how you word it, it's not true.

Recording volume is a perfect example of this.

I was asked by the university to lend my voice to a language course they were recording.  I brought along my field recorder, because I had a feeling the person doing the recording wouldn't have been adequately trained.  She wasn't -- not her fault, but she wasn't.  And she didn't know how to set the recording levels, so I ended up recording the session on my own equipment.  And all because the university didn't set a big enough budget for the recording... "no training required", right?  But our university has a department dedicated to that sort of thing, and you'll often see students in the corridors and the car parks with video cameras and boom mics.  But anyone who's been in a university knows that effective interdepartmental knowledge sharing is something of a pipe-dream....

A year or two ago, I was watching an Al Jazeera programme online, Living the Language: Canada: The Ktunaxa.  The Ktunaxa people were using technology to record their language and produce software to help teach it to others.  The pictures show them using an expensive-looking, high-quality microphone, but the output is pretty poor.  The following pictures demonstrate why.

First up, here's a picture taken by the film crew during an on-site recording of an elder speaking:

What you are looking at, if you've never seen the inside of an audio editor before, is a very, very, very quiet recording.  Now have a look at this:


This is the woman's post-recording editing process.  Here she has taken a very, very, very quiet recording and boosted the volume by about 5 times.

Unfortunately, when you record very, very, very quietly, you don't capture much information.  The software cannot just magically pull that information out of the ether, so instead it takes a best guess, which results in a muddy, unnatural output.
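To put rough numbers on what's lost, here's a quick sketch (illustrative values, not her actual session): a 16-bit take recorded at a fifth of full scale and boosted five times afterwards carries five times the quantisation error -- and five times the hiss -- of a properly recorded one.

```python
import numpy as np

# Sketch: compare a 16-bit tone recorded at ~1/5 of full scale then boosted 5x
# against the same tone recorded at a healthy level. (Illustrative numbers.)
rate = 16000
t = np.linspace(0, 1, rate, endpoint=False)
signal = np.sin(2 * np.pi * 220 * t)          # a clean 220 Hz tone
full_scale = 32767                            # 16-bit peak

quiet = np.round(signal * full_scale / 5)     # recorded far too quietly
boosted = quiet * 5                           # "fixed" afterwards in the editor
proper = np.round(signal * full_scale)        # recorded at a sensible level

def snr_db(reference, actual):
    noise = actual - reference
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

print(f"quiet take boosted 5x: {snr_db(signal * full_scale, boosted):.1f} dB")
print(f"properly recorded:     {snr_db(signal * full_scale, proper):.1f} dB")
# The boosted take comes out ~14 dB worse -- more than two bits of detail
# gone, before you even count the analogue hiss that gets amplified with it.
```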

Because nobody taught her how to set her levels.

And that need for education is well known.  It has been observed time and time again that an untrained user will more often than not set the volume far too low.  They know that you can "max out" a recording (red lights flash!!!) but they don't appreciate the problems of poor quality that occur when the volume is set too low.

We know this -- anyone with the slightest background in audio engineering will tell you.  And yet the "learning technologists" tell you that you don't need to be trained.

Well I'm sorry, but you do.  And that includes you, Martin Weller.

12 April 2013

Modern learning experiences... repeatability...?

In the H817 MOOC, and everything else written about connectivist teaching, there is an evident strand of frustration among the proponents of connectivism that other teachers just don't "get it" and aren't buying into this new trend.

George Siemens claims that connectivism is a pedagogy for the internet age, as opposed to everything else, which is pre-internet pedagogy crammed into an internet-shaped package.  Leaving aside the obvious criticism (that the human brain has not evolved since Tim Berners-Lee first pinged his server), the real question is whether their successes (if they were indeed successes) are repeatable.

Since the start of the H817 course, I have been trying to remember the name of a guy I read about on Slashdot over a year ago... and funnily enough I never thought to check my bookmarks, and when I went looking for another bookmark today, voilà! Michael Wesch.

Wesch proposed a model of teaching based on social media and interactions.  He did it in his classroom and had great results.  He gave talks, he wrote articles, he encouraged other people to apply his techniques.  He was a teaching technology evangelist.

But eventually he stopped evangelising his techniques because the feedback he got from other teachers was that they weren't working.  There was something missing, some kind of magic that he hadn't included in his instructions.  And of course people with completely different techniques were getting results that were as good as his.

So he's stopped evangelising.

The important thing is the connection between the teacher and the student, and that's not down to the technology.  In fact, I would say that the technology has to follow as part of the teacher's passion and way of thinking.  What does that mean?  I haven't a bloody clue.  And neither does anybody else, or that mystery -- "wonder", to use Wesch's word -- of teacher/student rapport would be formulisable, and therefore teachable.  And if it was teachable, Wesch would have been able to teach people how to teach with technology.

When discussing language learning with other learners, I have always made a strong distinction between "what you do when you are learning" and "how you learn".

What I mean by this is that when someone does a series of grammar drills from a book, we cannot say that those drills are directly causing them to learn.  In fact, for every person who appears to learn successfully from such a book, you will find another half-dozen who fail to learn from exactly the same book.  Therefore we have to conclude that looking at the book's activities only gives us a very superficial view of the learning process.  We have to attempt to analyse the difference in approach between the successful and the unsuccessful learner.

But these approaches are very poorly understood and documented and very rarely taught.  The successful language learner's natural and intuitive learning process is not available to be repeated, so the method doesn't improve.

As soon as I started training to teach English, I came to the conclusion that the same distinction affected language teaching techniques.  Everything in the how-to-teach books struck me as "what to do when you are teaching" rather than specifically "how to teach".  None of the activities really taught the language, and yet these books were written by very successful teachers.  They must have constructed sophisticated teaching styles and structures unconsciously, or their students would be failing -- it's just a shame they don't know what those structures are, or they could tell us.

The more widely I read about teaching, the more I find that this isn't specific to language teaching.  Don't get me wrong -- it's really not as bad in most fields as it is in language, but there is still a huge conceptual gap between "classroom activities"/"what I do" and "teaching"/"how I teach".

The connectivists are a prime example -- they give a list of fuzzy... I don't know, stuff; guidelines and that sort of thing, and a couple of fuzzy justifications for why it should work, but they simply do not give enough information to make it repeatable.  It's "what" not "how", "activity" not "teaching".

And right now the world is full of people trying to replicate the "MOOC", and as Siemens and Cormier are only too happy to tell us, they're doing it wrong.

Well, maybe that's because Siemens and Cormier haven't told us how to do it right.

And the most likely reason for that is simply that they do not know how to do it right.

07 April 2013

Adaptive learning systems: nothing to be afraid of.

I was meandering through various blogs last week, following various links to material that I found idly interesting.  On Stephen Downes's website, I saw his link to a blog post by David Wiley (a name getting frequent mention in the OU's H817 MOOC*).  The post discusses the dangers of adaptive learning systems -- systems that track your learning and teach you stuff.
(* Actually, I'm starting to get a little stir-crazy reading a lot of data-free opinion pieces from the same four names: Siemens, Cormier, Downes and Weller.)

Wiley's criticism is that you don't own any of the material you access, and he accuses the adaptive learning companies of exploiting our willingness to pay for services while expecting content for free.
Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other.
But how is this any different from any other "teaching" experience?  Wiley seeks to equate adaptive learning systems with textbooks, but is he right to do so?

There are several problems here:
  1. The difference between a teaching text and a reference book.
  2. Course as "teaching" vs course as "material".
  3. The need to ensure that your material is new to the student.

The difference between a teaching text and a reference book

Perhaps I'm lucky in that the subjects I have studied make a big distinction between these two categories.

In languages, you get course books, learners' books, textbooks, workbooks etc... a whole bewildering array of paper that leads you through your learning.  But when you've finished, you've got practically no need for any of it, and it sits gathering dust on your shelf because you can't bring yourself to get rid of stuff that cost so much to collect.  In the end, all you ever use is a dictionary (probably online) and a single reference grammar book.  That reference grammar book was no use to you when you started out, of course, as the examples contained far too many unfamiliar words, and the ordering of the book made it impossible to really understand anything new.

The same in computers, where a learner's book would hold your hand through the various concepts required to learn a new technology, but looking back on it later, the learner's book was never any good for looking up basic concepts (which were drawn out over entire chapters) and didn't contain enough information on the advanced concepts that would be of some use to you by this stage.  So that book goes to the second-hand shop and you buy a desk-reference or bookmark a webpage.

And yet publishers continue to attempt to sell books to suit both markets, invariably falling between two stools in the process.

A learner doesn't need access to a teaching text after the class is over, and is free to go out and buy a reference book instead.

Course as "teaching" vs course as "material".

As a teacher, I give out lots of material in class, but during the course of the lesson, the material is quickly "consumed" -- worksheets are filled in, and by the end of the day, the student has not accrued any additional work-at-home material over and above any specific homework I may set.  Do Wiley and Downes object to this?  Am I cheating my students if my material is not infinitely reusable?

Because an adaptive learning system is not an attempt to replace the textbook -- it's an attempt to replace the teacher.  A computer, in theory, is capable of producing a more individualised learning path for each student than a teacher, thanks to the computer's essentially limitless perfect memory.  I cannot remember every single difficulty each of my students has, but a computer can.

The need to ensure that your material is new to the student.

Even where there is a developed culture of sharing between teachers, every teacher holds some things back for themselves.  Why?  It's the "old standby" -- that exercise or activity that can be adapted to various levels to provide an emergency lesson when the projector breaks or the new books haven't arrived.  If you don't share that lesson, then you're safe to use it with any and every class, but if you share it with your colleagues, there's a very high chance that sooner or later you'll have a class say "we did that with Mr So-and-so", and you're stumped.

Most sharing at the local level is facilitated by having a shared syllabus -- if the lesson is for 2nd years, you're safe to do it with any 2nd year class, and unsafe to do it with any 3rd year class.  But one way or another, there has to be some way to ensure that students get tasks they've never seen before, because while a good story is no worse for being told a second time, a good lesson is destroyed by being taught a second time.

In the classroom, you can always rely on "we already did that" to let you know, and then you improvise something else, but could you do that in an adaptive learning system?  I don't really think you can.  You can't just accept any old feedback from the student (natural language processing systems aren't that sophisticated yet) and a big red button marked "already done it" would be very off-putting to the student and would make the software look really unprofessional.

No, the software has to know what you've done and what you haven't, and that means keeping control of when and how the learner accesses the material.
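As a rough illustration of the bookkeeping that implies -- a sketch with invented names, not any real product's design -- the core is just a per-learner memory of what's been served and where the stumbles were:

```python
from collections import defaultdict

# Hypothetical sketch: an adaptive picker that never serves an item twice
# and steers unseen items towards each learner's weakest topics.
class AdaptivePicker:
    def __init__(self, items):
        self.items = items                                   # [{"id": ..., "topic": ...}, ...]
        self.seen = defaultdict(set)                         # learner -> ids already served
        self.errors = defaultdict(lambda: defaultdict(int))  # learner -> topic -> error count

    def record_result(self, learner, item, correct):
        self.seen[learner].add(item["id"])
        if not correct:
            self.errors[learner][item["topic"]] += 1

    def next_item(self, learner):
        unseen = [i for i in self.items if i["id"] not in self.seen[learner]]
        if not unseen:
            return None  # out of fresh material -- but never a "we already did that" moment
        # Prefer unseen items on topics where this learner has stumbled most
        return max(unseen, key=lambda i: self.errors[learner][i["topic"]])
```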


In essence, though, I think that Downes and Wiley are objecting on ideological grounds rather than practical, pedagogical ones.  They have aligned themselves with a rather dubious view of learner-centred education where the learner makes all the choices, apparently empowering and enabling them.  Adaptive learning systems take the diametrically opposite view: that by taking the decisions away from the learner and instead presenting whatever the learner most needs or is best ready for at any given moment, the learner attains a much more complete and well learned education.

And anyway, all the evidence is on the side of the adaptive systems guys, because connectivism and the like break away from the proven techniques of spaced repetition, planned progression and learning-by-testing, which are the very foundations of adaptive learning, and replace them with an almost entirely unstructured meander through materials effectively chosen by a known non-expert (the learner), with no real "testing" of concepts.

And yet this type of vague handwavery is presented in the absence of discussion about the many known effective techniques in a course that is supposed to be part of a masters-level module.  I am appalled.

05 April 2013

H817 Activity 9: Ask a superficial question....

Everyone knows the saying: "ask a stupid question, get a stupid answer".  What's unfortunate in that old cliché is the word "stupid", because it casts an implicit judgement on the asker.  Let's take that word and replace it with "superficial".

Ask a superficial question, get a superficial answer.

Isn't that far more constructive?

Now that we've done that, let's move on to the superficial question in question, or rather the superficial learning task:
"For your blog content and other material you produce, consider which of the Creative Commons licences you would use, and justify your choice."
[Activity 9, Open Education, the Open University]
Now I had initially skipped it, dismissing it as a "stupid question", which was rather uncharitable and unconstructive of me.  But as more and more answers came through the blog aggregator from other course participants, I realised it was a very dangerous question that was leading to poorly thought-out answers.  It was a superficial question leading to superficial analysis, which was leading my fellow students to draw conclusions based on inadequate data.

Blogs are atypical

The task asks about "your blog content and other material you produce", but in the end most people ignored that second bit because it was so vague.  The only seed the question planted for active consideration was that of the blog.

A blog is not reusable.  My blog is my opinion.  You cannot sell my opinion.  You cannot modify my opinion just by editing text (that would be putting your words in my mouth).  You may "share" my opinion, but only by virtue of having the same opinion as me, not by copying my exact words (that would be putting my words in your mouth).  Moreover, because it is opinion, it is of very little value as a resource.  I do not want people copying my half-thought-out ramblings as though they have the same merit as a properly researched, reviewed, edited and professionally published article.  So there can and should be no "Creative Commons" licence on my blog.  Quote me like you would anyone else; link to my post -- fine, just don't exalt my witterings above their station.

In fact, one of the things I've always consciously tried to do when writing this blog is avoid the trap of the populist bloggers who are all buddy-buddy and effusive, and convince a lot of people that everything's good and great and exactly the way they tell it.

No, my blog is oftentimes abrasive and confrontational, because I don't want to convince anyone: instead I want to make everyone doubtful, skeptical.  I may offer suggestions aimed at resolving the doubt, but my first aim is to make readers ask themselves the right questions: deep, searching questions, not superficial ones.

I get what the course team were trying to do with the task they set -- they wanted to start with something that was relevant to all the students.  As always, though, focusing on relevance to the students distracts from relevance to the course aims.

Let's get one thing clear: a blog is not an "open education resource"!

That doesn't mean that blog copyright is entirely irrelevant -- it's just a side issue, so it was inappropriate as the central question of the task.  As a warm-up, a lead-in, it's fine.  But at no point did the task throw in anything more relevant for consideration; at no point did it say (for example):
How is the situation different for media resources such as photos, videos and music?
So let's ask ourselves that:

How is the situation different for media resources such as photos, videos and music?

A blog is inherently bound to its subject matter and its intended purpose.  Take this blog post, for example: it would be nigh-on impossible to repurpose it as an article on flower arranging, or an advert for a Mars bar.

But consider one of your wedding photos.  It can be used as a picture of your wedding, or it could be a picture of a wedding.

It could be used as the cover of an expensive bridal magazine, or the poster advertising a huge wedding convention.  Both of those are changing the "purpose" of the picture.  A flower arranger could zoom in and crop to leave just the bride and the bouquet and use it as her business card.  A little copyright notice on the card, and she'd still be adhering to CC-BY.  But it's suddenly become fundamentally dishonest -- that flower arranger didn't make your bouquet, but she's implying she did.

And let's take it a step further.  What if someone takes your photo and photoshops it... to use in an advertising campaign for "XXXL Dating" or "Ugly Singles".  Suddenly your big day has been utterly defiled by pictures that are recognisably of you with all your minor imperfections exaggerated and laid bare for everyone to see... that slight gap in your front teeth widened to the width of a cigarette, the kink in the bridge of your nose opened up like one of the bends on the Manx TT course, and your slightly droopy eyelid now hanging halfway over your pupil, as though you've got no sight in that eye at all.


OK, this discussion is all within the ground staked out by the task, but at no point were we, as students, forced to think beyond the effects of licensing on our own blogs, which are in reality little different from the blether in pubs and staffrooms the world over.

How can we start thinking about the effects of licensing of something with high value and utility when they only ask us to consider something of low value and utility?


Perhaps if they had prompted us with something a little deeper than the swimming pool in Barbie's dream house, we'd all be discussing the fundamental flaws inherent in Lawrence Lessig's original vision for, and ongoing direction of, the Creative Commons movement.

But that wouldn't do, because if this course encouraged us to look beyond the superficial, we would see that behind the veil of sophistication in this brave new open education world, there is no deeper meaning, no profound insights into the human condition or the learning process.

04 April 2013

More Maths for MOOCs!

The guys behind connectivist MOOCs seem to be against teaching atomic, well-defined concepts, which is all well and good, but I think those same concepts might inform their theories a bit better.

This morning, I received a "welcome to week 4" message from the organiser of the OU MOOC Open Education.  He comes across as a really nice guy in both email and video, which makes it a lot more difficult to criticise him, but one thing he said really caught my attention as indicative of the problems with the "informal education" model that the connectivist ideologues* profess. (* I refuse to call them theorists until they provide a more substantial scientific backing for their standpoint -- until then, it's just ideology.)

So, the quote:
"As always try to find time to connect with others. One way of doing this I've found useful is to set aside a small amount of time (15 minutes say) and just read 3 random posts from the course blog h817open.net and leave comments on the original posts."
Don't get me wrong, I commend him on this.  Many MOOC organisers take a step back and stay well clear of student contributions, for fear of getting caught up in a time sink.  No, my problem is the word "random" coupled with the number "3".

The cult of random

There is a scientific truism about randomness: when the numbers are big enough, random stuff acts predictably.  You can predict the buying patterns of a social group of thousands well enough to say that "of these 10,000 people, 8,000 will have a car", or the like.

The best examples, though, come in the realm of birthdays.

If you take the birthdays (excluding year) of the population of a country (eg the UK) and plot a graph, you'll get a smooth curve peaking in the summer months and reaching its lowest in the winter months.  Now if you take a single city in that country (eg London), you'll find a curve that is of almost indistinguishable shape, just with different numbers.  Take a single district of the city, and the curve will be a similar shape, but it will start to get "noisy" (your line will be jagged).  Decrease to a single street (make it a large one) and the pattern will be barely recognisable, although you'll probably spot it because you've just been looking at the curve on a similar scale.  Now zoom down to the level of a single house... the pattern is gone, because there aren't enough people.
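You don't need a national census to see this happening; a quick simulation sketch (with invented seasonal numbers -- the shape, not the data, is the point) shows the noise growing as the sample shrinks:

```python
import numpy as np

# Sketch: sample N birthdays from a gently seasonal distribution and watch
# the day-by-day counts get noisier as N shrinks. (Invented numbers.)
days = np.arange(365)
seasonal = 1 + 0.2 * np.sin(2 * np.pi * (days - 80) / 365)  # peaks in summer
p = seasonal / seasonal.sum()

rng = np.random.default_rng(0)
for label, n in [("country", 1_000_000), ("city", 50_000),
                 ("street", 2_000), ("house", 5)]:
    counts = np.bincount(rng.choice(365, size=n, p=p), minlength=365)
    expected = n * p
    relative_noise = np.std(counts - expected) / expected.mean()
    print(f"{label:>7} (N={n:>9,}): relative noise ~ {relative_noise:.2f}")
# The curve's shape is identical in expectation every time; only the
# sample size decides whether you can actually see it.
```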

In physics, this is the difference between life on the quantum scale and the macro scale.  Everything we touch and see is a collection of tiny units of matter or energy, and each of those units acts as an independent, unpredictable agent, but there are so many of these units that they appear to us to function as a continuous scale, describing a probability distribution like in the example of birthdays.  Why should I care whether any individual photon hits my eye if the faintest visible star in the night sky delivers 1,700 photons per second?  The computer screen I'm staring at now is probably bombarding me with billions as we speak.  The individual is irrelevant.

But massive means massive numbers, right?

I know what you're thinking -- with MOOC participants typically numbering in the thousands, these macro-scale probabitimajigs should probably cut in and start giving us predictability.  Well yes, they do, but not in the way you might expect, because MOOCs deal with both big numbers and small numbers.

Again, let's look at birthdays.

Imagine we've got 365 people in a room, and for convenience we'll pretend leap years don't exist (and also imagine that there aren't any seasonal variations in births and deaths).

What is the average number of people born on any given day?
Easy: one.

And what is the probability that there's someone born on every day of the year?
This one isn't immediately obvious, but if you know your stats, it's easy to figure out.

First we select one person at random.
He has a birthday that we haven't seen yet, so he gets a probability of 1, whatever his birthday is.
Now we have 364 people, and 364 target days out of 365.
Select person 2 -- the chance he has a birthday we haven't seen yet is 364/365.
Person 3's chance of having a birthday we haven't seen is 363/365... still high.
...
but person 363's chance is 3/365, person 364's chance is 2/365 and person 365's chance is 1/365.

To get the final probability of 1-person-per-day-of-the-year, we need to multiply these:
1 x 364/365 x 363/365 x ... x 3/365 x 2/365 x 1/365

Mathematically, that's
365! / (365^365)
or
364! / (365^364)

It's so astronomically tiny that OpenCalc refuses to calculate the answer, and the Windows Calculator app tells me it's "1.455 e-157" -- for those of you who don't know scientific notation, that "e minus 157" means the first significant digit doesn't appear until the 157th decimal place, so fully expanded, that would be:
0.000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 145 5

...unless I've miscounted my zeros, but you can see clearly that the chance of actually getting everyone with different birthdays is pretty close to zero.  It ain't gonna happen.
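If you want to check that figure without your calculator giving up on you, work in logarithms; a quick sketch:

```python
import math

# P(365 people cover all 365 days) = 365! / 365^365.
# 365! overflows a float, but its log doesn't: lgamma(366) = ln(365!).
log_p = math.lgamma(366) - 365 * math.log(365)
print(math.exp(log_p))  # ~1.455e-157
```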

A better statistician than me would be able to predict with some accuracy the number of dates with multiple birthdays, the number of dates without birthdays etc, but not which ones they would be.

The reason we have these gaps is that we have two relatively high numbers and one relatively low number.  That gives us a predictable probability distribution with predictable gaps.

Now, Martin Weller tries to look at and comment on 3 random blog posts when he connects.  Let's imagine that this was a policy that everyone adhered to without exception (neither more nor fewer).

OK, so let's imagine our MOOC has 365 students (so that we can continue using the same numbers as above) and that for each post we put up, we comment on 3.  What are the chances that every blog post gets at least one comment?

Well, this is pushing my memory a bit, cos I haven't done any serious stats since 1998, and it's all this n-choose-r stuff.  Oooh....  In fact, I can't remember how to do it exactly, but a back-of-the-envelope approximation will do: each commenter misses any given post with probability 361/364, so the chance of a given post getting no comments at all is about (1 - 3/364)^364, which is roughly e^-3, or 5%.  Multiply that across 365 posts and the probability that nobody at all gets missed is very small indeed.
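Rather than dredge up the combinatorics, a quick simulation sketch makes the point (365 students, one post each, three random comments apiece -- invented but faithful numbers):

```python
import random

# Sketch: 365 students each write one post and comment on 3 random posts
# (not their own). How often does every post get at least one comment?
STUDENTS, COMMENTS_EACH, TRIALS = 365, 3, 1000

fully_covered, total_uncovered = 0, 0
for _ in range(TRIALS):
    received = [0] * STUDENTS
    for s in range(STUDENTS):
        others = [i for i in range(STUDENTS) if i != s]
        for target in random.sample(others, COMMENTS_EACH):
            received[target] += 1
    uncovered = sum(1 for c in received if c == 0)
    total_uncovered += uncovered
    fully_covered += (uncovered == 0)

print(f"average posts with zero comments: {total_uncovered / TRIALS:.1f}")  # ~18
print(f"trials where nobody was missed:   {fully_covered} / {TRIALS}")      # ~0
```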

In order to get a reasonable chance of everybody getting at least one comment, you need to get rid of small numbers entirely, and require high volumes of feedback for each user.  But even though we're talking unfeasibly high volumes here, it still doesn't guarantee anything.

Random doesn't work!

You cannot, therefore, organise a "course" relying entirely on informal networking and probability effects -- there must be some active guidance.  This is why Coursera (founded by experts in this sort of applied statistics) uses a more formalised type of peer interaction.

The Coursera model

Coursera's model is to make peer interaction into peer review, where students have to grade and comment on 5 classmates' work in order to get a grade for their own work.  The computer doesn't do this completely randomly, though, and assigns review tasks with a view to getting 5 reviews for each and every assignment submitted.

Now in theory, their approach is perfect, because even if they don't achieve a 100% completion rate/0% dropout rate, you should be able to get something back.  However, their execution was flawed in a way which convinces me that Andrew Ng didn't write the algorithm!

You see, the peer reviews appear essentially to be dealt out like a hand of cards -- when the submission deadline is reached, all submissions are assigned out immediately.  Each submitter gets dealt 5 different assignments, each assignment is dealt out to 5 different reviewers.  It doesn't matter when you log in to do the review -- the distribution of assignments is already a fait accompli.

How did I come to this conclusion?  Well, after the very first assignment in the Berklee songwriting course, I saw a post on the course forums from someone who had received no feedback on his assignment.  I immediately checked mine: a full set of 5 responses.

Even though they had distributed the assignments for peer review in a less random fashion, they did nothing to account for the dropout rate, even though the dropout rate is reportedly predictably similar across all MOOCs -- and not only similar, but very high in the first week or two.  So statistically speaking, gaps were entirely predictable.

What Coursera should have done....

The problem was this "dealing out" in advance.  If they'd done the assignment distribution on a just-in-time basis, the gaps would have closed themselves: when a reviewer starts a review, the system should assign them the submission with the fewest completed reviews.  No-one should receive a second review until everybody has received one, and I certainly shouldn't have received 5 when some others were reporting only getting 3, 2 or even none.
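Here's a minimal sketch of what that could look like -- my guess at an approach, emphatically not Coursera's actual algorithm -- using a min-heap keyed on review count:

```python
import heapq

# Sketch: just-in-time peer-review allocation. Submissions sit in a min-heap
# keyed by how many reviews they've received, so the next reviewer always
# gets the least-reviewed submission. (Hypothetical, not Coursera's code.)
class ReviewQueue:
    def __init__(self, submission_ids):
        self.heap = [(0, sid) for sid in submission_ids]
        heapq.heapify(self.heap)

    def next_assignment(self, reviewer_id):
        # Assumes at least one submission that isn't the reviewer's own.
        skipped = []
        count, sid = heapq.heappop(self.heap)
        while sid == reviewer_id:
            skipped.append((count, sid))
            count, sid = heapq.heappop(self.heap)
        for entry in skipped:
            heapq.heappush(self.heap, entry)
        # Re-insert with an incremented count: no submission gets its (n+1)th
        # review until every submission still in play has had its nth.
        heapq.heappush(self.heap, (count + 1, sid))
        return sid
```

With something like that in place, dropouts simply never pull anything off the queue, and the reviews they would have been dealt don't sit undone -- the gaps fill themselves in as active reviewers log on.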

"Average" is always a bad measure

As a society, we've got blasé about our statistics, and we often fall back on averages, and by that I mean the "mean".  But if the average number of reviews per assignment is 4.8 of a targeted 5, that's still not success if it doesn't mean that everyone got either 4 or 5.  An average of 4.8 over 1,000 assignments is perfectly consistent with 960 people getting their full 5 reviews and 40 getting none at all.

Informal course organisation will never work, no matter how "massive" the scale, because the laws of statistics don't work like people expect them to -- they don't guarantee that everyone gets something -- they guarantee that someone gets nothing.

02 April 2013

Rosetta Stone and LiveMocha to merge!

Just a couple of seconds ago, I got an email from LiveMocha announcing they're merging with Rosetta Stone.  I doubt this is strictly true -- I reckon RS have bought them out because they were offering a product that was a little too similar and was undermining the public perception of the value of RS.

One of my big worries about LiveMocha was always about the way they relied on free translations from the public, and the risk that they could just switch your translation off as soon as it interfered with their business model.

Take, for example, when the Active English, French, Spanish, German and Italian courses came out: the free courses were just switched off (to new subscribers, anyway).  No, these weren't amateur translations, but it was still something of a warning sign.

Only a few days ago, I received another email from LiveMocha encouraging more and more translations from the "community".  But what future do they have when a large PLC with falling profits takes control of the company?  Rosetta Stone does already have a fairly wide range of languages that it wants an awful lot of money for, and a PLC has to serve the interests of its shareholders before those of the community.

I don't expect LiveMocha to survive in any real form after a tail off period of a year or two....

Reinventing square wheels.

It's amazing the number of people on the internet that promise to do something entirely new, and then retread the same "new" ground that has proved fruitless for thousands before them.

Now don't get me wrong, I think it's great that so many people are willing to experiment, and I wish I had half the confidence in myself that they have in themselves, but I just wish they'd stop and do a bit more research and a bit more reflection before jumping in and declaring that they have the method to end all methods.

I won't name any names here (although if you've seen it, you'll know which one I'm talking about), but the latest is yet another picture-word mnemonic idea.  Well sorry, but that has been done innumerable times before, and there are even many professional resources on the market based on this idea.

The idea is certainly appealing: you can understand why it should work.  But more insidiously, you're consciously aware of everything you have learned using it, making it appear far more noteworthy than it is.  Yet it's a very limited technique, as a recent literature review published in Psychological Science in the Public Interest aptly demonstrates (see section 5: The Keyword Mnemonic).  It clearly works in the short term, but appears to be a potential barrier in the long term.  Either way, it takes up a lot of time, and the more mnemonic images you have, the harder it is to sort through them.  I myself have a couple of mnemonic images for Russian stuck in my head... but I can't remember what the words mean.

Ponimatz.  A pony on a mat.  What does it mean?  I have no idea.
Patchimoo.  A Frisian cow (cos it's all patchy).  What does it mean?  I have no idea.

I can hear these words on TV and recognise them, and see the image, but I have absolutely no notion of what they mean.

The PSPI article picks up on this problem, with the idea of teaching French "dent" (tooth) with the image of a dentist holding a tooth in his pliers.  When you recall the image you have to decide whether you're looking at the dentist, the pliers, the tooth, the patient....  (Of course, "dent" was always going to be a particularly stupid word to teach that way, given that with "dentist" and "dental" in the English language, it's not a hard word to teach as a cognate.)

PSPI also states the obvious: images only lend themselves readily to certain, very concrete, concepts -- the problem with my Russian images is (if I remember correctly) that they were fairly abstract terms that don't have an obvious visual representation.

The latest "big idea" that I started talking about earlier seems to take that on board, thankfully.  The proposer suggests that he's picking some easy and memorable words to start with simply so that the learner has enough vocabulary to be able to learn grammar, based on the belief that you can't learn grammar without enough words behind you.  (Note that this is demonstrably untrue, as Michel Thomas demonstrated multiple times during his life.)  He decided from there that the trick was to use toilet humour, and teach moderately dirty words.

Unfortunately, he's fallen foul of just about every trap in the image mnemonic book.

First up, he mixes phonetic mnemonics with orthographic mnemonics.

So "pene" (penis) is phonetically twinned with "penne" (the pasta shape), but "ballena" (slender L, realised as /j/ in many dialects) is orthographically twinned with "ballet".

His mnemonics are also polluted by native-language phonology, particularly the Y- and W-glide long vowels in English.  So "borracho" (drunk), which he incidentally spelled wrong, is compared with a rat chewing.  Not only is the E blatantly the wrong vowel, but the mnemonic encourages the W-glide, something which has to be actively fought against when teaching Spanish to an English speaker, not encouraged.

This goes a step further when he equates "aburrido" (bored) with "a burrito".  As soon as I read that, I heard his accent.  But the weak US T is not the same as a Spanish D, and like the diphthongisation of vowels, the T/D distinction is something that needs to be actively taught/learned to avoid the learner falling into bad habits that are difficult to shift later on.  Worse: burrito is a Spanish word, borrowed into English, and the two words have nothing in common.  The mnemonic not only reinforces pre-existing phonemic confusion, but it also starts to mislead in terms of morphology: -ido is a past participle ending, and -ito is a diminutive.

Loan-words as a teaching device have always been fraught with difficulties, and there is one bookshelf audio course out there that makes the same mistake, teaching the -ado past participle ending in Spanish by comparison to "bravado", which we borrowed from Spanish in the first place.  The problem is, the teacher says a southern English "ado", with its weak A sound, not the harder A of Spanish, and with the W-glide O diphthong, rather than Spanish's pure O.

I was going to give this guy a break, and leave him to just run out of steam quietly, but then after all his promises of having a "new" way that would teach grammar using dirty stories and insults, he suddenly popped up again on one of the forums asking for advice on how to teach grammar.

And this is the problem with languages on the internet -- there are a million and one guys out there who have "an idea", but one idea is not enough for an entire course.  There are so many variables to think about in teaching a language that I've been running over them in my head for about 8 years now and it's only after spending hundreds of hours in front of a classroom of students that I really feel I can start to knit my "ideas" together into some sort of coherent whole.

Are ideas useless...?
No, ideas are great.  But if you have one idea, don't attempt to build an entire teaching solution around it.  Start small.  Build resources; make things that other people with other ideas may be able to stitch together into something far more useful than you alone can make.

The internet is full of lesson 1s, sometimes with a lesson 2 and occasionally even a lesson 3.  But if you never get beyond that, and your material is tied to your lesson (whether through technical means or due to your choice of licence), no-one will benefit from it.

If you try to be everything to everyone, you will fail, so why not just be something small, and let others make that something part of something big?