Showing posts with label #h817open. Show all posts

29 April 2013

Rhizomatic non-learning

I've just watched a video on "rhizomatic learning" set as part of the OU MOOC Online Education.



The "course" proceeded to ask four questions, and suggest that we answer then on our blogs.
  1. Were you convinced by rhizomatic learning as an approach?
  2. Could you imagine implementing rhizomatic learning?
  3. How might rhizomatic learning differ from current approaches?
  4. What issues would arise in implementing rhizomatic learning?
So here are my answers:

Were you convinced by rhizomatic learning as an approach?

No.  One of the most important rules of knowledge and understanding is that if you can't explain something, you don't understand it.  Cormier singularly failed to explain anything about what rhizomatic learning actually means.  He explained the roots of the analogy, but failed to explain how that maps on to pedagogy.  It is hard enough to be convinced of something that you don't understand, and it's harder still to be convinced of something that isn't understood by its leading proponents.

Could you imagine implementing rhizomatic learning?

No.  Until I actually know what it is, I have no way of picturing such a process. All I know is that it "deals with uncertainty", but all good teachers already do that.  I already try to give my students strategies to deal with unknown language, including guiding them to understanding how to determine what is an important word, and which words can safely be ignored without losing the main thrust of the sentence.  If that's "rhizomatic learning", then the term is pretty trivial and meaningless, because it's already common practice.  If that's not rhizomatic learning, then rhizomatic learning is unnecessary.

How might rhizomatic learning differ from current approaches?

Rhizomatic learning appears to be fundamentally very similar to other recent approaches in that it builds a complex and intriguing narrative to capture the imagination and build a following, but it gives no concrete, reproducible guidelines or anything approaching "information".

What issues would arise in implementing rhizomatic learning?

Simple: you'd have to figure what the hell they were talking about before you started.


Don't get me wrong: the philosophical notion of the rhizome is a very useful conceptual tool when analysing large bodies of data, and it gives an interesting way of looking at learning schemata, but the thing is that the rhizome is an attempt to explain the underlying conceptual structure of information, and not a model of the learning process.  It can and should inform the teaching process, and it does provide a philosophical counterpoint to a hardline belief in a single "correct" order of teaching, but this alone does not justify it as a direct model of the learning process.

In fact, Cormier doesn't even seem to talk about the central point of the rhizome paradigm: the interconnectedness of knowledge.  Instead he veers back off into networked learning, and instead of the "rhizome" representing the culture connecting various visible phenomena, it's something connecting people as nodes of information.

And this leads us to the biggest contradiction in the connectivist pedagogical ideology: Cormier talks briefly about the qualities of MOOCs (and by this he means "connectivist learning") and he talks about self-organised groups learning from each other ("the community is the syllabus").  But we naturally self-organise into groups with shared interests and philosophies.  To use an extreme example, people who believe that the Earth is flat are more likely to be members of the Flat Earth Society than members of their local astronomical society.  Their network therefore contains information that is objectively and scientifically verifiably incorrect, and the network reinforces the belief of all members.

If a course were to be written that brought together a bunch of educators who were predisposed to believe in the untested, unverifiable and barely defined theories of a bunch of educational ideologues, would we not similarly find that the "truth" within their network would be very different from the "truth" in the global network, or indeed the actual truth (as much as there is) in the peer-reviewed studies published in scientific journals...?

13 April 2013

The myth of "no training required".

I was just watching a "slidecast" by Martin Weller, through the H817 MOOC.  I stopped.  Why? Because any time my computer did something else, I jumped out of my skin.  There was a reason for this: the volume on the recording was so low that I could only hear it with a pair of headphones on and the volume turned up to 11, and any beep, bloop or bing that my computer made was deafening.

As a side effect of having such low volume, there was a lot of hiss.

My laughter was pretty ironic when at around 3:30 he claimed that this sort of thing needed "minimal tech skill" and suggested that you don't need training.  This is one of the pervasive myths of the internet age: "intuitive", "natural and easy", "no training required"; no matter how you word it, it's not true.

Recording volume is a perfect example of this.

I was asked by the university to lend my voice to a language course they were recording.  I brought along my field recorder, because I had a feeling the person doing the recording wouldn't have been adequately trained.  She wasn't -- not her fault, but she wasn't.  And she didn't know how to set the recording levels, so I ended up recording the session on my own equipment.  And all because the university didn't set a big enough budget for the recording... "no training required", right?  But our university has a department dedicated to that sort of thing, and you'll often see students in the corridors and the car parks with video cameras and boom mics.  But anyone who's been in a university knows that effective interdepartmental knowledge sharing is something of a pipe-dream....

A year or two ago, I was watching an Al Jazeera programme online, Living the Language: Canada: The Ktunaxa.  The Ktunaxa people were using technology to record their language and produce software to help teach it to others.  The pictures show them using an expensive-looking, high-quality microphone, but the output is pretty poor.  The following pictures demonstrate why.

First up, here's a picture taken by the film crew during an on-site recording of an elder speaking:

What you are looking at, if you've never seen the inside of an audio editor before, is a very, very, very quiet recording.  Now have a look at this:


This is the woman's post-recording editing process.  Here she has taken a very, very, very quiet recording and boosted the volume by about 5 times.

Unfortunately, when you record very, very, very quietly, you don't capture much information.  The software cannot just magically pull that information out of the ether, so instead it takes a best guess, which results in a muddy, unnatural output.
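For the technically curious, the loss is easy to demonstrate with a toy sketch (assuming 16-bit quantisation and a pure sine test tone -- no real audio files involved): a signal recorded far below full scale is captured with far fewer quantisation levels, and boosting afterwards amplifies the error right along with the signal.

```python
import math

def snr_db(level, n=1000):
    """Quantise a sine recorded at `level` (fraction of full scale) to
    16-bit samples, and return the signal-to-quantisation-noise in dB."""
    sig_power = err_power = 0.0
    for i in range(n):
        x = level * math.sin(2 * math.pi * 5 * i / n)  # the ideal signal
        q = round(x * 32767) / 32767                   # what 16-bit capture keeps
        sig_power += x * x
        err_power += (x - q) ** 2                      # detail lost at capture time
    return 10 * math.log10(sig_power / err_power)

# Boosting afterwards scales signal and error alike, so this ratio --
# the "muddiness" -- is fixed for good at recording time.
print(f"good level: {snr_db(0.5):.0f} dB; far too quiet: {snr_db(0.001):.0f} dB")
```

The quiet recording ends up tens of decibels noisier, and no post-hoc volume boost can claw that back.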

Because nobody taught her how to set her levels.

And that need for education is well known.  It has been observed time and time again that an untrained user will more often than not set the volume far too low.  They know that you can "max out" a recording (red lights flash!!!) but they don't appreciate the problems of poor quality that occur when the volume is set too low.

We know this -- anyone with the slightest background in audio engineering will tell you.  And yet the "learning technologists" tell you that you don't need to be trained.

Well I'm sorry, but you do.  And that includes you, Martin Weller.

12 April 2013

Modern learning experiences... repeatability...?

In the H817 MOOC, and everything else written about connectivist teaching, there is an evident strand of frustration among the proponents of connectivism that other teachers just don't "get it" and aren't buying into this new trend.

George Siemens claims that connectivism is a pedagogy for the internet age, as opposed to everything else, which is pre-internet pedagogy crammed into an internet-shaped package.  Leaving aside the obvious criticism (that the human brain has not evolved since Tim Berners-Lee first pinged his server), the real question is whether their successes (if they were indeed successes) are repeatable.

Since the start of the H817 course, I have been trying to remember the name of a guy I read about on Slashdot over a year ago... and funnily enough I never thought to check my bookmarks, and when I went looking for another bookmark today, voilà! Michael Wesch.

Wesch proposed a model of teaching based on social media and interactions.  He did it in his classroom and had great results.  He gave talks, he wrote articles, he encouraged other people to apply his techniques.  He was a teaching technology evangelist.

But eventually he stopped evangelising his techniques because the feedback he got from other teachers was that they weren't working.  There was something missing, some kind of magic that he hadn't included in his instructions.  And of course people with completely different techniques were getting results that were as good as his.

So he's stopped evangelising.

The important thing is the connection between the teacher and the student, and that's not down to the technology.  In fact, I would say that the technology has to follow as part of the teacher's passion and way of thinking.  What does that mean?  I haven't a bloody clue.  And neither does anybody, or that mystery -- "wonder" in Wesch's word -- of teacher/student rapport would be formulisable, and therefore teachable.  And if it was teachable, Wesch would have been able to teach people how to teach with technology.

When discussing language learning with other learners, I have always made a strong distinction between "what you do when you are learning" and "how you learn".

What I mean by this is that when someone does a series of grammar drills from a book, we cannot say that those drills are directly causing them to learn.  In fact, for every person who appears to learn successfully from such a book, you will find another half-dozen who fail to learn from exactly the same book.  Therefore we have to conclude that looking at the book's activities only gives us a very superficial view of the learning process.  We have to attempt to analyse the difference in approach between the successful and the unsuccessful learner.

But these approaches are very poorly understood and documented and very rarely taught.  The successful language learner's natural and intuitive learning process is not available to be repeated, so the method doesn't improve.

As soon as I started training to teach English, I quickly came to the conclusion that the same distinction affected language teaching techniques.  Everything in the how-to-teach books struck me as "what to do when you are teaching" rather than specifically "how to teach".  None of the activities really taught the language, and yet these books were written by very successful teachers.  They must have constructed sophisticated teaching styles and structures unconsciously, or their students would be failing -- it's just a shame they don't know what that is, or they could tell us.

The further I read into teaching, the more I find that this isn't specific to language teaching.  Don't get me wrong -- it's really not as bad in most fields as it is in language, but there is still a huge conceptual gap between "classroom activities"/"what I do" and "teaching"/"how I teach".

The connectivists are a prime example -- they give a list of fuzzy... I don't know, stuff; guidelines and that sort of thing, and a couple of fuzzy justifications for why it should work, but they simply do not give enough information to make it repeatable.  It's "what" not "how", "activity" not "teaching".

And right now the world is full of people trying to replicate the "MOOC", and as Siemens and Cormier are only too happy to tell us, they're doing it wrong.

Well, maybe that's because Siemens and Cormier haven't told us how to do it right.

And the most likely reason for that is simply that they do not know how to do it right.

07 April 2013

Adaptive learning systems: nothing to be afraid of.

I was meandering through various blogs last week, following various links to material that I found idly interesting.  On Stephen Downes's website, I saw his link to a blog post by David Wiley (a name getting frequent mention in the OU's H817 MOOC*).  The post discusses the dangers of adaptive learning systems -- systems that track your learning and teach you stuff.
(* Actually, I'm starting to get a little stir-crazy reading a lot of data-free opinion pieces from the same four names: Siemens, Cormier, Downes and Weller.)

Wiley's criticism is that you don't own any of the material you access, and he accuses the adaptive learning companies of exploiting our willingness to pay for services while expecting content for free.
Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other.
But how is this any different from any other "teaching" experience?  Wiley seeks to equate adaptive learning systems with textbooks, but is he right to do so?

There are several problems here:
  1. The difference between a teaching text and a reference book.
  2. Course as "teaching" vs course as "material".
  3. The need to ensure that your material is new to the student.

The difference between a teaching text and a reference book

Perhaps I'm lucky in that the subjects I have studied make a big distinction between these two categories.

In languages, you get course books, learner's books, textbooks, workbooks etc... a whole bewildering array of paper that leads you through your learning.  But when you've finished, you've got practically no need for any of it, and it sits gathering dust on your shelf because you can't bring yourself to get rid of stuff that cost so much to collect.  In the end, all you ever use is a dictionary (probably online) and a single reference grammar book.  That reference grammar book was no use to you when you started out, of course, as the examples contained far too many unfamiliar words, and the ordering of the book made it impossible to really understand anything new.

The same in computers, where a learner's book would hold your hand through the various concepts required to learn a new technology, but looking back on it later, the learner's book was never any good for looking up basic concepts (which were drawn out over entire chapters) and didn't contain enough information on the advanced concepts that would be of some use to you by this stage.  So that book goes to the second-hand shop and you buy a desk-reference or bookmark a webpage.

And yet publishers continue to attempt to sell books to suit both markets, invariably falling between two stools in the process.

A learner doesn't need access to a teaching text after the class is over, and is free to go out and buy a reference book instead.

Course as "teaching" vs course as "material".

As a teacher, I give out lots of material in class, but during the course of the lesson, the material is quickly "consumed" -- worksheets are filled in, and by the end of the day, the student has not accrued any additional work-at-home material over and above any specific homework I may set.  Do Wiley and Downes object to this?  Am I cheating my students if my material is not infinitely reusable?

Because an adaptive learning system is not an attempt to replace the textbook -- it's an attempt to replace the teacher.  A computer, in theory, is capable of producing a more individualised learning path for each student than a teacher, thanks to the computer's essentially limitless perfect memory.  I cannot remember every single difficulty each of my students has, but a computer can.
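As a toy illustration of that "perfect memory" (all the item and student names here are invented, and this is a deliberately naive selection rule, not any real product's algorithm):

```python
from collections import defaultdict

class AdaptiveTutor:
    """Remembers every miss by every student, forever -- the thing
    a human teacher with thirty students simply cannot do."""

    def __init__(self, items):
        self.items = items
        # student -> item -> number of times they got it wrong
        self.errors = defaultdict(lambda: defaultdict(int))

    def record(self, student, item, correct):
        if not correct:
            self.errors[student][item] += 1

    def next_item(self, student):
        # Serve whichever item this particular student has missed most.
        return max(self.items, key=lambda it: self.errors[student][it])

tutor = AdaptiveTutor(["past tense", "articles", "word order"])
tutor.record("anna", "articles", correct=False)
tutor.record("anna", "articles", correct=False)
tutor.record("anna", "word order", correct=False)
print(tutor.next_item("anna"))  # articles
```

A real system would weight recency, difficulty and spacing, but even this crude version individualises in a way a teacher's memory can't scale to.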

The need to ensure that your material is new to the student.

Even where there is a developed culture of sharing between teachers, every teacher holds some things back for themselves.  Why?  It's the "old standby" -- that exercise or activity that can be adapted to various levels to provide an emergency lesson when the projector breaks or the new books haven't arrived.  If you don't share that lesson, then you're safe to use it with any and every class, but if you share it with your colleagues, there's a very high chance that sooner or later you'll have a class say "we did that with Mr So-and-so", and you're stumped.

Most sharing at the local level is facilitated by having a shared syllabus -- if the lesson is for 2nd years, you're safe to do it with any 2nd year class, and unsafe to do it with any 3rd year class.  But one way or another, there has to be some way to ensure that students get tasks they've never seen before, because while a good story is no worse for being told a second time, a good lesson is destroyed by being taught a second time.

In the classroom, you can always rely on "we already did that" to let you know, and then you improvise something else, but could you do that in an adaptive learning system?  I don't really think you can.  You can't just accept any old feedback from the student (natural language processing systems aren't that sophisticated yet) and a big red button marked "already done it" would be very off-putting to the student and would make the software look really unprofessional.

No, the software has to know what you've done and what you haven't, and that means keeping control of when and how the learner accesses the material.


In essence, though, I think that Downes and Wiley are objecting on ideological grounds rather than practical, pedagogical ones.  They have aligned themselves with a rather dubious view of learner-centred education where the learner makes all the choices, apparently empowering and enabling them.  Adaptive learning systems take the diametrically opposite view: that by taking the decisions away from the learner and instead presenting whatever the learner most needs or is best ready for at any given moment, the learner attains a much more complete and well learned education.

And anyway, all the evidence is on the side of the adaptive systems guys, because connectivism and the like break away from the proven techniques of staggered repetition, planned progression and learning-by-testing which are the very foundations of adaptive learning, and replace them with an almost entirely unstructured meander through materials effectively chosen by a known non-expert (the learner) with no real "testing" of concepts.

And yet this type of vague handwavery is presented in the absence of discussion about the many known effective techniques in a course that is supposed to be part of a masters-level module.  I am appalled.

05 April 2013

H817 Activity 9: Ask a superficial question....

Everyone knows the saying: "ask a stupid question, get a stupid answer".  What's unfortunate in that old cliché is the word "stupid", because it casts an implicit judgement on the asker.  Let's take that word and replace it with "superficial".

Ask a superficial question, get a superficial answer.

Isn't that far more constructive?

Now that we've done that, let's move on to the superficial question in question, or rather the superficial learning task:
"For your blog content and other material you produce, consider which of the Creative Commons licences you would use, and justify your choice."
[Activity 9, Open Education, the Open University]
Now I had initially skipped it, dismissing it as a "stupid question", which was rather uncharitable and unconstructive of me.  But as more and more answers came through the blog aggregator from other course participants, I realised it was a very dangerous question that was leading to poorly thought out answers.  It was a superficial question leading to superficial analysis, which was leading my fellow students to draw conclusions based on inadequate data.

Blogs are atypical

The task asks about "your blog content and other material you produce", but in the end most people ignored that second bit because it was so vague.  The only seed the question planted for active consideration was that of the blog.

A blog is not reusable.  My blog is my opinion.  You cannot sell my opinion.  You cannot modify my opinion just by editing text (that would be putting your words in my mouth).  You may "share" my opinion, but only by virtue of having the same opinion as me, not by copying my exact words (that would be putting my words in your mouth).  Moreover, because it is opinion it is of very little value as a resource.  I do not want people copying my half-thought-out ramblings as though they have the same merit as a properly researched, reviewed, edited and professionally published article.  So there can and should be no "Creative Commons" licence on my blog.  Quote me like you would anyone else; link to my post -- fine, just don't exalt my witterings above their station.

In fact, one of the things I've always consciously tried to do when writing this blog is avoid the trap of the populist bloggers who are all buddy-buddy and effusive, and convince a lot of people that everything's good and great and exactly the way they tell it.

No, my blog is oftentimes abrasive and confrontational, because I don't want to convince anyone: instead I want to make everyone doubtful, skeptical.  I may offer suggestions aimed at resolving the doubt, but my first aim is to make readers ask themselves the right questions: deep, searching questions, not superficial ones.

I get what the course team were trying to do with the task they set -- they wanted to start with something that was relevant to all the students.  As always, though, focusing on relevance to the students distracts from relevance to the course aims.

Let's get one thing clear: a blog is not an "open education resource"!

That doesn't mean that blog copyright is entirely irrelevant, it's just a side issue, so it was inappropriate as the central question of the task.  As a warm-up, a lead in, it's fine.  But at no point did the task throw in anything more relevant for consideration; at no point did it say (for example):
How is the situation different for media resources such as photos, videos and music?
So let's ask ourselves that:

How is the situation different for media resources such as photos, videos and music?

A blog is inherently bound to its subject matter and its intended purpose.  Take this blog post, for example: it would be nigh-on impossible to repurpose it as an article on flower arranging, or an advert for a Mars bar.

But consider one of your wedding photos.  It can be used as a picture of your wedding, or it could be a picture of a wedding.

It could be used as the cover of an expensive bridal magazine, or the poster advertising a huge wedding convention.  Both of those are changing the "purpose" of the picture.  A flower arranger could zoom in and crop to leave just the bride and the bouquet and use it as her business card.  A little copyright notice on the card, and they'd still be adhering to CC-BY.  But it's suddenly become fundamentally dishonest -- that flower arranger didn't make your bouquet but they're implying they did.

And let's take it a step further.  What if someone takes your photo and photoshops it... to use in an advertising campaign for "XXXL Dating" or "Ugly Singles".  Suddenly your big day has been utterly defiled by pictures that are recognisably of you with all your minor imperfections exaggerated and laid bare for everyone to see... that slight gap in your front teeth widened to the width of a cigarette, the kink in the bridge of your nose opened up like one of the bends on the Manx TT course, and your slightly droopy eyelid now hanging halfway over your pupil, as though you've got no sight in that eye at all.


OK, this discussion is all within the ground staked out by the task, but at no point were we, as students, forced to think beyond the effects of licensing on our own blogs, which are in reality little different from the blether in pubs and staffrooms the world over.

How can we start thinking about the effects of licensing of something with high value and utility when they only ask us to consider something of low value and utility?


Perhaps if they had prompted us with something a little deeper than the swimming pool in Barbie's dream house, we'd all be discussing the fundamental flaws inherent in Lawrence Lessig's original vision for and ongoing direction of the Creative Commons movement.

But that wouldn't do, because if this course encouraged us to look beyond the superficial, we would see that behind the veil of sophistication in this brave new open education world, there is no deeper meaning, no profound insights into the human condition or the learning process.

04 April 2013

More Maths for MOOCs!

The guys behind connectivist MOOCs seem to be against the teaching of atomic, well-defined concepts, which is all well and good, but I think those same concepts might inform their theories a bit better.

This morning, I received a "welcome to week 4" message from the organiser of the OU MOOC Open Education.  He comes across as a really nice guy on both email and video, which makes it a lot more difficult to criticise, but one thing he said really caught my attention as indicative of the problems with the "informal education" model that the connectivist ideologues* profess. (* I refuse to call them theorists until they provide a more substantial scientific backing for their standpoint -- until then, it's just ideology.)

So, the quote:
"As always try to find time to connect with others. One way of doing this I've found useful is to set aside a small amount of time (15 minutes say) and just read 3 random posts from the course blog h817open.net and leave comments on the original posts."
Don't get me wrong, I commend him on this.  Many MOOC organisers take a step back and stay well clear of student contributions, for fear of getting caught up in a time sink.  No, my problem is the word "random" coupled with the number "3".

The cult of random

There is a scientific truism about randomness: when the numbers are big enough, random stuff acts predictably.  You can predict the buying patterns of a social group of thousands well enough to say that "of these 10,000 people, 8,000 will have a car", or the like.

The best examples, though, come in the realm of birthdays.

If you take the birthdays (excluding year) of the population of a country (eg the UK) and plot a graph, you'll get a smooth curve peaking in the summer months and reaching its lowest in the winter months.  Now if you take a single city in that country (eg London), you'll find a curve that is of almost indistinguishable shape, just with different numbers.  Take a single district of the city, and the curve will be a similar shape, but it will start to get "noisy" (your line will be jagged).  Decrease to a single street (make it a large one) and the pattern will be barely recognisable, although you'll probably spot it because you've just been looking at the curve on a similar scale.  Now zoom down to the level of a single house... the pattern is gone, because there aren't enough people.
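If you want to see this noise effect for yourself, here's a rough simulation (the sinusoidal seasonal pattern is an idealisation invented for the demo, not real birth statistics): the same underlying distribution, sampled at different population sizes, gets progressively more jagged as the sample shrinks.

```python
import math
import random

random.seed(1)

def monthly_noise(n_people):
    """Sample n_people birth months from a smooth seasonal distribution
    (a gentle sinusoid peaking mid-year) and report how jagged the
    observed curve is: the mean relative deviation from the true shape."""
    weights = [1 + 0.2 * math.sin(2 * math.pi * (m - 3) / 12)
               for m in range(12)]
    draws = random.choices(range(12), weights, k=n_people)
    counts = [0] * 12
    for m in draws:
        counts[m] += 1
    total = sum(weights)
    expected = [n_people * w / total for w in weights]
    return sum(abs(c - e) / e for c, e in zip(counts, expected)) / 12

# Country -> city -> single street: same distribution, rising noise.
for n in (100_000, 10_000, 100):
    print(f"{n:>7} people: average deviation {monthly_noise(n):.1%}")
```

At 100,000 people the curve deviates from the ideal by well under a percent per month; at 100 people the "curve" is mostly noise, just as described above.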

In physics, this is the difference between life on the quantum scale and the macro scale.  Everything we touch and see is a collection of tiny units of matter or energy, and each of those units acts as an independent, unpredictable agent, but there are so many of these units that they appear to us to function as a continuous scale, describing a probability distribution like in the example of birthdays.  Why should I care whether any individual photon hits my eye if the faintest visible star in the night sky delivers 1700 photons per second?  The computer screen I'm staring at now is probably bombarding me with billions as we speak.  The individual is irrelevant.

But massive means massive numbers, right?

I know what you're thinking -- with MOOC participants typically numbering in the thousands, these macro-scale probabitimajigs should probably cut in and start giving us predictability.  Well yes, they do, but not in the way you might expect, because MOOCs deal with both big numbers and small numbers.

Again, let's look at birthdays.

Imagine we've got 365 people in a room, and for convenience we'll pretend leap years don't exist (and also imagine that there aren't any seasonal variations in births and deaths).

What is the average number of people born on any given day?
Easy: one.

And what is the probability that there's someone born on every day of the year?
This one isn't immediately obvious, but if you know your stats, it's easy to figure out.

First we select one person at random.
He has a birthday that we haven't seen yet, so he gets a probability of 1, whatever his birthday is.
Now we have 364 people, and 364 target days out of 365.
Select person 2 -- the chance he has a birthday we haven't seen yet is 364/365.
Person 3's chance of having a birthday we haven't seen is 363/365... still high.
...
but person 363's chance is 3/365, person 364's chance is 2/365 and person 365's chance is 1/365.

To get the final probability of 1-person-per-day-of-the-year, we need to multiply these:
1 x 364/365 x 363/365 x ... x 3/365 x 2/365 x 1/365

Mathematically, that's
365! / (365^365)
or
364! / (365^364)

It's so astronomically tiny that OpenCalc refuses to calculate the answer, and the Windows Calculator app tells me it's "1.455 e-157" -- for those of you who don't know scientific notation, that "e minus" number tells you how many places to shift the decimal point, so fully expanded, that would be:
0.000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 145 5

...unless I've miscounted my zeros, but you can see clearly that the chances of actually getting everyone with different birthdays is pretty close to zero.  It ain't gonna happen.
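As it happens, the figure is easy to check with a tool that handles big integers exactly -- a two-line Python sketch using exact rational arithmetic:

```python
import math
from fractions import Fraction

# Probability that 365 random birthdays cover all 365 days:
# 365! / 365**365, computed with exact integer arithmetic
# (no calculator underflow to worry about).
p = Fraction(math.factorial(365), 365**365)
print(f"{float(p):.3e}")  # on the order of 1e-157, matching the calculator
```

So the calculator's answer holds up: for all practical purposes, the probability is zero.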

A better statistician than me would be able to predict with some accuracy the number of dates with multiple birthdays, the number of dates without birthdays etc, but not which ones they would be.

The reason we have these gaps is that we have two relatively high numbers and one relatively low number.  That gives us a predictable probability distribution with predictable gaps.

Martin Weller tries to look at and comment on 3 random blog posts when he connects.  Now let's imagine that this was a policy that everyone adhered to without exception (either above or below).

OK, so let's imagine our MOOC has 365 students (so that we can continue using the same numbers as above) and that for each post we put up, we comment on 3 others.  What are the chances that every blog post gets at least one comment?

Well this is pushing my memory a bit, cos I haven't done any serious stats since 1998, and it's all this n-choose-r stuff.  Oooh.... In fact, I can't remember how to do it, but it's still going to be a very small probability indeed.

In order to get a reasonable chance of everybody getting at least one comment, you need to get rid of small numbers entirely, and require high volumes of feedback for each user.  But even though we're talking unfeasibly high volumes here, it still doesn't guarantee anything.
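Since my n-choose-r is as rusty as advertised, here's a quick Monte Carlo sketch instead (same toy setup as above: 365 students, each commenting on 3 posts chosen at random from everyone else's):

```python
import random

random.seed(42)

def uncommented_posts(n_students=365, comments_each=3, trials=100):
    """Everyone comments on `comments_each` distinct posts picked at
    random from the other students' posts.  Return the average number
    of posts that end up with no comments at all."""
    total = 0
    for _ in range(trials):
        commented = set()
        for student in range(n_students):
            others = [p for p in range(n_students) if p != student]
            commented.update(random.sample(others, comments_each))
        total += n_students - len(commented)
    return total / trials

print(f"posts with zero comments, on average: {uncommented_posts():.1f}")
```

It consistently comes out at around 18 posts -- roughly 5% of the class -- receiving no comments at all, even with everyone dutifully following the policy.  The probability that nobody gets missed is, once again, effectively zero.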

Random doesn't work!

You cannot, therefore, organise a "course" relying entirely on informal networking and probability effects -- there must be some active guidance.  This is why Coursera (founded by experts in this sort of applied statistics) uses a more formalised type of peer interaction.

The Coursera model

Coursera's model is to make peer interaction into peer review, where students have to grade and comment on 5 classmates' work in order to get a grade for their own work.  The computer doesn't do this completely randomly, though, and assigns review tasks with a view to getting 5 reviews for each and every assignment submitted.

Now in theory, their approach is perfect, because even if they don't achieve a 100% completion rate/0% dropout rate, you should be able to get something back.  However, their execution was flawed in a way which convinces me that Andrew Ng didn't write the algorithm!

You see, the peer reviews appear essentially to be dealt out like a hand of cards -- when the submission deadline is reached, all submissions are assigned out immediately.  Each submitter gets dealt 5 different assignments, each assignment is dealt out to 5 different reviewers.  It doesn't matter when you log in to do the review -- the distribution of assignments is already a fait accompli.

How did I come to this conclusion?  Well, after the very first assignment in the Berklee songwriting course, I saw a post on the course forums from someone who had received no feedback on his assignment.  I immediately checked mine: a full set of 5 responses.

Even though they had distributed the assignments for peer review in a less random fashion, they did nothing to account for the dropout rate, even though the dropout rate is reportedly predictably similar across all MOOCs -- and not only similar, but very high in the first week or two.  So statistically speaking, gaps were entirely predictable.

What Coursera should have done....

The problem was this "dealing out" in advance.  If they'd done the assignment redistribution on a just-in-time basis.  When a reviewer starts a review, the system should assign one with the minimum number of reviews.  No-one should receive a second review until everybody has received one, and I definitely shouldn't have received 5 when some others were reporting only getting 3, 2 or even none.

"Average" is always a bad measure

As a society, we've got blasé about our statistics, and we often fall back on averages, and by that I mean "mean".  But if the average number of reviews per assignment is 4.8 of a targeted 5, that's still not success and it's still not if it doesn't mean that everyone got either 4 or 5.

Informal course organisation will never work, no matter how "massive" the scale, because the laws of statistics don't work like people expect them to -- they don't guarantee that everyone gets something -- they guarantee that someone gets nothing.

30 March 2013

"Open", but no "source"....

While looking at open educational resources (OERs) for the OU MOOC H817, I am reminded of one of the big failures I identified in "open" materials right from the early days.

The Creative Common aimed to create something analogous to the open source movement in computing.  In open source, whenever you get an application, you are entitled to a copy of the "source code", that is the program in an editable manner, so that you can change its functionality easily.

The Creative Commons did very little to replicate this, with most items released on a Creative Commons license being released in their finished form only.  Yes, you can take material from a JPEG image or an MP3 file and reuse it, but the end result will be heavily degraded.

Just search YouTube for "best science experiments" or "best piano cats" or anything of the like, and you'll find a very blurry video made by editing a series of slightly blurry videos together -- at every stage, quality is lost.

Wikimedia Commons has made efforts to correct this, by encouraging people to post their images using the editable scalable vector graphics (.SVG) format.  This has been widely accepted among the Wikipedia community, as it has led to the production of high quality diagramming that can be readily translated, eg this rather beautiful map of the Scottish island of Islay, originally produced by an French-speaking amateur cartographer.

But the biggest stumbling block, as I see it, is video.

Filming is a complex, time-consuming activity that needs dedicated, trained personnel.  Editing is a complex, time-consuming activity that needs dedicated, trained personnel.

The Open University has the personnel and the resources, and they have released various video resources under a Creative Commons attribution - non-commercial - sharealike (CC-BY-NC-SA) license, explicitly giving users permission to adapt and remix the content, including creating translations into other languages... but how can you translate a video when the audio has already been mixed down?

Consider that you often have the "live" background sound from the scene (footsteps, wind, birdsong etc), and then a piece of music played over the top, and finally a disembodied voice speaking over the top of that (known as voiceover, or VO).  To make a decent translation of a video, you need these tracks separately, so that you can replace the VO alone, or to allow "ducking" of on-camera interviews without losing any continuing music (ducking is when you turn down the volume on one track to allow someone to speak over it, as used in most news and documentary programs when there is a foreign speaker on the screen).

But the Open University provides only a web-quality video with premixed sound, so I couldn't, for example, do a simple translation of their digital film school videos to Scottish Gaelic (something that would be quite useful to people interested in taking part in the annual short film competition FilmG).  I could ask, I suppose, but I don't even know if they would still have the source files.

Besides, one of the most overlooked senses of the word "open" is the idea of being "out in the open".  Materials are more useful if they're immediately available, so that someone can just get the notion to do something and do it.  If it takes a lot of effort, and there's no guarantee you're going to get what you really want, in the end, it's easier just to cobble together something for yourself, something that's unlikely to see any reuse....

29 March 2013

OERs: moving towards further reusability

I'm pretty skeptical about OERs (open education resources) as I've said before.  A lot of the talk on the H817 blog aggregator is about how things are too tightly coupled – you can't break apart the courses as you'd like.

To have any real evidence of any substantial OER use in the real world, we'd need to see the same thing appear in two places... and it just so happens that I have seen the same thing occur in two places.  Take this picture:



Last year I was studying Gaelic full-time, and for a bit of variety I took anatomy and physiology as an outside course.  It was all online, and there were a lot of pictures of the same style as the above, and every time I failed to understand something in the course notes, I turned to Wikipedia for guidance and more often than not found identical pictures to the ones I was seeing in my course.  The picture above was taken from the Wikimedia Commons site, where it is free for any use, commercial or otherwise.

But more interestingly, it originated from a US federal government scheme, and all works of the federal government are property of the US people, meaning that effectively they're in the public domain as soon as they're published.

Now, the original purpose of these images was to support a basic module entitled Anatomy and Physiology on the US National Cancer Institute training site, SEER.  As I've still got an archive of my course notes on an external hard drive, I decided to see whether the course organisers had just gone to Wikipedia, or if they'd gone straight to the source.  It turns out they'd just gone to Wikipedia, and created a lot of unnecessary work for themselves in the process.  At first glance, the SEER material looks far better written than the actual course materials, in that it is less disjointed and easier to understand.  There is far more consistency in the look of the various images, whereas the course I took is a hodge-podge of widely varying drawing styles.

So there is clearly some reuse going on, but it really only seems to be starting at the level of images.

This, of course, is where it makes most sense: it's the "media" part of any course that is the hardest and most expensive to produce, so it seems only right that this is the place to begin.

Perhaps, then, the most suitable approach to improving uptake of reusable materials is to start with individual media resources, then build bundles of media resources, only once we have these bundles encourage teachers to start building text around them.

The problem with current efforts is that the text is added in too soon.  As soon as we start writing a text, we are making decisions about what to include and what to exclude from the activity/lesson/whatever.  Once we've made those decisions, we unconsciously blind ourselves to the gaps in the media set – if we don't need it for our lesson, we don't see it as required to complete the set, and the media set remains conceptually linked to our lesson.

For many types of media, no-one but the original artist/producer can mimic the style of the set.  Say one person records 101 anatomical terms so that learners can hear the pronunciation, and includes renal artery but misses renal vein.  The next guy who wants to use the set either has to adjust his lesson to suit, or record the word, but it will be in a different voice, so will be very noticeable.

27 March 2013

Suitability of MOOCs - H817 Activity 12

The OU free MOOC Open Education set the following question as activity 12:
Before we examine MOOCs in more detail, briefly consider if the MOOC approach could be adopted in your own area of education or training. Post your thoughts in your blog and then read and comment on your peers’ postings.
Now, just which field should I address?  Computer science or language learning?  How about both?

And for now, I'll restrict myself to the type of MOOC proposed by Cormier, Siemens etc, the "connectivist" MOOC.

So I'll answer "yes" and "yes" and "no" and "no".

One of the bits of material supporting this activity was a video interview with the aforementioned Mr.s Cormier and Siemens.



What really jumped out at me was that little after a minute into it, George Siemens basically says that the system emerged from how they were running online conferences.  Sound familiar?  Well, a few weeks ago I came to the conclusion that the MOOC had far more in common with a conference than a "course".

So it's utterly trivial to ask whether the MOOC has a place in any given field: if there are conferences in that field, a conference-type MOOC can work.

So that's "yes" and "yes".  Now onto "no" and "no".

I'll start with a quote from Isaac Asimov that I picked up from somewhere in the last week while working through blog posts on MOOCs:
“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'
This could have been written for Web 2.0.  (No further explanation needed.)

But in the MOOC setting, it's particularly salient.  The whole idea of connectivism is to learn from each other... but we're not experts.  Everything I've read or heard from Cormier or Siemens to date seems to mention but quickly gloss over the fact that their MOOCs have focused on educational technology, a field with many informed practitioners, but no confirmed experts.  In fact, on of the papers mentioned in the disastrous Fundamentals of Online Education Coursera module described online education as being "at the buzzword stage", a thin euphemism for the fact that it's all opinion and no "knowledge".  And that's the space that conferences have always occupied: the point where we're sitting on the boundaries of the state of the art, where informed practitioners of roughly equal knowledge try to contemplate and push those boundaries.

But when there is an expert, why should we rely on the knowledge of peers, who may in fact turn out to be wrong?

Nowhere can this be more clear-cut than in the computer field (or at least "the discrete mathematics field", of which CS is a subset).

At the level of programming, there can be no subjective discussion about the best way of carrying out a given operation, because the methods can be empirically measured.  We can measure execution time, we can measure memory constraints, we can measure accuracy of results.  We get a definite right and wrong answer.  Yes, we can devise collaborative experiments where we pool our resources and share our data to find out what those right and wrong answers are, and in computer science courses we often do, but that serves not to teach the answer, but to teach the process of evaluating the efficiency of an algorithm or piece of software.

We do not generate more knowledge of how the computer works by discussing, only of how we work with it.

So there's my first "no", but this is not really specific to computer science, because in any undergraduate field, you teach/learn mostly the stable, established knowledge of the field.  Very little in an undergraduate syllabus is really open to much subjectivity in terms of knowledge, and in arts degrees, the subjectivity is restricted pretty much to the application of established knowledge.

Everyone discussing MOOCs at the moment seems to be talking about "HE" (higher education -- ie. universities) and not acknowledging that fundamental split between undergrad and postgrad.

So I've stated that no undergrad stuff can follow a connectivist approach, is it still worth saying anything about language specifically?

I think so.

Because language learning, more than any other field of education, can be scuppered by overenthusiastic learners -- the biggest obstacle in any language course is the presence of learners: how can I learn a language by hanging around with a bunch of people who don't speak the language?  And yet, for most of us these courses are vital if we are ever to learn a language.

And I myself have benefited greatly from informal networks of learners offering mutual support, so why not a MOOC?  Because the informal networks I have benefited from are of vastly different levels, so there's always been someone with some level of "expertise" above you.   But once you formalise into a "course", you're suddenly encouraging a group without that differentiation; a group of roughly equivalent level.  An overly confident error pushed by one participant can become part of the group's ideolect -- a mythological rule that through the application of collective ignorance crowds out the genuine rule.  Without sufficient expert oversight, how is this ever to be corrected?

A language MOOC would most likely be of far less use than either traditional classes or existing informal methods....

# I talk to the MOOCs, but they don't listen to me...

When I was studying languages with the OU, I found it very difficult to motivate myself to do most of the task.  These tasks I would happily do in a classroom, but on my own, I couldn't be bothered.

What was the difference?  Two things: one, in the classroom, if you don't do the task, you just sit there waiting for the others to finish -- you don't actually get the time back; two, normally your work will be examined by someone else -- either the teacher or a classmate, and if someone reads or hears your work, it has a purpose.  Even if only half of your classwork is ever read or heard, it at least provides some kind of motivation.

But when the book I was reading on my told me to write 200 words on my opinion of the treatment of minority languages in Spain, I knew I could be doing something else with my time, and I couldn't be bothered sitting down and writing something no-one else would ever read.

MOOCs, they would have us believe, address this, by making sure you have peers available at all times to read and comment.  Sadly, there's no guarantees, with some postings to group forums getting lots of views and/or comments, and some getting none at all.  The act of writing becomes an act of uncertainty -- it's like talking to the darkness without knowing whether or not there's actually anyone there.

I don't know about you, but this doesn't really motivate me to write much.  The latest task description:
Before we examine MOOCs in more detail, briefly consider if the MOOC approach could be adopted in your own area of education or training. Post your thoughts in your blog and then read and comment on your peers’ postings.
Well... who am I writing it to?  Who's going to read it?  Is anyone actually going to see it before it rolls off the bottom of the monolithic, uncategorised course blog aggregator?

Looking at the writing style of many of my peers, I'm not the only one with these doubts.  More than a few of the blog posts barely classify as prose, instead being little more than the writer's personal lecture notes.

This creates something of a death-spiral.  Because some of the blog posts don't lend themselves to reading, people don't read them, and don't comment on them.  This discourages them from viewing the blog aggregator, which means they don't see and don't comment on the genuinely readable posts, leading authors to become despondent about the lack of views, leading them to write without the expectation of gaining a readership, which leads to them not putting the effort in to make their posts readable, so people don't read them....

And yet, when we eventually tire of this and give up, the guys behind the MOOC don't view it as a pedagogical failure -- they shrug their shoulders and talk about learning choice and learner independence, and say that by leaving the course we're exercising those characteristics they want most to instill in us.

But I don't take courses to learn learner independence.  I take courses to get expert guidance to aid me in the acquisition of new domain knowledge, because while I can operate adequately as an independent learner, expert guidance gets me there quicker.

24 March 2013

Evaluating Open Education Resources (H817)

I'm getting rapidly disillusioned with the Open University's MOOC/non-MOOC Open Education.  After kicking off with a course "reading" that was a 77 slide PowerPoint file with no speaker notes, in week 2 they set a long reading from a decade ago, on a topic called "Learning Objects".  Now, it's not the length of the post in itself that bothers me, and the age is not a problem as this notion was a significant stepping stone to the open education systems of today... what winds me up is that after the link to the article, there was a little button marked "reveal" comment.  After the link.  So you would assume, wouldn't you, that it was to be read after reading the article... which is what I did.  Here is the content of the hidden comment in full:
Note: Downes goes into detail on many aspects that are not necessary for this course. You do not need to read the article in detail – your aim is to gain an understanding of what learning objects were and why they were seen as important.
... and you'll see why I was unhappy.  It's utterly sloppy design to leave you reading the whole thing before telling you not to!

This week's activities then follow on with one of the most spectacularly vague tasks ever, and judging by the stuff coming up on the course blog aggregator, I'm not the only person who thinks so.  Our task is to look at several repositories of "open education resources" (OERs) and evaluate the suitability of the material presented for assembling a course on "digital skills".

I'm presuming that they've chosen the task title "digital skills" to allow it to be an open task, but they've taken the original MOOC philosophy to its erroneous ultimate conclusion.  The philosophy of MOOCs (as embodied in change.ca) is the idea of learner independence, and the notion that learners work better when they can choose what to work towards, but yet unrestricted choice has been shown to be absolutely crippling, because with open choice comes indecision.  (If you're interested in this idea, check out Barry Schwartz's TED talk The Paradox of Choice.)

Consider also that many of the great artists imposed limits on themselves, such as Pablo Picasso's famous "blue period" (not that I personally rate Picasso's work much), in order to stimulate extra creativity.

But here I am with an excruciatingly vague task description, and there's nothing in the task to force me to narrow down and focus on a particular aspect of the large potential space of meaning before I am expected to wade through gigabytes of texts and videos looking for things that are specifically relevant or useful.

And the course to date hasn't given us any real guidance on how to evaluate the usefulness and applicability of the material anyway.  And we're back to this idea that there's no rules, and that individual creativity and "engagement" with material will show us the way, throwing out all the hard-learned lessons in pedagogy, instructional design and other closely related fields.

It is far easier to do a complex task by following a defined process than to try to intuit the process by attempting to complete the task.  Early guidance can develop good patterns of activity that are internalised over time and become automatic.

17 March 2013

h817 activity 3

The OU's MOOC Open education asked us for our first task to create a visual representation of some of the themes in open and online learning.

My submission is this picture, which I call "data density".


What does it represent?

Quite simply, it's an edited screengrab of the webpage on which the task was set.

I blanked out everything that was a repeated element, leaving only items unique to this page — specifically a copy of the unit number and name in fairly small type.  Everything else — everything else — was part of the Moodle template and is repeated on every single page of the course.  You land on a page, and you see no new content until you scroll down.  Several times I've found myself thinking I'd not gone to the right page.

This breaks a fundamental law of web design that has been known and understood for over 15 years: if you want to keep someone's attention, show them something new on the first page.

The problem in the OU's case is exacerbated by being on an environment within an environment within an environment.

At the core, we are in the course environment, and we're bundled up with info on the course and course navigation tools.  Then we're in the OpenLearn environment, and bundled up with all the cruft for navigating to other parts of OpenLearn.  Finally, OpenLearn sits within the wider OU environment, so we have that cluttering up the screen as well.

The OU's not the only place to do this.  One of my current favourite blogs on online education also suffers from having a title and banner image so big that every page looks identical until you scroll down.  (I'm not going to name the blog, but I hope she's reading!)

So here's the thing: all this theoretical talk about theoretical pedagogy is all well and good, but until the teachers are capable of using the tools, online education is going to be pretty rubbish.

Edit: and before anyone says it, yes, I know you can click "hide summary" on the OpenLearn site to bring some more relevant information onto the front page, but why should I have to?  I'm enrolled on the course, and the system knows this — it should automatically present me with the information I need rather than the sales blurb.  If I subscribe to a website, it usually stops pestering me with "the benefits of subscription" etc!

Slideshare? Oh for pity's sake, OU....

There's a high chance I won't be finishing the Open Education course after all.  I may just have started, but first impressions last.

Particularly when the first impression is delivered by Slideshare.

PowerPoint: the bane of students and employees everywhere.
Death by PowerPoint: the feeling of lethargy induced by an hour of listening to some middle manager drone on about "as you can see on the slide..."

The first prescribed reading is an article from the Open University's Journal of Interactive Media in Education, an open access journal I think I'll probably be reading a lot in the near future.  (I hadn't heard of it before.)  Slideshare was listed as a useful tool for open educational resources (OERs).

The next reading was a PowerPoint slide deck.  On Slideshare.

Nononononono.  Don't do this to me.

OU, when we first met, I thought we had something special, but you just keep finding new ways to hurt me.

Setting aside arguments about the effectiveness of PowerPoint (I personally agree with Edward Tufte -- I never liked it before I read his essay The Cognitive Style of PowerPoint and he helped me understand why), the distribution of slides has become a universal of the last decade, and it has become an absolute obstruction to communication rather than a facilitator.

One of the most criminally underused features of PowerPoint is the notes page -- most people don't know it exists.  PowerPoint lets you create a set of speaker notes with the slide at the top, and detailed notes below.  It's a very simple mechanism, but when used correctly, it's of inestimable value.  In a former life, I was tasked with organising various meetings and assembling PowerPoint slidedecks.  Most of these were just compilations of slides submitted by managers from across the organisation... and none of them included any speaker notes.  This was awkward, as quite often I had to pass these slide decks on to one of the local management team to deliver.

Have you ever been in a presentation where the presenter was reading the slides for the first time as he delivered his presentation?  I have, and far more than once, I'm sad to say.  In the corporate world, it comes across as a sign of disdain and disrespect for the workers.  Why am I sitting here listening to you when you haven't taken the time to prepare what you're going to say?  If you can't tell me what the figures actually mean, why did you bother coming?  Why not just put it in an email and stop wasting everybody's time?

The notes page -- the notes page can fix that.  I never wrote a professional presentation without extensive notes, so that someone else could pick it up and present it.

I knew the most important truth about PowerPoint: the slides never tell the full story, and a person with only the slides normally doesn't even understand everything included in the slides.  In short, without the notes, the presenter is even less useful than the slides themselves.

Now, back to Slideshare.

I have looked at Slideshare many times, but I've never seen any slide deck through to the end, because there's a heck of a lot missing.  Without the speaker, the story simply doesn't flow; all we have is a disjointed series of statements and assertions, often in confusingly abbreviated English.

and Slideshare does not show the notes page.  To me, that's almost criminal.  The one thing that can help the reader make sense of a slide deck, and it's just not there.  It may be in the PowerPoint file that you can download... or it may not be.  Probably not.

(I did in fact download the set reading to see, and sure enough, there wasn't a single note added for any of the 74 slides included.)

And that's the thing.  By hiding the notes page from us unless we specifically ask for it, PowerPoint encourages people to write without making notes.  If Slideshare exposed the notes page, maybe people would start making a point of using it, and maybe slide decks would be useful to someone other than the original author.  Until then, they're useless to everyone else.

Open University online course

I'm just starting on the free course Open Education by the Open University.  I've got a lot of time for the OU in general, although my experience as a student led me to wonder whether their move to online was going to destroy everything they'd created as they moved on-line in an apparently poorly-planned move to cut costs.

I spent several years studying languages with them, and the physical books that I had appreciated so much at the start were gradually placed with half-hearted web-page-ised versions that were less flexible and less useful, and the face-to-face tutorials replaced with voice-only virtual classrooms.

Have you ever tried to speak a foreign language without any visual contact?  It's bloody hard, and there's no way round it.  More often than not, I disconnected from my tutorials out of sheer stress and terror, and I'm not usually one to shy away from another language -- I'm a native English speaker, and I've learned Spanish, Gaelic and French to near-fluency, as well as Catalan, Italian and Corsican to a passable conversational level, and a few words of several other languages besides.

I'm approaching this course with equal amounts of optimism and skepticism, because I know the OU do, on some level, know what they're talking about, but I simultaneously fear that they've bought into their own hype and may be starting to believe ideologically in decisions that were originally made for logistical reasons....