27 March 2013

Suitability of MOOCs - H817 Activity 12

The OU free MOOC Open Education set the following question as activity 12:
Before we examine MOOCs in more detail, briefly consider if the MOOC approach could be adopted in your own area of education or training. Post your thoughts in your blog and then read and comment on your peers’ postings.
Now, just which field should I address?  Computer science or language learning?  How about both?

And for now, I'll restrict myself to the type of MOOC proposed by Cormier, Siemens etc., the "connectivist" MOOC.

So I'll answer "yes" and "yes" and "no" and "no".

One of the bits of material supporting this activity was a video interview with the aforementioned Messrs Cormier and Siemens.

What really jumped out at me was that a little after a minute into it, George Siemens basically says that the system emerged from how they were running online conferences.  Sound familiar?  Well, a few weeks ago I came to the conclusion that the MOOC had far more in common with a conference than a "course".

So it's utterly trivial to answer whether the MOOC has a place in any given field: if there are conferences in that field, a conference-type MOOC can work.

So that's "yes" and "yes".  Now onto "no" and "no".

I'll start with a quote from Isaac Asimov that I picked up from somewhere in the last week while working through blog posts on MOOCs:
“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'”
This could have been written for Web 2.0.  (No further explanation needed.)

But in the MOOC setting, it's particularly salient.  The whole idea of connectivism is to learn from each other... but we're not experts.  Everything I've read or heard from Cormier or Siemens to date seems to mention but quickly gloss over the fact that their MOOCs have focused on educational technology, a field with many informed practitioners, but no confirmed experts.  In fact, one of the papers mentioned in the disastrous Fundamentals of Online Education Coursera module described online education as being "at the buzzword stage", a thin euphemism for the fact that it's all opinion and no "knowledge".  And that's the space that conferences have always occupied: the point where we're sitting on the boundaries of the state of the art, where informed practitioners of roughly equal knowledge try to contemplate and push those boundaries.

But when there is an expert, why should we rely on the knowledge of peers, who may in fact turn out to be wrong?

Nowhere can this be more clear-cut than in the computer field (or at least "the discrete mathematics field", of which CS is a subset).

At the level of programming, there can be no subjective discussion about the best way of carrying out a given operation, because the methods can be empirically measured.  We can measure execution time, we can measure memory constraints, we can measure accuracy of results.  We get a definite right and wrong answer.  Yes, we can devise collaborative experiments where we pool our resources and share our data to find out what those right and wrong answers are, and in computer science courses we often do, but that serves not to teach the answer, but to teach the process of evaluating the efficiency of an algorithm or piece of software.

We do not generate more knowledge of how the computer works by discussing, only of how we work with it.
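As a quick illustration of that point (the example and its data are my own, not part of the original activity), here's a minimal Python sketch in which two ways of carrying out the same operation give the same, objectively verifiable answer, while their efficiency can simply be measured rather than debated:

```python
# Two ways to perform the same operation: membership testing in a
# list (linear scan) vs. a set (hash lookup).  The data here is
# purely illustrative.
import timeit

data_list = list(range(10_000))
data_set = set(data_list)
target = 9_999  # worst case for the linear scan

# Both methods give the same answer -- correctness is not in dispute.
assert (target in data_list) == (target in data_set)

# ...but their efficiency is an empirical question, not an opinion.
list_time = timeit.timeit(lambda: target in data_list, number=1_000)
set_time = timeit.timeit(lambda: target in data_set, number=1_000)

print(f"list scan: {list_time:.4f}s  set lookup: {set_time:.4f}s")
```

The numbers will vary by machine, but the process of gathering them is exactly the kind of evaluation exercise a computer science course teaches.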

So there's my first "no", but this is not really specific to computer science, because in any undergraduate field, you teach/learn mostly the stable, established knowledge of the field.  Very little in an undergraduate syllabus is really open to much subjectivity in terms of knowledge, and in arts degrees, the subjectivity is restricted pretty much to the application of established knowledge.

Everyone discussing MOOCs at the moment seems to be talking about "HE" (higher education -- i.e. universities) and not acknowledging that fundamental split between undergrad and postgrad.

So I've stated that no undergrad stuff can follow a connectivist approach; is it still worth saying anything about language specifically?

I think so.

Because language learning, more than any other field of education, can be scuppered by overenthusiastic learners -- the biggest obstacle in any language course is the presence of learners: how can I learn a language by hanging around with a bunch of people who don't speak the language?  And yet, for most of us these courses are vital if we are ever to learn a language.

And I myself have benefited greatly from informal networks of learners offering mutual support, so why not a MOOC?  Because the informal networks I have benefited from are of vastly different levels, so there's always been someone with some level of "expertise" above you.  But once you formalise into a "course", you're suddenly encouraging a group without that differentiation; a group of roughly equivalent level.  An overly confident error pushed by one participant can become part of the group's idiolect -- a mythological rule that, through the application of collective ignorance, crowds out the genuine rule.  Without sufficient expert oversight, how is this ever to be corrected?

A language MOOC would most likely be of far less use than either traditional classes or existing informal methods....


John said...

I think these comments about experts are interesting - unfortunately perhaps, I think the world is more complex than Asimov supposed.... With language learning and programming it's fairly easy to tell who has a better knowledge...mainly because these are "instrumental" kinds of knowledge - there's a goal of the kind "how do you say 'cuchara'?" or "what's the best way to command 'do this until var=5'?"
What we're discussing in Open Education is different. The most experienced conventional teacher, and the most "visionary" tech enthusiast have perspectives that really are equally true.

Nìall Beag said...

I reckon knowledge is more inherently instrumental than you think.

Every field is open to empirical enquiry, and a fair amount of empirical enquiry has been carried out in education, even if the mechanisms underlying the statistical phenomena can't be identified as precisely as with language (which is itself still open to much debate*).

The "theories" being presented in Open Education do not appear to be grounded in any empirical studies, and are not being analysed with respect to any verifiable, independently observed phenomena.

But the MOOC guys are merely following an age-old pedagogical tradition: declaring that the old rules don't apply, because "this is different". And yet, the human brain has remained remarkably unchanged for several millennia....

* A great deal of the debate in language is between modelling language features based on their historic origin vs reanalysing them without reference to origin (as Spanish speakers, for example, mostly don't know anything about Latin).

John said...

This is the point where we differ:

"Every field is open to empirical enquiry"

At its base, I think "education" as a field relies on conceptual answers to questions like: "what is knowledge?" "How do different things get accepted as knowledge in different societies?" In Europe, people started asking this question with Socrates..and some progress has been made since then, but less than you might imagine.

Nìall Beag said...


All empirical enquiry has to start with a question -- enquiry, from Latin "inquirere", to "ask into" something. Every field starts with conceptual questions. Physics: "what is the nature of matter?" "is there any such thing as true emptiness?" "what happens if you sit on a train moving at the speed of light and shine a torch ahead of you?"

The lack of progress is the result of a willingness to postulate many philosophical answers to such questions, but a lack of willingness to attempt to measure or prove such postulations.

I'd strongly recommend reading Dr Ben Goldacre's submission to the UK Department for Education advocating "evidence based teaching" and drawing an analogy with medicine.

Prior to a change in medical practice in the 1970s, doctors did what the experts suggested was best, but the experts never needed to test their theories, and patients were caused unnecessary harm.

And we are doing exactly the same thing in education!

Let's go back to that question "what is knowledge?"...

We can never truly evaluate the "correctness" of any answer, true, but we can and should evaluate its utility.

I.e. once you have an answer, can you apply it in such a way that it makes a positive difference in educational practice? If not, don't use it.

I don't know if you've read my recent post about the journal Psychological Science in the Public Interest, but PSPI did a literature review a few years ago of research into learning styles. Their investigations did not seek to prove or disprove learning styles as a theory, but simply to ask whether the application of learning style theory resulted in better education, and they found that it didn't.

They didn't recommend that people stop asking the questions or that people stop investigating -- in fact they explicitly encouraged researchers to continue researching.

The scientific method starts with a question, moves on to experimentation, tests and validates the results of the experiments and then publishes results.

After that we start the engineering process, establishing trusted, repeatable principles that all practitioners can follow.

We can do all this in education too.

John said...

I'm using the term "instrumental" very advisedly...but your idea of "utility" is pretty much the same thing. It's realistic to devise experiments to find out the best way of achieving a particular goal. In this sense everybody wants to be "evidence based".

It is a different matter how we decide what the goals should be. For example all 3 of these might sound plausible to someone:

1 - we should minimise the number of illiterate and innumerate people in the country.

2 - we should prepare people to fill the jobs predicted to exist in 2025

3 - we should foster creativity and critical thinking

Choosing between these goals is not an instrumental question, it is a question of values or ways of being. Goldacre is quite smart enough to know this, so I presume he is pursuing a hidden agenda of his own disguised as a plea for objectivity.

Nìall Beag said...

I can't see why Goldacre would have any hidden agenda with regards to education -- he's a medical doctor, not a teacher.

And I'm afraid you're oversimplifying his point somewhat. At no point does he suggest that everything can be calculated by evidence. At no point does he tell us that evidence will decide categorically which questions are the right ones to ask.

Evidence-based teaching doesn't necessarily attempt to determine which of these goals is best, but it does ask us to critically evaluate whether our attempt to achieve that goal does indeed achieve it better than competing methods.

Goldacre's problem (and mine) is that once the questions are asked and answers are posited, the respondents practically never provide any sound evidence that the answer is true -- the figures published by the vast majority of studies into educational methods fall into Disraeli's third category of lies. Goldacre describes at some length the methodological flaws that are endemic in education research, and he's far from the first person to point them out.

Nìall Beag said...

Let's take your three different goals.

1 - we should minimise the number of illiterate and innumerate people in the country.

Fine. We don't know whether that's a suitable goal, but most people would agree that it is, so we'll do it. Now how do we do it? (Let's stick to illiteracy for now.) Phonics? Whole-word reading? The education world swings backwards and forwards between the two as new leading figures rise to prominence and reintroduce "the way I was taught" because "it worked for me".

Both sides produce figures that purport to show improvement, which can't both be true. The discrepancy is all down to a lack of academic rigour in the experimental methodology.

2 - we should prepare people to fill the jobs predicted to exist in 2025

A more contentious goal, much harder to measure and based on so many assumptions it's difficult to know where to start; but while evidence won't give us the "right" answer, it can still give us a logically sound answer given the premises.

So let's just take one prediction: pencils and pens will be banned in 2020 due to overflowing landfill sites. Therefore, all "writing" will be done on computer.

Now I've actually already heard some people propose that we stop teaching handwriting because everything happens on computer... so let's look for evidence on that.

The question to investigate (and gather evidence on) is this:
Does teaching handwriting assist, hamper, or have no effect on the child's ability to learn to use a keyboard?  That can be studied, measured and answered.

(Even though that's not necessarily the core atomic question. A more interesting question is perhaps whether physically writing the letters assists in reading skill, although I suspect that this one has already been investigated to death.)

3 - we should foster creativity and critical thinking

Well done on saving the best for last.

This should not be as controversial a topic as it seems to be at the moment.

Why is it so controversial? I would suggest because of the anti-scientific atmosphere in which it is discussed.

"Fostering creativity" often is a euphemism for throwing out all facts, which sounds good, but creativity relies on well developed cognitive schemas for dealing with information. Creativity is, in a very real sense, about rules. A creative work is one that combines established rules in such a way to achieve a unique result.

The ultimate example would be a song: the lyrics are drawn from the vocabulary of the source language and the arrangement of words follows the rules of grammar (albeit often taking a few liberties along the way). The tune uses the notes of an established scale, and combines them in ways that can be statistically modelled quite successfully. Rhythm and speed changes do occur, but the nature of these changes is very restricted.

A good songwriter may have been taught these rules, or he may have worked them out from a lifetime of listening to other people's songs, but one way or another, you need to know how to use a brush before you can create a painting.

The people who want us to focus on creativity need to start thinking about how to demonstrate it, because we all want to foster creativity -- objections to other people's proposals for doing so usually stem from a lack of belief that the proposed method will be of any use. So prove it to us, please!