
30 January 2015

The slow death of the MOOC continues

This morning, I checked my email like I always do, and Coursera were plugging their latest "specialization" -- one for so-called cloud computing.

Coursera specialisations were originally launched as a single certificate covering a series of "signature track" (ie "paid for") courses, but there was always a free option alongside.

So I was very surprised when I clicked on the link for more information about the specialisation, then clicked through to the course, and it was only offering the $49 paid-for version. Now I did go back later and track down the free version of the course by searching the course catalogue, but the notable thing was that you can't get to the free version by navigating from the information about the specialisation.

It's there -- it is -- but by making click-through impossible, they're actively trying to push people into the paid versions. This suggests that the business model isn't working, and it's not really much of a surprise -- there's no such thing as a free lunch, and the only free cheese is in the mousetrap.

Some of the universities seemed to be using the free courses as an advert for their accredited courses, but it's a very large and expensive way to advertise -- teaching thousands in order to fill half-a-dozen extra seats on your masters programme -- so really the only way to make money is to get more of the students to pay.

Is it worth it for the student?

Cloud Computing costs £150, and going by their time estimates, that's between 120 and 190 hours of work. The academic credit system here in Scotland says that ten hours of work is one "credit point", and there are 120 credits in a year. Timewise, the Cloud Computing specialisation is then roughly equivalent to a 15-point or 20-point course -- ie. a single "module" in a degree course. A 15-point module costs £227.50, and a 20-point module costs just over £300, so £150 for this seems like a pretty good deal. Of course, this is only the cost to students resident in Scotland to begin with, and it is controlled by law to stay artificially low -- in England, the basic rate would be £750 for a 15-point course or £1000 for a 20-point one, but many universities "top-up" their fees by half again: £1125 and £1500 respectively. And English universities are still cheaper than many of their American counterparts.
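To put rough numbers on that comparison, here's a back-of-the-envelope sketch using only the figures quoted above and the Scottish convention of ten hours per credit point:

```python
# All figures are the ones quoted above; nothing here is official pricing.
HOURS_PER_CREDIT = 10

# Cost per credit point at a university:
scotland = 227.50 / 15          # ~£15.17
england_basic = 750 / 15        # £50.00
england_topped_up = 1125 / 15   # £75.00

# The £150 specialisation at 120-190 hours is 12-19 credit points:
coursera_best = 150 / (190 / HOURS_PER_CREDIT)   # ~£7.89 per point
coursera_worst = 150 / (120 / HOURS_PER_CREDIT)  # £12.50 per point

print(f"Coursera: £{coursera_best:.2f}-£{coursera_worst:.2f} per credit point")
print(f"England (topped up): £{england_topped_up:.2f} per credit point")
```

Even on the worst-case estimate, the specialisation undercuts a topped-up English module by a factor of six.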

So the Coursera specialization could be half the price of a university equivalent, or a tenth, or even less, depending on where you live. Sounds like a good deal, right?

Sadly, though, the certificates are worthless -- almost all the institutions offering courses through Coursera (and EdX, and FutureLearn) are allowed to accredit their own courses for university credit, but they choose not to. If they accredited a £30 course as university-level study, they'd be competing against themselves, and they'd kill the market for their established distance courses, and perhaps even their on-campus courses.
If they can run a course for £150, is there any justification for their usual high prices? Well... yes. Coursera is on a freemium model (free for basic use, pay for "premium" services), but in reality everything on Coursera is still the "free" part of the freemium. The online-only courses are not viable for universities for a number of reasons, so it's the fully-accredited courses run by the universities themselves that make it possible for the universities to offer the cheap courses "on the side", using repurposed versions of their existing course materials.

Technology and knowledge sharing can and should be used to reduce the cost of education. When I studied languages with the Open University, I looked at the cost of the course I was taking, vs equivalent unaccredited alternatives -- I could have bought equivalent books and spent more time with a one-on-one teacher than I did in group tutorials, and still only spent half of the money I did with the OU. If I hadn't wanted to get the degree, it would have made no sense at all to continue with them, but I want to teach in schools, so I need the degree.

So yes, there is undoubtedly unnecessary expense in education and there's a lot of "fat" that could be trimmed away, but the Coursera model won't do it, and for now it remains something of a distraction -- a shiny object that draws our attention away from the real problems and solutions.

17 March 2013

Open University online course

I'm just starting on the free course Open Education by the Open University.  I've got a lot of time for the OU in general, although my experience as a student left me wondering whether their apparently poorly-planned, cost-cutting move online was going to destroy everything they'd created.

I spent several years studying languages with them, and the physical books that I had appreciated so much at the start were gradually replaced with half-hearted web-page-ised versions that were less flexible and less useful, and the face-to-face tutorials replaced with voice-only virtual classrooms.

Have you ever tried to speak a foreign language without any visual contact?  It's bloody hard, and there's no way round it.  More often than not, I disconnected from my tutorials out of sheer stress and terror, and I'm not usually one to shy away from another language -- I'm a native English speaker, and I've learned Spanish, Gaelic and French to near-fluency, as well as Catalan, Italian and Corsican to a passable conversational level, and a few words of several other languages besides.

I'm approaching this course with equal amounts of optimism and skepticism, because I know the OU do, on some level, know what they're talking about, but I simultaneously fear that they've bought into their own hype and may be starting to believe ideologically in decisions that were originally made for logistical reasons....

08 February 2013

The misunderstandings in MOOCs

I had another article lined up and ready to go, and then I started reading about MOOCs on-line, and in particular a lot of the reaction to the total collapse of Coursera's MOOC Fundamentals of Online Education (#foemooc), one of the biggest and most revealing ironies in history.

One article quoted a law professor as follows:
“Part of what Coursera’s gotten right is that it makes more sense to build your user base first and then figure out later how to monetize it, than to worry too much at the beginning about how to monetize it,” said Edward Rock, a law professor serving as the University of Pennsylvania’s senior adviser on open course initiatives.
Is he mad, or just plain ignorant?  It's this sort of thinking that leads people to compare the MOOC boom to the internet bubble at the turn of the century.  After all, that bubble was based on companies building massive user bases with no idea of how to monetise them.

We are currently experiencing a similar bubble in Web 2.0.  So-called investors* have ploughed money into companies like Facebook and Instagram, only to see stock prices suffer when the market suddenly realises that they have no business plan.

[* The modern stockmarket makes a mockery of the term "investor".  "Investing" is supposed to mean putting money in so that a company can grow, but most high-profile IPOs, including Facebook and the like, are simply a change of ownership.  The money is for the previous owner, not the business.]

Why do we want to replicate that model at a cost to our valued educational institutions?

I've commented on that side of things before, noting Udacity's move toward corporate sponsorship: MOOCs as adverts for a given vendor.

This MOOC-as-sponsorship model might even work to some extent for the universities too -- the University of Edinburgh's Coursera MOOC E-Learning and Digital Cultures (#edcmooc) is based on a module taken from one of their masters programmes, and it's possible that the increased exposure it gives the course among potential students will get back the cost of putting the MOOC version together and on-line.  The risk, though, is that if too much material is available for free, the adverts will basically kill the market for the product.

One article in particular caught my attention.  Over at the blog online learning insights, blogger and instructional designer Debbie Morrison wrote a post about three takeaways from the collapse of the Coursera Fundamentals of Online Education course.

Her first "takeaway" is, in my ever so humble opinion, completely and utterly wrong.  Totally.  Completely.  Utterly.  Let's take a look:

1) The instructional model is shifting to be student-centric, away from an institution or instructor-focused model. In a massive, open and online course with thousands of students, the instructor must relinquish control of the student learning process. The instructor-focused model is counter intuitive to the idea of a MOOC; in the MOOC model the student directs and drives his or her learning. The pedagogy used for traditional courses is not applicable to a course on a massive scale. With the Web as the classroom platform, students learn by making connections with various ‘nodes’ of content [not all provided by the instructor] on the Web, they aggregate content, and create knowledge that is assessed not by the instructor, but by peers or self. This pedagogy builds upon the constructivist theory, and more recently a theory developed by Downes and Siemens, the connectivist learning model.

Wrong... how so?

Debbie Morrison clearly knows a thing or two about building online courses; unlike me, she's actually written them.  But when she claims that a tight teacher-led course is "counter intuitive to the idea of a MOOC", she's projecting her views onto a reality that is distinctly different.  The origin of the MOOC is in the field of Computer Science and Artificial Intelligence.  The original MOOCs were very much instructor-led, with very specific, well-defined tasks for the student to follow.

Because the tasks were very tightly controlled, marking could be automated, and hundreds of thousands of students could be individually assessed by machine.

Morrison is mistakenly applying the standards for her own niche, the small-to-medium online course, to a very different animal.

Morrison's courses are comparable to a seminar-based course in more traditional education.  It has long been accepted that seminar-based courses are better than lecture-based ones, but that they're more work for the teacher, because the uncontrolled variables lead to a massive space of potential outcomes and directions for the class.

Online education for small classes is a good realisation of the seminar-based course, because the online medium removes some of the time pressures on the instructor in terms of organisation, logistics and delivery; time which can be devoted instead to tailored group and individual feedback.

But when you've got one instructor and one teaching assistant for a class of 41,000 students (foe) or 1 instructor for a class of 150,000 (claimed by Ed Tech Magazine for a Udacity course), then that's out the window.  A teacher cannot be a "guide" or a "facilitator" to a group that big.  A teacher cannot even have a concept of the individual students as human beings (we're well beyond Dunbar's Number here).

A massive course cannot be student-centred because there are simply too many students.

And let's go back to the seminar-style class for a minute. Seminar-based classes are far more common in taught postgrad courses than in undergrad courses, and while you might encounter them in undergrad courses, you'll only see them in degree-year modules, or in very specialised, low intake degree schemes.  As I said before, seminar based courses are more work, but more specifically, they take a lot more time to grade.  An open-ended course leads to an open-ended assessment.  If you have a class of 400 students, marking essays is an endless task, and it is very difficult to mark them all fairly and equally, but if you give them a structured exam, marking is quick, easy and completely objective.

But more than that, the standard system provides a gradual shift in abstraction.  Perhaps this is a happy accident rather than by design, but as we progress through our degree scheme, we should be building up a toolkit of useful skills and concepts that can later be applied elsewhere.  As that toolkit expands, we are slowly given greater freedom to apply and think.

My opinion is, and always has been, that the MOOC must be seen as a vehicle for "basic skills" courses.  Many within the educational establishment reject basic skills teaching in favour of "whole topic" teaching, but I personally believe that they've done so on the wrong grounds.

If you were to tell me that basic skills teaching is categorically bad, I would disagree with you.  Basic skills are prerequisite to advanced skills.

But if instead you were to tell me that teaching basic skills properly and exhaustively is good in theory, but unfortunately not practical given class sizes and constraints on classroom time, I think I would be able to accept your point.

And this is where online can really flourish: we can take the basic skills load out of the classroom.  Have the students work on the basic skills individually, using constrained tasks that a computer can assess.  These types of tasks can be completed much quicker without all the usual classroom faff of handing out sheets, explaining the task, checking up on students, going over the answers, addressing individual errors with the whole group etc.  The volume of work that is assessed or otherwise receives feedback can be increased, as the teacher's time is no longer a limiting factor.
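As a sketch of what I mean by a constrained, machine-assessable task (entirely illustrative -- the task and answers here are made up):

```python
# A "basic skills" drill that a computer can mark instantly: fixed questions,
# fixed answers, immediate per-question feedback, no teacher time consumed.
ANSWER_KEY = {
    "conjugate 'hablar', 1st person singular, present": "hablo",
    "conjugate 'hablar', 3rd person plural, present": "hablan",
}

def mark(submission):
    """Return (score, feedback) -- the teacher never sees the paper."""
    feedback = {}
    score = 0
    for question, correct in ANSWER_KEY.items():
        given = submission.get(question, "").strip().lower()
        if given == correct:
            score += 1
            feedback[question] = "correct"
        else:
            feedback[question] = f"expected '{correct}', got '{given}'"
    return score, feedback

score, feedback = mark({
    "conjugate 'hablar', 1st person singular, present": "hablo",
    "conjugate 'hablar', 3rd person plural, present": "habla",
})
print(score)  # 1
```

The point isn't the code, it's the economics: once the task is constrained, the marginal cost of marking one more student is zero.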

Then the students can go into the class with the basic skills they need to address larger, more abstract, more challenging tasks.  Lessons can be more rewarding for both the student and the teacher.

Debbie Morrison proposes turning a course with a five-figure class roll into a student-centred, open-ended seminar course, but that is precisely what the FOE MOOC attempted to do, and it was a total train wreck.  Debbie's "takeaway" from the incident isn't asking us to learn from their mistakes, but to repeat them.

23 October 2012

Udacity review: Web development (C253)


OK, so this isn't strictly about language, but I've been following Udacity's course on web app development: I've been working on designing a language learning app for a while now, and I'm really not too hot on web technologies at the moment (and where would you put a language learning app other than on the web these days?).
A lot of commenters suggested that the problems Delta identified over at AngryMath were unique to that particular course, but it was with those criticisms in the back of my head that I've spent several hours over the last couple of weeks rattling through this course, and I have to say that I have very similar concerns.

To summarise, Delta picked out a “top 10” of problems:
  1. Lack of planning
  2. Sloppy writing
  3. Quiz regime
  4. Population and sample
  5. Normal curve calculations
  6. Central Limit Theorem not explained
  7. Bipolar difficulty
  8. Final exam certification
  9. Hucksterism
  10. Lack of updates?
Everything there matches my own observations of the web development course, except the final exam (which I haven't reached yet – I'm on unit 6 of 7) and the stats-specific items (4, 5, 6) – although there are problems with Steve Huffman's course that are analogous to those.

1. Lack of planning
It is not uncommon to hear Huffman change his mind halfway through a unit, or even after giving a quiz. Mostly, this is because he uncovers another quirk in Google App Engine or one of the Python code libraries that affects the outcome. OK, we can forgive the guy for not being an expert on a relatively new technology, but why didn't he take a couple of hours to check all these things before he started filming?
In video 4.38 he even says "One final password thing. I know I promised the last one was the final one but..." Now he really should have known he was going to say that when he filmed the previous segment, and if he really wanted to change it, he could have gone back and reshot part of the earlier section in order to edit it out (or even just redubbed the section in question).
If he can't plan an hour or two ahead, it throws his whole schedule into doubt.

2. Sloppy writing

Huffman makes several spelling errors during the course on some pretty fundamental computing terms, talking about “algorithims” (ouch) or a database being “comitted” (yuck). After having “protol buffers” on screen for half a minute, he spots it and corrects it to “protocol buffers” (5.16).

His handwriting becomes progressively more crooked, moving across the screen at an angle, and he consistently and clearly writes his quote marks as distinct open and close quotes on the whiteboard, even though the keyboard makes no such distinction (and Python, like most programming languages, certainly doesn't).
 
This is core stuff he's dealing with, and he's failing to be precise.

3. Quiz regime

The quizzes seem just as forced as Delta found in the stats course, ranging from annoyingly simple ones, through difficult ones that require you to remember an exact command you've seen once, to ones that suffer from a rather odd sense of humour. I was not familiar with the “hunter2” meme, and the constant references to that value forced me to go and look it up. Not particularly interesting. As an inside joke, using it as the default password example was sufficient – giving it as an incorrect option in several multiple-choice quizzes was unnecessary and distracting.

But the other thing that I really noticed about the quizzes in this course is more serious: they just didn't feel like an integral part of the lesson. Most of them started with a dedicated video, rather than just being asked at the end of a video. This inserted a little pause as the next video loaded. You'd sit there waiting as Huffman unnecessarily read out the answers (I can read, as you may have noticed). That wasn't the worst of it, though.  Huffman insisted on constantly telling you you were about to have a quiz. Why? Isn't it enough to ask the question?
Worse, this kills one of the clearest pedagogical rules: don't overwrite useful information in working memory – take full advantage of the "echo effect". I found myself lost on several occasions, because after giving me new information, Huffman would wipe the “echo” from my working memory by telling me “I think it's time for a quick quiz”. There'd then be a pause while the next video loaded, where the only thing repeating in my head was the fact that there was going to be a quiz – the information I needed to actually complete the quiz was gone.  I skipped the quiz and went straight to the answer, because I didn't know it, and there was no scaffolding or structured guidance in the question.

And then, of course, whether I got the answer right or wrong (or didn't even try), Huffman decides to explain why all the answers are right or wrong anyway. No attempt was made to focus on my specific misunderstandings, and when you're giving a course to thousands of people, wouldn't it make sense to take a little extra time and include a few extra video snippets to match the different answer combinations to the quizzes? A couple of hours of your time to save 10-20 minutes each for thousands of people is a good trade-off (and what you might consider being a “good citizen”, Huffman, as your own course proposes we all should be).
4. Population and sample / ACID

Delta complains that Thrun's course doesn't present a clear distinction between two fundamental statistical concepts – I would say that Huffman's course similarly fails when it touches on databases. It's not as serious a problem, as this isn't a database course, but if you're going to teach something, for pity's sake, teach it right. ACID stands for Atomicity, Consistency, Isolation and Durability.  Huffman's explanation in unit 3 fails to fully define consistency, leaving it difficult to see the difference between atomicity and consistency. The confusion is compounded by the fact that the whole definition of ACID relies on the idea of a database “transaction”, which Huffman readily admits to not having talked about before. (So I could actually add this into “poor planning” above if I wanted to.)
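To show the distinction Huffman leaves fuzzy – atomicity is the all-or-nothing property of a transaction – here's a minimal sketch (my example, not the course's) using Python's built-in sqlite3:

```python
import sqlite3

# Atomicity: a *transaction* either happens completely or not at all.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()

try:
    with db:  # the with-block is a single transaction
        db.execute("UPDATE accounts SET balance = balance - 50 "
                   "WHERE name = 'alice'")
        # Simulate a crash before the matching credit to bob...
        raise RuntimeError("crash mid-transfer")
except RuntimeError:
    pass

# ...and the half-finished debit was rolled back: no money has vanished.
balances = dict(db.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0}
```

Consistency is the separate guarantee that every committed transaction leaves the database satisfying its declared constraints (the PRIMARY KEY here, say) – and notice that neither property can even be stated without the notion of a transaction, which is why the missing definition matters.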

5. Normal curve calculations / multiple frameworks and libraries
Nothing missing from this course is quite as fundamental as the normal curve, but the end result – something “magical” happening (ie powerful, important, and not understood) – is present here too. By jumping about from framework to framework and library to library, Huffman keeps introducing stuff that we, as learners, just aren't going to understand. To me, that decreases my confidence: I like to understand (which is why I'm taking the course).
6. Central Limit Theorem not explained

No real equivalent, I suppose.

7. Bipolar difficulty
The difficulty problem in Thrun's stats course is slightly different from the problem here. Thrun asked questions that he didn't expect the student to know the answer to (oddly), but here Huffman expects you to know the answer... except that he has a very odd set of assumed prior knowledge.
For example, he starts with the assumption that you have never encountered HTML before, but HTML is extremely well known now, even among non-geeks. But then he assumes you know Python. Python is a fairly popular programming language at the moment, but really – not everyone knows it. I'm also willing to wager a fair chunk of cash that most Python scripters are very comfortable indeed with HTML, but that the converse is not true.

Now, I might be doing him a disservice – his assumption no doubt comes because Udacity's own Computer Science 101 course teaches Python, but then again the course prerequisites don't mention either Udacity CS101 or specifically Python:

What do I need to know?

--------------------------------------------------------------------------------

A moderate amount of programming and computer science experience is necessary for this course.


See? No mention of Python. Now I've got a degree in Computer Science, so I've got what I thought was a “moderate amount” of experience. But as soon as he asked a code-based question, I was stuck. Not only did I not know the appropriate syntax, but often I had no idea of the type of structure required.
You see, Python is a very sophisticated, very high-level language that does lots of clever things that a lot of the lower-level languages don't. It has very useful and flexible tools for manipulating strings and data-sets, and even allows you to build “dictionaries” of key/value pairs. A great many of the tasks presented in the course were easy if you were familiar with the structures. If you weren't, you wouldn't A) know how to write the code or B) know where to look for the answer, or what it would be called. OK, so the answer to B is “the course forums,” I suppose, but that's hardly adequate, surely? Audience participation is great and all, but shouldn't good teaching prevent these blockages, these obstacles to the learner?
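For anyone in the same boat, the sort of idiom Huffman takes for granted looks like this (a generic illustration, not code from the course):

```python
# A Python "dictionary": key/value pairs, the structure several course tasks
# quietly assume you already know.
headers = {"Content-Type": "text/html", "Content-Length": "42"}
headers["Set-Cookie"] = "user_id=123"            # add or overwrite a key
content_type = headers.get("X-Missing", "none")  # safe lookup with a default

# String/data-set manipulation that would be a hand-written loop in a
# lower-level language:
params = {"q": "hello", "page": "2"}
query = "&".join(f"{k}={v}" for k, v in params.items())
print(query)  # q=hello&page=2
```

If you don't know that such a structure exists, you can't even formulate the search query to go and find it – which is exactly the blockage I'm describing.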

8. Final exam certification

As I said, I haven't got that far yet. I suspect retaking will be less of an issue as a lot more of the material will be practical, and you can't expect to pass a coding exam by trial and error.

9. Hucksterism

Huffman doesn't seem to be as evangelistic as Thrun, but he still does talk a bit too positively after some of the quizzes (despite not knowing whether I got the answer right or wrong), and he does say from time to time that now we “know how to” do something. Are you sure? I've followed a fairly tightly defined process – take away the scaffolding, and could I repeat it? That's not guaranteed.

10. Lack of updates?
The grating positivity does seem to die down during the course, so there's some evidence of responding to feedback, but the course first went out months ago, and despite presumably thousands of completed courses, there's no evidence of them going back to attempt to fix any problems in the earlier videos. As I stated in my previous post on MOOCs: any conscientious teacher reconsiders his material after any class, which means an update for every 20-30 students – this course has had a lot more students than that, so where are the updates?
 
My own evaluation

So the above was recreating Delta's complaints, with the specific purpose of defending him/her against those who claim that the AngryMath article was unfair as it focused on a sample size of one. But I'd also like to post my views in their own terms.

Because to me, the big problem wasn't one that appeared in Delta's top 10: it's that the course is not what I would consider a university-level course. Or at least, not a complete university-level course. What I have experienced so far feels a little too blinkered and focused on one project. I don't remember any course at any of the three universities I've studied at where the teaching drove so clearly towards one single end-of-course task. Each of the end-of-unit programming tasks brings you closer to that final task, and it feels like there's a lack of breadth. When I went through my programming courses as a student at Edinburgh, we were dealing with incrementally increasing code complexity, but on an exponentially expanding problem base – no more than two homework tasks would be as closely linked as all the tasks here are. In essence, what we're doing is more like a “class project” than a full “class”.  Most courses in Edinburgh would change the programming tasks substantially from year to year (certain courses excepted – my hardware design and compiler classes were fairly specialised), but Udacity simply cannot do this, as the tasks are fundamental to the syllabus structure.

And Huffman, we're told, “teaches from experience” – which basically translates to "he is not a teacher," in layman's terms. He does an admirable job for someone who hasn't been trained in pedagogy, but really, seriously, would it kill them to get an actual teacher to teach the course?  Huffman's awkwardness and uncertainty about the format is the reason he keeps killing the echo effect – he hasn't developed the instinct to know how much time and space we need to process an idea. At times, he gives a reasonably broad view of the topic, but at others, he just splurges onto the page what is needed for the task at hand. There's no progressive differentiation of concepts, and he doesn't use any advance organisers to help the learner understand new concepts.
 
Case in point: introducing ROT13/the Caesar cypher without once demonstrating or even describing a codewheel – a video demonstration of the code wheel is easy, cheap and clear.  His demonstration with lines on the virtual whiteboard was not clear.  Even if you don't use a codewheel, you can always use the parallel alphabets method – the plain alphabet written out above the shifted one.
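For the record, the parallel alphabets method is just the two rows side by side, reading straight down – which also makes for a two-line Python implementation (my sketch, not the course's):

```python
import string

plain   = string.ascii_uppercase   # ABCDEFGHIJKLMNOPQRSTUVWXYZ
shifted = plain[13:] + plain[:13]  # NOPQRSTUVWXYZABCDEFGHIJKLM

# To encode, find each letter in the top row and read off the bottom row:
table = str.maketrans(plain, shifted)
print("HELLO".translate(table))    # URYYB

# ROT13 is its own inverse -- shifting by 13 twice brings you back:
print("URYYB".translate(table))    # HELLO
```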
 

So, yeah, I can see that Thrun really genuinely believes that the educational establishment doesn't “get it” when it comes to new education, but he's throwing the baby out with the bath water if that means getting rid of educationalists altogether.

Teaching vs training

But Udacity isn't completely abandoning academia – oh no; it's recreating its mistakes. A recent post on the Udacity blog repeats that hoary old complaint that education simply doesn't adapt fast enough to new technologies. In Udacity's own words:
Technologies change quickly. While savvy companies are quick to adapt to these changes, universities are sometimes slower to react. This discrepancy can lead to a growing gap between the skills graduates have and the skills employers need. So how do you figure out exactly what skills employers are looking for? Our thinking: work with industry leaders to teach those skills!
 
It's the old “academic” vs “vocational” debate once again, and just as many universities are sacrificing their academic credentials by providing more and more courses that are mere “training courses” for a given technology, that's what Udacity is becoming. Forthcoming courses from Udacity are pretty specific:
  • Mobile Applications Development with Android
  • Applications Development with Windows 8
  • Data Visualization with Mathematica  
Thrun keeps talking himself up as an alternative to university, but he's starting to repaint his site as something that's more an alternative to a Sams/Teach Yourself/for Dummies coursebook. Because as they say:
We are working with leading academic researchers and collaborating with Google, NVIDIA, Microsoft, Autodesk, Cadence, and Wolfram to teach knowledge and skills that students will be able to put to use immediately, either in personal projects or as an employee at one of the many companies where these skills are sought after.
 
That's not what university is about. So Thrun doesn't like university. Fine. But plenty of us do. Stop criticising universities for being universities.  If you want to be a vendor-specific bootcamp, knock yourself out, but please don't criticise universities for teaching us how to think instead of leading us by the nose through writing a SharePoint site.

The UK used to have a strong distinction between vocational institutions (known as “colleges of further education”) and academic institutions (universities, higher education). It's a useful distinction, and we should have both – it's not an “either/or” question.
 
On the other hand, I suppose Thrun's worked out the answer to how to fund MOOCs: sell out to big business.  I hope they're paying you well enough.

04 October 2012

Monetising MOOCs

I've just had an email from the organiser of a free online course that I signed up for (though I never watched or read any of the materials).  It was borderline spam: an advert for the book he was about to launch.

Is that the future of the MOOC, then?  Someone using it simply to build a mailing list for his next publication?  If it is, I'm not sure how I feel about that.  I'm not convinced that a marketing campaign is the correct motivation for someone to write a coherent and (crucially) academically rigorous higher education course.

The warnings were there earlier, though.  Coursera (before they were so named) were at one point advertising an entrepreneurship course called Lean LaunchPad, but those guys eventually jumped ship and climbed aboard Thrun's Udacity... presumably for commercial reasons.  Yes, the course started out as a Stanford course, but the name strikes me as more than a little... trademarky.  Isn't real higher education supposed to be generic?  Aren't we supposed to present a moderately broad and balanced view of the whole area of study, and not home in on one specific methodology to the exclusion of all others?

To me, that looks like education taking one more step towards being a simple packager of vendor-specific training courses.  It's cheap, but efficiency isn't much good when you sacrifice education in the process.

So what can we do?

Muvaffak commented on my earlier post, saying that courses need to be self-financing, but the big question is how to do that without affecting openness.  No, $10 isn't much to me, but there are places where it is a hell of a lot.  Simple fees aren't practical.

The solution normally kicked about is "something or other... certification".  No, not very specific.  The idea is usually that the course -- the "education" -- is free, but testing (and therefore certification) is paid for.  But that threatens to bring us back to an inequitable state, because we're still establishing a two-tier system.  Rich people in rich countries get certified; poor people in poor countries don't.

So there is still the very real issue of openness at the commercial level.  The internet makes the obvious answer difficult to see, or possibly just difficult to swallow: different prices in different places.  If A Book On C isn't affordable in India at the US and European retail price, print it locally cheaper.  But every couple of years, someone in the US makes a big thing about being "ripped off" by US prices, or someone gets taken to court for importing unlicensed copies of books.

So while people rave about the potential for free education to improve the lot of the poor, as soon as you start talking about offering them the same thing at a different price, you're no longer seen as helping the poor, you're now ripping off the pretty well-off (even if you're miles cheaper than the alternative).

Realistically, I'd say the fair and equitable way to fund MOOCs is through proctored exams with differential pricing.  Institutions in various countries act as agents for the exam, and pay commission to the course writers.  Make that commission a percentage, and the local market will determine local pricing.
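As a back-of-the-envelope illustration of how a percentage commission would track local pricing (the fees and commission rate below are invented purely for the sake of the example):

```python
# Hypothetical figures: exam fees set by local centres, with the course
# writers taking a fixed percentage commission wherever the exam is sat.
commission_rate = 0.20

local_fee = {          # set by each local exam centre to suit its market
    "London": 200.0,
    "Ouagadougou": 15.0,
}

# The writers' cut scales automatically with what the local market can bear,
# so nobody has to set a "global" price that's wrong almost everywhere.
commission = {city: round(fee * commission_rate, 2)
              for city, fee in local_fee.items()}
print(commission)  # {'London': 40.0, 'Ouagadougou': 3.0}
```

The point of the sketch is that no central body ever has to decide what an exam "should" cost in Burkina Faso; the local agent prices it, and the course writers' income follows.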

No major exams at the moment really have this local pricing, though -- the biggest example of inequity would have to be a certain internationally recognised English exam, which costs several hundred pounds wherever you sit it.  A reasonable chunk of cash for a European student, but a heck of a lot of money for someone from South America.  The reason?  The papers all go back to a rich country, where they're marked by people who demand pretty high wages (in global terms).

In order to allow differential pricing, then, we're going to have to allow the distribution of marking duties.  The institutions taking the students' fees are going to have to be hiring their own markers.

BUT...

Having a competitive market for examination centres is very dangerous -- just see how the multiple exam boards for England and Wales became mired in controversy a few years back, with claims that one group of trainers were giving teachers advance warning of exam questions.  Certain "bad apples" were effectively trying to get the pass mark up in order to make the exams more appealing to schools.

So the marking load has to be split, but you can't be marked by your own institution (so no way for them to game the system).

So what are you left with?

Well, say I sit the exam in Rome; I should now have no idea where my exam will be marked.  Say it ends up in Ouagadougou.  And a paper from Jean in Ouagadougou ends up in London.  So I've paid much more money than Jean.  But Jean's marker gets paid more than Jean paid for the entire exam, and my marker gets paid a tiny fraction of what I paid.

While the system would be entirely equitable -- we all get out the same, and we put pretty much equivalent amounts into the system -- it looks unfair, because people just aren't used to a barter economy.

So the most workable solution for funding these things will never happen.

So from now on, I'd expect to see more and more tie-ins to books and proprietary methodologies, because the only guys who'll be able to afford to do this are the people with something to sell...

29 September 2012

Online education's elephant in the room.

It's funny how things come together to give you a better understanding of your own mind. A couple of weeks ago I got caught up in the internet debate on mass-participation online education started by an American stats professor critiquing Udacity's Introduction to Statistics by Sebastian Thrun.  Then the other day I started debating online education again, this time triggered by the Technology Review article The Crisis in Higher Education linked and debated on Slashdot. One thing I didn't mention in the first debate, but did in the second, was something that has been bugging me for a very long time, and it's really only thanks to the recent debates I've been having with Owen Richardson on DI that I was finally able to articulate it.

These massive courses claim the potential to be better than anything that's come before, thanks to the availability of masses of automatically-collected feedback that will be used to improve them. This, theoretically, means the fastest pace of change in the history of education.
But is that really the case in practical terms?
Right now, I'm at the steepest part of the learning curve with respect to the courses I'm delivering at the university. I can't write more than one full lesson plan at a time, as in each new lesson I receive crucial feedback on what my students are capable of. So I'm constantly revising my material.
My father, during his career as a Chemistry teacher, delivered the same course year after year to classes of no more than 20 pupils at a time. Every time he taught a lesson, though, he was looking for improvements and refinements based on the reaction of the class. If someone made a mistake, he'd try to change the teaching to remove the possibility of someone in the next class making the same mistake.

So in the case of a conscientious teacher, material is revised for every 20 students taking the course.
 
Sebastian Thrun's first sitting of the Artificial Intelligence course had 160,000 pupils. OK, only 14% completed the course, but 22,400 students is still an incredibly high number. That's 1120 iterations of a class for me or my Dad. We're talking about numerous lifetimes of teaching. For a course taught once a year, it's equivalent to going back to the first millennium AD, not only before the computer, but before algebra, Cartesian geometry and even the adoption of the Hindu-Arabic number system in Europe.  So we're talking about "A.D. DCCCXCII", not "892 AD".
A millennium's worth of teaching, with no improvement – I think that qualifies as the slowest rate of change in education ever, rather than the fastest.
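The arithmetic behind that comparison is simple enough to sketch (the class size of 20 and the once-a-year teaching cycle are taken from the example above):

```python
# Rough arithmetic behind the "millennium of teaching" comparison.
enrolled = 160_000
completion_rate = 0.14
class_size = 20        # a typical school class, as in the example above

completers = int(enrolled * completion_rate)   # 22,400 students
iterations = completers // class_size          # 1,120 class-sized cohorts

# A conventional course runs once a year, so 1,120 feedback iterations
# would take 1,120 years -- counting back from 2012, when this was written:
year_equivalent = 2012 - iterations
print(completers, iterations, year_equivalent)  # 22400 1120 892
```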

Worse than that, while Thrun complains that his contemporaries are simply throwing existing courses onto the net without making them truly match the new paradigm, these are at least courses that have a fair amount of real-world testing behind them.  By contrast, his attempt at something completely new means that he has given a course to over 20,000 students without having tested it even once (as far as I can see). That's... worrying.
So what's the source of the problem?
The problem as I see it has two root causes: the medium and (as always) money.
The medium.

The current trend towards massive online courses is a development of MIT's OpenCourseWare initiative. Essentially, MIT videoed a bunch of lectures and stuck them online with various course notes, exercise sheets and textbook references. I know a few people who got a lot out of one or two courses, but often the quality was bitty, with incomplete materials (due to copyright or logistical reasons) and little motivation to complete.

The early pioneers of the current wave saw a major part of the problem as being in the one-hour lecture format, and revised it to a “micro-lecture” format, delivering short pieces to camera, interspersed with frequent concept-checking and small tasks.
But however small the lecture, it is still fundamentally the same thing, with a live human writing examples on some kind of board, and any revision means the human going back to the board and writing it out again, and giving the explanations again. The presented material cannot be manipulated automatically, so the potential for rapid revision and correction is reduced.
Money.

Revising a course manually takes time, and time is money. Squeezing several lifetimes' worth of improvements into a rapid development cycle isn't a part-time job – it's probably more than a full-time job, yet in the brave new world of online education, this is nobody's day job. Most of the course designers are still teaching and researching, and Thrun himself is still doing research while working at one of the world's biggest tech companies and trying to start up a new company.
No-one's yet really worked out the way to cash in on these developments, so no-one's investing properly.

Here in the UK, online education (on a smaller scale) is already on the increase, but mostly as a cost-cutting measure. That's fine as a long term goal, but in the short-term there is a need for massive investment in order to get things right.

What are we left with?
Not a lot, frankly. Data-mining requires a widely-varying dataset, in order to allow the computer to detect patterns that are too subtle or on too large a scale for a human to be able to pick up independently. But the data collected on these online programmes is pretty much one-dimensional. There are no variables explored in the teaching – there is one course, so the feedback can say if something is difficult or easy (based on number of correct answers and time taken to answer) – it can't tell us why, and it can't tell us what would be better. That means that the feedback from 22,400 students is less valuable to a good teacher than one question from an average student during an average class. That's... worrying.

So much for the revolution.

So what's the solution?

If there's two parts to the problem, there must be two parts to the solution.

Medium
The Open University has, over the years, moved away from lectures to producing TV quality documentaries that use the best practices of documentary TV to present material in a way that genuinely enlightens the viewer.
 
As a documentary isn't a single continuous lecture, it would theoretically be possible to have a computer modify and re-edit a documentary to make it easier to understand.
 
On the most basic level, a difficult concept might be made easier by inserting an extra second of thinking time at a certain point in the video -- an algorithm would be able to test this dynamically.  Conversely, the algorithm might find that reducing the pause is more effective, and do so dynamically (we assume then that the concept is easy and that extra time allows the student to become distracted).
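A minimal sketch of what such a dynamic test might look like – the variant names, the 10% exploration rate and the "correct first attempt" success metric are all my own assumptions, not anything any platform actually does:

```python
import random

# Two candidate edits of the same video segment: a short or a long pause
# before the concept check.  Success metric: correct first attempts.
variants = {
    "short_pause": {"served": 0, "correct": 0},
    "long_pause":  {"served": 0, "correct": 0},
}

def choose_variant():
    """Epsilon-greedy: mostly serve the best-performing pause length,
    but keep exploring 10% of the time."""
    if random.random() < 0.1 or all(v["served"] == 0 for v in variants.values()):
        return random.choice(list(variants))
    return max(variants,
               key=lambda k: variants[k]["correct"] / max(variants[k]["served"], 1))

def record(variant, answered_correctly):
    # Log the outcome of one student's concept check for this variant.
    variants[variant]["served"] += 1
    if answered_correctly:
        variants[variant]["correct"] += 1
```

Over thousands of sittings, something like this would converge on whichever pause length produces more correct answers – and could do the same for any other parameter of the video that's expressed as editable data rather than baked into the recording.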
 
Then there's the slides and virtual whiteboards used in the videos themselves -- produced in real-time as the presenter speaks.  This splits the presenter's attention, often resulting in rushed, unclear writing, or pauses and hesitations in speech.  Revising the visuals means redoing the whole video.
 
Why doesn't the computer build the visuals to the presenter's specification, but with the ability to modify them to optimise for student feedback?
 
Eventually, we would get to the point where a course definition is a series of voice-over fragments and descriptions of intended visuals, and the computer decides what to put where.
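Such a course definition might look something like this – the field names and structure are purely illustrative, not any real platform's format:

```python
# Hypothetical course definition: voice-over fragments paired with
# machine-readable descriptions of the intended visuals.  Tunable values
# (like pause lengths) are data, so an optimiser can adjust them without
# anyone re-recording the video.
course = [
    {"voiceover": "intro.ogg",
     "visual": {"type": "title", "text": "Bayes' theorem"},
     "pause_after_s": 1.0},
    {"voiceover": "derivation.ogg",
     "visual": {"type": "equation", "latex": r"P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}"},
     "pause_after_s": 2.0},
]

def render_plan(segment):
    # A real renderer would composite audio with generated visuals;
    # here we just summarise the plan for one segment.
    return (f"{segment['voiceover']} + {segment['visual']['type']} visual, "
            f"{segment['pause_after_s']}s pause")

print(render_plan(course[0]))  # intro.ogg + title visual, 1.0s pause
```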
 
But the reason that'll never happen in the current model is reason 2:

Money

Where there is a genuine incentive to drive down the cost of education, online education will find its most fertile ground. When you look at the tuition fees in places like Stanford, Harvard and MIT, you'll see that these aren't the schools with the biggest incentive to make online education work.
Instead, we need to look to Europe, and in particular the countries with significant public funding for higher education. Universities funded by the public purse are under intense pressure to cut costs – it's the only way to balance the books in a shrinking economy.
However, the universities alone can't make this happen, as the current pressure is for cost savings NOW, and so they're producing online programmes with insufficient research and the quality of education is suffering for it.

Governments are sacrificing students to the God of Market Forces, when they should instead be planning intelligently. Instead of cutting funding to force universities to be more economical, they should be investing to make universities more economical. Give universities money now in order to produce high-quality programmes that will reduce costs for years to come.
 
But It Will Not Be Cheap – quite the opposite.  The creation of a genuinely high-quality online course is phenomenally expensive in terms of up-front costs, while being ridiculously cheap in the long term.


The current clientele of Udacity, edX and Coursera will no doubt feel cheated that I'm talking about education for the classic “student” rather than the free “everyman” approach of Coursera et al, but there's no need to. Established, well-researched, properly tested and adequately trialled online courses may take a while to perfect, but once they exist, their running costs will be so low that they will surely be made widely available. And while they're being developed, they're going to need a constant source of beta testers, and that's going to mean people who're doing it for personal interest, not for grades – ie you. The end result will still be open education, but it will be better.