OK, so this isn't strictly about language, but I've been following Udacity's course on web app development as I've been working on designing a language learning app for a while now, and I'm really not too hot on web technologies at the moment (and where would you put a language learning app other than on the web these days?).
I've written about MOOCs before, and shortly after writing that post I read a very detailed review of Udacity's Introduction to Statistics at the AngryMath blog.
A lot of commenters suggested that the problems identified were unique to that particular course, but it was with those criticisms in the back of my head that I've spent several hours over the last couple of weeks rattling through this course, and I have to say that I have very similar concerns to Delta over at AngryMath.
To summarise, Delta picked out a “top 10” of problems:
- Lack of planning
- Sloppy writing
- Quiz regime
- Population and sample
- Normal curve calculations
- Central Limit Theorem not explained
- Bipolar difficulty
- Final exam certification
- Lack of updates?
Everything there matches my own observations with the web development course, except the final exam (which I haven't reached yet – I'm on unit 6 of 7) and the stats-specific items (4, 5, 6) – although there are problems with Steve Huffman's course that are analogous to these.
1. Lack of planning
It is not uncommon to hear Huffman change his mind halfway through a unit, or even after giving a quiz. Mostly, this is because he uncovers another quirk in Google App Engine or one of the Python code libraries that affects the outcome. OK, we can forgive the guy for not being an expert on a relatively new technology, but why didn't he take a couple of hours to check all these things before starting filming?
In video 4.38 he even says "One final password thing. I know I promised the last one was the final one but..." Now he really should have known he was going to say that when he filmed the previous segment, and if he really wanted to change it, he could have gone back and reshot part of the earlier section in order to edit it out (or even just redubbed the section in question).
If he can't plan an hour or two ahead, it throws his whole schedule into doubt.
2. Sloppy writing
Huffman makes several spelling errors during the course on some pretty fundamental computing terms, talking about “algorithims” (ouch) or a database being “comitted” (yuck). After having “protol buffers” on screen for half a minute, he spots it and corrects it to “protocol buffers” (5.16).
His handwriting becomes progressively more crooked, moving across the screen at an angle, and he consistently and clearly writes his quote marks as open and close quotes on the whiteboard, even though most computers make no distinction (and Python, along with most languages, definitely doesn't).
This is core stuff he's dealing with, and he's failing to be precise.
3. Quiz regime
The quizzes seem just as forced as Delta found in the stats course, ranging from annoyingly simple ones, through difficult ones that require you to remember an exact command you've seen only once, to ones that suffer from a rather odd sense of humour. I was not familiar with the “hunter2” meme, and the constant reference to that value forced me to go and look it up. Not particularly interesting. As an inside joke, using it as the default password example was sufficient – giving it as an incorrect option to several multiple-choice quizzes was unnecessary and distracting.
But the other thing that I really noticed about the quizzes in this course is more serious: they just didn't feel like an integral part of the lesson. Most of them started with a dedicated video, rather than just being asked at the end of a video. This inserted a little pause as the next video loaded. You'd sit there waiting as Huffman unnecessarily read out the answers (I can read, as you may have noticed). That wasn't the worst of it, though. Huffman insisted on constantly telling you you were about to have a quiz. Why? Isn't it enough to ask the question?
Worse, this kills one of the clearest pedagogical rules: don't overwrite useful information in working memory – take full advantage of the "echo effect". I found myself lost on several occasions, because after giving me new information, Huffman would wipe the “echo” from my working memory by telling me “I think it's time for a quick quiz”. There'd then be a pause while the next video loaded, where the only thing repeating in my head was the fact that there was going to be a quiz – the information I needed to actually complete the quiz was gone. I skipped the quiz and went straight to the answer, because I didn't know it, and there was no scaffolding or structured guidance in the question.
And then, of course, whether I got the answer right or wrong (or didn't even try), Huffman decides to explain why all the answers are right or wrong anyway. No attempt was made to focus on my specific misunderstandings, and when you're giving a course to thousands of people, wouldn't it make sense to take a little extra time and include a few extra video snippets to match the different answer combinations to the quizzes? A couple of hours of your time to save 10-20 minutes each for thousands of people is a good trade-off (and what you might consider being a “good citizen”, Huffman, as your own course proposes we all should be).
4. Population and sample / ACID
Delta complains that Thrun's course doesn't present a clear distinction between two fundamental statistical concepts – I would say that Huffman's course similarly fails when it touches on databases. It's not as serious a problem, as this isn't a database course, but if you're going to teach something, for pity's sake, teach it right. ACID stands for Atomicity, Consistency, Isolation and Durability. Huffman's explanation in unit 3 fails to fully define consistency, leaving it difficult to see the difference between atomicity and consistency. The confusion is compounded by the fact that the whole definition of ACID relies on the idea of a database “transaction”, which Huffman readily admits to not having talked about before. (So I could actually add this into “poor planning” above if I wanted to.)
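The distinction isn't hard to show. Here's my own toy illustration (not from the course) using Python's built-in sqlite3 module: atomicity means a transfer either happens completely or not at all, while consistency means the database never ends up violating its own invariants – here, a CHECK constraint that forbids negative balances.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE accounts (
    name TEXT PRIMARY KEY,
    balance INTEGER CHECK (balance >= 0)  -- consistency: an invariant the DB enforces
)""")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money atomically: either both updates happen, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # the CHECK constraint fired; the whole transfer was undone

transfer(conn, "alice", "bob", 60)   # succeeds
transfer(conn, "alice", "bob", 500)  # would break the invariant, so it's rolled back
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 40, 'bob': 60}
```

Note that the failed transfer leaves no trace at all – that's atomicity doing its job of protecting consistency, which is exactly the relationship the course leaves murky.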
5. Normal curve calculations / multiple frameworks and libraries
Nothing missing from this course is quite as fundamental as the normal curve was to the stats one, but the end result – something “magical” happening (i.e. powerful, important, and not understood) – is the same. By jumping about from framework to framework and library to library, Huffman keeps introducing stuff that we, as learners, just aren't going to understand. To me, that decreases my confidence: I like to understand (which is why I'm taking the course).
6. Central Limit Theorem not explained
No real equivalent, I suppose.
7. Bipolar difficulty
The difficulty problem in Thrun's stats course is slightly different from the problem here. Thrun asked questions that he didn't expect the student to know the answer to (oddly), but here Huffman expects you to know the answer... except that he has a very odd set of assumed prior knowledge.
For example, he starts with the assumption that you have never encountered HTML before, but HTML is extremely well known now, even among non-geeks. But then he assumes you know Python. Python is a fairly popular programming language at the moment, but really – not everyone knows it. I'm also willing to wager a fair chunk of cash that most Python scripters are very comfortable indeed with HTML, but that the converse is not true.
Now, I might be doing him a disservice – his assumption no doubt comes because Udacity's own Computer Science 101 course teaches Python, but then again the course prerequisites don't mention either Udacity CS101 or specifically Python:
What do I need to know?
A moderate amount of programming and computer science experience is necessary for this course.
See? No mention of Python. Now I've got a degree in Computer Science, so I've got what I thought was a “moderate amount” of experience. But as soon as he asked a code-based question, I was stuck. Not only did I not know the appropriate syntax, but often I had no idea of the type of structure required.
You see, Python is a very sophisticated, very high-level language that does lots of clever things that a lot of the lower-level languages don't. It has very useful and flexible tools for manipulating strings and data-sets, and even allows you to build “dictionaries” of key/value pairs. A great many of the tasks presented in the course were easy if you were familiar with the structures. If you weren't, you wouldn't A) know how to write the code or B) know where to look for the answer, or what it would be called. OK, so the answer to B is “the course forums,” I suppose, but that's hardly adequate, surely? Audience participation is great and all, but shouldn't good teaching prevent these blockages, these obstacles to the learner?
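To make that concrete, here's a hypothetical quiz-style task of my own devising (not one of Huffman's): tallying word frequencies. It's a few obvious lines if you already know Python dictionaries, and completely opaque if you've never met them – you wouldn't even know the structure is called a “dictionary”, so what would you search for?

```python
# A hypothetical example: tally word frequencies in a string.
# Trivial if you know Python dictionaries; baffling if you don't.
text = "the cat sat on the mat"
counts = {}
for word in text.split():                    # flexible string handling: split on whitespace
    counts[word] = counts.get(word, 0) + 1   # a dictionary of key/value pairs
print(counts)  # {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```

A C or Java programmer of “moderate experience” would reach for arrays and loops and produce something three times as long – and still wouldn't match the answer the course expects.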
8. Final exam certification
As I said, I haven't got that far yet. I suspect retaking will be less of an issue as a lot more of the material will be practical, and you can't expect to pass a coding exam by trial and error.
Huffman doesn't seem to be as evangelistic as Thrun, but he still does talk a bit too positively after some of the quizzes (despite not knowing whether I got the answer right or wrong), and he does say from time to time that now we “know how to” do something. Are you sure? I've followed a fairly tightly defined process – take away the scaffolding, and could I repeat it? That's not guaranteed.
10. Lack of updates?
The grating positivity does seem to die down during the course, so there's some evidence of responding to feedback, but the course first went out months ago, and despite presumably thousands of completions, there's no evidence of anyone going back to fix problems in the earlier videos. As I stated in my previous post on MOOCs: any conscientious teacher reconsiders his material after any class, which means an update for every 20-30 students – this course has had far more students than that, so where are the updates?
My own evaluation
So the above was recreating Delta's complaints, with the specific purpose of defending him/her against those who claim that the AngryMath article was unfair as it focused on a sample size of one. But I'd also like to post my views in their own terms.
Because to me, the big problem wasn't one that appeared in Delta's top 10; it was that the course is not what I would consider a university-level course. Or at least, not a complete university-level course. What I have experienced so far feels a little too blinkered and focused on one project. I don't remember any course at any of the three universities I've studied at where the teaching drove so clearly towards one single end-of-course task. Each of the end-of-unit programming tasks brings you closer to that final task, and there's a distinct lack of breadth.

When I was a student at Edinburgh, the programming tasks dealt with incrementally increasing code complexity, but on an exponentially expanding problem base – no more than two homework tasks would be as closely linked as all the tasks here are. In essence, what we're doing is more like a “class project” than a full “class”. Most courses in Edinburgh would change the programming tasks substantially from year to year (certain courses excepted – my hardware design and compiler classes were fairly specialised), but Udacity simply cannot do this, as the tasks are fundamental to the syllabus structure.
And Huffman, we're told, “teaches from experience” – which basically translates to “he is not a teacher” in layman's terms. He does an admirable job for someone who hasn't been trained in pedagogy, but really, seriously, would it kill them to get an actual teacher to teach the course? Huffman's awkwardness and uncertainty about the format is the reason he keeps killing the echo effect – he hasn't developed the instinct for how much time and space we need to process an idea. At times, he gives a reasonably broad view of the topic, but at others, he just splurges onto the page whatever is needed for the task at hand. There's no progressive differentiation of concepts, and he doesn't use any advance organisers to help the learner understand new concepts.
Case in point: introducing ROT13 (a Caesar cypher) without once demonstrating or even describing a code wheel – a video demonstration of a code wheel would be easy, cheap and clear. His demonstration with lines on the virtual whiteboard was not clear. Even if you don't use a code wheel, you can always use the parallel alphabets method.
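The parallel alphabets method even maps directly onto code – here's my own sketch (not Huffman's solution): write the alphabet out once, write it again shifted by 13 places underneath, and substitute each letter for the one below it.

```python
import string

# Parallel alphabets: the plain alphabet on one line, the same alphabet
# rotated 13 places underneath. Encoding is just reading down the columns.
plain = string.ascii_lowercase        # abcdefghijklmnopqrstuvwxyz
shifted = plain[13:] + plain[:13]     # nopqrstuvwxyzabcdefghijklm
table = str.maketrans(plain + plain.upper(),
                      shifted + shifted.upper())

def rot13(text):
    """Substitute each letter with its partner in the shifted alphabet."""
    return text.translate(table)

print(rot13("Hello"))  # Uryyb
print(rot13("Uryyb"))  # Hello -- shifting by 13 twice is the identity
```

Ten minutes with a diagram like that, and the “magic” evaporates – which is rather the point of teaching.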
So, yeah, I can see that Thrun really genuinely believes that the educational establishment doesn't “get it” when it comes to new education, but he's throwing the baby out with the bath water if that means getting rid of educationalists altogether.
Teaching vs training
But Udacity isn't completely abandoning academia – oh no; it's recreating its mistakes. A recent post on the Udacity blog repeats that hoary old complaint that education simply doesn't adapt fast enough to new technologies. In Udacity's own words:
Technologies change quickly. While savvy companies are quick to adapt to these changes, universities are sometimes slower to react. This discrepancy can lead to a growing gap between the skills graduates have and the skills employers need. So how do you figure out exactly what skills employers are looking for? Our thinking: work with industry leaders to teach those skills!
It's the old “academic” vs “vocational” debate once again, and just as many universities are sacrificing their academic credentials by providing more and more courses that are mere “training courses” for a given technology, that's what Udacity is becoming. Forthcoming courses from Udacity are pretty specific:
- Mobile Applications Development with Android
- Applications Development with Windows 8
- Data Visualization with Mathematica
Thrun keeps talking himself up as an alternative to university, but he's starting to repaint his site as something that's more an alternative to a Sams/Teach Yourself/for Dummies coursebook. Because as they say:
We are working with leading academic researchers and collaborating with Google, NVIDIA, Microsoft, Autodesk, Cadence, and Wolfram to teach knowledge and skills that students will be able to put to use immediately, either in personal projects or as an employee at one of the many companies where these skills are sought after.
That's not what university is about. So Thrun doesn't like university. Fine. But plenty of us do. Stop criticising universities for being universities. If you want to be a vendor-specific bootcamp, knock yourself out, but please don't criticise universities for teaching us how to think instead of leading us by the nose through writing a SharePoint site.
The UK used to have a strong distinction between vocational institutions (known as “colleges of further education”) and academic institutions (universities, higher education). It's a useful distinction, and we should have both – it's not an “either/or” question.
On the other hand, I suppose Thrun's worked out the answer to how to fund MOOCs: sell out to big business. I hope they're paying you well enough.