So, last time I was talking about a discussion I had with a communicative approach hardliner. A couple of days later, I had a new student ask for exam prep classes, so I got out my exam prep materials and had a quick look over them to remind myself of the specifics of the Cambridge Advanced exam, and I very quickly remembered something else from Sunday's conversation.
One of his big bugbears about the Scottish education system was that the foreign language exams all have their instructions in English. This, of course, is a natural consequence of the belief in immersion above all else -- if language learning must be immersive, then native-language task instructions clearly break the immersion, and therefore burst the language "bubble".
But here's the thing: when I prepare students for an exam, I explain the language task to them and then practise it over and over. By the time my students reach the exam, they don't need to read the instructions. Now the exams I prepare people for are international exams, so economies of scale dictate that the exam questions stay in English. My students go into the exam, don't need to spend time reading and understanding the question and can instead focus on carrying out the actual task that is set for them.
But there are people who don't do a lot of preparation for exams, and will go in and need to read the task. Sometimes they misunderstand the task, which means they lose marks. A hardliner would say this is fair enough, because if they don't understand English, they shouldn't pass an English exam. That would be all well and good if every candidate had to understand the question unaided, but students who prepare are not being tested on understanding the nature of the task at all, so the exam is inherently asymmetrical.
Indeed, most adherents of a target-language-only method are also likely to believe in the "four skills" model of language (which I don't agree with, but that's not the point here), and that model is fundamentally incompatible with target-language-only exam instructions.
How so? Well, if you believe that language is composed of reading, writing, speaking and listening, then it follows that you should test the four components individually. However, if you put task instructions in the target language, then every exercise includes a reading component, and you cannot objectively measure the students' levels in the other three skills.
It's a dilemma I have heard discussed even at university level, and it's very much a live debate, so nobody should really be putting forward their views as though they were objectively correct. As with everything, we can all agree that a line has to be drawn somewhere, but we all have different views on where.
I personally feel that with a student cohort with a shared native language, native-language task instructions are the fairest way to ensure that students are being tested on the skills that we claim to be testing.
But what about listening tasks? Should we be asking the comprehension questions in the native language too, in order to ensure that we are genuinely assessing listening comprehension? I kind of think we should, but at the same time, it doesn't feel right. I have personally done past exam papers with students where they clearly understood the meaning of the recording but didn't understand the synonym used in the answer. How can you lose a mark in a listening comprehension test for failing to understand a piece of written language?
But of course, that argument does start to extend to the reading comprehension test too, because you can understand the set passage perfectly but still have problems with the question. Here, at least, it is a reading problem leading to a lost reading mark, but there is still a fundamental question to answer about whether you should be setting an exam where you cannot determine the cause of the students' errors.
When you think about it, though, the problem in both of the previous paragraphs (though each is only one example of the various types of errors students might make) is not really one of listening or reading anyway -- it's a vocabulary problem; vocabulary, which we do not consider worthy of the title "skill".
Some examiners have tacitly recognised this and stopped trying to score the "four skills" individually. The Open University is one example: its degree-level exams have only spoken and written assessment, with the written part incorporating listening and reading as source material for an essay-writing task. It's a holistic approach which accepts that identifying why a student isn't good enough isn't really the issue -- either they are or they aren't. I was perfectly happy with the approach as a student, and I would be happy with it as a teacher.
Language is certainly too complicated for us to ever hope to devise a truly fair and balanced way to assess student attainment, but the current orthodoxy has tied itself in something of a knot trying to reconcile two competing goals. So are we offering immersion, or are we assessing the skills?
23 January 2015
The militant wing of immersion and the examiner's dilemma.