So, last time I was talking about a discussion I had with a communicative approach hardliner. A couple of days later, I had a new student ask for exam prep classes, so I got out my exam prep materials and had a quick look over them to remind myself of the specifics of the Cambridge Advanced exam, and I very quickly remembered something else from Sunday's conversation.
One of his big bugbears about the Scottish education system was that the foreign language exams all have their instructions in English. This, of course, is a natural consequence of the belief in immersion above all else -- if language learning must be immersive, then native-language task instructions clearly break the immersion, and therefore burst the language "bubble".
But here's the thing: when I prepare students for an exam, I explain the language task to them and then practise it over and over. By the time my students reach the exam, they don't need to read the instructions. Now the exams I prepare people for are international exams, so economies of scale dictate that the exam questions stay in English. My students go into the exam, don't need to spend time reading and understanding the question and can instead focus on carrying out the actual task that is set for them.
But there are people who don't do a lot of preparation for exams, and will go in and need to read the task. Sometimes they misunderstand the task, which means they lose marks. A hardliner would say this is fair enough, because if they don't understand English, they shouldn't pass an English exam. That would be all well and good if everyone faced the same hurdle, but students who prepare are not being tested on their understanding of the task at all, so the exam is inherently asymmetrical.
Indeed, most adherents to a target-language-only method are also likely to believe in the "four skills" model of language (which I don't agree with, but that's not the point here), which is fundamentally incompatible with target-language-only exam instructions.
How so? Well, if you believe that language is composed of reading, writing, speaking and listening, then it follows that you should test the four components individually. However, if you put task instructions in the target language, then every exercise includes a reading component, and you cannot objectively measure the students' levels in the other three skills.
It's a dilemma I have heard discussed even at university level, and it's very much a live debate, so nobody should really put forward their views as though they were objectively correct. As with everything, we can all agree that a line has to be drawn somewhere, but we all have different views on where.
I personally feel that with a student cohort with a shared native language, native-language task instructions are the fairest way to ensure that students are being tested on the skills that we claim to be testing.
But what about listening tasks? Should we be asking the comprehension questions in the native language too, in order to ensure that we are genuinely assessing their listening comprehension? I kind of think we should, but at the same time, it doesn't feel right. But I have personally done exam past papers with students where they have clearly understood the meaning of the recording, but didn't understand the synonym used in the answer. How can you lose a mark in a listening comprehension test for failing to understand a piece of written language?
But of course, that argument does start to extend to the reading comprehension test too, because you can understand the set passage perfectly, but again have problems with the question. Here it is a reading comprehension problem leading to a lost reading mark, but there is still a fundamental question to answer about whether you should be setting an exam where you cannot determine the cause of the students' errors.
When you think about it, though, the problem in both previous paragraphs (although only one example of the various types of errors that students might make) is not really one of listening or reading anyway -- it's a vocabulary problem; vocabulary, which we do not consider worthy of the title "skill".
Some examiners have tacitly recognised this, and stopped trying to explicitly score the "four skills" individually, such as the Open University, whose degree-level exams have only spoken and written assessment, with the written part incorporating listening and reading as source material for an essay writing task. It's a holistic approach that accepts that identifying why a student falls short isn't really the issue -- either they meet the standard or they don't. I was perfectly happy with the approach as a student, and I would be happy with it as a teacher.
Language is certainly too complicated for us to ever hope to devise a truly fair and balanced way to assess student attainment, but the current orthodoxy has tied itself in something of a knot trying to reconcile two competing goals. So are we offering immersion, or are we assessing the skills?
23 January 2015
24 September 2010
After writing my post on gap-fills and cloze tests in language lessons, I went back to a blog post that I'd read some time ago, which was part of the Pools project that had introduced me to technology such as Hot Potatoes. I commented about how I had never seen the purpose of fill-in-the-blanks as a learning tool and his response was that it was a test.
Now, it has often been observed that repeated testing aids student retention, so most teachers would assume that anything that is a test is valid as a retention aid. Is this true?
As I said in my earlier post, the cloze test and the gap-fill rely on having a sound internal model of the language under test. Does doing a cloze or gap-fill help build that knowledge?
In the previous post, I said that doing these tests early appeals to conscious knowledge. Many of the most successful students will look at a fill-in-the-blanks exercise and reason through it. If you ask them how they did it, they'll say things like "that's a noun and that's an article, so the thing between them must be an adjective". As the conscious strategy proves so successful, the learner will continue to apply it and will perhaps never develop the gestalt. And because any attempt to use gestalt at this time will appeal to the student's first language, the student who approaches the test in the intended way will be penalised.
In effect, the student learns how to pass the test, rather than learning the specific competencies that the test was originally designed to measure.
How much damage does this really do? In my opinion, a lot. As students go through their academic career, they will be expected to do more complicated things, and they will be expected to do them quicker. But there is only so fast that we can consciously churn through these rules. Sooner or later, we can only succeed by gestalt, but how can we encourage that? The mechanics of testing militate against going by your gut, because this leaves you with a worse mark.
I'm not saying that we shouldn't use testing as a learning mechanism early on, but that we should look again at what constitutes appropriate testing -- tests that the student can carry out in a way that supports, rather than hinders, learning.
Testing, testing, 1, 2, 3...
This of course does not only hold for testing with fill-in-the-blanks. It has been said by a number of academics, with figures to back them up, that testing aids recall. I'd like to present an argument against that.
"Are you mad?" I hear you cry. Yes, but that's beside the point.
My point isn't that their figures are wrong, but that the word "test" is something of a red herring. What is it we do when we test a student's knowledge? We check whether they can accurately recall what they have been taught.
If we discuss this as "testing", we have in our mind a goal -- scores, marks, grades. We don't want to focus on that goal when we use testing as a learning aid -- we need to focus on the process.
The process involved in testing is accurate recall and application of learned information. A task can be designed to require recall and application without being strictly a test. If the figures say that testing is a successful classroom strategy, might it not be because they are comparing "tests" against other tasks that do not require any recall or genuine sentence construction skills? I'd like to demonstrate how some of the most common exercises fail to rely on these skills.
The problem with drills
Form drills, pattern drills, substitution drills; these repetitive exercises fail to teach because the core language we expect the student to learn from the task is never subject to recall. The teacher says it and the student repeats it, changing only a vocabulary item or very minor grammatical features. We cannot learn to recall something without doing it.
The problem with communicative tasks
Anyone who has been at either end of a classroom recently is likely to have come across communicative exercises based on an idea like the "knowledge gap". Each student has partial information or a partial picture, and the students have to talk to each other in the target language to get the information from each other. However, being understood by your classmates is different from being understood by a native speaker, and striving for accuracy not only slows down the task but potentially renders you incomprehensible to a classmate of lower ability.
In general, though, regardless of the exact nature of the task, the class starts by presenting or otherwise providing the students with the language that they are going to use. If we attempt to do this by eliciting the information from the class, we may get one or two to recall it, but most will not -- instead they will end up holding the patterns in working memory, often meaninglessly and mechanically, and parroting the phrases during the class.
Conclusion
We all want to receive or provide the best education possible. Empirically we know that "testing" aids this, but that statement is an oversimplification of the real situation. There is nothing magical about "a test" that makes it more effective than "an exercise". We must examine what the core activity of a test really is, and we must find ways to incorporate that into the day-to-day teaching process.
I contend that the distinction implied between "teaching" and "testing" is artificial -- we take "teaching" to mean presenting information and running some kind of repetitive "training" regimen, whereas "testing" is an unsupported check of recall.
The idea that "testing aids retention" is therefore an obstacle to good practice, because it prevents us from looking at the nature of the tests to identify what really happens. I believe the real point is that recall practice aids retention. If so, our task design must always be built around developing recall, or the student will never be able to spontaneously produce language.