21 July 2011

Receptive skills as a reflective act

I'm not feeling my usual self this week.  I've got an infection under a fingernail, so it's a bit sore to type, which makes it hard to concentrate.  It's leading to silly mistakes in everything I do, so I'm going to avoid the complex topic I'd planned to write about this week (phonology) and stick to something a bit less involved.

A couple of weeks ago, in my post 4 skills safe, I suggested that the comprehension of language is a reflective act, that is to say that we understand by considering what would cause us to say the sentence we've just heard or read.

Now, I mentioned mirror neuron theory, and my Dad said to me at the weekend "you'd better have a better explanation than that."  My Dad taught in a high school up until retiring, and one constant throughout his career was that new teaching fashions would always be justified by the latest idea from psychology, but that it was all theory, no practice.

Now I can't offer anything in the way of empirical research, only anecdote.  Hopefully, though, the anecdotes that I offer will be universal enough that other teachers will see the same phenomena occurring in their own students.

Let's just briefly revisit what I said last time (minus the bit about mirror neurons):
  • People often finish each other's sentences. To do so they must be actively constructing the utterance as they go.
  • People often mistakenly say that they've said something, when actually it was someone else who said it, and they only heard it. So we identify very closely with sentences we hear (and agree with), suggesting a very close link between the mental process behind listening and that of speaking.
  • There exists a (fairly harmless) neurological disorder which causes someone's lip to tremble when they're being spoken to, and they often echo the last word of your sentences (often suffixed with "uh-huh" for assent). For these people merely listening activates the physical speech organs.

OK, so now let's move to anecdote.

Last night, I was at a language exchange (Spanish and English).  I was talking to a young woman called Cristina, who I've spoken to on several occasions.  At one point she was concentrating so hard on what I was saying (in English) that she actually started mouthing the words.  This wasn't the trembling lip of the neurological condition I mentioned before -- she was literally mouthing the words, and it was a conscious act.

Now let's contrast this with reading, because I think reading aloud offers the best indication of language as reflection.

The first time I really noticed this was with a student in San Sebastian back in 2007.  Most of his peers read robotically when asked to read aloud -- What - I - Mean - Is - That -They - Would - Pronounce - Each - Word - Dist - inc - tly - And - Careful - ly.  This guy was different.  He spoke with natural flow and good intonation... but he didn't read what was on the page.  Specifically, he got his prepositions wrong.  Not wrong in a random way, though -- he simply substituted the preposition he would use for the word on the page.

Now this shouldn't come as a surprise to anyone -- studies of native language reading have established that most prepositions (and in fact many function words) aren't actually "read" by fluent readers.  Instead, the brain simply notices a "small word" and works it out from the context.

I've since seen multiple variations on this theme -- I had one student who kept reading "attitude" as the Spanish "actitud" with the English stress pattern imposed on it ("áctitud" or "acteetood", more or less), and another who added an S to "sort of thing", to match the pluralisation of the Spanish phrase "tipo de cosas".

But this is reading, and as I said last week, that's not a core language skill (incidentally, since then I've found that Lev Vygotsky described writing and reading as "second-order abstractions" -- I wish I'd had that quote when I wrote the post).  However, it does demonstrate how much preconceptions can affect our perceptions.

So will this hold in the spoken mode?

First, consider that language generally has a high degree of redundancy.  Even if a noise obscures part of a sentence, you often still manage to understand the sentence.  If you compare "attitude" with "áctitud", in my accent there are three perceptible differences: the C in the Spanish, the schwa in the middle syllable of the English (possibly i-schwa), and the Y-glide in the English final syllable.  In many accents (mostly American), there isn't even a Y-glide, so there are only two differences.  No other word is that similar, so the "filter of perception" will probably let it through without really caring about the differences.

The same goes for the "sort of things".  If a Spanish person understands me when I say "sort of thing", I cannot take it for granted that he perceived it without an erroneous final S.  His brain may simply have assumed the S was missing.  This is a particular issue for Spanish because, in some dialects, a final -S may be dropped completely*, so it would be very easy for a Spanish speaker to perceive the word "thing" as "things" if the context suggested it.

What are the consequences for the teacher or learner?

Assuming this is true (and I accept that many readers will believe otherwise), the consequences are pretty profound.  If our perception of received input is altered to match our existing internal model of language, then no amount of input alone will lead to perfection in a language.  The internal model can only be rebuilt by some directed process.

The success of some students in "silent period"-style environments doesn't disprove this - such a student may well have succeeded through an active analysis of the input, rather than simply through the sheer volume of input.

* This is less common than many Spanish speakers think.  Most "dropped Ses" are in fact /s/ phonemes realised as an aspirated allophone ([h]) or a hiatus.  And here again the filter of perception comes into play -- there is no /h/ phoneme in Spanish, so even some native speakers don't seem to notice the [h] sound in something like "rastos" (=rahhtohh) and appear unable to distinguish it from "rato".


Alexandre said...

When we text, as we start typing a word, most phones suggest entire words, narrowing down the possibilities until only a few words -- if not a single word -- are left.  I've always felt that the same happens when I listen to someone speak.  We finish each other's sentences because, as we near the end, there aren't many possible endings left given the context.

Maybe we should be teaching students how to complete unfinished sentences.

Nìall Beag said...

Nice analogy.

Actually, as I understand it, predictive text started out in the artificial intelligence lab, where researchers were trying to model the process of "anticipating" language -- i.e. they were trying to build computers that could finish your sentences for you based on statistics and generative grammars.  Predicting single words from frequency of occurrence is relatively easy compared to predicting whole sentences, but it uses some of the same principles, just on a smaller scale.
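In fact, the single-word version is simple enough to sketch in a few lines -- the word list and frequency counts below are invented purely for illustration, nothing like a real phone's dictionary:

```python
# Toy frequency-based word prediction: each extra typed letter narrows
# the candidate set, and the survivors are ranked by how common they are.
word_counts = {
    "the": 500, "this": 300, "they": 200, "there": 120,
    "then": 90, "thing": 80, "think": 60,
}

def predict(prefix, counts, n=3):
    """Return the n most frequent words starting with the typed prefix."""
    candidates = [w for w in counts if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: counts[w], reverse=True)[:n]

print(predict("th", word_counts))   # ['the', 'this', 'they']
print(predict("thi", word_counts))  # ['this', 'thing', 'think']
```

Each extra letter plays the same role as each extra word in a sentence: it shrinks the space of plausible continuations.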

The other place this area of research fed into was speech recognition. Because the actual audio can be ambiguous, and the computer doesn't always recognise the sounds accurately, the software models possible sentences and assesses the likelihood of each one. (On the simplest level, "I buy" is a more likely combination than "eye by".)
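A toy version of that likelihood test might look like this -- the homophone sets and probabilities are invented for illustration, and a real recogniser would use a far richer statistical model:

```python
# The audio is ambiguous between homophones ("I"/"eye", "buy"/"by");
# a simple bigram score picks the most plausible word combination.
from itertools import product

# P(second_word | first_word) -- made-up values for the sketch
bigram = {
    ("i", "buy"): 0.05, ("i", "by"): 0.001,
    ("eye", "buy"): 0.0001, ("eye", "by"): 0.0005,
}

def best_reading(alternatives):
    """Pick the word sequence the language model finds most likely."""
    readings = product(*alternatives)  # every homophone combination
    return max(readings, key=lambda pair: bigram.get(pair, 0.0))

heard = [["i", "eye"], ["buy", "by"]]
print(best_reading(heard))  # ('i', 'buy')
```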

As for teaching students how to finish sentences... well, I'd argue that learning to finish sentences is simply a consequence of learning to produce correct sentences in the first place.  If you teach it as an individual skill, the student may well learn it as an individual skill, which would mean that the student was doing it wrong....

Anonymous said...

The silent period method indeed does not disprove it, but for a different reason than you think. For a silent period to be useful it's necessary to listen with no previous knowledge of the language and with no attempts to understand. Otherwise the learner's preconceptions will interfere with the process, exactly as you described. So it actually proves your point.