27 February 2016

Edinburgh's trouble with multilingual education

The Scottish Government has long aspired to wider availability of multilingual education, and recently formalised this aspiration around the European "1+2" model. 1+2 is the idea that a child is educated in their first language and, during their primary schooling, is taught at least two additional languages: the first introduced from the first year of schooling, the second no later than the fifth year of primary school. (Earlier draft versions of the regulations said the first additional language should be introduced no later than P3, but this has since changed.)

There are several steering principles underpinning the 1+2 approach. With regard to the first additional language ("L2"), the government themselves state:
" The Working Group expects young people to continue with some form of language study in the L2 language up to the end of the broad general education, i.e to the end of S3. " [Language Learning in Scotland: A 1+2 Approach]
Children are therefore expected to be given the opportunity for continuity of access to their L2 until around the age of 14 or 15, and it is assumed that there will be the option to continue beyond that age, subject to the usual logistical constraints around class sizes and the viability of running exam-level classes for a small number of pupils.

Another of the principles is that language should not merely be taught as a subject, but should be embedded into classroom routine. There is even the hope that in the future it would be possible to offer subjects (or units within subjects) delivered through foreign languages. What could be more natural than listening to accounts from French or German WWII soldiers and civilians in their own language as part of the history curriculum, for example? It's a laudable goal, and even if we're not likely to achieve it in the foreseeable future, it's certainly something to aspire to.

The government's deadline for the implementation of this policy is 2020, and several local authorities are pushing to get themselves ready ahead of this date. Last year, Edinburgh City Council announced their intention to have the scheme implemented by 2017.

This too is laudable, but recent news has thrown the city council's commitment into doubt.

Gaelic-medium education (GME) has been available in Edinburgh since 1988, when a Gaelic-medium unit was opened within a mainstream school, Tollcross Primary. Since then, uptake of GME in the city has increased year on year. Tollcross Primary is a feeder school for the city-centre high school James Gillespie's, so secondary GME provision was established there. In 2013, primary GME was moved to a dedicated school on Bonnington Road in the north of the city, outside James Gillespie's normal catchment area, but JG's retained its place as the city's secondary GME facility and the new school was given official status as a "feeder primary" to it.

This year, however, James Gillespie's have found themselves with more applications for new admissions than they have capacity to accept, and the council have announced that the standard rules for oversubscription apply: priority to children within the geographical catchment area and those with older siblings already attending the school. As the intake for the Gaelic primary is drawn from the entire city (and beyond), it is most likely that the pupils who lose out will be those currently going through GME. There are 24 pupils in this year's primary 7 class, and current projections see 9 of them being refused a place at JG's.

The council's currently proposed solution is to offer these children the option of attending Tynecastle High School or their local catchment school, but neither option fulfils the aspirations set out for 1+2: local schools will offer these children no continuity in their L2 (Gaelic), and Tynecastle is little better, as it currently offers only Gaelic for learners, which is not appropriate for children from a GME background. Indeed, children who have spent three or more years in GME are not allowed to sit the Gaelic learners' exams at National 5 or above at all.

Going by the council's current projections, then, we're likely to see 15 GME kids in JG's first-year intake and at most 9 in Tynecastle's. With class sizes pegged at 30, that means we've taken one class and turned it into two, which certainly does nothing to reduce problems of capacity at either school. And when we look at what that means for course choice in third and fourth year, when some of the pupils may be dropping Gaelic, what are the chances that either school will see a continuing Gaelic class for the GME pupils as viable?

This then leads on to a wider issue with GME provision at JG's. Aside from Gaelic itself, the school currently teaches only Art, RE, PE and Modern Studies through Gaelic, and none at certificate level, although National 5 Modern Studies will be offered next year (see section 3.78). It seems likely that these classes will not operate in Gaelic for next year's first year, as that would mean half-empty classrooms in a school that had already turned children away over capacity constraints.

Part of Edinburgh Council's justification for this decision is that:
"The level of current Gaelic provision at James Gillespie’s High School is not significant and could be relatively easily replicated, at least in part. There continue to be significant issues nationally with the recruitment of Gaelic speaking staff which limit what could actually be delivered at a secondary level, regardless of where it was provided. " (section 3.75, same document as above)

Both of these statements are true, but each mistakes an effect for a cause.

First of all, the low level of Gaelic provision is down to a lack of critical mass, and dispersing the GME primary cohort across two or more high schools will certainly not resolve that. Secondly, part of the national problem with the availability of Gaelic-speaking staff is that they typically spend the majority of their time teaching in English, which again stems from a lack of critical mass within the pupil cohort. If the council's actions lead to Gaelic-speaking teachers spending even less time teaching in Gaelic, then its justification becomes a self-fulfilling prophecy: it further squanders an already limited resource, and the argument defeats itself.

The lack of availability of trained GME teachers is something that is being addressed at the national level, but there's something of a chicken-and-egg situation: with the low number of classes being taught in GME at present, it is very difficult for a teaching student to gain placement experience in a Gaelic-medium setting. Depending on the subject you are training to teach and the school you are placed in, Gaelic-medium classes may be limited to BGE (the first three years) or even only the first year or two. Some subjects may not be available at all. This makes it very difficult for a new teacher to build up the confidence required in delivering through Gaelic a subject that they themselves will have learned through English. Any action at a local level that risks decreasing the availability of GME has knock-on effects at a national level that hamper our ability to address the issue.

Not just a problem for Gaelic

Many people will shrug their shoulders and say "it's only Gaelic", but they're missing the point: right now, Gaelic is the only language that gives us a working model of language learning throughout schooling, and much of the Scottish Government's policy on language learning leans on the experience of GME.

Four years from the government's deadline on 1+2, and only one year from its own self-imposed deadline, the council is already making decisions that take it further away from its goal. This does not bode well for children going through primary education in other languages, and Edinburgh's schools will be offering a fairly broad selection of them (among others French, Spanish, Mandarin, Polish, Farsi and Gaelic). What happens if a child who has learned Spanish since primary 1 finds themselves allocated to Forrester High School (French and German)?

This logistical matter will become a serious issue for parents across the city in the next few years, and the current situation is an opportunity for the council to pilot a solution on a small scale and work out a strategy before it's too late.

If the council can't handle the transition between primary and secondary correctly, it will turn children off languages: kids placed in a language class that is too easy for them will lose interest in languages, and kids placed in classes above their level will lose confidence in their own ability to learn.

The goal of 1+2 isn't just to give kids "the right language", but to give them the right attitude to language, so that they can go on to be successful language learners and pick up the particular language they need later in life. Getting the primary-secondary transition right is absolutely vital in developing this attitude, and if the council can't get this right for 24 pupils next year, how can it hope to do so for the hundreds of pupils moving into its high schools in 2017 and every year after?

15 February 2016

Implicit and explicit, meaningful and meaningless

Last week I was across in Edinburgh catching up with friends. I arrived early, so went into a bookshop to kill time... and came out with two chunky academic texts. I probably would have escaped without buying anything if one particular book title hadn't caught my eye: Implicit and Explicit Knowledge in Second Language Learning, Testing and Teaching.

The main thrust of the book is the ongoing debate over whether implicit teaching styles lead only to implicit knowledge and explicit styles only to explicit knowledge, or whether explicit teaching can also build implicit knowledge. It's an important area of discussion, because at present the mainstream theory of language teaching holds that only implicit learning can ever lead to implicit understanding and production, and that explicit teaching only ever makes people consciously aware of rules and able to apply them mechanically and consciously.

And yet there are very few teachers who don't include some explicit instruction in their lessons, whether that's word-lists or conjugation tables. (Even Assimil, who sell themselves on the principle of natural assimilation, dedicate more text to grammatical explanations than to the dialogues with their transcriptions and translations.)

I haven't read the whole book yet, and (unsurprisingly) what I've seen so far is pretty inconclusive. However, it does lean towards the view that explicit teaching helps in language mastery. It also discounts a lot of the past counter-evidence on the grounds that the models of explicit teaching tested were simply bad examples: overly mechanical, rote methods that were not equivalently "meaningful" to the implicit methods under examination.

It's this word "meaningful" that I think is the crux of the problems faced in language learning – language is nothing without meaning.

In the language classroom, items that seem to be inherently rich in meaning can paradoxically be rendered devoid of meaning by context.

Consider:
My cousins buy trousers.

In an objective sense, it carries a lot of meaning, and there is no truly redundant information in the sentence – every word, every morpheme, brings something not explicitly present elsewhere. (But even then, it has no real personal meaning to me, as I can't imagine myself ever saying it. This is a side issue for the moment, though.)

But what happens when we put that sentence into a classroom exercise?

For an extreme example, let's take the behaviourist idea of substitution drills. In "New Key" style teaching, a substitution drill would be conducted in the target language only, with one element of the sentence substituted for something else in the target language each time. So our theoretical exercise might go:
Teacher: My aunt buys hats
Learner: My aunt buys hats
Teacher: My mother
Learner: My mother buys hats
Teacher: Trousers
Learner: My mother buys trousers
Teacher: My parents
Learner: My parents buy trousers
Teacher: My cousins
Learner: My cousins buy trousers

At the point of utterance, the learner does not have to pay any attention whatsoever to the meaning of anything in the sentence beyond the plural marking of my cousins and the s-free verb form of buy, which is made even easier by the fact that this plural example follows an earlier plural example. Thus the student has no immediate motivation to attend to meaning, and it is a struggle to do so.
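To see just how mechanical the exercise is, here's a toy sketch in Python (the word lists and slot structure are my own, purely for illustration) of a "learner" that scores full marks on the drill above without ever consulting the meaning of a single word -- all it tracks is which slot each cue fits and one subject-verb agreement rule:

    # A toy "learner" for the substitution drill above. It keeps the
    # current sentence as three slots and, on each cue, overwrites
    # whichever slot the cue fits. It never consults the meaning of
    # any word; the only grammar it applies is subject-verb agreement.

    PLURAL_SUBJECTS = {"my parents", "my cousins"}
    SUBJECTS = PLURAL_SUBJECTS | {"my aunt", "my mother"}

    def respond(sentence, cue):
        subject, _, obj = sentence
        if cue in SUBJECTS:
            subject = cue          # the cue replaces the subject slot
        else:
            obj = cue              # anything else must be the object
        verb = "buy" if subject in PLURAL_SUBJECTS else "buys"
        return (subject, verb, obj)

    sentence = ("my aunt", "buys", "hats")
    for cue in ["my mother", "trousers", "my parents", "my cousins"]:
        sentence = respond(sentence, cue)
        print(" ".join(sentence).capitalize())

If a dozen lines of code can complete the drill perfectly, the drill clearly isn't forcing anyone to attend to meaning.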

Substitution drilling is, as I said, an extreme example, but I do feel it is useful in establishing a principle that affects a great deal of learning, even where the effects are not so obvious.

Consider, for example, the fairly established and mainstream idea of focusing on a particular grammar point in some particular lesson, or section thereof. If I am set a dozen questions all of which involve conjugating regular Spanish -er verbs into the present simple third person singular (or whatever), then I do not need to attend to the meaning of the present simple third person singular, just the form -e.
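Again, a toy Python sketch makes the point (the verbs are my own examples): a complete "answer key" for such an exercise needs nothing but string surgery, and never the verbs' meanings.

    # Answering a page of "present simple, third person singular,
    # regular -er verbs" questions without knowing what any verb means:
    # the rule is simply "drop -er, add -e".

    def third_person_singular(infinitive):
        assert infinitive.endswith("er"), "regular -er verbs only"
        return infinitive[:-2] + "e"

    for verb in ["comer", "beber", "aprender", "vender"]:
        print(verb, "->", third_person_singular(verb))
    # comer -> come, beber -> bebe, aprender -> aprende, vender -> vende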

To me, attending to meaning is the single most important matter in language learning, and yet it is rarely discussed explicitly. Instead, it is typically wrapped up in some specific embodiment of the principle: Krashen's comprehensible input hypothesis suggests we learn language by understanding input; the communicative approach says we learn when we use language to solve problems; Total Physical Response says language has to be tied to physical action. All of these are attempts to address the meaningfulness of language, but each is a narrow, specialised form of attention to meaning. CI and TPR deny us access to the colourfulness of abstract language with its subtle, personal meanings, and the CA doesn't do much better in that regard. Modal language may be taught in a communicative classroom, but the nature of the task implies a rather pragmatic, utilitarian meaning: in task terms there is no meaningful difference between blunt orders (give me it), plain requests (can I have...?), requests made indirect with the conditional mood (could I have...?), or statements of desire, whether declarative (I want...) or made more indirect still in the conditional (I would like...).

Other teachers take the idea of "personalisation" and raise it above all other forms of meaningfulness, insisting that students only learn by inventing model sentences that are true for themselves. But isn't (e.g.) I want it, but I don't have it true for everyone? Doesn't the brain immediately personalise a sentence like that? (When I came across a similar sentence in the Michel Thomas Spanish course, I was cast back to throwing coins into a wishing well as a child.)

Perhaps the reason few writers wish to discuss attention to meaning is that it throws up questions that fundamentally challenge their methodologies. For example, comprehensible input (and any similar learn-by-absorption philosophy) is confounded by redundancy in language: there is no need to attend to the meaning of a morpheme when the same information is encoded twice in the sentence. Perhaps the clearest example of this is the present tense -s suffix on English verbs (third person singular). It is readily apparent that a lot of learners do not pick this up; there are a great many foreigners who spend years in English-speaking countries, hearing thousands of hours of input with the correct form, but who never acquire it. There is no need for the learner to attend to the meaning of the -s, because they already know from the subject of the sentence that it's the third person singular being discussed. When they speak, they are understood, because even though it sounds incorrect to a native speaker, there's practically no risk of being misunderstood. In the communicative approach, such an error is not a barrier to completing the intended task (particularly since there's a good chance your conversation partner makes the same mistake), so there is no requirement to attend to it.
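The redundancy argument can be made concrete with a little sketch (in Python, with an invented mini-vocabulary): a "listener" that throws away verb agreement entirely still recovers the full meaning of the sentence, which is exactly why nothing forces the learner to attend to the -s.

    # A toy listener that ignores verb agreement and still gets the
    # full meaning: the person/number information carried by -s is
    # already available from the subject, so discarding it loses nothing.

    SUBJECTS = {"he": "3sg", "she": "3sg", "i": "1sg", "they": "3pl"}

    def understand(sentence):
        words = sentence.lower().split()
        subject = words[0]
        verb = words[1].rstrip("s")    # agreement marking discarded
        obj = " ".join(words[2:])
        return {"who": SUBJECTS[subject], "action": verb, "what": obj}

    print(understand("He buys trousers"))
    print(understand("He buy trousers"))   # same analysis either way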

Language has a natural tendency towards redundancy, to make our utterances easier to understand, so we are naturally disinclined to attend to every element of the sentence. Any attention to the meaning of all the individual components of an utterance will therefore be a higher-order process, a conscious or semi-conscious overriding of our lower instincts. Surely that makes it an explicit process? And if it is an explicit process, surely it is better directed by an expert (the teacher) than carried out ad hoc by a non-expert (the learner)?

02 January 2016

Why I learn languages, and why I'm not learning any languages

I've probably said this before, but the reason I like learning languages is the look on people's faces when I speak their language to them.

What I didn't realise was how far back this went.

My dad was a teacher at my high school, and I knew the main janitor before I knew most of the teachers. Lucien, the janny, was from France, and by a couple of years into high school I would say "Bonjour" to him - I couldn't say much else, but it was enough.

When my dad started talking about him this Christmas, the main memory in my head was just a smile - the delight of being spoken to in his own language, even for a moment. It was the same sensation that I've seen so many times since, and I started to wonder why it had taken me so long to start learning languages properly, and then I realised that I've stopped learning again.

I remember that all through my late teens and early twenties, I was keen on the idea of learning languages, and I picked up a couple of books here and there but never got anywhere. I only started to get the proper motivation back when I started speaking broken high-school Italian to a young woman serving in a local sandwich shop. Again I tried picking up the old books, and again I put them down.

As it turns out, restarting a half-forgotten language is really hard -- when you try to read notes on things you already sort of know, you switch off. So I switched to new languages instead: Spanish and then Scottish Gaelic. After that, coming back to tidy up my French and Italian was a lot easier.

But right now, I'm not really learning, or relearning, or even consolidating anything. Why not? Maybe it's because I can already give lots of people the satisfaction of hearing their own language. More likely, though, it's just because I'm not meeting enough non-English-speaking people. That would be understandable, I suppose. Last summer I moved to an island off the north-west coast of Scotland, where I'm studying full-time, and there aren't many foreigners in the area at all.

What there is, though, is a lot of native speakers of Gaelic, and I just keep falling back to English.

Am I just being lazy? Or is my brain overworked? Or am I just being antisocial?

Probably a bit of each. I really want to get back on track this year, and start learning something new. To that end, I'm starting to plan my summer holidays now -- an epic cycle journey across part of Europe. I need to take in at least one area which requires a new language, and at the moment, I think Germany fits the bill. I've already got a solid basis to build on, so I just need to build fluency and vocab and see where I can get to.

Even just thinking about it, I can start to feel some of the anticipation building.

19 August 2015

The folly of trying to pronounce a place "like the natives do"

Well-intentioned people often insist on trying to pronounce placenames in the "authentic" native form, even when there's a well-established variant in their own language.

At times, such a change is remarkably successful, as with the shift from "Peking" to "Beijing". There's an argument that even this is futile, however, as the Chinese phonemes are rarely that close to the English ones used, and the tones of the Chinese are completely absent.

About a fortnight ago, the TV was marking the 70th anniversary of the brutal slaughter of the people of Hiroshima.

Now, most of us say "hiROshima", but a few people say "HEEroSHEEma". I thought about it a bit, and figured the first one is probably right, as the second sounds like two words. Then I looked it up on the internet and felt a bit sheepish when I read that the name means "wide island" in Japanese. Two words? Oh. But then I brought up Forvo and nope -- it's pronounced as one word.

So why do we end up with two forms in English?

It's all about perception, and there are several things your ear might pick up on. First, there's word stress: in Japanese, Hiroshima is stressed on the second syllable, which is how I pronounce it in English. A knock-on effect of English stress, though, is that adjacent syllables are weakened, so in English both i's get reduced towards schwa. In Japanese, by contrast, vowels generally keep their full quality, and vowel reduction is a matter of length, not quality.

When the English speaker's ear hears Hiroshima, it either notes the correct stress, and fails to perceive the "EE" sounds, or it hears the EE sounds and fails to perceive the correct stress.

Which of these is further from the original? From an English speaker's perspective, it's impossible to say -- you need to make reference to the original language. I do not know for sure, but as Japanese has far fewer vowels than English, I would imagine hiROshima is readily recognised for the intended meaning, and that HEEroSHEEma would be pretty hard to process.

So it's a bit of a fool's errand trying to be "authentic", in my book.

30 July 2015

Language following

Last week, I was at a party in Edinburgh to mark Peruvian independence day. As I was leaving, I heard someone refusing a drink because "tengo que manejar" -- "I have to drive".

Funnily enough, I've had a couple of discussions recently about that very word "drive". It all started with a discussion on a Welsh-language Facebook group. The traditional word presented there was gyrru, whereas people often tend to use the term dreifio, presented there as an Anglicism. Strangely enough, the very next day I ran into an old university classmate of mine, Carwyn, who was up from Wales to attend a conference in Edinburgh. When I asked him which word he would use to say "drive", his answer was "probably the wrong one", which I immediately took to mean dreifio.

I explained to him why I felt that dreifio was less of an Anglicism than gyrru.

How so?

This is a phenomenon that I call "dictionary following", for want of a better term. (If there's a widely-accepted alternative name, please do let me know in the comments.) It's a peculiar form of language change that minority languages seem particularly prone to, where a word-form in one language gets locked to the changing meaning of a single equivalent in another language.
Edit: An Cionnfhaolach over at IrishLanguageForum.com tells me that this transference of all the meanings of a word in one language onto a similar word in another is called a "semantic loan".

In this case, the dictionary word gyrru is a word that means to spur animals onwards -- it's "drive" as in "driving cattle": what drovers do. The modern sense of "drive" comes via the idea of forcing carthorses forward, and thus the English word has broadened.

Across Europe, the equivalent word often evolved analogously. The French and Italian equivalent term is actually to "conduct" a car, and in Spanish, you either "conduct" or "handle" your car -- which is where manejar comes into the equation (manejar = manage = handle; mano = hand).

It's too easy to focus on grammatical and lexical items as the defining characteristics of a language, but if these are not underpinned by idiomatic usage and the language's own metaphors, it doesn't feel like a complete language; and for me at least, much of the joy of learning and speaking it is lost.

So I'm happier to adopt the English "drive" morpheme into languages like Gaelic and Welsh than to graft the English metaphor onto a native root and claim that this is somehow "purer".

20 July 2015

Undefined article error

No, Blogger isn't on the blink, that's the intended title of the article.

The error in question is the continued use of classical terminology for grammatical articles: specifically the terms definite article and indefinite article. For over a decade, I tried to reconcile the grammatical feature with the common sense of the words "definite" and "indefinite" -- i.e. certain and uncertain -- but it made no sense at all.

It wasn't until I started discussing grammar in foreign languages that I clicked what I'd been missing all along -- the terms we use are basically a mistranslation of classical terminology.

The English word definite has diverged drastically from its etymological roots, but this is not true in the Romance languages on mainland Europe. When the French say défini or the Spanish say definido, what they are actually saying is defined.

That's right, the definite article is really the defined article, which means the indefinite article must be the undefined article. From that perspective, everything seems to make much more sense.

Plenty of languages survive quite well without any articles -- they are essentially redundant, as even in English you can drop them in a lot of circumstances without losing any information from the sentence.

What I'd never got my head round is that articles don't add any information to the sentence -- they simply act as a sort of "signpost" to information that already exists elsewhere. Most importantly, that signpost points to the listener's frame of reference, not the speaker's.

What the definite article flags up is essentially "you know which one I mean", and the indefinite article says "you don't know which one I mean". If I say "You should go home -- the wife'll be waiting," context says I'm talking about your wife, but if I say "I should go home -- the wife'll be waiting," then you know I'm talking about my wife. And if I say "a friend of mine is coming to visit," I'm telling you that I don't expect you to know which one I mean. But in each case, if you deleted the article, the listener would still draw the same conclusion -- whose wife is meant, or that the friend is one they're not expected to know.
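For what it's worth, the signposting idea can be boiled down to a toy rule (a Python sketch; the listener_knows set is my own invented stand-in for the listener's frame of reference): the article is chosen by asking what the listener can already identify, not what the speaker can.

    # A toy model of article choice as signposting: "the" signals
    # "you can already identify this referent", "a" signals "you can't".

    def article(referent, listener_knows):
        return "the" if referent in listener_knows else "a"

    listener_knows = {"wife"}    # context has established whose wife
    print(article("wife", listener_knows), "wife'll be waiting")
    print(article("friend of mine", listener_knows), "friend of mine is coming to visit")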

Now I know that isn't very clear, but to be honest, I still haven't got this that clear in my own head.

This "signposting" idea is pretty abstract, so describing it is pretty difficult. But to be fair, it's no more abstract than the phenomenon it's describing, and the more I think about articles, the more weird and abstract they look to me. For something at first class so basic, they are incredibly complex.

I suppose I'll be working for years trying to work out the best way to teach, discuss and describe them, but for now I'll satisfy myself with using the terms defined and undefined in place of definite and indefinite, because at the very least we'll be one step closer to a meaningful definition.

12 April 2015

I would of written this headline properly, but...

I wanted to revisit an old theme today. A lot of people still complain about others writing would of instead of would have. There's a saying in linguistics that there's no such thing as a common error (for native speakers), because a standard language is (or should be) a statistical norm of what people actually say or write, and a legitimate standard is one that accepts all common variations (hence modern English spellcheckers accepting both "all right" and "alright" -- and, as if to cause maximum embarrassment, the Firefox spellchecker doesn't like "alright"... or "spellchecker").

If people write "would of", it's because in their internal model of the English language, they do not see the verb "to have" in there at all. I was looking back at an earlier blog post on this topic, and I saw that I used the phrase "the "error" only occurs when have is used as a second auxiliary". Spot the mistake.

Standard Modern English clauses can only ever have one auxiliary -- there is no "I will can..." or "I would can..."; you either have to switch to a copular construction ("I will be able to...") or inflect, e.g. can to could: I could tell him (if you want).

The have of the perfect aspect in English has traditionally been slightly ambiguous as to whether it's an auxiliary or not. The placement of adverbs gives us an indication of what's going on: "I always have time for it" is fine, whereas "*I have always time for it" feels quite odd and stilted; perfect have, on the other hand, is perfectly happy with such adverbs after it, which makes it look like an auxiliary: "I have always been lucky".

Negatives (and questions) take us further: "I don't have a car" is far more natural to many English speakers than "I haven't a car", but "*I don't have been to Russia" is clearly wrong, and "I haven't been to Russia" is the only possible correction.

So, let's say that the history of the perfect-aspect have has been one of becoming more and more like the auxiliary verbs. English has, over time, lost the ability to have more than one auxiliary verb in a clause. Those two changes, taken in parallel, mean the construction "would have" is in the process of becoming impossible in English.

What do we have instead? Well, like I said before, I see it as the formation of a new suffix, one that is applied to auxiliary verbs to indicate perfect aspect.

I would argue that we already have one established, recognised auxiliary suffix in English: -ould. This first appeared in "would" (or rather "wolde"), the past form (both indicative and subjunctive) of "willan" (will). Notice that there are two changes here -- the grammatical vowel change i -> o (-> ou), and the suffixing of the past D. The same changes could, from first principles, describe shall giving us should, even though the exact vowel change is different, but they cannot account for can giving us could, as an N -> L change isn't typical in English. Furthermore, it is not a commonly observed pattern for people to spell could, would and should differently. Therefore -ould must be a single morpheme common to all three words.

If this is the case, then adding another suffix to that seems perfectly sensible, and we've got coulda, woulda, shoulda; or could've, would've, should've; or coodov, woodov, shoodov or however you want to write it.
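To be clear about the model I'm proposing, here it is as a toy sketch (Python; the function names are mine, and this is just the analysis above rendered as code, not a claim about any real morphological parser): the auxiliary is built by suffixing -ould to a stem, and the perfective marker then attaches to the whole auxiliary, with no separate verb have anywhere in the derivation.

    # "-ould" as an auxiliary suffix, with the perfective marker
    # ("-a" or "-'ve") attaching to the finished auxiliary.

    def auxiliary(stem):
        return stem + "ould"            # w -> would, sh -> should, c -> could

    def perfective(aux, spelling="'ve"):
        return aux + ("a" if spelling == "a" else "'ve")

    for stem in ["w", "sh", "c"]:
        aux = auxiliary(stem)
        print(aux + ":", perfective(aux, "a"), "/", perfective(aux))
    # would: woulda / would've
    # should: shoulda / should've
    # could: coulda / could've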

Of course, this same perfective suffix can be applied to certain auxiliaries without the -ould suffix:
  • must: that must've been him etc.
  • will: he'll've been told by now
And yet "must" is already practically dead (we all use have to/have got to) in normal usage, leaving "will" rather isolated as the only non-ould auxiliary to take [ha]ve, so even that might slip out of usage fairly quickly.

The case for writing "have" is purely etymological, it doesn't fit the evidence from "mistakes", and it presents a rather more complex model of the language than the alternative I present. It's a complexity that is possible, but I believe only insofar as it is as a transitional form between two stable conditions. I think we should let the language take that final evolutionary step to find a stable state.