Although the use of the International Phonetic Alphabet (IPA) has gained significant momentum in recent decades, it still raises a lot of questions, especially among learners of English as a foreign language. Most of the confusion stems from the minor differences between the transcriptions found in different dictionaries. In an attempt to clear this up, let's take a quick look at the history of the International Phonetic Alphabet, which thus far has revolved for the greater part around British English, though we will get to American English at the end too.
Using a phonetic alphabet for the purpose of language learning is a fairly recent development. Until the middle of the twentieth century, hardly anybody apart from phoneticians, dialectologists and other linguistics professionals knew what it was. The phonetic alphabet covers the whole range of sounds present in different languages, so only a subset of its characters is needed to transcribe English speech. So far, so good. Then why does phonetic transcription cause so much confusion?
Firstly, as the phonetic alphabet developed and took shape, it started to attract a broader user base. Not only linguists found it useful, but also actors working on accents for their roles, opera singers, speech-impediment specialists, foreign-language students and even ordinary native speakers. Naturally, as each group uses it for different purposes, their requirements differ as well.
Secondly, while linguists try their best to come to a consensus, the language itself and its pronunciation change all the time. This is especially true of British English, where changes can sometimes be observed within a span of a few decades.
With that, let's try to make sense of the history of phonetic transcription and its current state of affairs. Without trying to answer all the questions, this should hopefully help you clarify quite a few. An extremely condensed story of the transcription of British English pronunciation might go like this: UCL begat Jones, Jones begat Gimson, Gimson begat Wells. Now let's flesh it out a bit, shall we?
University College London, or UCL, is one of the largest universities in England. It is the third oldest and was the first to be entirely secular. There are many famous scientists and public figures among its staff and alumni. Although the Department of Phonetics and Linguistics might not be UCL's main bragging chip, it proved to be the main authority on the transcription of British English throughout the 20th century.
Daniel Jones, head of the Department of Phonetics at UCL, is the first actor in our story, though he built on the work of other phoneticians, namely Henry Sweet and Paul Passy. He introduced the term “phoneme” and the “vowel trapezium” diagram. In 1917 Jones published the “English Pronouncing Dictionary”, which used a transcription system similar to the one we use today. The dictionary was intended for professional use, though; the system reached a wider audience only in 1948 with the “Oxford Advanced Learner's Dictionary”.
Interestingly, the characters used for English consonants and stress marks have never been a point of contention; the main debates have focused on vowels. For instance, the character set used by Jones didn't reflect the qualitative differences between long-short vowel pairs. That is, both the long and the short vowel of a pair were represented by the same character, with a length mark /ː/ added for the longer one. This is called the “quantitative” scheme of transcription.
Enter Alfred Gimson, Jones's student and colleague, who later also headed the Department of Phonetics and Linguistics at UCL. With the publication of his “Introduction to the Pronunciation of English” in 1962, Gimson effectively laid the foundation for the transcription systems most widely used today. With his updates, the English vowels would now look like this:
That could well be a happy end to the story if not for the massive social shifts in England at the time. These changes had a huge effect on pronunciation conventions, on top of the quite notable pace of change that English and its phonetics show historically. As a result, even as Gimson's representation was gaining support, it drifted ever farther from modern British pronunciation. Despite that, a certain consensus had been reached by the 1990s, based on Gimson's quantitative-qualitative scheme with some adjustments for the latest trends:
- In addition to the vowel pairs /iː/ and /ɪ/, /uː/ and /ʊ/, the symbols /i/ and /u/ were introduced to reflect the realisation of these vowels at the end of a word or, in some cases, before another vowel (for instance, “city” /ˈsɪti/ or “strenuous” /ˈstrenjuəs/).
- The clusters /tju/ and /dju/ were replaced in some cases with /ʧu/ and /ʤu/ to reflect a developing trend (for instance, “dual” /ˈʤuːəl/ or “situation” /ˌsɪʧuˈeɪʃ(ə)n/). Albeit not without resistance, other instances where pronunciation shifts towards another existing phoneme have also been accepted. For example, “pour” and similar words are now mostly transcribed as /pɔː/ rather than /pʊə/.
- Instead of /l̩/, /m̩/, /n̩/ and /r̩/, used mostly by phoneticians, syllabic consonants came to be widely transcribed as /(ə)l/, /(ə)m/, /(ə)n/ and /(ə)r/, or with a superscript schwa borrowed from American notation: /əl/, /əm/, /ən/ and /ər/.
Nevertheless, a host of other changes were left out of the update. In the interest of preserving the consensus and dialectal parallelism (primarily with the American and Australian varieties of English), some of the phonetic symbols were redefined: the characters themselves didn't change, but they came to imply a different realisation depending on the dialect.
- The diphthongs in “mouth” and “price” are still transcribed as /aʊ/ and /aɪ/ even though they are now commonly realised closer to /æʊ/ (onset as in “cat”) and /ɑɪ/ (onset as in “car”).
- The vowel in “dress” remained /e/, although even its proponents admit that the phoneme has shifted to /ɛ/.
- The vowel /æ/ in “cat” has shifted considerably towards /a/ in British English over the past 50 years, completely losing the /ɛ/-like quality it had a century ago.
- The sound /ʌ/ in “cut” is perhaps the most glaring example: it resides on the right of the “vowel trapezium” diagram along with the other back vowels, even though its realisation had shifted to a front-central vowel as early as the middle of the past century. The character nevertheless remains intact and is often interpreted as an open-mid vowel with broad variability.
The current status quo is backed primarily by Cambridge University (and its publishing arm, Cambridge University Press) as well as by John Wells, a Cambridge University alumnus who is now also professor emeritus at UCL. On top of that, Wells was president of the International Phonetic Association from 2003 to 2007. The Longman Pronunciation Dictionary of 1990, which he edited, was the first English pronunciation dictionary since the last edition of Gimson's dictionary, published in 1977. In that regard, Wells took over from Gimson.
As the examples above show, the accepted representation wasn't perfect. One of the people to upset the apple cart was Clive Upton, a dialectologist first and foremost and a phonetician as well. Apart from being professor emeritus of Modern English Language at the University of Leeds, Upton worked as a pronunciation consultant for Oxford University Press. In 1995 they published his Concise Oxford Dictionary, in which Upton used a different representation for five English phonemes:
John Wells immediately subjected Upton's set to close criticism:
- /e/ => /ɛ/ in “dress”. Wells seemed to agree that the shift had actually happened, but argued that for the sake of simplicity the old character was the better choice. To be honest, by that argument the whole quantitative-qualitative scheme could be called into question. Another issue is that the old character commonly confuses EFL/ESL students, who pronounce the onset of the diphthong in “late” with the same open-mid quality as in “let” simply because they see the same character in the transcription.
- /æ/ => /a/ in “cat”. Wells points out that no such shift occurred in American or Australian pronunciation; therefore the change is unwarranted, as it upsets dialectal parallelism.
- /ɜː/ => /əː/ in “nurse”. Wells argues that the sound /ɜː/ is tense and specific, while the schwa /ə/ has higher variability. Quite possibly, variability was exactly the point Upton was trying to make, as in modern pronunciation /ɜː/ increasingly sounds like a long schwa, without the characteristic tension it had in classic RP.
- /eə/ => /ɛː/ in “square”. The argument here is that it is hardly a majority of English speakers who pronounce this phoneme as a long monophthong, and that the change would add unnecessary confusion for foreign learners of English.
- /aɪ/ => /ʌɪ/ in “price”. Here Wells simply confesses to having no idea what prompted the choice of /ʌ/. Yet Wells himself has noted that the realisation of /ʌ/ may vary broadly from a back to a front vowel; and considering that Upton is a dialectologist first and foremost, his choice of a character representing wider variability seems less puzzling.
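The five disputed correspondences above boil down to a simple symbol mapping. As a rough illustration (the helper below is hypothetical, not any dictionary's official tool), a Gimson-style transcription can be mechanically rewritten with Upton's symbols:

```python
# Illustrative mapping from the Gimson-based symbols (Cambridge/Wells)
# to Upton's Oxford alternatives, covering only the five contested vowels.
GIMSON_TO_UPTON = {
    "eə": "ɛː",  # square
    "aɪ": "ʌɪ",  # price
    "ɜː": "əː",  # nurse
    "æ": "a",    # cat
    "e": "ɛ",    # dress
}

def to_upton(transcription: str) -> str:
    """Rewrite a Gimson-style transcription using Upton's symbols.

    Multi-character keys are substituted first, so that /eə/ is not
    mangled by the single-character /e/ substitution.
    """
    for old, new in sorted(GIMSON_TO_UPTON.items(), key=lambda kv: -len(kv[0])):
        transcription = transcription.replace(old, new)
    return transcription

print(to_upton("skweə"))  # square -> skwɛː
print(to_upton("praɪs"))  # price  -> prʌɪs
print(to_upton("nɜːs"))   # nurse  -> nəːs
```

Note that such a one-way substitution only works because Upton's five replacement symbols happen not to collide with the rest of the Gimson set; a full converter between real dictionaries would also have to handle context-dependent choices like /i/ versus /ɪ/.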
Anyhow, Wells advocates the updated Gimson representation, but Oxford still begs to differ. The third edition of the Oxford English Dictionary, albeit a work in progress, still seems to rely on Upton's set.
Finally, a few words about American transcription systems. The United States has been somewhat lagging behind here, as for a long time American dictionaries relied on various respelling conventions. For example, the word “conjugation” may be respelled as “kon-juh-gey-shuhn”, “kŏn′jə-gā′shən”, “kanǯəge(y)šən”, etc. The first American dictionary to use an approximation of the International Phonetic Alphabet for its phonemic transcriptions was published in 1944, whereas wider acceptance of the IPA gained momentum only towards the end of the century. The IPA character set used for American English differs slightly from the one used for British English, reflecting the phonetic differences between the two dialects. Nevertheless, thanks to the efforts of the International Phonetic Association, the two systems are fairly unified, which makes it easy to switch between them.