Showing posts with label Linguistics. Show all posts

16 June 2023

Global Language Loss

Global language experts estimate that, without intervention, about one language will be lost every month for the next 40 years.

Welcome back. My 2012 blog post on linguistics may have only been a concern to me--that “mom” is replacing “mother” and “no problem” is replacing “you’re welcome” (see Linguistic Longings)--but today’s topic, the loss of languages, has been described as a linguistic crisis. Reflecting its importance, the recently published study, led by researchers with the Max Planck Institute for Evolutionary Anthropology, has more than 100 contributors affiliated with institutions from around the world.

A key reason for the wide-ranging involvement is that most if not all of the contributors helped develop the Grambank database, which the study introduces and relies on to answer long-standing questions about global linguistic diversity--in essence, the differences among languages and the ways people communicate with one another.

Countries with most languages spoken in 2021 (from www.statista.com/chart/3862/countries-with-the-most-spoken-languages/).
One of those questions is what the consequences of language loss will be on our understanding of linguistic diversity.

Grambank Database
Grammar defines the rules of a language--words, sounds, how they are combined and interpreted. A language’s grammatical elements include word order, tense, comparatives (words that express ‘bigger’ or ‘smaller’) and whether the language has gendered pronouns.

There are about 7,000 spoken languages in the modern world and published grammatical descriptions for about 4,300 languages.

Grambank is the world’s largest publicly available comparative grammatical database. With more than 2,400 languages and 400,000 data points, it has encoded over half of all possible grammar information that can be extracted from existing data sources.

Language Loss
The loss of languages has occurred throughout human history. What’s new is that, due to social, political and economic pressures, the speed of loss has accelerated. The study’s co-first author from the University of Colorado at Boulder described it as if, while mapping the human genome, scientists saw the genes themselves rapidly disappearing before their eyes.

This global language loss is not evenly distributed. Among the indigenous languages at higher risk are Aleut in Alaska, the Salish languages of the Pacific Northwest, Yagua and Tariana in South America, and Kuuk Thaayorre and Wardaman in northern Australia.

Characterizing the Loss
The comprehensiveness of Grambank allowed effective investigation of the potential loss of linguistic knowledge using a metric borrowed from the field of ecology, “functional richness.” This metric quantifies the area of feature space occupied by a set of species (languages, in this study) and thereby estimates the diversity the data represent.

Computing this metric, first with all languages, and then only with languages that are not endangered, the researchers were able to estimate the potential loss in structural diversity. They found that, although functional richness declines only moderately on a global scale with the loss of languages that are now under threat, the consequences of language loss vary significantly across regions.
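As a toy illustration of the idea (not Grambank’s actual hypervolume computation), one can proxy functional richness by the number of distinct grammatical-feature profiles a set of languages occupies, then recompute it with the endangered languages removed. All language names and feature vectors below are invented.

```python
# Toy proxy for "functional richness": each language is a vector of binary
# grammar features, and richness is the count of distinct feature profiles
# the languages occupy. Not the study's actual hypervolume-based metric.

def functional_richness(languages):
    """Number of distinct grammatical-feature profiles occupied."""
    return len({tuple(features) for features in languages.values()})

# Hypothetical feature vectors (e.g., [gendered pronouns, fixed word order,
# tense marking]); all values are invented for illustration.
languages = {
    "lang_a": [1, 0, 1],
    "lang_b": [1, 0, 1],   # same profile as lang_a
    "lang_c": [0, 1, 0],
    "lang_d": [0, 0, 1],
}
endangered = {"lang_c", "lang_d"}

all_richness = functional_richness(languages)
surviving = {k: v for k, v in languages.items() if k not in endangered}
surviving_richness = functional_richness(surviving)

loss = 1 - surviving_richness / all_richness  # fraction of profiles lost
```

Note how the loss can be disproportionate: removing half the languages here erases two-thirds of the occupied profiles, mirroring the study’s point that losses concentrate in regions whose languages occupy otherwise-empty parts of the feature space.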

Wrap Up
The researchers conclude that the pronounced reduction of the functional space occupied by languages, even in regions with many non-threatened languages, will undermine the ability to investigate the basic structures of language and the diverse expressions used to encode them.

Without sustained efforts to document and revitalize endangered languages, the linguistic window into human history, cognition and culture will be seriously fragmented.

Recognizing the state of language endangerment, the United Nations has declared this the International Decade of Indigenous Languages to promote language preservation, documentation and revitalization.

Thanks for stopping by.

P.S.
Study of Grambank analyses of linguistic diversity in Science Advances journal: www.science.org/doi/10.1126/sciadv.adg6175
Article on study on EurekAlert! website: www.eurekalert.org/news-releases/986975
UN International Decade of Indigenous Languages 2022-2032: www.un.org/development/desa/indigenouspeoples/indigenous-languages.html

03 January 2020

Nonverbal Exclamation Emotions

Happy 2020! And welcome back. I hope you won’t mind if I review a study published about a year ago. It’s not that I just found the study. Well, it is, sort of. The study was buried on my list of possible blog topics. I noticed it while deleting files to prepare for the new year, and I think it’s an ideal kickoff for 2020.

One of the more pleasant
nonverbal exclamations.
The topic is nonverbal exclamations, such as ohhh or oops. They communicate feelings that can be understood immediately. They are essential to recognizing emotion from vocalizations.

A team of researchers, affiliated with the University of California, Berkeley, Washington University in St. Louis and Sweden’s Stockholm University, set out to better define the relationship between these vocal bursts and emotions. For example, how many distinct kinds of emotions can be expressed? Is the recognition of emotion expressions discrete or continuous?

Collection and Initial Assessment of Vocalizations
The researchers recorded 2,032 vocal bursts by 56 male and female professional actors and non-actors from the U.S., India, Kenya and Singapore responding to emotionally evocative scenarios.

They then had more than 1,000 adults (via Amazon's Mechanical Turk) listen to and evaluate the vocal bursts for the emotions and meaning they conveyed, whether the tone was positive or negative and other characteristics.

Statistical analysis placed the vocal bursts into at least two dozen categories, including amusement, anger, awe, confusion, contempt, contentment, desire, disappointment, disgust, distress, ecstasy, elation, embarrassment, fear, interest, pain, realization, relief, sadness, surprise (positive), surprise (negative), sympathy and triumph.

Providing Contexts for Vocal Bursts
The researchers sampled YouTube video clips that evoked the 24 emotions. Vocal bursts extracted from videos (e.g., puppies being hugged, spellbinding magic tricks) were judged by 88 adults and categorized into 24 shades of emotion.

Here’s the best part. They organized all of the data into a natural language semantic space in the form of an online interactive audio map (see P.S. or figure captions for link).

Graphical depiction of online interactive audio map of emotions conveyed by nonverbal exclamations (from www.alancowen.com/vocs).
Enlarged view of top-left section of online interactive audio map; various colored spots provide audio of the gradient mix of emotions (from www.alancowen.com/vocs).
You slide your cursor over any of the categories of emotion and hear the exclamations--surprise (gasp), realization (ohhh), fear (scream). Then you find the categories are linked by gradients with continuously varying meaning. In the map’s embarrassment region, you might find a vocalization recognized as a mix of amusement, embarrassment and positive surprise.

Wrap Up
The researchers suggest that, along with linguistics applications, the map should be useful in helping teach voice-controlled digital assistants and robots to recognize human emotions based on sounds. Another possible application would be helping to identify specific emotion-related deficits in people with dementia, autism or other emotional processing disorders.

The only problem I find is the relative difficulty of examining the map on a smartphone or even a tablet rather than a laptop or desktop computer. Maybe it’s just my devices. I hope you’ll manage; it’s really cool. Thanks for stopping by.

P.S.
Study of emotions conveyed by nonverbal vocalizations in American Psychologist journal: psycnet.apa.org/doiLanding?doi=10.1037%2Famp0000399
Article on study on ScienceDaily website: www.sciencedaily.com/releases/2019/02/190205144343.htm

Interactive audio map of emotions conveyed by nonverbal vocalizations: www.alancowen.com/vocs
The interactive audio map is also included in the UC Berkeley press release: news.berkeley.edu/2019/02/04/audio-map-of-exclamations/

15 November 2019

Speaking Rate and Information Revisited

Welcome back. Some people speak faster than others, right? But as the study I blogged about a couple of years ago found, regardless of how fast people speak, they convey about the same amount of information in a given period of time (Speaking Rate and Information).

Revisiting time spent pawing
through the lexical information.
To reach that conclusion, the Brown University researcher analyzed some 2,400 two-sided telephone conversations among 543 speakers and interviews with 40 speakers. He estimated information rate from two linguistic criteria, lexical (dictionary definition) and structural (syntax). The speakers were from across the U.S., and all conversations were in English.

Being worldly wise and intrigued by languages and linguistics, you of course wonder: Do speakers of other languages also convey the same amount of information in a given period of time? 


Take Spanish. Even if you don't speak Spanish, you’ve probably heard it spoken. Do Spanish speakers convey information at the same rate as English speakers?

Language Information Rates

Well, even if you don’t wonder, a team of researchers affiliated with France’s University of Lyon, the University of Hong Kong, New Zealand’s University of Canterbury and South Korea’s Ajou University set out to learn the answer.

They gave 170 native speakers of 17 different languages (10 speakers per language) 15 semantically similar texts to read in their native language (Basque, Cantonese, Catalan, English, Finnish, French, German, Hungarian, Italian, Japanese, Korean, Mandarin, Serbian, Spanish, Thai, Turkish and Vietnamese). The speakers were instructed to familiarize themselves with the texts, then read them aloud at a comfortable pace with good pronunciation while they were recorded.

Through quantitative analysis, the researchers found that the speech rate (syllables per second) and the average information density of the syllables uttered differed considerably from language to language. Yet when the two properties were combined, the information rates balanced out: similar amounts of information were conveyed in a given period of time (about 39 bits/second, plus or minus 5 bits/second).

Languages such as Spanish had higher speech rates and lower information densities; Asian languages such as Vietnamese had slower speech rates and higher information densities.
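The balance is easy to see in back-of-the-envelope form: information rate is simply speech rate times information density. The numbers below are invented and illustrative, not the paper’s measured values for any language.

```python
# Back-of-the-envelope version of the study's balance:
# information rate (bits/s) =
#     speech rate (syllables/s) x information density (bits/syllable).

def information_rate(syllables_per_sec, bits_per_syllable):
    return syllables_per_sec * bits_per_syllable

# Two invented profiles that trade speed against density:
fast_low_density = information_rate(7.8, 5.0)    # a Spanish-like profile
slow_high_density = information_rate(5.0, 7.8)   # a Vietnamese-like profile
```

Both profiles land on the same rate, which is the study’s central finding: the two properties compensate for each other.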

The graphed data are the average information density (ID) and corresponding speech rates (SR) for languages noted at top. (There is one value of ID per language and as many values of SR as texts read by individual speakers.) The relationship between SR and ID is represented by the yellow straight line (linear regression) and the black curved line (locally estimated scatterplot smoothing regression). Both show SR decreases with increasing ID (from advances.sciencemag.org/content/5/9/eaaw2594).
Wrap Up
The researchers’ goal was to characterize the baseline by analyzing controlled speech instead of speech in more casual, unpredictable settings. They expect, however, that the strength of their findings would decrease along a continuum from very carefully pronounced content to very informal interactions. For the latter, understanding is heavily reliant on contextual and pragmatic factors rather than the linguistic information itself.

I’m way out of my league, but a significant change of information rate with casual conversation doesn’t seem to jibe with the earlier study of English speakers, which did not control speech. People were found to converse within relatively narrow bounds of communication. The speakers either spoke quickly or provided high information content, but not both, possibly to avoid providing too much or too little information in a given period of time.

That seems reasonable for other languages, at least for most speakers. Anyway, thanks for stopping by.

P.S.
Multi-language information rate study in Science Advances journal: advances.sciencemag.org/content/5/9/eaaw2594
Articles on study on EurekAlert! and Discover websites:
www.eurekalert.org/pub_releases/2019-09/c-sir090419.php
blogs.discovermagazine.com/d-brief/2019/09/04/spoken-languages-convey-information-at-the-same-rate-study-finds/#.XXLY6IVOmUB 

26 March 2019

Reducing Age-Related Language Decline

Wait! Wait! I know what it is.
It’s on the tip of my tongue

(from www.barcelonareview.com/36/e_quizans.htm).
Welcome back. Do you ever experience tip-of-the-tongue lapses, when you just can’t recall a word you know? By the time you reach my age, you’ve learned so many words, it can be difficult to remember them all as quickly as you’d like; but there’s more to it.

An international team led by researchers from the UK’s University of Birmingham demonstrated that, while cognitive abilities decline with age, aerobic fitness can reduce those tip-of-the-tongue occurrences.

Before describing the study, I’d better point out that tip-of-the-tongue states are not a memory problem; they’re not associated with memory loss. They indicate a disruption in the two-stage process of retrieving the word meaning and phonology (sound form representations).

Relating Aerobic Fitness to Tip-of-the-Tongue States


Participants
To investigate the relationship between aerobic fitness and tip-of-the-tongue states, the researchers had 28 healthy older adults (20 women, mean age 70; 8 men, mean age 68) complete an aerobic fitness test and a language experiment, and 27 university students (19 women, mean age 23; 8 men, mean age 23) complete the language experiment only.

Aerobic Fitness Test

To measure aerobic fitness, the older participants did stationary cycling (a graded, sub-maximal aerobic fitness test on a cycle ergometer) to estimate maximal oxygen consumption (VO2max, where V = volume and O2 = oxygen). VO2max is the maximum amount of oxygen a person can utilize during intense exercise.
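As a rough illustration of how a graded, sub-maximal test can yield a VO2max-style estimate (this is a generic method sketch, not necessarily the study’s exact protocol): heart rate rises roughly linearly with workload, so a line fit to the measured stages can be extrapolated to an age-predicted maximal heart rate. All stage data below are invented.

```python
# Sketch of a common submaximal approach: fit heart rate vs. workload with
# least squares, then extrapolate to an age-predicted maximal heart rate
# (here the simple 220 - age estimate) to predict peak workload.

def predict_max_workload(workloads, heart_rates, age):
    n = len(workloads)
    mean_w = sum(workloads) / n
    mean_h = sum(heart_rates) / n
    slope = (sum((w - mean_w) * (h - mean_h)
                 for w, h in zip(workloads, heart_rates))
             / sum((w - mean_w) ** 2 for w in workloads))
    intercept = mean_h - slope * mean_w
    hr_max = 220 - age                    # age-predicted maximal heart rate
    return (hr_max - intercept) / slope   # workload (watts) at predicted HRmax

# Invented stage data from a graded cycle test: four workloads, four HRs.
watts = [50, 75, 100, 125]
hr = [100, 115, 130, 145]
peak_watts = predict_max_workload(watts, hr, age=70)
```

The predicted peak workload would then be converted to oxygen uptake with a standard ergometer equation; the fitter the participant, the flatter the heart-rate line and the higher the extrapolated peak.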

Language Experiment
For the language experiment, young and old participants completed the same computer-based, definition-filling task. The participants were presented 60 definitions or questions in random order (20 definitions of low frequency words, 20 definitions of easy words and 20 questions about famous people). 

Example definitions, target words and foils if participants experienced a tip-of-the-tongue (from www.ncbi.nlm.nih.gov/pmc/articles/PMC5928071/).
The definitions and questions remained on screen until the participants responded that they knew the target word, did not know the target word or had a tip-of-the-tongue experience. (The instructions read: “Usually we are sure if we know or don’t know a word. However, sometimes we feel sure we know a word but are unable to think of it. This is known as a ‘tip-of-the-tongue’ experience.”)

If participants indicated they experienced a tip-of-the-tongue state, they were asked to provide three pieces of information about its sound: (1) the initial letter or sound; (2) the final letter or sound and (3) the number of syllables. Finally, participants were asked to select the target word from a list of four words displayed on the screen or to indicate that the word they were thinking of was not in the list.

The data were analyzed using mixed effects models, an extension of linear regression models. In addition to comparing older with younger participants, the researchers performed a median split on the standardized aerobic fitness scores to generate a high-fit older adults group and a low-fit older adults group.
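The median split mentioned above is straightforward to picture; here is a minimal sketch with invented standardized fitness scores.

```python
# Median split: divide standardized fitness scores at the median into
# low-fit and high-fit groups, as the researchers did for the older adults.
# The scores below are invented z-scores for illustration.

def median_split(scores):
    ordered = sorted(scores)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    low = [s for s in scores if s < median]
    high = [s for s in scores if s >= median]
    return low, high, median

fitness_z = [-1.2, -0.4, 0.1, 0.3, 0.9, 1.5]
low_fit, high_fit, cutoff = median_split(fitness_z)
```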

Results

The key findings were that older adults experienced more tip-of-the-tongue occurrences and had less access to phonological information than did the younger participants; however, the more aerobically fit the older adults were, the less likely they were to experience a tip-of-the-tongue state.

The probability that older adults experienced a tip-of-the-tongue decreased as aerobic fitness increased (from www.ncbi.nlm.nih.gov/pmc/articles/PMC5928071/).
The tip-of-the-tongue occurrence of high-fit older adults was lower than that of low-fit older adults and not much higher than that of young adults.
Tip-of-the-tongue occurrences of young, high-fit elderly and low-fit elderly participants on language experiment; error bars represent standard error of the mean (from www.ncbi.nlm.nih.gov/pmc/articles/PMC5928071/).
The study also found that young and older adults both experienced fewer tip-of-the-tongue occurrences the shorter the target words (as measured by the number of phonemes) and the larger their vocabulary size. Notably, older adults had a significantly larger vocabulary than young participants.

Wrap Up

Overall, the study demonstrated that there is a relationship between language production abilities and aerobic fitness in healthy older adults. The higher the older adults’ aerobic fitness level, the lower the probability of experiencing a tip-of-the-tongue. The results further support increased physical activity for healthy aging and optimal brain function across the life span. Go for it!

Thanks for stopping by.

P.S.

Study of aerobic fitness and language decline in Scientific Reports journal: www.ncbi.nlm.nih.gov/pmc/articles/PMC5928071/
Article on study on Berkeley Wellness website: www.berkeleywellness.com/healthy-mind/memory/article/tip-tongue-lapses-eased-exercise?s=EFA_181020_AA1&st=email&ap=ed

A version of this blog post appeared earlier on www.warrensnotice.com.

25 March 2019

Fake News Detection

The onslaught of fake news (from
firstamendmentwatch.org/countering-fake-news/).
Welcome back. So, tell me. What do you think about fake news? I mean news that’s purposely false or misleading, not news that Mr. Trump doesn’t like. Wouldn’t it be nice if fake news could be detected and removed automatically before it gets out or, at least, before it’s spread?

Well, collaborators from the University of Michigan and University of Amsterdam in the Netherlands have gone a long way toward that goal. They demonstrated that the ability to discriminate real from fake news with linguistic-based models was comparable to that of humans.


Association for Computational
Linguistics’ definition of
computational linguistics

(from www.aclweb.org/portal/).
Linguistic Approach
The researchers examined a variety of linguistic elements for algorithms of fake-news detection models. 


Two of the elements would be familiar to anyone (i.e., you and me):

- Punctuation, of which 12 types were considered.

- Readability, measured by content features, such as the number of characters, complex words, long words, number of syllables, word types and number of paragraphs, as well as different readability metrics.

Other linguistic elements considered would not be as familiar:

- Ngrams, sequences of syllables, letters, words, phonemes or other linguistic items in a text, where the “n” in ngram signifies the number of items in the sequence (e.g., unigram: n = 1; bigram: n = 2).

- Psycholinguistic features, measured by words that relate linguistic behavior to psychological processes. The researchers extracted the proportions of words in different psycholinguistic categories, guided by the Linguistic Inquiry and Word Count (LIWC), the gold standard for computerized text analysis.

- Syntax features, the sequence in which words or linguistic elements are put together to form meaningful sentences. The researchers used a natural language parser, the Stanford Parser, to extract a set of features derived from rules based on context-free grammars.
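For readers unfamiliar with the first of those less-familiar elements, a minimal word-ngram extractor looks like this (a sketch of the concept; the study’s actual pipeline used dedicated NLP tooling).

```python
# Extract word ngrams from tokenized text: every contiguous run of n tokens.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "fake news spreads fast".split()
unigrams = ngrams(tokens, 1)   # n = 1: single words
bigrams = ngrams(tokens, 2)    # n = 2: adjacent word pairs
```

Counts of such ngrams, fed to a classifier, are one of the simplest ways a model can pick up stylistic differences between legitimate and fabricated articles.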

Fake News Data Sets
To test algorithms with the different linguistic elements, they constructed two data sets.

One data set began with 240 verified news articles from mainstream news websites covering six domains (sports, business, entertainment, politics, technology, and education). Crowdsourcing was used to prepare shorter fake versions of the articles for which the writers tried to emulate a journalistic style.

The second data set covered celebrities and was obtained directly from the web as 250 pairs of news articles, one legitimate, the other fake. Claims made in the legitimate articles were evaluated on gossip-checking sites and other online news sources.

Excerpts from an example of legitimate and fake celebrity news (from web.eecs.umich.edu/~mihalcea/papers/perezrosas.coling18.pdf).
Detection Testing
The researchers tested the fake-news detection capability of the different linguistic features separately and in combination.

Fake-news detection performance
by two humans (A1, A2) and the
automatic linguistic system (Sys)
on the fake news data sets
(from
web.eecs.umich.edu/~mihalcea/papers/perezrosas.coling18.pdf).
They achieved the highest accuracy--74% on the multidomain data set and 76% on the celebrity data set--when all features were included. These results were slightly better than those obtained by two humans on the multidomain data set and slightly worse on the celebrity data set.

The best performing algorithms with the multidomain data set relied on stylistic features (i.e., punctuation and readability), followed by those that used psycholinguistic features. For the celebrity data set, the best performance was obtained using the LIWC features, followed by the ngrams and syntactic features.

Wrap Up
To improve automatic fake-news detection further, the researchers recommend incorporating meta features (e.g., number of links to and from an article, comments on the article) and features from different modalities (e.g., visual makeup of a website).

At the outset of the study, they opted for a linguistic rather than a fact-checking approach, given that automatic fact-checking against information from other sources is not straightforward, particularly for just-published news. Nevertheless, they recommend improving fact-checking approaches and integrating them with linguistic approaches.

Fake news is clearly a serious problem as evidenced by its probable effect on the last presidential election. Three cheers for any actions toward its demise. Thanks for stopping by.

P.S.
Paper on study presented at 27th International Conference on Computational Linguistics, Santa Fe, N.M., 20-26 Aug 2018: web.eecs.umich.edu/~mihalcea/papers/perezrosas.coling18.pdf
Article on study on ScienceDaily website: www.sciencedaily.com/releases/2018/08/180821112007.htm
27th International Conference on Computational Linguistics and study abstract:
coling2018.org/
arxiv.org/abs/1708.07104
Linguistic Inquiry and Word Count, Version 1.3.1, 2015: liwc.wpengine.com/
Stanford Parser (syntax): nlp.stanford.edu/software/lex-parser.shtml

A version of this blog post appeared earlier on www.warrensnotice.com.

03 March 2017

Speaking Rate and Information

Welcome back. Isn’t it time I returned to the subject of linguistics? If you don’t count Speaking in Whistles, it’s approaching five years since I voiced my concern about the use of “mom” instead of “mother” and of “no problem” instead of “you’re welcome” in Linguistic Longings.

Today, I’ll address speed: (1) Do you talk fast, slow or somewhere-in-between? (2) Do fast talkers convey more information than slow talkers in the same amount of time?

Let’s take the first, first. The answer might depend on where you’re from.

Rate of Speech
In case you missed it, about a year ago, various news media reported the results of an analysis by a mobile advertising analytics company, Marchex. Applying its “Call DNA” software to over four million consumer phone calls to businesses between 2013 and 2015, Marchex generated state-by-state rankings of the callers’ rate of speech, density of speech (wordiness) and hold times before hanging up.

While the source of the data and apparent lack of differentiation by age or gender might lead one to question their accuracy, the results are at least fun trivia.


States with fastest and slowest talkers from
www.marchex.com/2016/02/02/talkative/
Faster talkers tend to reside in the North (fastest: Oregon, Minnesota, Massachusetts), slower talkers in the South (slowest: Mississippi, Louisiana, South Carolina).

Going beyond rate of speech, Marchex found lower density (less wordy) speakers tended to reside in the Central U.S. (Oklahoma, Kansas, Wisconsin, Minnesota, Iowa), while higher density speakers were more scattered in the Northeast, Mid-Atlantic, West and Southwest (New York, California, New Jersey, Nevada, Maryland). As an example, Marchex noted that a New Yorker used 62% more words than an Iowan in the same conversation with a business.

Callers who hung up the quickest when put on hold tended to be in the Northeast, mid-Atlantic and upper Midwest (Kentucky, Ohio, North Carolina, New York, West Virginia). Callers in the Midwest and South (Louisiana, Colorado, Florida, Illinois, Minnesota) were most patient or had nothing more pressing to do.

OK, but do those fast-talking Oregonians convey more information than those slow-talking Mississippians?

Information Content of Speech
A recently published study by a Brown University researcher found that, regardless of how fast people speak, they convey about the same amount of information in a given period of time.

To reach this conclusion, the researcher analyzed two collections of conversations. He focused first on the Switchboard Corpus, about 2,400 two-sided telephone conversations among 543 speakers (302 male, 241 female) from all areas of the United States. He then replicated the results using the Buckeye Corpus, 40 speakers (20 old, 20 young, 20 male, 20 female) from Columbus, Ohio, conversing freely with an interviewer.
 

Pawing through the
lexical information.
The researcher estimated speech rate based on the actual and expected durations of words, calculated from mathematical measures of a word together with the previous and following words. He estimated information rate from calculations of two linguistic criteria, lexical (dictionary definition) and structural (syntax).
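One standard way to quantify lexical information (a sketch of the general idea, not necessarily this study’s exact estimator) is a word’s surprisal, the negative log2 of its probability: rarer words carry more bits. The corpus counts below are invented.

```python
import math
from collections import Counter

# Surprisal sketch: a word's lexical information is -log2(probability),
# so rare words carry more bits than frequent ones.

def surprisal(word, counts, total):
    return -math.log2(counts[word] / total)

# Invented toy corpus counts for illustration.
counts = Counter({"the": 50, "cat": 8, "perambulates": 1})
total = sum(counts.values())

common = surprisal("the", counts, total)          # frequent word, few bits
rare = surprisal("perambulates", counts, total)   # rare word, many bits
```

On this view, a fast speaker who leans on frequent, predictable words is transmitting fewer bits per word, which is exactly the trade-off the study reports.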


Introduction to Linguistics video:
www.youtube.com/watch?v=Q9LpTZkQeZs
Linking Speed to Content
The study found that, compared to slow speakers, fast speakers were likely to produce lower lexical information (i.e., use less informative words) as well as lower structural information. For example, fast speakers were more likely to use active voice (e.g., I will do it) than passive voice (e.g., It will be done by me).

The findings suggest that people converse within relatively narrow bounds of communication data to avoid providing too much or too little information in a given period of time. On average, speakers either speak quickly or provide high information content, but not both.

Wrap Up
One finding of interest is male speakers provided more information than did female speakers for a given rate of speech. Although untested in this study, the researcher judged that such gender differences might follow from sociolinguistic factors. Could it be that males are more concerned with speaking than with being understood?

Thanks for stopping by.

P.S.
Marchex analysis of consumer phone calls: www.marchex.com/2016/02/02/talkative/
Article on Marchex analysis on The Atlantic website: www.theatlantic.com/entertainment/archive/2016/02/speaking-fast-and-slow/459393/
Brown University study in Cognition journal:
www.sciencedirect.com/science/article/pii/S0010027716302888

Article on Brown University study on Science Daily website:
www.sciencedaily.com/releases/2017/01/170117140005.htm
Switchboard Corpus: catalog.ldc.upenn.edu/ldc97s62
Buckeye Corpus: buckeyecorpus.osu.edu/

20 October 2015

Whistling Addendum

My grandmother.
Seeing the older men and women whistling Turkish and Sylbo in the videos listed in last Friday’s blog post (Speaking in Whistles) reminded me of my grandmother. As I wrote some time ago, that warm, elegant, soft-spoken little woman taught me how to whistle loudly with two fingers (Music Time--The Background).

If you missed last Friday's videos, here’s a sample of my reminders: 

Woman whistling Turkish with one finger, right hand. (www.youtube.com/watch?v=bQf38Ybo1IY)
Woman whistling Turkish with one finger, left hand. (www.youtube.com/watch?v=bQf38Ybo1IY)
Woman whistling Sylbo, no fingers. (www.youtube.com/watch?v=PgEmSb0cKBg)
Woman whistling Sylbo, one finger. OK, she’s young, but this video wasn’t listed in last Friday’s blog post. (www.youtube.com/watch?v=2PyNuOJaDCs)
Nowadays, given the Internet, you don’t need a grandmother to teach you to whistle loudly. As you’d expect, there are a variety of instructional videos. Although my grandmother taught me to use one finger from each hand, I soon advanced to using two fingers from one hand (fingers shaped in an OK sign), as shown here: 

Whistling with two fingers, one hand. (www.youtube.com/watch?v=G8Oz_ELAjNg)
Like the young woman in this video, I place my fingers atop my tongue. (www.youtube.com/watch?v=G8Oz_ELAjNg)
I mention placing my fingers on top of my tongue, because I was astonished to see that nearly every video teaches fingers should be placed beneath the tongue, whether using fingers from one or two hands.

This video, like most, places fingers beneath the tongue. Unlike most, the whistler uses two fingers from each hand, which seems a bit much. (www.youtube.com/watch?v=mYpmyE1fliE)
If you’d like to pursue this further, wikiHow is a good starting point (despite the recommended finger-tongue position).

Graphic from Wikihow instructions on whistling. (www.wikihow.com/Whistle-With-Your-Fingers)
I would be remiss if I didn’t mention that my brother, like countless others, learned to whistle loudly without the aid of fingers. I didn’t even glance at instructional videos of that technique, but obviously, whistling loudly, hands free, without fingers in one’s mouth has advantages in addition to being more hygienic. If I were starting over, I’d take that route.

16 October 2015

Speaking in Whistles

Welcome back. I missed another one, whistling languages. Fortunately a friend heard about them on NPR last month and mentioned them to me. When I searched, I found the topic was in the news a month or so before the NPR story but that the topic was well documented years ago.

Why the renewed interest? The reports were triggered by the publication of a new study that's definitely interesting. Still, I suspect the real reason for media attention is the topic itself. It’s fascinating. 


Whistling Languages

A whistling language is just that--a spoken language in a different form, whistling. It’s like writing in addition to speaking. 


Whistling Sylbo, La Gomera,
Canary Islands. (multiple websites)
Though relatively rare and in some cases gone or dying out, whistling languages are encountered globally--Alaska, Mexico, Brazil, China, Turkey, France, Ethiopia, Canary Islands, Oceania and elsewhere. They arose primarily for communicating over long distances or because of terrain or other causes of isolation, such as dense forest.

They’re most common in tonal languages, which cover some 70% of world languages. Whistled tonal languages rely primarily on whistle tone, length, and stress; segmental distinctions of the spoken language are mostly lost. In contrast, whistled atonal languages rely more on articulatory features of speech. Variations in whistle pitch represent variations in timbre, and certain consonants can be pronounced to modify the sound.

Whether emulating a tonal or atonal language, all whistled languages convey speech information by varying the frequency of a simple waveform as a function of time, generally with minimal dynamic variations.
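That last point can be illustrated with a toy synthesizer: a whistled signal is essentially one sinusoid whose frequency traces the melody of the utterance. The pitch contour below is invented, not taken from any whistled language.

```python
import math

# Toy illustration: a whistled signal as a single sinusoid whose frequency
# varies over time. The "pitch contour" is an invented sequence of target
# frequencies, one per 50 ms segment, with roughly constant amplitude.

def synthesize_whistle(contour_hz, seg_dur=0.05, sample_rate=8000):
    samples = []
    phase = 0.0
    for freq in contour_hz:
        for _ in range(int(seg_dur * sample_rate)):
            phase += 2 * math.pi * freq / sample_rate
            samples.append(math.sin(phase))  # frequency varies, amplitude doesn't
    return samples

contour = [1200, 1500, 1100, 1800]   # Hz; arbitrary example contour
signal = synthesize_whistle(contour)
```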

Whistling Languages and Brain Hemispheres

The research that generated attention was conducted by investigators from Germany’s Ruhr University Bochum and the German Aerospace Center (DLR) Bonn.

Earlier research has shown that the brain’s left hemisphere normally handles language processing, including atonal and tonal languages, click consonants, writing and sign languages. The brain’s right hemisphere is specialized to address acoustic properties--spectral cues, pitch, melodic lines, stress and intonation pattern cues. This latest research sought to determine how left and right hemisphere superiorities might change with a whistling language.

The investigators focused on whistled Turkish, which uses the full lexical and syntactic information of spoken Turkish. They tested comprehension of identical information, spoken versus whistled, with whistle-speakers from northeastern Turkey, delivering the sounds through headphones to participants’ left or right ears.

Overall, the participants reported more often perceiving spoken syllables when presented to the right ear (left hemisphere), yet they heard whistled sounds equally well on both sides. In essence, the study showed that whistled Turkish relies on a balanced contribution of the hemispheres; the left because whistled Turkish is indeed a language, the right because understanding a whistled language requires auditory specializations.
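Ear-advantage results like these are commonly summarized with a laterality index, (R − L) / (R + L), where R and L are the counts of stimuli perceived at the right and left ear. The sketch below uses hypothetical counts invented for illustration, not the study’s actual data.

```python
def laterality_index(right_count, left_count):
    """Standard laterality index: positive means a right-ear
    (left-hemisphere) advantage, negative a left-ear advantage,
    zero a balanced contribution of the hemispheres."""
    total = right_count + left_count
    if total == 0:
        return 0.0
    return (right_count - left_count) / total

# Hypothetical response counts, for illustration only:
spoken = laterality_index(right_count=70, left_count=50)    # right-ear advantage
whistled = laterality_index(right_count=60, left_count=60)  # balanced
```

A positive index for spoken syllables and a near-zero index for whistled ones would correspond to the pattern the study reports.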

Wrap Up

Although whistling languages were new to me, there’s a wide assortment of reference material--research papers, popular articles, textbooks and videos. Whistling languages may well be prehistoric and were apparently mentioned in 16th-century literature, if not earlier.


Learning whistling Sylbo, La Gomera,
Canary Islands. (multiple websites)
I should note that, although the research reviewed here was on whistled Turkish, whistling Sylbo (or Silbo), the whistled language of the Canary Island of La Gomera, was inscribed on UNESCO’s Representative List of the Intangible Cultural Heritage of Humanity in 2009. Sylbo is reportedly the only whistled language that is fully developed and practiced by a large community (more than 22,000 inhabitants). It has been taught in schools since 1999 and is understood by almost all islanders.

So, what do you think? Was the media attention due to the topic or the research? Either way, I hope you found the post interesting. Thanks for stopping by.

P.S.

Whistled Turkish study in Current Biology journal:
www.cell.com/current-biology/abstract/S0960-9822%2815%2900794-0
NPR and other selected articles on whistled Turkish study:
www.npr.org/sections/parallels/2015/09/26/443434027/in-a-turkish-village-a-conversation-with-whistles-not-words
www.sciencedaily.com/releases/2015/08/150817131955.htm
www.dailymail.co.uk/sciencetech/article-3201530/Listen-Turkish-whistling-language-scientists-say-unique-uses-sides-brain.html
www.newyorker.com/tech/elements/the-whistled-language-of-northern-turkey
Example YouTube videos:
Whistling Languages in Kuskoy, Turkey, 5:19 min, from the Deutsche Welle’s former European Journal; uploaded on 29 Jul 2010: www.youtube.com/watch?v=bQf38Ybo1IY
The Last Speakers of the Lost Whistling Language, Sylbo (La Gomera, Spanish Canary Islands), 4:20 min, from Time; published on 21 May 2013: www.youtube.com/watch?v=C0CIRCjoICA
Whistled language of the island of La Gomera (Canary Islands), the Silbo Gomero, 10:20 min, from UNESCO; uploaded on 25 Sep 2009: www.youtube.com/watch?v=PgEmSb0cKBg
Monograph: Whistled Languages: A Worldwide Inquiry on Human Whistled Speech, 2015th Edition: link.springer.com/book/10.1007%2F978-3-662-45837-2
Wikipedia background on whistled languages (see citations):
en.wikipedia.org/wiki/Whistled_language
Reference which notes a possible pre-historic origin and mention in 16th century literature: silbo-gomero.com/Compilation/CompilationPreface.html
UNESCO inscription of whistled Silbo Gomero:
www.unesco.org/culture/ich/RL/00172

13 January 2015

Political Correctness Addendum

I illustrated last Friday’s blog post, Political Correctness Revisited, with a photograph of a sign proclaiming that political correctness was the downfall of American society. There have been so many other assertions--some humorous, some offensive and some that might have begun on websites that warn about black helicopters. Today’s addendum offers a handful of the less fervent.

This graphic has political correctness as the core of a liberal’s brain. The other brain compartments are equally condemning though pretty funny. (Multiple websites)
Among definitions of “politically correct,” this was one of the more benign. (Multiple websites)
Though noted more for his support of gun rights, the late actor Charlton Heston also apparently spoke against political correctness. (Multiple websites)
Ben Carson, who’s gone from medicine to conservative politics, has also apparently taken a stance against political correctness. (Multiple websites but probably www.teapartycommunity.com/blog/4152)
While this tweet by RG3, quarterback for the Washington Redskins, was clearly stated and promoted by conservative media, it’s not entirely clear what he was actually tweeting about. (Compiled by Media Research Center and seen on multiple websites.)
How could anyone object to this comment about political correctness from www.someecards.com?
This grievance was obviously voiced by someone pining for the good old days. (Multiple websites)

09 January 2015

Political Correctness Revisited

I’m back and I hope you are, too. Welcome. While I was gone, I noticed that our TV provider, Dish, had a falling out with Fox and removed the Fox News and Fox Business channels. I noticed more quickly when Dish dropped CNN; dropping CBS was short-lived. I don’t watch Fox News much, but it’s a hoot to tune in occasionally and hear the litany of what the president is doing wrong.
 

Advertised sentiment about political
correctness. (Photo from
smm110861.wordpress.com/2013/12/27/)
Although it’s been a while since I’ve heard political correctness disparaged on Fox, a recent study showed that the adoption of political correctness (PC) in the corporate work environment could yield a significant benefit. If Fox reported that finding, I feel bad that I missed the smirk and reference to elitist academics.

Anyway, PC--the attempt to avoid language or behavior that could offend a particular group of people, which no doubt can go too far--was found to produce a no-cost payoff in creativity.

Enhanced Creativity

Collaborating investigators from Cornell, the University of California, Berkeley, Washington University in St. Louis and Vanderbilt ran tests with 582 participants. Groups of three were told to be “politically correct” or “polite,” while other groups received no instructions.

All groups then spent 10 minutes brainstorming business ideas. The creative output of each group was measured by the number of ideas generated and the relative novelty of the ideas.

While you might guess that creativity surges when all constraints are removed, the results demonstrated the opposite. Imposing a norm to be politically correct--a norm that set clear expectations for how men and women should interact--increased the creative output of mixed gender groups.

The investigators reasoned that men and women both experience uncertainty when asked to generate ideas as members of a mixed-gender group--men may fear offending women; women may fear having their ideas devalued or rejected. PC promotes rather than suppresses expression of ideas by reducing uncertainty and signaling that the group is predictable enough to risk sharing more ideas and more novel ideas.

Wrap Up


At least 15 years ago, I participated in a mixed-gender meeting involving military and civilian government personnel. The male Air Force officer leading the meeting kept dropping the F-word. It wasn’t my place to say anything, especially when no one seemed to flinch; yet I did ask a female Air Force officer about it after the meeting. She said it’s so common, she doesn’t pay attention.

Despite her experience, I remember that meeting because, in my 20 years of government meetings, it was the only mixed-gender meeting where the F-word or just about any questionable language was used.

Maybe that’s changed now. As I wrote in my Linguistic Longings post: Linguistic change is inevitable, whether it’s vocabulary, sentence structure or pronunciations. I was writing about the use of “mom” instead of “mother” and “no problem” instead of “you’re welcome,” but I suppose it’s much the same.

PC encompasses far more than use of certain words, of course; and again, it can easily go overboard. As the study shows, however, there’s much to be gained by, well, being polite. What do you think? 


I wonder if Dish will be giving rebates for all these dropped channels. Thanks for stopping by.

P.S.


Political correctness study in Administrative Science Quarterly:
asq.sagepub.com/content/early/2014/12/15/0001839214563975.abstract

Articles on study on Cornell Chronicle and Science Daily websites:
www.news.cornell.edu/stories/2014/11/pc-workplace-boosts-creativity-male-female-teams
www.sciencedaily.com/releases/2014/11/141104183610.htm

19 June 2012

No-Problem Photo Addendum

Instead of grumbling about a "no problem" response to "thank you," I thought I’d take this opportunity to illustrate instances of when "no problem" is appropriate. 
Road blocked by two flocks of sheep? No problem. Just
wait until they move the sheep. Xinjiang, China, 1982.
Stuck again crossing the muddy stream? No problem.
Just find a farmer with a tractor. Syria, 1983.
Too much traffic on the unpaved city street? No problem.
Just keep driving; they’ll move. Syria, early 1980s.
Baskets too big to get your arms around? No problem.
Just use your head. Bangladesh, 1981.
Outmatched in a fight? No problem.
Just call, "Maaa!" Warren and his
big brother, mid-1940s.

15 June 2012

Linguistic Longings

Welcome back. Permit me to step away from the lighter issues I normally write about to address what some might consider indicators of a failing society. Cutting to the chase, I’m troubled because, in American English, (1) “mom” is replacing “mother” and, worse, (2) “no problem” is replacing “you’re welcome.” 
 
Concerns such as these keep me awake at night, or they would keep me awake if I didn’t fall asleep so quickly.
 
“Mother” or “Mom”
 
The first time I heard an adult say, “My mom…” was during a staff meeting. Although that meeting was held in the mid-1980s, I remember it as clearly as if it were yesterday. Why, I wondered, did he use such a personal or juvenile term? 
 
My brother and I called our mother “mom” or “ma.” When we were young, “ma” was generally expressed, “maaaa.” When speaking about her to anyone besides our father (“dad,” I suppose), we would always refer to her as “mother.” 
Warren (L) and family--“mother” and “father” to you.

Put it this way, when I was young, nobody I grew up with, perhaps no one in the entire United States, would ever say “my mom” or “your mom” or “mom” to anyone above the age of approximately five. (This does allow for residents of Alaska and Hawaii, neither of which was yet a state.)

I asked our son and his friends about this change. They opined that “mother” is much too formal. That suggests the friends I grew up with thought my mother was too formal. Nope. Never happened. She would squirt them with a seltzer bottle just as quickly as she would squirt me.

I’ve seen statements differentiating the two terms: “mother” gave birth to you; “mom” raised you. Oh, please! Save these sappy definitions for greeting cards (not Tulip Fantasy cards of course). Anyway, I’m sure there were moments our mom wished our mother hadn’t given birth to us.

“You’re Welcome” or “No Problem”

Regarding my second concern, I cannot remember the first time my “thank you” drew a “no problem” response. Years before I retired, instant messages at work were already using “np.” No doubt you’ve experienced it and are aware that it’s becoming more and more common.

Don’t think there hasn’t been resistance, at times hostile. Unlike mother versus mom, discussion regarding the propriety of “no problem” is peppered across the Internet, going back at least five years, probably much longer. If you search, you’ll find pros and cons, whys and who cares, the relative newness of “you’re welcome”--as if that mattered--and comparisons with other languages.

Comparing with other languages isn’t very convincing when “no-problem” defenders point to “de nada,” the usual Spanish response to gracias (thank you). De nada is closer to “it’s nothing” or “think nothing of it.” Wouldn’t that sound much nicer than “no problem,” especially in a Spanish accent?

Wrap Up
 
Am I too late? It seems that mom has already replaced mother, and I suspect the argument about “no problem” won’t be won or lost; the change will just occur. Linguistic change is inevitable, whether it’s vocabulary, sentence structure or pronunciations (www.nsf.gov/news/special_reports/linguistics/change.jsp). 
 
If “you’re welcome” has become too formal, at least for those with moms, I’d better find something else that would keep me up at night; you know, if I didn’t fall asleep so quickly.
 
Thanks for stopping by. (Please respond, “You’re welcome” or “De nada.”)