Dan Flickinger, a senior researcher at Stanford University, explains the grammatical phenomenon of vanishing words in this CAS Oslo lunch seminar.
When an English speaker looks at the sentence ‘We gave books to Kim and magazines to Chiang,’ the brain fills in the blanks. Obviously, the sentence is a more efficient way of saying ‘We gave books to Kim, and we gave magazines to Chiang.’
It’s a phenomenon known as ‘gapping,’ referring to the gap left behind when superfluous words are removed from a sentence without disturbing its grammatical integrity.
But while such sentences are easy for most readers to understand, they are ‘extremely inconvenient’ for scholars such as Dan Flickinger, a senior researcher at Stanford University.
In this autumn edition of the Centre for Advanced Study’s (CAS Oslo) lunch seminar series, Flickinger explains some of the challenges that he and other computational linguists are wrestling with as they work to create formal grammar models of natural languages.
Such models could greatly improve the accuracy of translation software and educational tools, among other applications. But ‘stumbling blocks’ such as gapping complicate that work: how do you write a rule that teaches a computer what the brain just ‘gets’?
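As a minimal illustration of why this is hard, here is a toy Python sketch (an assumption for this article, not Flickinger’s actual grammar formalism) that restores the elided subject and verb in the example sentence. It only works because the sentence happens to follow one fixed pattern; a real grammar model must handle gapping in all its variations.

```python
def expand_gapping(sentence: str) -> str:
    """Expand a gapped coordination like
    'We gave books to Kim and magazines to Chiang'
    into two full clauses, assuming a fixed
    'subject verb object to recipient' pattern."""
    # Split at the coordinator; the second conjunct is the gapped one.
    first, _, second = sentence.partition(" and ")
    words = first.split()
    subject, verb = words[0], words[1]  # e.g. 'We', 'gave'
    # The second conjunct lacks a subject and verb; copy them over.
    return f"{first}, and {subject.lower()} {verb} {second}"

print(expand_gapping("We gave books to Kim and magazines to Chiang"))
# -> We gave books to Kim, and we gave magazines to Chiang
```

The brittleness is the point: change the word order, the number of conjuncts, or which words are gapped, and this string-matching rule fails, which is why computational linguists build general formal models rather than pattern-by-pattern fixes.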