My feeling is that there is already widespread knowledge ABOUT endangered languages, as the new UNESCO Decade devoted to them shows. What's missing is not facts about different categories of endangerment, or numbers about X versus Y, but intelligent ideas about how to solve the problem (if it really is an existential problem) of maintaining, sustaining, or revitalizing endangered languages. The problem is huge and often poorly explained. We need good arguments about what it means to benefit from the content of these languages and to embrace a green, inclusive, yet exponentially data-driven future.
At the heart of the upcoming UNESCO decade of projects to revitalize endangered languages, there’s one obvious yet curious fact: we can extract a “language” from its community setting of dialog and narrative flows, and manipulate it as an autonomous piece of knowledge. How does a finite code of signs and rules of syntax and pronunciation compete with — or complete — our everyday talk and texting?
A chunk of structured knowledge called “Quechua”, for example, contains standardized information about a given language. Not how to speak or use it — that is a learnt ability. …
Beyond the current trend for unmanned rocket launches aimed at the sub-orbital, orbital and lunar-orbital markets, and more interestingly at the remote target of Mars, the possibility of broader spacefaring as an integral part of our planetary destiny is now on the table for the post-2050 generation.
Some fear that space will become weaponized and evolve into an off-world battlefield between rival Earth interests. Others see it as a new terrain for engagement where nations and enterprises can thrive and compete together under the rule of law. …
Poetry is language in free fall — no one is there to be held to account for its meaning. This is why machine verse can sound so convincing: it can be synthesized from anything in the canon, and challenges us to use our imaginations to make sense of it. Just like all poetry.
The deeper interest of the neural AI program Deep-speare, then, lies beyond the fact that any number of poems, sayings, proverbs, and formulaic language entities can be generated from data stashes by tweaking the weightings inside an algorithm. …
Back in the 1960s, media theorist Marshall McLuhan expounded a simple story about the evolution of media technology: (alphabetic) writing, print, photography, film, radio and TV are all extensions of our natural sensorium. Alphabetic technology, for example, along with Chappe’s telegraph and similar devices, was a visual (hence inspectable) extension of human speech’s natural capacity to produce aural messages. TV (along with microscopes, telescopes and X-rays) was an extension of our visual capacity to view events, while radio and telephones were an extension of our mouths and eardrums to make distant contacts.
In this tale, the history of technology recounts…
“Greater is he who prophesies than he who speaks in tongues” (Paul of Tarsus, 1 Cor. 14:5)
You can look back ten thousand years and try to understand how we got here and what kind of cycles and patterns underlie our historical development. Writers such as Arnold Toynbee or today’s Peter Turchin and his school of data-driven cliodynamics take this approach. Call it deep history.
Or you can scry forwards and imagine what it would be like for humans to communicate in ten thousand years’ time. In 1984, the eminent semiotician Thomas Sebeok was asked by the US government to…
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.