Today’s New Scientist brings with it the vastly uninformative claim that…
“A STATISTICAL method that picks out the most significant words in a book could help scholars decode ancient texts like the Voynich manuscript – or even messages from aliens.”
True, it “could” – but you have to say that the overwhelming probability is that it (just like most other methods before it) will fall flat on its hairy statistical arse when applied to truly difficult things. And what are the real chances we’d be able to decode a putative alien language based purely on some statistician’s semantic say-so? Practically nil, I confidently predict: come on, people – decode dolphin language first and we’ll be convinced you’re onto something. (Hint: it has 200 words for “tasty” and 600 words for “fish”).
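For the record, here’s roughly the shape of test these “significant word” papers tend to run – a minimal sketch of the general word-clustering idea (topical words bunch up in a text, grammatical glue spreads out evenly, and you score each word against a shuffled null), emphatically not the authors’ actual algorithm; the transcription filename in the usage comment is just a placeholder:

```python
# A rough sketch (NOT the paper's actual algorithm) of the usual idea behind
# "significant word" statistics: keywords cluster, function words don't.
import random
from collections import defaultdict
from statistics import mean, pstdev

def clustering_scores(tokens, min_count=10, shuffles=50, seed=0):
    rng = random.Random(seed)
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok].append(i)

    def gap_cv(pos):
        # coefficient of variation of gaps between successive occurrences:
        # high CV = bursty/clustered, low CV = evenly spread out
        gaps = [b - a for a, b in zip(pos, pos[1:])]
        m = mean(gaps)
        return pstdev(gaps) / m if m else 0.0

    n = len(tokens)
    scores = {}
    for tok, pos in positions.items():
        if len(pos) < min_count:
            continue
        observed = gap_cv(pos)
        # null model: the same number of occurrences scattered at random
        null = []
        for _ in range(shuffles):
            rand_pos = sorted(rng.sample(range(n), len(pos)))
            null.append(gap_cv(rand_pos))
        mu, sd = mean(null), pstdev(null)
        scores[tok] = (observed - mu) / sd if sd else 0.0
    return scores

# usage (filename purely illustrative): feed it any tokenised text,
# an EVA transcription stream included.
# tokens = open("eva_transcription.txt").read().split()
# top = sorted(clustering_scores(tokens).items(), key=lambda kv: -kv[1])[:20]
```

Run that over Moby-Dick and “whale” floats to the top; run it over Voynichese and you get a ranked list of… something. Which is precisely the problem I’m about to get to.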
Honestly, statisticians like these simply don’t ‘get’ the Voynich Manuscript at all – they misread the EVA transcription (for example) as a regular, smooth surface for analysis, while completely failing to see that the VMs is an object that has been constructed to mislead, not to communicate.
As Alfred Korzybski noted long ago (but few now bother to consider), “the map is not the territory”. *sigh*
It seems to me that too many decipherers treat the Voynich as if it’s just a long string of text (as the statistical method described by New Scientist might), instead of a physical artifact which bears the marks of many owners. Most decryption attempts – like that last one involving hedgehog kidney recipes – don’t even seem to take the illustrations into account.
I find it kind of interesting.
In the article linked by the New Scientist, they never mention the Voynich MS. Is this an association made by the columnist, or is there other work from these authors available?
One thing worth looking at would be whether the VMs text has such significant words or not.
I just thought I should point out that Cipher Mysteries would not condone eating hedgehogs, even if that translation did ultimately (much to my surprise) turn out to be correct. 😮
Generally, the tricky issue of VMs “words” gets treated far too superficially. I think that there is plenty of evidence demonstrating that the chances of there being a trivial 1-to-1 mapping of Voynichese-to-text are vanishingly small – conversely, the presence of any kind of cryptographic confounding / obfuscation stage (however simple) in the middle would shoot holes in any similar kind of semantic-y statistical test.
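To make that concrete, here’s a deliberately daft toy – a purely hypothetical transform I’ve just invented for illustration, not in any way a claim about how Voynichese was actually generated. Put even one trivial re-chunking stage between plaintext and page, and the “words” that any significant-word statistic ends up scoring are no longer the plaintext’s words at all:

```python
# A toy illustration (purely hypothetical): a trivial "confounding" stage
# between plaintext and page destroys the word units that clustering
# statistics depend on.
def toy_verbose_cipher(plaintext, group_len=5):
    # each letter becomes a two-glyph group; spaces are dropped entirely,
    # then the glyph stream is re-chopped into fixed-length fake "words"
    table = {c: c + "x" for c in "abcdefghijklmnopqrstuvwxyz"}
    stream = "".join(table.get(c, "") for c in plaintext.lower())
    return [stream[i:i + group_len] for i in range(0, len(stream), group_len)]

# toy_verbose_cipher("the map is not the territory")
# -> ['txhxe', 'xmxax', 'pxixs', ...]   (plaintext word boundaries are gone)
```

Score *those* chunks with any keyword statistic you like: whatever it ranks highly, it won’t be the plaintext’s significant words.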
Nick,
I realise that your interests, at present – in Sept 2023 – lie in a very different subject, but I wonder if you – or perhaps a guest author? – could find some time to bring the rest of us up to speed on what’s happening in the statistical-AI side of Voynich studies.
For myself, I get the general impression that quite a few researchers are now doing nothing but preparing texts, running optical scans and pinning their hopes on what they regard as a more scientific approach. But as I say, that’s only a general impression and it would be great to know more.