You might instead ask: “Was the author of the Voynich Manuscript a nymphomaniac lesbian from Baden Baden obsessed with clysters?”

Or how about: “Was the author of the Voynich Manuscript a medieval psychoactive drugs harvester from (the place now known as) Milton Keynes?”

Or: “Was the author of the Voynich Manuscript a Somalian Humiliatus obsessed with mis-shapen vegetables starting with the letter ‘A’, writing down the results of a six-year-long trek through the Amazon rainforest in a perversely private language?”

The answers to these are, errrm, no, no, and no (respectively).

When the Voynich Manuscript contains so many unexplained points of data (a thousand? Ten thousand?), why on earth should I or anyone else spend more than a minimal amount of time evaluating a Voynich theory that seems to attempt to join together just two of them with what can only be described as the flimsiest of thread?

What – a – waste – of – time – that – would – be.

As I wrote before, I think we have four foundational challenges to tackle before we can get ourselves into a position where we can understand Voynichese properly, regardless of what Voynichese actually is:

* Task #1: Transcribing Voynichese into a reliable raw transcription e.g. EVA qokeedy
* Task #2: Parsing the raw transcription to determine the fundamental units (its tokens) e.g. [qo][k][ee][dy]
* Task #3: Clustering the pages / folios into groups that behave differently e.g. Currier A vs Currier B
* Task #4: Normalizing the clusters i.e. understanding how to map text in one cluster onto text in another cluster

This post relates to Task #2, parsing Voynichese.

Parsing Voynichese

Many recent Voynichese researchers seem to have forgotten (or, rather, perhaps never even knew) that the point of the EVA transcription alphabet wasn’t to define the actual / only / perfect alphabet for Voynichese. Rather, it was designed to break the deadlock that had occurred: circa 1995, just about every Voynich researcher had a different idea about how Voynichese should be parsed.

Twenty years on, and we still haven’t got any consensus (let alone proof) about even a single one of the many parsing issues:
* Is EVA qo two characters or one?
* Is EVA ee two characters or one?
* Is EVA ii two characters or one?
* Is EVA iin three characters or two or one?
* Is EVA aiin four characters or three or two or one?
…and so forth.

And so the big point of EVA was to try to provide a parse-neutral stroke transcription that everyone could work on and agree on, even if they happened to disagree about just about everything else. (Which, as it happens, they tend to do.)

The Wrong Kind Of Success

What happened next was that, as far as meeting the challenge of getting people to talk a common ‘research language’ goes, EVA succeeded wildly. It even became the de facto standard when writing up papers on the subject: few technical Voynich Manuscript articles have been published since that don’t mention (for example) “daiin daiin” or “qotedy qotedy”.

However, the long-hoped-for debate about trying to settle the numerous parsing-related questions simply never happened, leaving Voynichese even more talked about than before but just as unresolved as ever. And so I think it is fair to say that EVA achieved quite the wrong kind of success.

By which I mean: the right kind of success would be where we could say anything definitive (however small) about the way that Voynichese works. And just about the smallest proof would be something tangible about what groups of letters constitute a functional token.

For example, it would be easy to assert that EVA ‘qo’ acts as a functional token, and that all the instances of (for example) ‘qa’ are very likely copying mistakes or transcription mistakes. (Admittedly, a good few o/a instances are ambiguous to the point that you just can’t reasonably decide based on the scans we have). To my eyes, this qo-is-a-token proposition seems extremely likely. But nobody has ever proved it: in fact, it almost seems that nobody has got round to trying to prove anything that ‘simple’ (or, rather, ‘simple-sounding’).

Proof And Puddings

What almost nobody seems to want to say is that it is extremely difficult to construct a really sound statistical argument for even something as basic as this. The old saying goes that “the proof of the pudding is in the eating” (though the word ‘proof’ here is actually a linguistic fossil, meaning ‘test’): but in statistics, the normal case is that most attempts at proof quickly make a right pudding out of it.

As a reasonably-sized community of often-vocal researchers, it is surely a sad admission that we haven’t yet put together a proper statistical testing framework for questions about parsing. Perhaps what we all need to do with Voynichese is to construct a template for statistical tests of basic – and when I say ‘basic’ I really do mean unbelievably basic – propositions. What would this look like?

For example: for the qo-is-a-token proposition, the null hypothesis could be that q and o are weakly dependent (and hence the differences are deliberate and not due to copying errors), while the alternative hypothesis could be that q and o are strongly dependent (and hence the differences are instead due to copying errors): but what is the p-value in this case? Incidentally:

* For A pages, the counts are: (qo 1063) (qk 14) (qe 7) (q 5) (qch 1) (qp 1) (qckh 1), i.e. 29/1092 = 2.66% non-qo cases.
* For B pages, the counts are: (qo 4049) (qe 55) (qckh 8) (qcth 8) (q 8) (qa 6) (qch 3) (qk 3) (qt 2) (qcph 2) (ql 1) (qp 1) (qf 1), i.e. 98/4147 = 2.36% non-qo cases.

But in order to calculate the p-value here, we would need to be able to estimate the Voynich Manuscript’s copying error rate…

Voynichese Copying Error Rate

In the past, I’ve estimated Voynichese error rates (whether in the original copying or in the transcription to EVA) at between 1% and 2% (i.e. a mistake every 50-100 glyphs). This was based on a number of different metrics, such as the qo-to-q[^o] ratio, the ain-to-oin ratio, the aiin-to-oiin ratio, the air-to-oir ratio, e.g.:

A pages:
* (aiin 1238) (oiin 110) i.e. 8.2% (I suspect that Takeshi Takahashi may have systematically over-reported these, but that’s a matter for another blog post).
* (ain 241) (oin 5) i.e. 2.0% error rate if o is incorrect there
* (air 114) (oir 3) i.e. 2.6% error rate

B pages:
* (aiin 2304) (oiin 69) i.e. 2.9% error rate
* (ain 1403) (oin 18) i.e. 1.2% error rate
* (air 376) (oir 6) i.e. 1.6% error rate

It’s a fact of life that ciphertexts get miscopied (even printed ciphers suffer from this, as Tony Gaffney has reported in the past), so it seems unlikely that the Voynich Manuscript’s text would have a copying error rate as low as 0.1% (i.e. a mistake every 1000 glyphs). At the same time, an error rate as high as 5% (i.e. every 20 glyphs) would arguably seem too high. But if the answer is somewhere in the middle, where is it? And is it different for Hand 1 and Hand 2 etc?

More generally, is there any better way for us to estimate Voynichese’s error rate? Why isn’t this something that researchers are actively debating? How can we make progress with this?

(Structure + Errors) or (Natural Variation)?

This is arguably the core of a big debate that nobody is (yet) having. Is it the case that (a) Voynichese is actually strongly structured, and most of the deviations we see are copying and/or transcription errors, or that (b) Voynichese is weakly structured, with the bulk of the deviations arising from other, more natural and “language-like” processes? I think this cuts far deeper into the real issue than the superficial is-it-a-language-or-a-cipher bun-fight that normally passes for debate.

Incidentally, a big problem with entropy studies (and indeed with statistical studies in general) is that they tend to over-report the exceptions to the rule: for something like qo, it is easy to look at the instances of qa and conclude that these are ‘obviously’ strongly-meaningful alternatives to the linguistically-conventional qo. But from the strongly-structured point of view, they look well-nigh indistinguishable from copying errors. How can we test these two ideas?

Perhaps we might consider a statistical study that uses this kind of p-value analysis to assess the likeliest level of copying error? Or alternatively, we might consider whether linguistic hypotheses necessarily imply a lower practical bound for the error rate (and whether we can calculate this lower bound). Something to think about, anyway.

All in all, EVA has been a huge support for us all, but I do suspect that more recently it may have closed some people’s eyes to the difficulties both with the process of transcription and with the nature of a document that (there is very strong evidence indeed) was itself copied. Alfred Korzybski famously wrote, “A map is not the territory it represents”: similarly, we must not let possession of a transcription give us false confidence that we fully understand the processes by which the original shapes ended up on the page.

As I see it, there are four foundational tasks that need to be done to wrangle Voynichese into a properly usable form:

* Task #1: Transcribing Voynichese text into a reliable computer-readable raw transcription e.g. EVA qokeedy
* Task #2: Parsing the raw transcription to determine Voynichese’s fundamental units (its tokens) e.g. [qo][k][ee][dy]
* Task #3: Clustering the pages / folios into groups where the text shares distinct features e.g. Currier A vs Currier B
* Task #4: Normalizing the clusters e.g. how A tokens / patterns map to B tokens / patterns, etc

I plan to tackle these four areas in separate posts, to try to build up a substantive conversation on each topic in turn.

Takahashi’s EVA transcription

Rene Zandbergen points out that, of all the different “EVA” transcriptions that appear interleaved in the EVA interlinear file, “the only one that was really done in EVA was the one from Takeshi. He did not use the fully extended EVA, which was probably not yet available at that time. All other transcriptions have been translated from Currier, FSG etc to EVA.”

This is very true, and is the main reason why Takeshi Takahashi’s transcription is the one most researchers tend to use. Yet aside from not using extended EVA, there are a fair few idiosyncratic things Takeshi did that reduce its reliability, e.g. as Torsten Timm points out, “Takahashi sometimes reads ikh where other transcriptions read ckh”.

So the first thing to note is that the EVA interlinear transcription file’s interlinearity arguably doesn’t actually help us much at all. In fact, until such time as multiple genuinely EVA transcriptions get put in there, its interlinearity is more of a historical burden than something that gives researchers any kind of noticeable statistical gain.

What this suggests to me is that, given the high quality of the scans we now have, we really should be able to collectively determine a single ‘omega’ stroke transcription: and even where any ambiguity remains (see below), we really ought to be able to capture that ambiguity within the EVA 2.0 transcription itself.

EVA, Voyn-101, and NEVA

The Voyn-101 transcription used a glyph-based Voynichese transcription alphabet derived by the late Glen Claston, who invested an enormous amount of his time to produce a far more all-encompassing transcription than EVA’s. GC was convinced that many (apparently incidental) differences in the ways letter shapes were put on the page might encipher different meanings or tokens in the plaintext, and so ought to be captured in a transcription.

So in many ways we already have a better transcription, even if it is one very much tied to the glyph-based frame of reference that GC was convinced Voynichese used (he firmly believed in Leonell Strong’s attempted decryption).

Yet some aspects of Voynichese writing slipped through the holes in GC’s otherwise finely-meshed net, e.g. the scribal flourishes on word-final EVA n shapes, a feature that I flagged in Curse back in 2006. And I would be unsurprised if the same were to hold true for word-final -ir shapes.

All the same, GC’s work on v101 could very well be a better starting point for EVA 2.0 than Takeshi’s EVA. Philip Neal writes: “if people are interested in collaborating on a next generation transcription scheme, I think v101/NEVA could fairly easily be transformed into a fully stroke-based transcription which could serve as the starting point.”

EVA, spaces, and spatiality

For Philip Neal, one key aspect of Voynichese that EVA neglects is measurement of “the space above and below the characters – text above, blank space above etc.”

To which Rene adds that “for every character (or stroke) its coordinates need to be recorded separately”, for the reason that “we have a lot of data to do ‘language’ statistics, but no numerical data to do ‘hand’ statistics. This would, however, be solved by […having] the locations of all symbols recorded plus, of course their sizes. Where possible also slant angles.”

The issue of what constitutes a space (EVA .) or a half-space (EVA ,) has also not been properly defined. To get around this, Rene suggests that we should physically measure all spaces in our transcription and then use a software filter to transform that (perhaps relative to the size of the glyphs around it) into a space (or indeed half-space) as we think fit.

To which I’d point out that there are also many places where spaces and/or half-spaces seem suspect for other reasons. For example, it would not surprise me if the spaces around many free-standing ‘or’ groups (such as the famous “space transposition” sequence “or or oro r”) turn out not to be actual spaces at all. So there may well be context-dependent space-recognition algorithms / filters that we would want to use.

Though this at first sounds like a great deal of work to be contemplating, Rene is undaunted. To make it work, he thinks that “[a] number of basics should be agreed, including the use of a consistent ‘coordinate system’. Again, there is a solution by Jason Davies [i.e. voynichese.com], but I think that it should be based on the latest series of scans at the Beinecke (they are much flatter). My proposal would be to base it on the pixel coordinates.”

For me, even though a lot of this would be nice to have (and I will be very interested to see Philip’s analysis of tall gallows, long-tailed characters and space between lines), the #1 frustration about EVA is still the inconsistencies and problems of the raw transcription itself.

Though it would be good to find a way of redesigning EVA 2.0 to take these into account, perhaps it would be better to stage delivery of these features (hopefully via OCR!), just so we don’t end up designing something so complicated that it never actually gets done. 🙁

EVA and Neal Keys

One interesting (if arguably somewhat disconcerting) feature of Voynichese was pointed out by Philip Neal some years ago. He noted that where Voynichese words end in a gallows character, they almost always appear on the top line of a page (sometimes the top line of a paragraph). Moreover, these show a strong preference for being single-leg gallows (EVA p and EVA f); for appearing in nearby pairs with a short, often anomalous-looking stretch of text between them; and for occurring about 2/3rds of the way across the line in which they fall.

Rather than call these “top-line-preferring-single-leg-gallows-preferring-2/3rd-along-the-top-line-preferring-anomalous-text-fragments”, I called these “Neal Keys”. This is a term that other researchers (particularly linguists) have taken exception to ever since, because it superficially sounds as though it presupposes a cryptographic mechanism. From my point of view, those same researchers didn’t object too loudly when cryptologist Prescott Currier called his Voynichese text clusters “languages”: so perhaps on balance we’re even, OK?

I only mention this because I think that EVA 2.0 ought to include a way of flagging likely Neal Keys, so that researchers can filter them in or out when they carry out their analyses.

EVA and ambiguity

As I discussed previously, one problem with EVA is that it doesn’t admit any uncertainty: by which I mean that once a Voynichese word has been transcribed into EVA, it is (almost always) then assumed to be 100% correct by all the people and programmes that subsequently read it. Yet we now have good enough scans to be able to tell that this is simply not true, insofar as there are a good number of words that do not conform to EVA’s model for Voynichese text, and for which just about any transcription attempt will probably be unsatisfactory.

For example, the word at the start of the fourth line on f2r:

Here, the first part could possibly be “sh” or “sho”, while the second part could possibly be “aiidy” or “aiily”: in both cases, however, any transcriber attempting to reduce it to EVA would be far from certain.

Currently, the most honest way to transcribe this in EVA would be “sh*,aii*y” (where ‘*’ indicates “don’t know / illegible”). But this is an option that isn’t taken as often as it should be.

I suspect that in cases like this, EVA should be extended to try to capture the uncertainty. One possible way would be to include a percentage probability that an alternate reading is correct. In this example, the EVA transcription could be “sh!{40%=o},aiid{40%=*}y”, where “!{40%=o}” would mean “the most likely reading is that there is no character there (i.e. ‘!’), but there is a 40% chance that the character should be ‘o’”.

For those cases where two or more EVA characters are involved (e.g. where there is ambiguity between EVA ch and EVA ee), the EVA string would instead look like “ee{30%=ch}”. And on those occasions where there is a choice between a single letter and a letter pair, this could be transcribed as “!e{30%=ch}”.

For me, the point about transcribing with ambiguity is that it allows people doing modelling experiments to filter out words that are ambiguous (i.e. by including a [discard words containing any ambiguous glyphs] check box). Whatever’s going on in those words, it would almost always be better to ignore them rather than to include them.

EVA and Metadata

Rene points out that the metadata “were added to the interlinear file, but this is indeed independent from EVA. It is part of the file format, and could equally be used in files using Currier, v101 etc.” So we shouldn’t confuse the usefulness of EVA with its metadata.

In many ways, though, what we would really like to have in the EVA metadata is some really definitive clustering information: though the pages are currently labelled A and B, there are (without any real doubt) numerous more finely-grained clusters that have yet to be determined in a completely rigorous and transparent (open-sourced) way. However, that is Task #3, which I hope to return to shortly.

In some ways, the kind of useful clustering I’m describing here is a kind of high-level “final transcription” feature, i.e. of how the transcription might well look much further down the line. So perhaps any talk of transcription metadata at this stage is getting ahead of ourselves.

How to deliver EVA 2.0?

Rene Zandbergen is in no doubt that EVA 2.0 should not be in an interlinear file, but in a shared online database. There is indeed a lot to be said for having a cloud database containing a definitive transcription that we all share, extend, mutually review, and write programmes to access (say, via RESTful commands).

It would be particularly good if the accessors to it included a large number of basic filtering options: by page, folio, quire, recto/verso, Currier language, [not] first words, [not] last words, [not] first lines, [not] labels, [not] key-like texts, [not] Neal Keys, regexps, and so forth – a bit like Jason Davies’ voynichese.com on steroids. 🙂

It would also be sensible if this included open-source (and peer-reviewed) code for calculating statistics – raw instance counts, post-parse statistics, per-section percentages, 1st and 2nd order entropy calculations, etc.

Many of these I built into my JavaScript Voynichese state machine from 2003: there, I wrote a simple script to convert the interlinear file into JavaScript (developers now would typically use JSON or I-JSON).

However, this brings into play the questions of boundaries (how far should this database go?), collaboration (who should build this database?), methodology (what language or platform should it use?), and also of resources (who should pay for it?).

One of the strongest reasons for EVA’s success was its simplicity: and given the long (and complex) shopping list we appear to have, it’s very hard to see how EVA 2.0 will be able to compete with that. But perhaps we collectively have no choice now.

In the Voynich research world, several transcriptions of the Voynich Manuscript’s baffling text have been made. Arguably the most influential of these is EVA: this originally stood for “European Voynich Alphabet”, but was later de-Europeanized into “Extensible Voynich Alphabet”.

The Good Things About EVA

EVA has two key aspects that make it particularly well-adapted to Voynich research. Firstly, the vast majority of Voynichese words transcribed into EVA are pronounceable (e.g. daiin, qochedy, chodain, etc): this makes them easy to remember and to work with. Secondly, it is a stroke-based transcription: even though there are countless ways in which the individual strokes could possibly be joined together into glyphs (e.g. ch, ee, ii, iin) or parsed into possible tokens (e.g. qo, ol, dy), EVA does not try to make that distinction – it is “parse-neutral”.

Thanks to these two aspects, EVA has become the central means by which Voynich researchers trying to understand its textual mysteries converse. In those terms, it is a hugely successful design.

The Not-So-Good Things About EVA

In retrospect, some features of EVA’s design are quite clunky:
* Using ‘s’ to code both for the freestanding ‘s’-shaped glyph and for the left-hand half of ‘sh’
* Having two ways of coding ligatures (either with round brackets or with upper-case letters)
* Having so many extended characters, many of which are for shapes that appear exactly once

There are other EVA design limitations that prevent various types of stroke from being captured:
* Having only limited ways of encoding the various ‘sh’ “plumes” (this particularly annoyed Glen Claston)
* Having no way of encoding the various ‘s’ flourishes (this also annoyed Glen)
* Having no way of encoding various different ‘-v’ flourishes (this continues to annoy me)

You also run into various annoying inconsistences when you try to use the interlinear transcription:
* Some transcribers use extended characters for weirdoes, while others use no extended characters at all
* Directional tags such as R (radial) and C (circular) aren’t always used consistently
* Currier language (A / B) isn’t recorded for all pages
* Not all transcribers use the ‘,’ (half-space) character
* What one transcriber considers a space or half-space, another leaves out completely

These issues have led some researchers to either make their own transcriptions (such as Glen Claston’s v101 transcription), or to propose modifications to EVA (such as Philip Neal’s little-known ‘NEVA’, which is a kind of hybrid, diacriticalised EVA, mapped backwards from Glen Claston’s transcription).

However, there are arguably even bigger problems to contend with.

The Problem With EVA

The first big problem with EVA is that in lots of cases, Voynichese just doesn’t want to play ball with EVA’s nice neat transcription model. If we look at the following word (it’s right at the start of the fourth line on f2r), you should immediately see the problem:

The various EVA transcribers tried gamely to encode this (they tried “chaindy”, “*aiidy”, and “shaiidy”), but the only thing you can be certain of is that they’re probably all wrong. Because of the number of difficult cases such as this, EVA should perhaps have included a mechanism to let you flag an entire word as unreliable, so that people trying to draw inferences from EVA could filter it out before it messes up their stats.

(There’s a good chance that this particular word was miscopied or emended: you’d need to do a proper codicological analysis to figure out what was going on here, which is a complex and difficult activity that’s not high up on anyone’s list of things to do.)

The second big problem with EVA is that of low quality. This is (I believe) because almost all of the EVA transcriptions were done from the Beinecke’s ancient (read: horrible, nasty, monochrome) CopyFlo printouts, i.e. long before the Beinecke released even the first digital image scan of the Voynich Manuscript’s pages. Though many CopyFlo pages are nice and clean, there are still plenty of places where you can’t easily tell ‘o’ from ‘a’, ‘o’ from ‘y’, ‘ee’ from ‘ch’, ‘r’ from ‘s’, ‘q’ from ‘l’, or even ‘ch’ from ‘sh’.

And so there are often wide discrepancies between the various transcriptions. For example, looking at the second line of page f24r:

…this was transcribed as:

qotaiin.char.odai!n.okaiikhal.oky-{plant} --[Takahashi]
qotaiin.eear.odaiin.okai*!!al.oky-{plant} --[Currier, updated by Voynich mailing list members]
qotaiin.char.odai!n.okaickhal.oky-{plant} --[First Study Group]

In this specific instance, the Currier transcription is clearly the least accurate of the three: and even though the First Study Group transcription seems closer than Takeshi Takahashi’s transcription here, the latter is frequently more reliable elsewhere.

The third big problem with EVA is that Voynich researchers (typically newer ones) often treat it as if it is final (it isn’t); or as if it is a perfect representation of Voynichese (it isn’t).

The EVA transcription is often unable to reflect what is on the page, and even though the transcribers have done their best to map between the two as best they can, in many instances there is no answer that is definitively correct.

The fourth big problem with EVA is that it is in need of an overhaul, because there is a huge appetite for running statistical experiments on a transcription, and the way it has ended up is often not a good fit for that.

It might be better now to produce not an interlinear EVA transcription (i.e. with different people’s transcriptions interleaved), but a single collective transcription BUT where words or letters that don’t quite fit the EVA paradigm are also tagged as ambiguous (e.g. places where the glyph has ended up in limbo halfway between ‘a’ and ‘o’).

What Is The Point Of EVA?

It seems to me that the biggest problem of all is this: that almost everyone has forgotten that the whole point of EVA wasn’t to close down discussion about transcription, but rather to enable people to work collaboratively even though just about every Voynich researcher has a different idea about how the individual shapes should be grouped and interpreted.

Somewhere along the line, people have stopped caring about the unresolved issue of how to parse Voynichese (e.g. to determine whether ‘ee’ is one letter or two), and just got on with doing experiments using EVA but without understanding its limitations and/or scope.

EVA was socially constructive, in that it allowed people with wildly different opinions about how Voynichese works to discuss things with each other in a shared language. However, it also inadvertently helped promote an inclusive accommodation whereby people stopped thinking about trying to resolve difficult issues (such as working out the correct way to parse the transcription).

But until we can find a way to resolve such utterly foundational issues, experiments on the EVA transcription will continue to give misleading and confounded results. The big paradox is therefore that while the EVA transcription has helped people discuss Voynichese, it hasn’t yet managed to help people advance knowledge about how Voynichese actually works beyond a very superficial level. *sigh*

For far too long, Voynich researchers have (in my opinion) tried to use statistical analysis as a thousand-ton wrecking ball, i.e. to knock down the whole Voynich edifice in a single giant swing. Find the perfect statistical experiment, runs the train of thought, and all Voynichese’s skittles will clatter down. Strrrrike!

But… even a tiny amount of reflection should be enough to show that this isn’t going to work: the intricacies and contingencies of Voynichese shout out loud that there will be no single key to unlock this door. Right now, the tests that get run give results that are – at best – like peering through multiple layers of net curtains. We do see vague silhouettes, but nothing genuinely useful appears.

Whether you think Voynichese is a language, a cipher system, or even a generated text doesn’t really matter. We all face the same initial problem: how to make Voynichese tractable, by which I mean how to flatten it (i.e. regularize it) to the point where the kind of tests people run do stand a good chance of returning results that are genuinely revealing.

A staging point model

How instead, then, should we approach Voynichese?

The answer is perhaps embarrassingly obvious and straightforward: we should collectively design and implement statistical experiments that help us move towards a series of staging points.

Each of the models on the right (parsing model, clustering model, and inter-cluster maps) should be driven by clear-headed statistical analysis, and would help us iterate towards the staging points on the left (parsed transcription, clustered parsed transcription, final transcription).

What I’m specifically asserting here is that researchers who perform statistical experiments on the raw stroke transcription in the mistaken belief that this is as good as a final transcription are simply wasting their time: there are too many confounding curtains in the way to ever see clearly.

The Curse, statistically

A decade ago, I first talked about “The Curse of the Voynich”: my book’s title was a way of expressing the idea that there was something about the way the Voynich Manuscript was constructed that makes fools of people who try to solve it.

Interestingly, it might well be that the diagram above explains what the Curse actually is: that all the while people treat the raw (unparsed, unclustered, unnormalized) transcription as if it were the final (parsed, clustered, normalized) transcription, their statistical experiments will continue to be confounded in multiple ways, and will show them nothing useful.

Back in 2006, I reasoned (in The Curse of the Voynich) that if the nine-rosette page’s circular city with a castle at the top…

…represented Milan (one of only three cities renowned for their circular shape), then the presence of swallowtail merlons on the drawing implied it must have been drawn after 1450, when Francesco Sforza began rebuilding the old Porta Giovia castle (which had been wrecked during the Ambrosian Republic) as [what is now known as] the Castello Sforzesco.

Ten Years Later, A Challenge

However, Mark Knowles recently challenged me on this: how was I so sure that the older castle on the site didn’t also have swallowtail merlons?

While writing Curse, for the history of Milan I mainly relied on the collection of essays and drawings in Vergilio Vercelloni’s excellent “Atlante Storico di Milano, Città di Lombardia”, such as these two pictures from Milano fantastica, in “Historia Evangelica et actos apostolorum cum alijs illorum temporum eventibus cum figuris crebioribus delineatis”, circa 1380:

…and this old favourite (which Boucheron notes [p.199] is a copy probably made between 1456 and 1472 of an original made in the 1420s)…

On the surface, it seemed from these as though I had done enough. But coming back to it, might I have been too hasty? I decided to fetch down my copies of Evelyn Welch’s “Art and Authority in Renaissance Milan” and Patrick Boucheron’s “Le Pouvoir de Bâtir” from the book overflow in the attic and have another look…

Revisiting Milan’s Merlons

What did I find? Well: firstly, tucked away in a corner of a drawing by Galvano Fiamma (in the 1330s) of a view of Milan (reproduced as Plate IIa at the back of Boucheron’s book), the city walls appear to have some swallowtail merlons (look just inside the two outermost towers and you should see them):

And in a corner of a drawing by Anovelo da Imbonate depicting and celebrating the 1395 investiture of Gian Galeazzo Visconti (reproduced in Welch p.24), I noticed a tiny detail that I hadn’t picked up on before… yet more swallowtail merlons:

Then, when I looked at other miniatures by the same Anovelo da Imbonate, I found two other (admittedly stylized) depictions of Milan by him that also unmistakeably have swallowtail merlons:

So it would seem that Milan’s city walls may well have had swallowtail merlons prior to 1450. The problem is that the city walls aren’t the same as the Porta Giovia castle walls (built from 1358, according to Corio): and I don’t think we know enough to say whether or not the castle itself had swallowtail merlons. It’s debatable whether the drawing of the 1395 investiture (which took place in the Porta Giovia castle) depicts the castle itself having swallowtail merlons: I just don’t know.

But the short version of the long answer is that because the Porta Giovia castle was only built from 1358-1372 (or thereabouts), we can’t rely on texts written before then (such as Galvano Fiamma’s). And there seems quite good reason to suspect (the Massajo drawing notwithstanding) that the Porta Giovia castle may well have had swallowtail merlons when it was used for the Visconti investiture in 1395. But I don’t know for certain, sorry. 🙁

There are texts that might give us an answer: for example, the (1437) “De Laudibus Mediolanensium urbis panegyricus” by Pier Candido Decembrio (mentioned in Boucheron p.74), or Bernardino Corio’s “Storia di Milano”. There are plenty of documents Boucheron cites in footnotes (pp.202-205), including “Lavori ai castelli di Bellinzona nel periodo visconteo”, Bolletino della Svizzera italiana, XXV, 1903, pp.101-104 (which I’ll leave for another day). But it’s obviously quite a lot of work. 🙁

Finally, I should perhaps add that a few details by Anovelo da Imbonate have an intriguingly Voynichian feel:

Though there were plenty of other miniature artists active in the Visconti court in Milan in the decades up to 1447, parallels between their art and the Voynich Manuscript’s drawings haven’t been explored much to date. Perhaps this is a deficiency in our collective Art Historical view that should be rectified. 🙂

Whether we like it or not, history as practised nowadays is a tower built upon textuality, upon the implicit evidentiality striped within and through texts. Even archaeology (of all but the obscenely distant past) and Art History rely heavily on texts for their reconstructions.

Alternative, explicitly visual approaches to history have lost the battle to control the locus of meaning. The mid-twentieth century Warburg/Saxl/Panofsky dream that highly evolved iconography/iconology might be able to surgically extract the inner semantic life of symbols from their drab syntactical carapaces now seems hopelessly over-optimistic, fit only for the Hollywood cartoons of Dan Brown novels. Sorry, but Text won.

What, then, are contemporary historians to make of the Voynich Manuscript, a barque adrift in a wine-dark sea of textlessness? In VoynichLand, we have letters, letters everywhere, and not a jot for them to read: and without close reading’s robotic exoskeleton to work with, where could such a text-centric generation of scholars begin?

Well, given that the Voynich Manuscript’s text-like writing has so far yielded nothing of obvious substance to linguists or cryptologists (apart from long lists of things that they are sure it is not), historians are only comfortably left with a single door leading to the disco floor…

“Step #1. Start with the pictures.”

Yes, they could indeed start with the pictures: the Voynich’s beguiling, misleading, and crisply non-religious images. These contain plants that are real, distorted, imaginary, and/or impossible; strange circular diagrams; oddly-posed nymphs arranged in tubes and pools; and curious map-like diagrams. They famously lead everywhere and nowhere simultaneously, like a bad mirror-room fight-scene in 1960s Avengers TV episodes.

Without the comforting crutch of referentiality to lean on, we can’t tell whether a given picture happens to parallel one of the plants in Ulisse Aldrovandi’s famous (so-called) “alchemical herbals” (which unfortunately seem to be neither alchemical nor particularly herbal); or whether we’re just imagining that it echoes a specific plant in this week’s interesting Arabic book of wonders; or whether its roots were drawn from a dried sample but its body was imagined; or whether a different one of the remaining three hundred and eighty post-rationalizations that have been made for that page happens to hold true.

But on the bright side, it’s not as if we’re talking about a set of drawings that has previously made fools of just about everyone who has tried to form a sensible opinion about them, right? [*hollow laugh*]

So, “start with the pictures” it is. But what should we do then? Again, there seems little choice:

“Step #2. Find a telling detail.”

In my opinion, here’s where it all starts to go wrong: where the road leads only to a cliff-edge, and one that has a sizeable drop below it into the sea.

The elephant-in-the-room question here is this: if looking for telling details is such a good idea, why is it that more than a century’s worth of looking for telling details has revealed practically nothing?

Is it because everyone who has ever looked at the Voynich Manuscript has been stupid, or inexperienced, or foolish, or delusional, or crazy, or marginal, or naive? Because that’s essentially what would need to be true for your own contribution to bring a new bottle to the party, if all you’re going to do yourself is look for telling details.

The thing that almost nobody seems to grasp is that we collectively have already applied an extraordinary number of eyeballs to this issue.

Even though the Voynich’s imagery has been seen and ‘closely read’ for over a century by all manner of people, to date this has – in terms of finding the single telling detail that can place even part of it within an illustrative or semantic tradition – achieved nothing, zilch, nada.

Incidentally, this leads (I think) to one of only two basic constructional models: (a) the drawings in the Voynich Manuscript are from a self-contained culture whose internal frame of reference sits quite apart from anything we’re used to looking at [a suggestion which I’m certain the palaeography refutes completely]; or (b) the process of making the drawings for the Voynich Manuscript somehow consciously stripped out their referentiality.

But I’m not imagining for a moment that what I’m pointing out will stop anyone else from reinventing this same square wheel: all I’m saying is that this is how people approach the Voynich Manuscript, and why they then get themselves into a one-way tangle.

“Step #3. Draw a big conclusion.”

Finally, this is the point in the chain of the argument where the cart rolls properly over the cliff: though it’s a long way down, at least gravity’s accelerative force means anybody in it won’t have very long to wait before the sea comes up to meet them (relatively speaking).

How is it that anyone can comfortably draw a step #3 macro-conclusion from the itty-bitty (and horrendously uncertain) detail they latched onto in step #2? As proofs go, this step is completely contingent on at least three different things:
(a) on perfect identification of the detail itself,
(b) on perfect correlation with essentially the same thing but in an external tradition, and
(c) on the logical presumption that this is necessarily the only feasible explanation for the correlation

Each of these three would be extremely difficult to prove on its own, never mind when all three are required to be true at the same time for their sum to be true.

In my experience, when people put forward a Voynich Manuscript macro-conclusion based on local correlation with some micro-detail they have noticed, they almost always haven’t noticed how weakly supported their overall argument is. Not only that, but why is it – given how image-rich their external tradition normally is – that they can typically only point to a single image in it that supports their claimed correlation? That is fairly bankrupt, intellectually speaking.

How can we fix this issue?

This is a really hard problem. Art History tends to furnish historians with the illusion that they can use its conceptual tricks and technical ‘flow’ to tackle the Voynich Manuscript one single isolated detail at a time, but this isn’t really true in any useful sense.

A picture is a connected constellation of techniques, formed not only of ways of expressing things, but also of ways of seeing things. And so it’s a mystery why there should be such an otherness to the Voynich Manuscript’s drawings that deconstructing any part of it leaves us with next to nothing in our hands.

Part of this problem is easy to spot, insofar as there are plenty of places where we still can’t tell content from decoration from elaboration from emendation. Even a cursory look at pages such as the nine-rosette page or f116v should elicit the conclusion that they are made up of multiple layers, i.e. multiple codicological contributions.

For me, until someone uses tricks such as DNA analysis and Raman imaging to properly analyze the manuscript’s codicological layers, internal construction, and/or the original bifolio order of each of the sections, too many people will continue trying to read not “the unreadable”, but “the not-yet readable”: all of which will continue to lead to all manner of foolish reasoning and conclusions, as it has done for many decades.

I really want you to understand that this isn’t because people are inherently foolish: rather, it’s because they almost all want to kid themselves that they can draw a solid macro-conclusion from an isolated and uncertain micro-similarity. And all the while that this continues to be the collective research norm, I have little doubt that we’re going to get nowhere.

Alexandra Marraccini’s presentation

You can see the slides and the draft article accompanying Alexandra Marraccini’s recent talk here.

The core of Marraccini’s argument seems to reduce to this: that if one or more of the circular castle roundels in the Voynich Manuscript’s nine-rosette foldout is in fact the same flattened city that appears in BL Sloane MS 4016 f.8v and/or Vat.Chig. F.VII 158 f.12r and/or BNF Lat 6823 f.13r (the first two of which also have a little dragon in one herbal root), then we might be able to place the Voynich Manuscript in one branch of the Tractatus de Herbis tradition (all of which derive from Firenze Biblioteca dipartimentale di Botanica MS 106).

Even though this is arguably a reasonable starting point for future investigation, I’m not yet seeing a lot of methodological ‘air’ between what she’s doing and the mass of detail-driven Voynich single-image theories Marraccini would doubtless wish to distance herself from. The structural weakness of their arguments is still – to a very large degree – her argument’s weakness too.

Going forward, this amounts to a theoretical lacuna which I think she might do well to address: that there is no obvious historical / analytical methodology to apply here that satisfactorily bridges the gap between micro-similarities and macro-conclusion in the absence of accompanying texts. OK, pointing to an absence is perhaps a bit more of a problematique than most historians these days are comfortable with, but I’m only the messenger here, sorry.

Anyway, there’s a nice transcription of the Q&A session she gave after her presentation (courtesy of VViews) here, which I’m sure many Voynich researchers will find interesting.

Oddly, though, the questions from an audience Voynichero with my 2006 book “The Curse of the Voynich” in mind were almost exactly the opposite of what I would myself have asked (had I been there). The single most important question is: why is your argument structurally any better than all the other similar arguments that have been put forward?

So, what is missing here?

The answer to this certainly isn’t working hypotheses about the Voynich Manuscript, because there’s no obvious shortage of those. Even the suggestion that there might be some stemmatic relation (however vague and ill-defined) between the drawings in Voynich Manuscript and BL Sloane MS 4016 has been floating around for some years.

Instead, what I think is missing is a whole set of evidential basics: for example, physical data and associated reasoning that tell us with almost no doubt which paints were original (answer: not many of them) and which were added later; or (perhaps more importantly) what the original bifolio nesting order was.

With these to work with, we could reject many, many incorrect hypotheses: and we might – with just a little bit of luck – possibly be able to use one or two as fixed points to pivot the whole discourse round, like an Archimedean Lever.

The alternative, sadly, is a long sequence of more badly-structured arguments, Groundhog Day-stylee. Even if my ice-carving technique has got stupendously good, it would be nice to have a change, right?

Well, here’s a thing. The Thirteenth Oxford Medieval Graduate Conference, to be held in a month’s time at Merton College (31st March 2017 to 1st April 2017) on the theme of “Time : Aspects and Approaches”, has a Voynich-themed paper in its Manuscripts and Archives session on the second day (11:30am to 1:00pm).

This is “Asphalt and Bitumen, Sodom and Gomorrah: Placing Yale’s Voynich Manuscript on the Herbal Timeline“, presented by Alexandra Marraccini of the University of Chicago. The description runs like this:

Yale Beinecke MS 408, colloquially known as the Voynich manuscript, is largely untouched by modern manuscript scholars. Written in an unreadable cipher or language, and of Italianate origin, but also dated to Rudolphine court circles, the manuscript is often treated as a scholarly pariah. This paper attempts to give the Voynich manuscript context for serious iconographic debate using a case study of Salernian and Pseudo- Apuleian herbals and their stemmae. Treating images of the flattened cities of Sodom and Gommorah from Vatican Chig. F VII 158, BL Sloane 4016, and several other exempla from the Bodleian and beyond, this essays situates the Voynich iconography, both in otherwise unidentified foldouts and in the manuscript’s explicitly plant-based portion, within the tradition of Northern Italian herbals of the 14th-15th centuries, which also had strong alchemical and astrological ties. In anchoring the Voynich images to the dateable and traceable herbal manuscript timeline, this paper attempts to re-situate the manuscript as approachable in a truly scholarly context, and to re-characterise it, no longer as an ahistorical artefact, but as an object rooted in a pictorial tradition tied to a particular place and time.

BL Sloane 4016 is a similar-looking herbal that Voynich researchers know well. Most famously, Alan Touwaide wrote a 500-page scholarly commentary on it (as mentioned in Rene’s summary of Touwaide’s chapter in the recent Yale facsimile). It dates to the 1440s in Lombardy, and even has a frog (‘rana’) on folio 81:

Marraccini herself is an art historian who previously graduated from Yale, and who has an almost impossibly perfect set of research interests:

Her research focuses on Late Medieval and Early Modern scientific images, particularly alchemical and medical material, in England, Scotland, Germany, and the Netherlands. Her interests in the field also include book history and manuscript studies, Late Antique material culture, and the historiography of art, particularly in Warburgian contexts. Currently, she is writing on the history of Hermetic-scientific images and diagrams, and her work on Elias Ashmole’s copies of the Ripley Scrolls is forthcoming in the journal Abraxas.

All of which looks almost too good to be true. It’s just a shame her presentation falls on April Fool’s Day, so we’re bound to have people claiming that she doesn’t really exist and it’s all a conspiracy etc. 😉

Voynich researchers without a significant maths grounding are often intimidated by the concept of entropy. But all it is is an aggregate measure of how [in]effectively you can predict the next token in a sequence, given a preceding context of a certain size. The more predictable tokens are (on average), the smaller the entropy: the more unpredictable they are, the larger the entropy.

For example, if the first order (i.e. no context at all) entropy measurement of a certain text was 3.0 bits, then it would have almost exactly the same average information content-ness per character as a random series of eight different digits (e.g. 1-8). This is because entropy is a log2 value, and log2(8) = 3. (Of course, what is usually the case is that some letters are more frequent than others: but entropy is the bottom line figure averaged out over the whole text you’re interested in.)

And the same goes for second order entropy, with the only difference being that because we always know what the preceding letter or token was, we can make a more effective guess as to what the next letter or token will be. For example, if we know the previous English letter was ‘q’, then there is a very high chance that the next letter will be ‘u’, and a far lower chance that the next letter will be, say, ‘k’. (Unless it just happens to be a text about the current Mayor of London with all the spaces removed.)

And so it should proceed beyond that: the longer the preceding context, the more effectively you should be able to predict the next letter, and so the lower the entropy value.

As always, there are practical difficulties to consider (e.g. what to do across page boundaries, how to handle free-standing labels, whether to filter out key-like sequences, etc) in order to normalize the sequence you’re working with, but that’s basically as far as you can go with the concept of entropy without having to define the maths behind it a little more formally.

Voynich Entropy

However, even a moment’s thought should be sufficient to throw up the flaw in using entropy as a mathematical torch to try to cast light on the Voynich Manuscript’s “Voynichese” text… that because we don’t yet know what makes up a single token, we don’t know whether or not the entropy values we get are telling us anything interesting.

EVA transcriptions are closer to stroke-based than to glyph-based: so it makes little (or indeed no) sense to calculate entropy values for EVA. And as for people who claim to be able to read EVA off the page as, say, mirrored Hebrew… I don’t think so. :-/

But what is the correct mapping or grouping for EVA, i.e. the set of rules you should apply to EVA to turn it into the set of tokens that will give us genuine results? Nobody knows. And, oddly, nobody seems to be even asking any more. Which doesn’t bode well.

All the same, entropy does sometimes yield us interesting glimpses inside the Voynichese engine. For example, looking at the Currier A pages only in the Takahashi transcription and using ch/sh/cth/ckh/cfh/cph as tokens (which is a pretty basic glyphifying starting point), you get [“h1” = first order entropy, “h2” = second order entropy]:

63667 input tokens, 56222 output tokens, h1 = 4.95, h2 = 4.03

This has a first order information content of 56222 x 4.95 = 278299 bits, and a second order information content of (56222-1) x 4.03 = 226571 bits.

If you then also replace all the occurrences of ain/aiin/aiiin/oin/oiin/oiiin with their own tokens, you get:

63667 input tokens, 51562 output tokens, h1 = 5.21, h2 = 4.01

This has a first order information content of 51562 x 5.21 = 268638 bits, and a second order information content of (51562-1) x 4.01 = 206760 bits. What is interesting here is that even though the h1 value increases a fair bit (as you’d expect from extending the post-parsed alphabet with additional tokens), the h2 value decreases very slightly, which I find a bit surprising.

And if, continuing in this vein, you also convert air/aiir/aiiir/sain/saiin/saiiin/dain/daiin/daiiin to glyphs, you get:

63667 input tokens, 50387 output tokens, h1 = 5.49, h2 = 4.04

This has a first order information content of 50387 x 5.49 = 276625 bits, and a second order information content of (50387-1) x 4.04 = 203559 bits. Again what I find interesting is that once again the h1 value increases a fair bit, but the h2 value barely moves.

And so it does seem to me that Voynich entropy may yet prove to be a useful tool in determining what is going on with all the different possible parsings. For example, I do wonder if there might be a practical way of exhaustively / hillclimbingly determining the particular parsing / grouping that maximises the post-parsed h1:h2 ratio for Voynichese. I don’t believe anyone has yet succeeded in doing this, so there may be plenty of room for good new work here – just a thought! 🙂

Voynich Parsing

To me, the confounding beauty of Voynichese is that all the while we cannot even parse it into tokens, the vast modern cryptological toolbox normally at our disposal does us no good.

Even so, it’s obvious (I think) that ch and sh are both tokens: this is largely because EVA was designed to be able to cope with strikethrough gallows characters (e.g. cth, ckh etc) without multiplying the number of glyphs excessively.

However, if you ask whether or not qo, ee, eee, ii, iii, dy, etc should be treated as tokens, you’ll get a wide range of responses. And as for ar, or, al, ol, am etc, you won’t get a typical linguistic researcher to throw away their precious vowel to gain a token, but it wouldn’t surprise me if they were wrong there.

The Language Gap

The Voynich Manuscript throws into sharp relief a shortcoming of our statistical toolbox: specifically, its excessive reliance on our having previously modelled the text stream accurately and reliably.

But if the first giant hurdle we face is parsing it, what kind of conceptual or technical tools should we be using to do this? And on an even more basic level, what kind of language should we as researchers use to try to collaborate on toppling this first statue? As problems go, this is a precursor both to cryptology and to linguistic analysis.

As far as cipher people and linguist people go: in general, both groups usually assume (wrongly) that all the heavy lifting has been done by the time they get a transcription in their hands. But I think there is ample reason to conclude that we’re not yet in the cinema, but are still stuck in the foyer: there is a world of difference between a stroke transcription and a parsed transcription that few seem comfortable acknowledging.

In some ways, it’s the shortest of distances from [Ethel Voynich] to [Ethel Merman], so why not “Voynich, The Musical“? Close your eyes, imagine a Broadway stage, take out a mortgage to get yourself a semi-affordable seat, spill a drink on your leg, and you’re as good as there…


Act One, Scene One

It’s 1912. A single spotlight illuminates an old trunk in the middle of an otherwise empty wooden stage: there’s dust in the air. We hear slow, sustained violins off-stage, harbingers of the big discovery that is about to happen.

WILFRID appears stage right. He is well dressed (though a little tweedy for our modern tastes), and wears small round glasses. He looks in the prime of his life – there’s a vigour and physical excitement to him. He approaches the trunk, opens it, takes out an old book and peers inside it. As his eyes grow ever wider, the violins swell, and he sings his first number “Friends To The End”.


This never happened – I wasn’t here.
There was never a trunk (that was junk), isn’t this queer?
I conjured a castle, to hide Jesuit lies…
While the customer’s king, I’ll say anything (however unwise).

[Chorus] But you, you were always real
Even if you made me feel
Like an antiquarian schlemiel –
I couldn’t comprehend.
But I knew, I knew when I met
My ugly duckling Juliet
With your strange alphabet
We’d be friends to the end…
Friends to the end.

Act One, Scene Two

Back in London, WILFRID hesitantly shows his newly-acquired manuscript to his wife ETHEL: he thinks it’s going to make them rich. However, ETHEL cannot believe that he has wasted money on something as unbelievably stupid as a book that nobody can read. To make her feelings on the matter completely clear, she sings her angry opening number “Down the drain”.


Little naked women
Standing round or swimming
What is this you’re bringing
To our house?
You can’t read a word of it
Written by a heretic
I can’t see the benefit
To man or mouse

[Chorus] You put good money / Down the drain
Buying enciphered / Castles in Spain
Were those nymphs fogging / Your revolutionary brain?
Or has their writing sent you / Completely insane?

Act One, Scene Three

WILFRID has moved to New York, and is trying (unsuccessfully) to convince wealthy American collectors to buy his unreadable manuscript. Though his sales patter normally charms the birds down off the trees, he’s finding it difficult to find anyone with any affinity for this unusual artefact. His song “It’s No Use” documents his ongoing struggle.


There’s jazz and money in the air
The excitement of a New World at play
New rules, new wealth, new clothes, new hair
America strides into a brand new day

You, sir, with your spats and suits
Your garden parties and Egyptiana
Might I interest you in this book’s strange roots
And its hard-to-pin-down flora and fauna?

[Chorus] It’s no use
My duckling’s no swan
I’ve cooked my goose
My big chance has gone
I’ll find no willing
Who’ll pay more than a shilling
They’re too mercantile

Act One, Scene Four

It’s 1930 in New York. WILFRID is dying, having never been able to sell his “Roger Bacon” manuscript. ETHEL brings his beloved manuscript to him, so that he can see it one last time. WILFRID sings a song to both of them: “It’s Time To Say Goodbye”.


Perhaps I was wrong / To hope for the best
To follow every wastrel clue / Like a man possessed
Why can’t anybody else / See what I see?
Are they put off by mere / Indecipherability?

[Chorus] It’s time to say goodbye
To the woman I have loved
And greet the naked angels
Hovering above
I’ve seen them for years
Sitting on my shelves
Filling every page of
Quires eleven and twelve