I’ve had some interesting emails recently from Patrick O’Neil, who I’m very pleased to report has picked up my long-dropped baton on Mauritian treasure hunting in the period between the two world wars. By uncovering a number of sources I was unaware of (and with only minor assists from me), Patrick has started to reconstruct a side of this story that was previously only hinted at.

“Treasure Hunting in Mauritius”

An article titled “Treasure Hunting in Mauritius” by James Hornell in the 9th April 1932 edition of “The Sphere” (p.57) vividly sets the scene. The famous Mauritian treasure is none other than Surcouf’s, its author asserts, hidden in a cave “on a lonely stretch of coast”, but never retrieved:

A few years ago a plan giving clues to the position of the cave was brought to Mauritius – once a death-bed legacy ! A syndicate [the “Klondyke Syndicate”] of well-known local people was formed to follow up the clues. It was easy enough to identify the narrow gully or gorge in the cliffs up which Surcouf had carried his loot ; two of the stones marked on the plan were located, one with what seemed to have a letter or figure roughly carved upon it. Hope ran high, but now a check was registered ; no trace of a cave could be found ; it was thought that a landslide had taken place covering the mouth. Trial cuts were made here and there but no progress was made, and when the money subscribed ran out, the work ceased.

Hornell goes on to tell us how the search for the treasure then went ‘hi-tech’:

At a later date negotiations were opened with an engineering firm in England, and under agreement of equal shares to each party, an electrical divining instrument, designed to locate metal, was brought into use. Unfortunately the only spot where the pointer became agitated was over a surface of solid rock. From this it was inferred that the landslip had covered the entrance to the cave to a great depth, and that the spot indicated was above the end where the treasure lay. Weary weeks passed in cutting a way down through dense basalt rock of extreme hardness. Ten feet down they went without success ; then to twenty and on to thirty ; at about thirty-five they struck earth, and this raised their hopes to fever pitch. Immediate success was assured – the earth-filled cave was reached at last ! The island authorities were notified and a detachment of armed police went excitedly down to the gully to afford protection. So sure was the engineer of success that a motor lorry was requisitioned to convey the gold and jewels to the bank.

What happened next? Sadly, the dismal punchline is all too easy to predict:

Alas ! Hard rock soon reappeared and the electric indicator still encouraged further effort downwards in the same spot. Two months later when the depth of the shaft had reached over fifty feet I was invited by the engineer to visit the place. I could but admire his tenacity of purpose in face of prolonged disappointment and his patience in laboriously cutting a shaft through virgin rock with chisel and crowbar, afraid as he was to use explosives.

Incidentally, if you’re wondering who the author of this piece was, I’m pretty sure he was the “internationally well known fish expert and colonial adviser based in India” (and former Director of Fisheries in Madras) James Hornell F.L.S. F.R.A.I. (1865-1949), who wrote numerous books on fish, fishing, fishermen, and fishing boats all the way from Britain to Oceania. And did I mention he was interested in fish?

Anyway, even though this helps us glimpse the big picture, we are still left with many questions. For example: where was this site?

“A spot known as Klondyke”

Helpfully, a column in The Daily News of 13th April 1926 reporting on this story describes the location (albeit somewhat inexactly):

The scene of the search is a spot known as Klondyke, on the west coast of Mauritius, in the Black River district, and the treasure, which has come to be spoken of as the Klondyke treasure, is believed to have been secreted there between 1780 and 1800 by the Chevalier de Nageon, a noted privateer.

Unlike the version of the article that was reproduced in the Brisbane Telegraph (which I typed up here), this includes a rough-and-ready map, with a piratical “X” to mark the (approximate) spot:

The article continues:

A number of attempts have been made, at intervals since 1880, to find the treasure, and excavations were made in accordance with instructions sent to a Mauritian from one of his relatives in Brittany.

Then the Chevalier de Nageon’s own plan was said to have been found, and a company was formed to begin regular diggings.

Some stonework and other clues tallying with the plan were brought to light from time to time, but nothing else happened, and the shares of the Klondyke Company – held by about a score of persons – became temporarily worthless.

But by the end of last December these shares were selling at 5,000 rupees (about £375) each. This was because Captain Russell had come across new indications which gave rise to the highest hopes.

(Incidentally, has anyone ever actually seen a physical share in the Klondyke Company? I’m sure I’m not alone in wanting to have one framed on my wall.)

There was also a mention at the article’s end that “the native diggers, as I hear today, are feverishly excited concerning yet another treasure, supposed to have been hidden by the same Chevalier de Nageon at Pointe Vacoa, Grand Port”. As you’d of course expect, “(f)abulous figures are mentioned in this latest story”: the Mauritian treasure hunting ‘virus’ is one that constantly mutates…

To be continued…

In Part 2, I’ll go on to look a little more closely at the Liverpool engineering company and their strange machine…

I’ve just had some nice emails from Belgian writer Dirk Huylebrouck, whose article on the mysterious cryptograms in Moustier church is (today) appearing in popular Belgian science magazine EOS. Dirk’s article has some great photographs, and even includes some insights from codebreaker Jarl van Eycke (whom readers here may know as jarlve and/or from the deservedly famous Zodiac decryption). It’s a nice piece of work, well illustrated and well laid out, one which I heartily recommend to all my Belgian readers.

And here’s the article…

And even better, here’s a PDF of Dirk’s English translation of his own article. Modestly, he asked whether I would prefer to edit it a little: but it’s actually a very clear and entertaining read just as it is, and all that my well-meaning edits would probably achieve would be to lose both his voice and the article’s charm.

Dirk also suggested that I might like to include images of his photos of Moustier’s twin altars here, simply because there are so few of these on the Internet. I am of course more than happy to oblige (click on this for a decent-sized resolution, both images are (c) 2022 Dirk Huylebrouck):

Finally, the interesting bit…

If I have even a small criticism, it would be that the article gives perhaps a little too much space to Rudy Cambier’s Nostradamus-based Moustier theory (which I covered here back in 2013). But in the end you can’t deny it’s a Belgian theory, so I guess Dirk had plenty of reason to indulge it just a little for EOS. 😉

Of course, even though his article captures much of the spirit of the Moustier cryptograms, Dirk is such an eminently sensible chap that he doesn’t hazard his own (inevitably doomed) guess as to what is going on. To be fair, Jarl’s crypto insights – for example, that there are far fewer letters repeated on each line of carved text than you might generally expect – do seem to run directly counter to just about any natural language hypothesis you might have about any possible underlying text. So perhaps there are fewer sustainable (supposedly) “common-sense” readings here than you might otherwise think.

The article also highlights Philippe Connart’s suggestion of a possible link with 10th century lettering found at the abbey of Saint-Amand-les-Eaux (40km away). All the same, that does make me wonder whether what we are looking at in Moustier might be an (imperfect) copy of a much older carving, which itself had become worn and illegible over time, leaving us at least “twice removed” from the original.

Dirk’s collaborator Evelyn Bastien (who translated his article into French) felt compelled to add the following open question: “Even if it’s an ‘exercise’, wasn’t it just as easy to engrave a simple prayer, rather than incoherent letters?” This was in response to a theory I once proposed that the ragged nature of the Moustier lettershapes suggested that it might just have been some kind of mason’s practice. And Evelyn’s point is sensible and well-made, because that theory implies a double mystery: not only did someone carve mysterious letters, but also someone else erected those same mysterious letters in their church.

But to be fair, that double mystery is what sits right at the heart of the Moustier enigma: for the challenge to our minds isn’t just that someone made the cryptograms at all, it’s also that a church community then valued them sufficiently to celebrate its own faith right alongside those cryptograms.

In the end, it’s entirely true that all attempts so far to resolve one of this linked pair of durable mysteries have thrown little (or indeed no) light on the other one. But maybe EOS’ readers will prove to have some interesting ideas that have so far evaded us all… here’s hoping!

Here’s a suggestion for a Voynich Manuscript paper that I think might well be revealing: taking raking IR images of f116v. But why would anyone want to do that?

Multispectral imaging

Since about 2006, I’ve been encouraging people to take multispectral images of the Voynich Manuscript, i.e. to capture images of the manuscript at a wide variety of wavelengths, so not just visible light.

My interest here is seeing if there are technical ways we can separate out the codicological layers that make up f116v. To my eyes, there seem to be two or three different hands at play there, so it would make sense if we could at least partially figure out what the original layer there looked like (before the other layer was placed on top, I guess at least a century later).

And in fact one group did attempt multispectral scanning, though with only a limited set of wavelengths, and without reaching any firm conclusions. (They seem not to have published their results, though I did once stumble across some of their test images lying around on the Beinecke’s webserver.)

The Zen of seeing nothing

Interestingly, one of that group’s images of f116v was taken at 940nm (“MB940IR”), which is an infrared wavelength (hence “IR”). This revealed… nothing. But in what I think is potentially an interesting way.

Here’s what it looks like (hopefully you remember the michitonese at the top of f116v):

[Image: the “Main Banks Transmissive” 940nm IR scan of f116v]

That’s right! At 940nm, the text is invisible. Which is, of course, totally useless for normal imaging. For why on earth would you want to image something at a wavelength where you can’t see any detail?

Raking Light

The interesting thing about this is that one kind of imaging where you’d want the text itself to be as invisible as possible is raking illumination, i.e. where you shine an illuminating light almost parallel to the surface. If you look really closely at the edges of penstrokes at high-ish magnification, you should be able to use this to see the shadows cast by the edges of the indentations left by the original quill pen.

And so I’ve long wondered whether it might be possible to use a 940nm filter (and a non-LED light source) and a microscope / camera on a stand to try to image the depth of the penstrokes in the words on f116v. (You’d also need to use an imaging device with the RG/GB Bayer filter flipped off the top of the image sensor; or a specialist b&w imaging sensor; or an old-fashioned film camera, horror of horrors!)

What this might tell us

Is this possible? I think it is. But might it really be able to help us separate out the two or more hands I believe are layered in f116v? Though I can’t prove it, I strongly suspect it might well be.

Why? Because vellum hardens over time. In the first few years or so after manufacture, I’m sure that vellum offers a lithe and supple writing support that would actually be quite nice to write on. However, fast forward from then to a century or so later, and that same piece of vellum is going to be harder, drier, more rigid, slippier, scrapier – in short, much less fun to write on.

And as a result, I strongly suspect that if there are two significantly time-separated codicological layers on f116v, then they should show very different writing indentation styles. And so my hope is that taking raking IR images might possibly help us visualise at least some of the layering that’s going on on f116v, because I reckon each of these 2+ hands should have its own indentation style.

Will this actually work? I’m quietly confident it will, but… even so, I’d have to admit that it’s a bit of a lottery. Yet it’s probably something that many should be able to test without a lot of fuss or expense. Does anyone want to give this a go? It sounds to me like there should be a good paper to be had from learning from the experience, even if nothing solid emerges about the Voynich Manuscript.

Anyone who spends time looking at Voynichese should quickly see that, rare characters aside, its glyphs fall into several different “families” / patterns:

  • q[o]
  • e/ch/sh
  • aiin/aiir
  • ar/or/al/ol/am/om
  • d/y
  • …and the four “gallows characters” k/t/f/p.

The members of these families not only look alike, they often also function alike: it’s very much the case that glyphs within these families either group together (e.g. y/dy) or replace each other (e.g. e/ee/eee/ch/sh).

For me, one of the most enigmatic glyph pairs is the gallows pair EVA k and EVA t. Rather than be seduced by their similarities, my suggestion here is to use statistics to try to tease their two behaviours apart. It may sound trivial, but how do EVA k and EVA t differ; and what do those differences tell us?

The raw numbers

Putting strikethrough gallows (e.g. EVA ckh) to one side for the moment, the raw k/t instance frequencies for my preferred three subcorpora are:

  • Herbal A: (k 3.83%) (t 3.28%)
  • Q13: (k 5.38%) (t 2.27%)
  • Q20: (k 5.19%) (t 2.76%)

Clearly, the ratio of k:t is much higher on Currier B pages than on Currier A pages. Even if we discount the super-common Currier B words qokey, qokeey, qokedy, qokeedy, qokaiin, a large disparity between k and t still remains:

  • Q13: (k 4.3%) (t 2.46%)
  • Q20: (k 4.58%) (t 2.89%)

In fact, this k:t ratio only approaches (rough) parity with the Herbal A k:t ratio if we first discount every single word beginning with qok- in Currier B:

  • Q13: (k 2.71%) (t 2.41%)
  • Q20: (k 3.57%) (t 2.86%)

So there seems to be a hint here that removing all the qok- words may move Currier B’s statistics a lot closer to Currier A’s statistics. Note that the raw qok/qot ratios are quite different in Herbal A and Q13/Q20 (qok is particularly strong in Q13), suggesting that “qok” in Herbal A has a ‘natural’ meaning and “qok” in Q13/Q20 has a different, far more emphasised (and possibly special) meaning, reflecting the high instance counts for qok- words in Currier B pages:

  • Herbal A: (qok 0.79%) (qot 0.68%)
  • Q13: (qok 3.04%) (qot 0.74%)
  • Q20: (qok 1.84%) (qot 0.70%)
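Counts like these are easy to reproduce from any EVA transliteration. Here’s a minimal Python sketch of the underlying calculation – the toy line and the dot-separated word convention are my own illustrative assumptions, not real corpus statistics – showing how discounting qok- words shifts the k:t balance:

```python
def glyph_ratio(words, glyph_a="k", glyph_b="t", skip_prefix=None):
    """Return the frequencies of two glyphs across a word list,
    optionally discarding words that begin with a given prefix."""
    if skip_prefix:
        words = [w for w in words if not w.startswith(skip_prefix)]
    text = "".join(words)
    return text.count(glyph_a) / len(text), text.count(glyph_b) / len(text)

# Toy EVA-style line (words separated by '.'), purely illustrative
words = "qokeey.qokaiin.chedy.otedy.qokedy.shey.kar".split(".")

k_all, t_all = glyph_ratio(words)                     # raw k and t frequencies
k_cut, t_cut = glyph_ratio(words, skip_prefix="qok")  # after dropping qok- words
```

On this (artificial) line, dropping the qok- words pulls the k and t frequencies into parity, which is exactly the kind of movement described above for Q13/Q20.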

Difference between ok/yk and ot/yt

If we put ckh, cth and all qok- words to one side, the numbers for ok/yk and ot/yt are also intriguing:

  • Herbal A: (ok 1.38%) (ot 1.31%) (yk 0.51%) (yt 0.48%)
  • Q13: (ok 1.07%) (ot 0.91%) (yk 0.17%) (yt 0.12%)
  • Q20: (ok 1.53%) (ot 1.47%) (yk 0.19%) (yt 0.14%)

What I find interesting here is that the ok:ot and yk:yt ratios are just about identical to the k:t ratios from Herbal A. Consequently, I suspect that whatever k and t are expressing in Currier A, they are – once you go past the qok-related stuff in Currier B – probably expressing the same thing in Currier B.

As always, there are many possible reasons why the k instance count and the t instance count should follow a single ratio: but I’m consciously trying not to get caught up in those kinds of details here. The fact that k-counts are consistently that little bit higher than t-counts in several different contexts is a good enough result to be starting from here.

Might something have been added here?

From the above, I can’t help but wonder whether EVA qok- words in Currier B pages might be part of a specific mechanism that was added to the basic Currier A system.

Specifically, I’m wondering whether EVA qok- might be the Currier B mechanism for signalling the start of a number or numeral? This isn’t a fully-formed theory yet, but I thought I’d float this idea regardless. Something to think about, certainly.

As a further speculation, might EVA qok- be the B addition for cardinal numbers (1, 2, 3, etc) and EVA qot- be the B addition for ordinal numbers (1st, 2nd, 3rd, etc)? It’s something I don’t remember seeing suggested anywhere. (Please correct me if I’m wrong!)

So: do I think there’s room for an interesting paper on EVA k/t? Yes I do!

It’s well-known that the distribution of Voynichese page-initial (and indeed paragraph-initial) glyphs is, unlike the rest of the text, strongly dominated by gallows characters. But what is less widely known is that something really fishy is going on with the distribution of all the other line-initial glyphs too.

As far as I know, nobody has yet given this behaviour the in-depth attention it properly deserves, which is why I think it would make a good subject for a paper. Though it perhaps needs a catchier name than “Line-Initial But Not Paragraph-Initial Glyph” (LIBNPIG) statistics (so please feel free to come up with a better name or acronym).

Though you might reasonably ask: isn’t this just another side of the whole constellation of LAAFU (“line as a functional unit”) behaviours?

Well, yes and no. “LAAFU” is a shorthand mainly used by some Voynich researchers to signal their despair at the unknowableness of why certain glyphs seem to ‘prefer’ different positions within a line. So yes, LIBNPIG behaviour is a kind of LAAFU behaviour: but no, that doesn’t mean it can’t be understood. (Or at least carefully quantified and tortured on a statistical rack.)

LIBNPIG Observations

How do we know that something funky is going on with LIBNPIGs?

LIBNPIG ‘tells’ are perhaps most visible in Q20 (Quire #20). For example, even though EVA daiin is common in Currier A pages (you may recall that it’s one of the ‘Big Three’ A-words – daiin / chol / chor), it’s far less common in Currier B pages: however, when it does occur in Q20, it is frequently in a LIBNPIG position. In fact, this is true of all word-initial EVA d- words in Q20, which you can see here (scroll to the bottom).

Similarly, if you look at EVA s- words (ignoring sh- words, which is a particularly annoying EVA artifact, *sigh*) in Q20, you should also see that these appear far more often line-initially than they should.

Is that all? No. The same is true of EVA y- words in Q20 too, but this pattern is additionally true in Herbal B pages. Note that this also seems to be true of some Herbal A pages, but EVA y- words in Herbal A appear to work quite differently to my eyes. (Though I’d advise looking for yourself, and forming your own opinion.)

Curiously, even though paragraph-initial words so strongly favour gallows characters, LIBNPIG words seem to abhor gallows characters, a behaviour which is in itself quite suggestive and/or mysterious.

Conversely, if you go looking for LIBNPIG EVA ch- and sh- words, I believe you’re far more likely to instead find them clustering at the second word on a line. Note that Emma May Smith (with Marco Ponzi) looked at this back in 2017, though more from a word-based perspective (even though the first two words on a line in Q20 are often fairly odd-looking). The concern for me is more that these behaviours mean that Voynich word dictionaries (and indeed all word analyses) based on line-initial words are unreliable.

So, what is going on in Q20 (in particular) that is making LIBNPIG words prefer d- / s- / y- so much? I guess this really is the starting point of the paper I’m suggesting here.

Vertical keys?

The notion that the first column of glyphs might have some kind of special meaning is far from new. In fact, there is evidence suggesting this in the manuscript itself on page f66r, where you can clearly see a column of glyphs (though admittedly there is also a column of freestanding words to its left). This is a curious item to find in a manuscript.

But might all (or, at least, many) pages of Voynichese text contain vertical keys inserted as a single line-initial glyph at the start of lines? Philip Neal speculated about this possibility many years ago, causing me to (occasionally) refer to these as “vertical Neal keys”. A vertical key might conceivably be used for many things, such as inserting an (enciphered) page title, or even a folio number or page number: though it’s easy to argue that the relatively narrow range of glyphs we see appearing here probably rule this out.

In “The Curse of the Voynich” (2006), I speculated instead that a glyph inserted at the start of a line might form part of some kind of transposition cipher. The suggestion there was that a second glyph (say, a k-gallows) might act as a token to use the glyph (or some function of that glyph) inserted at the start of the same line. This would be a fairly simple crypto ‘hack’ that would make codebreakers’ jobs difficult.

There are many other possible accounts one can devise. For example, it’s possible that the first glyph on a non-paragraph-initial line might function as a kind of catchword, to link the end of one line with the start of the next. Alternatively, it might be telling the reader how to join the text at the end of the preceding line with the text at the start of the current line. Or it might have some kind of crypto token function (e.g. selecting a dictionary). Or it might be a numbering scheme. Or it might be a marker for some funky line transposition scheme. Or a null. Or… one of a hundred other things (if not more).

If all these speculations seem somewhat ungrounded, it’s almost certainly because the basic groundwork to build a sensible discussion of LIBNPIG behaviour upon hasn’t yet been done. Which is your job. 🙂

LIBNPIG Groundwork

What needs doing? For a start, you’d need to build up a solid statistical comparison of paragraph-initial glyphs and LIBNPIG glyphs, along with paragraph-second glyphs and LSBNPS (line-second-but-not-paragraph-second) glyphs, for paragraph text in each of Herbal A, Herbal B, Q13 and Q20 (I would suggest).
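A first step might simply be tallying line-initial glyphs under the two headings. Here’s a minimal sketch – the paragraphs-as-lists-of-lines data structure and the toy words are assumptions purely for illustration:

```python
from collections import Counter

def initial_glyph_counts(paragraphs):
    """Tally the first glyph of each line, split into paragraph-initial
    lines and all other lines (i.e. LIBNPIG lines)."""
    para_initial, libnpig = Counter(), Counter()
    for para in paragraphs:
        for i, line in enumerate(para):
            target = para_initial if i == 0 else libnpig
            target[line[0]] += 1
    return para_initial, libnpig

# Toy paragraphs: each is a list of lines of dot-separated EVA-style words
paras = [
    ["pchedy.qokeey", "daiin.shedy", "ychedy.qokal"],
    ["tchedy.otedy", "saiin.chedy"],
]
para_initial, libnpig = initial_glyph_counts(paras)
```

Running the same tally separately over Herbal A, Herbal B, Q13 and Q20 (and the line-second equivalent) would give the baseline distributions the rest of the analysis needs.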

With those results in hand, there are some basic hypotheses you might want to try testing:

  • Is there any statistical correlation between a LIBNPIG glyph and the glyph immediately following it? Oddly, it seems that nobody has yet tried to test this – yet if there isn’t (as visually seems to be the case), then I think it’s safe to say that something is provably wrong with all naive text readings.
  • Is there a correlation between a LIBNPIG glyph and the previous line’s end-glyph?
  • Is there a correlation between a LIBNPIG glyph and the following word’s start glyph?
  • Do paragraph-initial second words behave the same way as LIBNPIG second words?
  • Might LIBNPIG glyphs simply be nulls? Might they be chosen just to look nice? Or do they have some genuinely meaningful content?
  • How does all this work for paragraph text in each of the major sections of the Voynich Manuscript? e.g. Herbal A, Herbal B, Q13, Q20
  • (I’m sure you can devise plenty of your own hypotheses here!)
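The first of these hypotheses is directly testable with a contingency table. A stdlib-only sketch (the glyph pairs below are made-up stand-ins): a chi-square statistic near zero would mean the line-initial glyph carries no information at all about its successor.

```python
from collections import Counter

def chi_square(pairs):
    """Chi-square statistic of a (line-initial glyph, next glyph)
    contingency table, tested against the independence hypothesis."""
    table = Counter(pairs)
    rows = Counter(a for a, _ in pairs)
    cols = Counter(b for _, b in pairs)
    n = len(pairs)
    stat = 0.0
    for a in rows:
        for b in cols:
            expected = rows[a] * cols[b] / n
            stat += (table[(a, b)] - expected) ** 2 / expected
    return stat

# Perfectly dependent toy pairs vs perfectly independent ones
dependent = [("d", "a"), ("d", "a"), ("s", "o"), ("s", "o")]
independent = [("d", "a"), ("d", "o"), ("s", "a"), ("s", "o")]
```

With real LIBNPIG data you’d of course want a proper significance test on top (and far more pairs), but even this crude statistic would separate “the initial glyph constrains what follows” from “the initial glyph is independent of what follows”.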

Ultimately, what we would like to know is what LIBNPIG behaviours tell us about how the starts of Voynichese lines have to be parsed – for if there is no statistical correlation between a line-initial glyph and the glyph following it, this cannot be a language behaviour.

Even though we can all see numerous LAAFU behaviours, it seems that few Voynich researchers have yet accepted them solidly enough to affect the way they actually think about Voynichese. But perhaps it is time that this changed: and perhaps LIBNPIG will be the thing that causes them to change how they think.

Here’s a second paper suggestion for the virtual Voynich conference being held later this year: this focuses on creatively visualising the differences between Currier A and Currier B.

A vs B, what?

“Currier A” and “Currier B” are the names Voynich researchers use to denote the main two categories of Voynichese text, in honour of Prescott Currier, the WWII American codebreaker who first made the distinction between the two visible in the 1970s.

Currier himself called the two types of Voynichese “A” and “B”, and described them as “languages”, even though he was aware some people might well misinterpret the term. (Spoiler alert: yes, many people did.) He didn’t do this with a specific theory about the manuscript’s text: it’s essentially an observation that the text on different pages works in very different ways.

Crucially, he identified a series of Voynich glyph groupings that appeared in one “language” but not the other: thanks to the availability of transcriptions, further research in the half century since has identified numerous other patterns and textual behaviours that Currier himself would agree are A/B “tells”.

Interesting vs Insightful

But… this is kind of missing the point of what Voynich researchers should be trying to do. The observation that A and B differ is certainly interesting, but it’s not really insightful: by which I mean the fact that there is a difference doesn’t cast much of a light on what kind of difference that difference is.

For example, if A and B are (say) dialects of the same underlying language (as many people simply believe without proof – though to be fair, the two do share many, many features), then we should really be able to find a way to map between the two. Yet when I tried to do this, I had no obvious luck.

Similarly, if A and B are expressions of entirely different (plaintext) languages, the two should really not have so many glyph structures in common. Yet they plainly do.

Complicating things further is the fact that A and B themselves are simplifications of a much more nuanced position. Rene Zandbergen has suggested that there seem to be a number of intermediate stages between “pure” A and “pure” B, which has been taken by some as evidence that the Voynich writing system “evolved” over time. Glen Claston (Tim Rayhel) was adamant that he could largely reconstruct the order of the pages based on the development of the writing system (basically, as it morphed from A to B).

Others have suggested yet more nuanced accounts: for example, I proposed in “The Curse of the Voynich” (2006) that part of the Voynichese writing system might well use a “verbose cipher” mechanism, where groups of glyphs (such as EVA ol / or / al / ar / aiin / qo / ee / eee / etc) encipher single letters in the plaintext. This would imply that many of the glyph structures shared between A & B are simply artifacts of what cryptologists call the “covertext”: and hence if we want to look at the differences between A and B in a meaningful way, we would have to specifically look beneath the covertext – something which I suspect few Voynich researchers have traditionally done.

Types of Account

As a result, the A/B division sits atop many types of account for the nature of what A and B share, e.g.

  • a shared language
  • a shared linguistic heritage
  • a shared verbose cipher, etc

It also rests upon many different accounts of what A and B ultimately are, e.g.:

  • two related lost / private languages
  • a single evolving orthography wrapped around a lost / private language
  • a single evolving language
  • a single evolving shorthand / cipher system, etc

The difficulty with all of these accounts is that they are often held more for ideological or quasi-religious reasons (i.e. as points of faith, or as assumed start-points) than as “strong hypotheses weakly held”. The uncomfortable truth is that, as far as I know, nobody has yet tried to map out the chains of logical argumentation that move forwards from observational evidence / data to these accounts. Researchers almost always move in the reverse direction, i.e. from account to the evidence, rather than from evidence to explanation.

And when the primary mode of debate is arguing backwards, nobody normally gets anywhere. This seems to be a long-standing difficulty with cipher mysteries (particularly when treasure hunters get involved).

EVA as a Research Template

If Voynich researchers are so heavily invested in a given type of account (e.g. Baxian linguistic accounts, autocopying accounts, etc), how can we ever make progress? Fortunately, we do have a workable template in the success of EVA.

The problem researchers faced was that, historically, different transcriptions of the Voynich were built on very specific readings of Voynichese: the transcriber’s assumptions about how Voynichese worked became necessarily embedded in their transcription. If you were then trying to work with that transcription but disagreed with the transcriber’s assumptions, it would be very frustrating indeed.

EVA was instead designed as a stroke-based alphabet, to try to capture what was on the page without first imposing a heavy-duty model of how it ought to work on top of it. Though EVA too had problems (some more annoying than others), it provided a great way for researchers to collaborate about Voynichese despite their ideological differences about how the Voynichese strokes should be parsed.

With the A/B division, the key component that seems to be missing is a collaborative way of talking about the functional differences between A and B. And so I think the challenge boils down to this: how can we talk about the functional differences between Currier A and Currier B while remaining account-neutral?

Visualising the Differences

To my mind, the primary thing that seems to be missing is a way of visualising the functional differences between A and B. Various types of visualisation strategies suggest themselves:

  • Contact tables (e.g. which glyph follows which other glyph), both for normal parsing styles and for verbose parsing groupings – this is a centuries-old codebreaking hack
  • Model dramatisation (e.g. internal word structure model diagrams, showing the transition probabilities between parsed glyphs or parsed groups of glyphs)
  • Category dramatisation (e.g. highlighting text according to its “A-ness” or its “B-ness”)

My suspicion has long been that ‘raw’ glyph contact tables will probably not prove very helpful: this is because these would not show any difference between “qo-” contacts and “o-” contacts (because they both seem like “o-” to contact tables). So even if you don’t “buy in” to a full-on verbose cipher layer, I expect you would need some kind of glyph pre-grouping for contact tables to not get lost in the noise.
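To make that concrete, here’s a minimal sketch of a contact table built on top of a greedy verbose pre-grouping pass. The group list is a deliberately simplified assumption for illustration, not a claim about the true parsing of Voynichese:

```python
from collections import Counter

# Hypothetical verbose groups, matched longest-first (illustrative only)
GROUPS = sorted(["aiin", "qo", "ee", "ol", "or", "al", "ar"], key=len, reverse=True)

def tokenise(word):
    """Greedily split a word into verbose groups, falling back to single glyphs."""
    tokens, i = [], 0
    while i < len(word):
        for g in GROUPS:
            if word.startswith(g, i):
                tokens.append(g)
                i += len(g)
                break
        else:
            tokens.append(word[i])
            i += 1
    return tokens

def contact_table(words):
    """Count which token follows which, after verbose pre-grouping."""
    contacts = Counter()
    for w in words:
        toks = tokenise(w)
        contacts.update(zip(toks, toks[1:]))
    return contacts
```

So tokenise("qokaiin") yields ["qo", "k", "aiin"]: a contact table over these tokens keeps “qo-” contacts distinct from plain “o-” contacts, which is exactly the separation a raw glyph-level table loses.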

You can use whatever visualisation strategies / techniques you like: but bear in mind the kind of things we would collectively like to take away from this visualisation:

  • How can someone who doesn’t grasp all the nuances of Voynichese ‘get’ A-ness and B-ness?
  • How do A-ness and B-ness “flow” into each other / evolve?
  • Are there sections of B that are still basically A?
  • How similar are “common section A” pages to “common section B” pages?
  • Is there any relationship between A-ness / B-ness and the different scribal hands? etc

Problems to Overcome

There are a number of technical hurdles that need jumping over before you can design a proper analysis:

  • Possibilism
  • Normalising A vs B
  • First glyphs on lines
  • Working with spaces
  • Corpus choice

Historically, too much argumentation has gone into “possibilism”, i.e. considering a glyph pattern to be “shared” because it appears at least once in both A and B: but if a given pattern occurs (say) ten times more often in B than A, then the fact that it appears at all in A would be particularly weak evidence that it is doing the same thing in both A and B. In fact, I’m sure that there are plenty of statistical disparities between A and B to work with: so it would be unwise to limit any study purely to features that appear in one but not the other.
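The alternative to possibilism is to score each pattern by relative frequency rather than mere presence. Here’s a minimal sketch (with entirely made-up counts) of one way to do that, using additive smoothing so that zero counts don’t divide by zero:

```python
def freq_ratio(count_a, total_a, count_b, total_b, smoothing=0.5):
    """How many times more frequent (per token) a pattern is in B than A,
    with additive smoothing so zero counts don't blow up."""
    rate_a = (count_a + smoothing) / (total_a + smoothing)
    rate_b = (count_b + smoothing) / (total_b + smoothing)
    return rate_b / rate_a

# Made-up numbers: a pattern seen 3 times in 10,000 A words but
# 300 times in 10,000 B words is ~86x more frequent in B -- so its
# rare A appearances are weak evidence of shared behaviour.
print(round(freq_ratio(3, 10_000, 300, 10_000), 1))
```

A possibilist would call that pattern “shared”; the ratio makes it obvious that it is overwhelmingly a B-side phenomenon.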

There is also a problem with normalising A text with B text. Even though there seems to be a significant band of common ground between the two, a small number of high-frequency common words might be distorting the overall statistics, e.g. EVA daiin / chol / chor in A pages and EVA qokey / qokeey / qol in B pages. I suspect that these (or groups similar to them) would need to be removed (or their effect reduced) in order to normalise the two sets of statistics to better identify their common ground.
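One simple way of reducing that distortion would be to strip each corpus’s highest-frequency word types before computing the comparison statistics. The word list below is illustrative only (loosely echoing the EVA words mentioned above), and “remove the top n” is just one of several possible down-weighting schemes:

```python
from collections import Counter

def strip_top_words(words, n=3):
    """Remove the n most frequent word types before comparing corpora."""
    top = {w for w, _ in Counter(words).most_common(n)}
    return [w for w in words if w not in top]

# Toy A-page word list: the dominant daiin / chol tokens get removed,
# leaving the rarer words to carry the comparison.
a_words = ["daiin", "daiin", "daiin", "chol", "chol", "chor", "shol"]
print(strip_top_words(a_words, n=2))
```

Reducing each top word’s weight (rather than removing it outright) would be a gentler variant of the same idea.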

Note that I am deeply suspicious of statistics that rely on the first glyph of each line. For example, even though EVA daiin appears in both A and B pages, there are some B pages where it appears primarily as the first word on different lines (e.g. f103v, f108v, f113v, all in Q20). So I think there is good reason to suspect that the first letter of all lines is (in some not-yet-properly-defined way) unreliable and should not be used to contribute to overall statistics. (Dealing properly with that would require a paper on its own… to be covered in a separate post).
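That exclusion is easy to build into any counting pipeline. A sketch (assuming a transcription already split into lines of space-separated words; the words themselves are illustrative):

```python
from collections import Counter

def word_counts(lines, skip_line_initial=True):
    """Count words across lines, optionally ignoring each line's first word."""
    counts = Counter()
    for line in lines:
        words = line.split()
        if skip_line_initial:
            words = words[1:]   # drop the (suspect) line-initial word
        counts.update(words)
    return counts

lines = ["daiin chol daiin", "daiin shol chor"]
# Only the non-line-initial daiin token is counted here.
print(word_counts(lines)["daiin"])
```

Running the same counts with and without the exclusion flag would also be a quick way to measure just how much the line-initial words are skewing a given page’s statistics.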

Working with spaces (specifically half-spaces) is a problem: because of ambiguities in the text (which may be deliberate, from scribal arbitrariness, from transcriber arbitrariness, etc), Voynich transcription is far from an exact science. My suggested mitigation would be to avoid working with sections that have uncertain spacing and labels.

Finally: because of labelese, astro labels and pharma labels, corpus choice is also problematic. Personally, I would recommend limiting analysis of A pages to Herbal A only, and B pages to Q13 and Q20 (and preferably keeping those separate). There is probably as much to be learnt from analysing the differences between Q13’s B text and Q20’s B text as from the net differences between A and B.

If you hadn’t already heard, a Voynich Manuscript-themed virtual conference has recently been announced for 30th November to 1st December 2022: and its organisers have put out a call for papers.

Me, I have at least twenty ideas for topics, all of which I think could/should/would move the state of research forward. But my plan is actually to write up as many of them as I can in posts here, and let people freely take them to develop as their own, or (my preference) to form impromptu collaborations (via the comments section here, or via a thread on voynich.ninja, whatever works for you) to jointly pitch to the organisers.

I’ll start with what I think is the most obvious topic: DNA gathering analysis. I’ll explain how this works…

Quires vs Gatherings

Though some people like to oppose it, by 2022 Voynich researchers really should have fully accepted the idea that many of the Voynich’s bifolios have, over the centuries, ended up in a nesting/facing order quite different from their original one. There is so much supporting evidence that points towards this, not least of which is the arbitrary & confused interleaving of Herbal A and Herbal B bifolios.

Consequently, there is essentially zero doubt that the Voynich Manuscript is not in its original ‘alpha’ state. Moreover, good codicological evidence suggests that the original alpha state was not (bound) quires but instead (unbound) gatherings, because the quire numbering seems to have been added after an intermediate shuffling stage.

The big codicological challenge, then, is to work out how bifolios were originally grouped together (into gatherings), and how bifolios within each gathering were nested – i.e. the original ‘alpha’ state of the Voynich Manuscript.

Yet without being able to decrypt its text, we have only secondary clues to work with, such as tiny (and often contested) contact transfers. And because many of the (heavy) paint contact transfers (such as the heavy blue colour) seem to have happened much later in the manuscript’s lifetime, many of the contact transfers probably don’t tell us anything about the original state of the manuscript.

In Chapter 4 “Jumbled Jigsaws” (pp.51-71) of my (2006) book “The Curse of the Voynich”, I did my best to use a whole range of types of clue to reconstruct parts of the original folio nesting/facing order. Even so, this was always an uphill struggle, simply because we collectively had no properly solid physical forensic evidence to move this forward in what you might consider a systematic way.

From Gatherings to Vellum Sheets

However, a completely different way of looking at a manuscript is purely in terms of its material production: how were the pages in a gathering made up?

If a vellum manuscript is not a palimpsest (i.e. using previously-used vellum that has been scraped clean), it would typically have started as a large vellum sheet, which would then have been folded down and cut with a knife or shears or early scissors into the desired form. Given the unusual foldout super-wide folios we see in the Voynich Manuscript, I suspect there is almost no chance that these sheets were pre-cut.

As such, the normal process (e.g. for book-like sections) would have been to fold a sheet in half, then in half again, and then cut along the edges (leaving the gutter fold edge intact) to form a small eight-page gathering. This is almost certainly what happened when the Voynich Manuscript was made, i.e. it was built up over time using a series of eight-page gatherings, each from a single sheet.

It’s also important to remember that vellum was never cheap (and it took most of the fifteenth century for paper to become anything less than a luxury item too). Hence even larger fold-out sheets would not have been immune from this financial pressure: so where possible, what remained of a vellum sheet after a foldout had been removed would typically have been used as a bifolio.

The reason this is important is that where bifolios of a gathering were formed from a single sheet of vellum, they would all necessarily share the same DNA. And so this is where the science-y bit comes in.

Enter the DNA Dragon

Essentially, if you can take a DNA swab (and who in the world hasn’t now done this?) of each of the Voynich Manuscript’s bifolios, you should be able to match them together. There is then a very high probability that these matches would – in almost all cases – tell you what the original gatherings were.

The collection procedure appears – from this 2017 New Scientist article – to be painfully simple: identify the least handled (and text-free and paint-free) parts of each bifolio, and use a rubber eraser to take a small amount of DNA from the surface. Other researchers (most famously Timothy Stinson) are trying to build up horizontal macro-collections of medieval vellum DNA: but because the Voynich Manuscript is not (yet) readable, a micro-collection of the DNA in its bifolios would offer a very different analytical ‘turn’.

Though DNA has famously been used for many types of forensic analysis (there are entire television channels devoted to this), determining the original gathering order of an enciphered manuscript is not yet – as far as I know – one of them. But it could be!

Finally: once the gatherings have been matched, close examination (typically microscopic) to determine the hair / flesh side of each bifolio should help further reduce the possible number of facing permutations within each gathering. Remember, the normal practice throughout the history of vellum was that a folded gathering or quire would almost always end up in a flesh-facing-flesh and hair-facing-hair state.

Why is this Important?

As far as understanding the codicology of an otherwise unreadable document goes, DNA gathering matching would be hugely important: it would give clarity on the construction sequence of every single section of the Voynich Manuscript. This, in turn, would cast a revealing light on contentious issues of document construction and sectioning that have bedeviled researchers for years.

This would include not only the relationship of Herbal A bifolios to Herbal B bifolios (a debate going at least back to Prescott Currier), but also the more modern debates about Q13A vs Q13B, Q20A vs Q20B, and the relationship between Herbal A and the various Pharma A pages.

The biggest winners from reconstructing the manuscript’s alpha state would be researchers looking to find meaning and structure in the text. As it is, they’re trying to infer patterns from a document that appears to have been arbitrarily shuffled multiple times in its history. Along these lines, there’s a chance we might be able to use this to uncover a block-level match between a section and an external (unencrypted) text, which is something I have long proposed as a possible way in to the cipher system.

There is also a strong possibility that folio numbers are encrypted somewhere (e.g. in the top line of text) – historically, many complicated cipher systems have been decrypted by first identifying their underlying number system, so this too is an entirely possible direct outcome of this kind of research. It would additionally make sense for anyone trying to understand the different scribal hands to be able to situate those contributions relative to the manuscript’s alpha state rather than to its final (omega) state.

In those few sections where we have already been able to reconstruct the manuscript’s alpha state (e.g. Q9), we have uncovered additional symmetries and patterns that were not obviously visible in the shuffled state. Imagine how much more we would be able to uncover if we could reconstruct the alpha state of the entire manuscript!

So… Why Haven’t You Done This Already, Nick?

I’ve been trying for years, really I have. And through that time this basic proposal has received a ton of negativity and push-back from otherwise smart people (who I think really should have known better).

But the times they are (always) a-changing, so maybe it’s now the right time for someone else entirely to try knocking at broadly this same door. And if they do, perhaps they’ll find it already open and waiting for them. A moment’s thought should highlight that there’s certainly a great deal – in fact, an almost uniquely large amount – of new, basic stuff to be learnt about the Voynich Manuscript’s construction here.

Yet at the same time I would caution that if you look at the list of proposed topic areas for the conference, this kind of physical analysis doesn’t really fit the organisers’ submission model at all. After first submitting a 1-2-page abstract by 30th June 2022, allowing only five weeks after acceptance (20th July 2022) to write a 5-9 page paper seems a bit hasty and superficial, as if the organisers aren’t actually expecting anybody to submit anything particularly worthwhile. But perhaps they have their specific reasons, what do I know?

(But then again, maybe you’d be best off phoning your aunt who works at the History Channel and get an in with a TV documentary-making company. If film-makers can squeeze nine series out of “The Curse of Oak Island”, you’d have thought they’d be all over this like a rash, right? Right?)

Can I ask for a little help? It turns out that my archive-related skills are as good as useless when it comes to locating the living: and I’d really like to contact members of the Kendall family to ask them about Captain Charles Hansford Kendall (1904-1949). I don’t know whether either of his two sons (Charles Hansford Kendall Jr and John Atterbury Kendall) are alive, but they or their children would be the first I’d like to ask.

I’m also interested in finding out more about Admiral Henry Samuel Kendall, CHK’s older brother: I wonder whether he was one of the people Commander George W. Hoover was talking about with “Project Durante”.

If anyone wants to contact me directly (e.g. for confidentiality about living people), please feel free to email me: I’m nickpelling at nickpelling dot com.

PS: Captain Charles Hansford Kendall’s obituary in the Washington Evening Star of 28th August 1949 noted that “[s]ince 1946 he had been chief of staff to the commander of the Lakehurst (N. J.) Naval Air Station”, which I’d point out isn’t quite the same as the role he had in his Associated Press obituary.

The Kendall Family Tree (incomplete!)

  • Henry Simon Kendall (16 November 1855 – 28 November 1935)
    married 26 Jun 1895 to Emily Carter Sclater (16 April 1865 – 16 March 1938)
    • Henry Samuel Kendall [Admiral] (19 April 1896 – 7 November 1963)
      married 31 December 1918 to Evelyn Leroy (b. 1895 to Oscor H Leroy & Janet Puroin)
      “In the early part of his career he had been a Naval aviator and also assigned to the Bureau of Aeronautics of the Navy Department in D.C.” [boydbooks]
    • Elizabeth Kendall (5 December 1897 – 5 March 1988)
      married 20 Mar 1919 to Dr/Dean Paul Stilwell McKibben (1886-1941)
      • Paul Stilwell McKibben (4 December 1919 – 21 May 2002)
        married 29 Jun 1946 to Susan Jane Boland
        • Barbara McKibben Varon
        • Bill McKibben
        • George McKibben
        • Paul McKibben
      • Richard Kendall McKibben (20 February 1921 – 2011)
        Married 23 September 1950 to Patricia Marie Luer
        • Dea McKibben
      • Elizabeth Thresher McKibben (28 November 1922)
      • John Hansford McKibben (28 August 1928 – 1999)
        Married 11 November 1957 to Constance Stephens (1933-)
    • Richard Carter “Carter” Kendall (20 March 1901 – 25 October 1962)
      married 13 April 1931 Margaret Elizabeth Williamson (1910 – 1972)
      • Margaret Emilie “Peggylee” Kendall (8 February 1932 – 2020)
        married Robert J. Fulmer (30 July 1931 – 17 February 2009)
        • Erich Fulmer
          married Mary Lou Hughes (of Cortlandt Manor, NY)
        • Margaret Fulmer
          married Jeffery Wolf
      • Deborah Logan Kendall (29 February 1944 – 10 August 2005)
        married 4 September 1971 to James Robert Glatfelter (1948-2015)
      • David Raleigh Kendall (8 March 1947 – )
    • Charles Hansford Kendall (17 July 1904 – 26 August 1949)
      married 16 Jul 1938 to Boudinot Atterbury Oberge (1917 – 20 February 1996)
      • Charles Hansford Kendall Jr (13 June 1939 – )
        married 15 November 1969 to Gloria Anne Gicker (daughter of Mr & Mrs James M Gicker of Overbrook)
        • ?
      • John Atterbury Kendall (29 January 1943 – )
        married December 1967 to Ann New (June 1930?) (now Ann New Kendall [Pike?])
        • ?

While continuing my trawl for all things to do with Captain Charles Hansford Kendall USN (1904-1949), I found (courtesy of the Social Register, Philadelphia, 1949) that “Kendall Capt Chas H-USN” died in a Naval Hospital (Philadelphia’s swizzy Art Deco-styled US Naval Hospital, demolished in 2001) on 26th August 1949.

I also found his (brief) obituary in the Philadelphia Inquirer (Sunday 29th August 1949 edition, p.26):

Capt. Charles H. Kendall

Capt. Charles H. Kendall, of 923 Old Manoa rd., Penfield, an experimental officer at the Lakehurst (N. J.) Naval Air Station, died Friday at Philadelphia Naval Hospital. He was 45. Captain Kendall commanded a division of destroyers in the Pacific during the Second World War and was graduated in 1928 from the U. S. Naval Academy. He is survived by his wife, Mrs. Boudinot Oberge Kendall, formerly of Haverford; two sons, Charles, Jr., 10, and John, 6. Funeral services will be held tomorrow at the Lakehurst station.

The Courier-Journal from Louisville, Kentucky had a little extra to add in its 28th August 1949 issue (p.20):

Naval Officer Dies.

Philadelphia, Aug. 27 (AP) Capt. Charles H. Kendall (U.S.N.), 46, died at Philadelphia Naval Hospital yesterday after a period of hospitalization. His widow, the former Miss Boudinot Oberge of Haverford, Pa., said he had command of a division of destroyers in the Pacific during World War II.

(The same basic story – apparently from Associated Press – also appears in the Asbury Park Press, N.J., 28th August 1949, p.2.) All of which is broadly what I expected (though I must admit that the “period of hospitalization” mentioned is somewhat intriguing).

Ukmyh Kipzy Puern

However, the interesting thing of the day was that while idly looking for Naval Hospitals, I then stumbled upon this image, which I simply had no choice but to share with you:

Ukmyh Kipzy Puern – October 1918

“Ukmyh Kipzy Puern” – what kind of language is that, you may (very reasonably) ask? Well, this was the front cover of the monthly magazine of the U.S. Naval Cable Censor Office, San Francisco, California. Written in Bentley’s Telegraphic Code (you can download the 1921 version here), this telegraphically encodes the real title – “The Monthly Gob”. (Of course it does.) Remember, if a word was more than five letters long, it was charged as two words.

The catalogue notes add “The cartoon, and the face mask drawn in upper right, may reflect countermeasures against the 1918-19 influenza epidemic”. So it seems relatively little has changed in a century or so, hohum. :-/

It turns out that I’m far from the only person to have dived headlong down the Project Helios rabbit-hole. While reading through ballooninghistory.com’s “Who’s Who” pages of balloon-related people (originally compiled by Robert Recks), I found J. Gordon Vaeth’s entry on the V page.

  • s: Officer in the U.S. Navy, LTA Command.
  • l: 1947-50, FAI Balloon Commission; 1947-50, U.S. Representative of the Int’l. League of Aeronauts; 1947, Naval Research Coordinator of Project “Helios” (cluster strato-balloon, 100 science projects); 1948-50, Naval Research Coordinator of Project “Skyhook” (cluster strato-balloon, cosmic ray sampling); 1955-56, Originator of Project FATSO, the first manned airborne Telescopic & Spectroscopic Observatory.
  • l: Author of many articles on sport & scientific ballooning; Author of “200 Miles Up” on atmospheric research by balloon, 1956; Author of “Graf Zeppelin,” 1958.
  • a: 3000 Tennyson St; Washington, DC 20015.
  • r: Correspondence.

Which explains exactly how Vaeth was able to include information on Project Helios in the Epilogue section of “They Sailed The Skies” that I’ve not found anywhere else – it’s because he worked on it, of course.

I then wondered – as historians do – which institution or library J. Gordon Vaeth (b. 1921, d. 2012) left his papers to. And it didn’t take long to find them in the Smithsonian: and, yes, this includes a folder on Project Helios.

Nosing around the Smithsonian Archives quickly led me to Senior Curator David H. DeVorkin’s papers, which (again) include Project Helios:

Project Helios ONR Files – Chronology of project Helios file, photocopied letters, and index cards

I guess I would already have known this if I had patiently waited for my copy of DeVorkin’s book (1989) “Race to the Stratosphere” to land on my doorstep before blogging. *sigh* And, nicely, the Smithsonian has a collection of photographs that did and didn’t make it into Race to the Stratosphere.

Finally, there’s also a Project Skyhook collection there from the ONR (containing Project Helios papers) – and Vera Simons’ papers also mention Project Helios (though the timing may possibly be slightly off).

So, whereas I thought last night that I might have hit a brick wall in this research thread, today it seems that I instead have the shoulders of several giants to clamber onto. Which is nice.

All the same, it’s looking very much as though I’m going to have to physically go to the Smithsonian to read up on all this. But maybe I should ask David DeVorkin if there’s anything big I’ve missed here…