I’ve just watched the National Geographic / Naked Science documentary on the Voynich Manuscript, courtesy of a Stateside friend (thanks!). Regular Cipher Mysteries readers will already know how my review of it is supposed to go – ‘despite a few inaccuracies, it was great to see the Voynich Manuscript being brought to a popular audience’.

But actually, the whole thing made me utterly furious: it was like watching yourself being airbrushed out of a family photograph. Let me get this straight: I researched the history like crazy, reasoned my way to the mid-15th century, stuck my neck out by writing the first properly new book on the Voynich for 30 years, talked with the documentary producers, sent lists of Voynich details for them to look at, got asked to fly out to Austria (though they later withdrew that at the last minute without explanation), kept confidences when asked, etc.

And then, once the film-makers got the radiocarbon dating in their hands, my Milan/Venice Averlino/Filarete theory became the last man standing (Voynich theory-wise). So why did it not get even a passing mention, when just before the end they thought to edit in a map of Northern Italy with swallowtail-merloned castles, and the narrator started (apropos of nothing) to wonder what will be found in the archives “between Milan and Venice”? Perhaps I’m just being a bit shallow here, but that did feel particularly shabby on their part.

However pleased I am for Edith Sherwood that her Leonardo-made-the-Voynich-so-he-did nonsense merited both screentime and an angelic child actor pretending to be young Leonardo, the fact remains that it was guff before the radiocarbon dating (and arguably double guff afterwards); much the same goes for all the Dee/Kelly hoax rubbish, which has accreted support more from its longstandingness than from anything approaching evidence.

Perhaps the worst thing is that we’re all now supposed to bow down to the radiocarbon dating and start trawling the archives for candidates in the 1404-1438 timeframe. Yet even Rene Zandbergen himself has supplied the evidence for a pretty convincing terminus post quem: MS Vat Gr 1291 was completely unknown in Italy before being bought by Bartolomeo Malipiero as Bishop of Brescia, and so its stylistics could not sensibly have influenced the Voynich before 1457. In fact, 1465 – when the manuscript was carried from Brescia to Rome and became much better known – might even be a more sensible TPQ. And that’s without the cipher alphabet dating (post-1455 or so) and the parallel hatching dating (post-1440 if Florence, post-1450 if elsewhere in Italy).

And I’ll leave you with another thought: a couple of seconds after hearing the Beinecke’s Paula Zyats say “I don’t see any corrections”, the following image got edited in – a part of the f17r marginalia that looks to my eyes precisely like an emendation.

Voynich Manuscript f17r marginalia

Really, what am I supposed to think? *sigh*

I think that there will always be films based around codes because they give screenwriters such an “easy in”. Just saying “code” conjures up…

  • Dark secrets (e.g. heresy undermining The Church, free energy undermining The Market, occult powers, any old stuff really)
  • Powerful interests (usually multiple conspiracies fighting each other behind the scenes for control of ‘The Secret’)
  • A central McGuffin that is small enough to be concealed, smuggled and fought over in hand-to-hand combat in an implausible (often underground) location
  • Highly motivated central character(s) who, though technically prepared for the challenge, rapidly find themselves out of their depth in every other sense
  • (and so on)

Do you need to know much more than the title to construct the film poster? No wonder film companies like codes so much! I’m immediately reminded of Mercury Rising (which I saw again recently, and sort-of enjoyed), but also 2010’s The 7th Dimension, which is just premiering. Personally, I tend to avoid like the plague any film whose main characters are billed as “hackers” – but then that might be because I’m a computer programmer by trade; if I were a dentist, perhaps I’d have done the same for Marathon Man, who knows? 🙂

Anyhow… an historical cipher-themed film I’m genuinely looking forward to is The Thomas Beale Cipher, a short animated film by Andrew S. Allen that has already premiered and should be out on the international festival circuit during the rest of 2010. Details are (probably deliberately) sketchy right now (for example, the YouTube sampler video for the film has been withdrawn), but there is a Facebook page to whet your appetite a little – I’ll let you know of any screening dates.

Not quite so high on my list of upcoming historical code films to look for is The Ancient Code, to be distributed by Warner Brothers. This has apparently been in production for a couple of years, and gives every impression – from the fairly PR-centric set of materials on the website – of being a set of talking heads talking their heads off about different aspects of holism, surrounded by faux-psychedelic video editing effects. It’s true that the film-makers have assembled a fairly, well, eclectic set of heads to do their talking thang: but it’s hard to see how Nick Pope, Tim Wallace-Murphy, Philip Gardiner, and even Johnny Ball (the former children’s TV presenter who you may recall attracting some criticism in late 2009 for his particularly colourful denial of climate change) amount to a gigantic hill of gravitas re ancient wisdom and codes concealed through the ages. But all the same, I’ll continue to try hard to swim against the tide of unconvincingness the film seems to be sweating from every pore, so wish me luck in that endeavour. 🙂

And finally… clicking onwards from The Ancient Code’s not-so-ancient website leads us neatly to Philip Gardiner’s upcoming The James Bond Code, which struggles valiantly to connect Ian Fleming’s (anti)hero James Bond with alchemy and gnosis (apparently via voodoo and numerology). There’s even a fan music video based on the book, which frankly even I’m struggling to grasp any kind of rationale for. Doubtless there’s some kind of film optioning angle going on here too. Honestly, would anyone apart from a rather fevered PR hack call this “The Thinking Man’s Da Vinci Code”? [Nope, not a hope, sorry.] But all the same, there you have it, so feel free to add it (or not) to your own personal list of, errm, ‘hysterical historical hypotheses’, along with Gavin Menzies’ 1421 & 1434, Edith Sherwood’s “Leonardo da Vinci wrote the Voynich Manuscript” theory, etc.

The Internet is a strange thing, a virtual photographer’s jacket crammed with countless pockets of enthusiasts. For example, you beautiful cipher mysteries fans circulate within one bijou (but nicely-appointed) pocket, while the massed legions of Slashdot fans have a Tardis-style hyperzoom lens pocket all of their own. But… what would happen if these two worlds collided?

A chance to find out came in December 2009, when Edith Sherwood’s The-Voynich-Manuscript-was-made-by-Leonardo-da-Vinci-so-it-was website got picked up by Slashdot. From the 4900 overspill visits Cipher Mysteries got at the time, I estimated that she must have had “(say) 30000 or more” visits. This was probably about right, because in the few days since the same thing happened to Cipher Mysteries last weekend, its visit counter has lurched up by 38,000+. The onslaught started on Saturday night, when at its peak the Cipher Mysteries server was getting a new visitor roughly every second. By late Sunday, however, the story had finally slid off the bottom of the Slashdot front page (which only ever lists the ten most recent news items), at which point the tsunami turned into merely a large river. 🙂

According to the server logs, my Slashdotted Chaocipher page was read in 132 countries (USA 52%, Canada 8%, UK 7.5%, Australia 5.4%, etc), while US Slashdotters were mainly from California, Texas, New York, Washington, followed by another long tail. And OK, I know it’s a biased sample, but it was nice to see Internet Explorer in less than 8% of the browsers. One long-standing stereotype did fall by the wayside, though: there was a relative absence of trolls leaving snarky comments. Might Slashdot be *gasp* growing up? 😉

Actually, the nicest thing about the whole episode for me was that Moshe Rubin’s brother in Florida was unbelievably impressed when he saw Moshe’s name pop up on Slashdot. I know it’s only a small thing, but I’m really pleased for the guy; he deserves credit for his hard work and persistence in bringing the Chaocipher out into the light.

* * * * * * *

Some quick follow-up thoughts on the Chaocipher…

It strikes me that Byrne’s neologism “Chaocipher” was remarkably prescient for 1918, because the whole idea of “chaos theory” – as per Wikipedia, “the behavior of dynamical systems that are highly sensitive to initial conditions”, AKA ‘the butterfly effect’ – had been started not long before by Henri Poincaré. The French mathematician had shown that the classical three-body problem sometimes yielded tricksy outcomes that neither converged (i.e. to a collision) nor diverged (i.e. to increasing distance from each other), but where the three bodies were somehow trapped in a dynamically constrained yet utterly mad-looking (OK, he actually said ‘nonperiodic’) manner. Yet after this promising beginning in the 1880s, the ‘chaos’ concept’s journey onwards was a particularly arduous (and non-obvious) one: even though people noticed the signatures of this odd behaviour in many different contexts, there was no comfortable vocabulary to describe it until well after Benoit Mandelbrot and Edward Lorenz in the 1960s.

And so I find it neatly uncanny that the Chaocipher appropriates the “chaos” word 50 years earlier than it should, while at the same time exactly demonstrating the properties that contemporary mathematicians now ascribe to it (i.e. “deterministic chaos”). As the cipher’s twizzling steps subtly mangle the order of the letters on the two rotors, both the error propagation and the cipher system complexity sharply ramp up over time, in a (quite literally) chaotic way: to my eyes, Byrne’s Chaocipher is no less artful and pleasing than any Mandelbrot set I’ve ever seen. However, because its mechanism was not disclosed until this year (2010), it is perhaps best thought of as part of the secret history of applied chaos: by way of comparison, the earliest paper on “chaotic cryptography” I’ve found was Baptista’s “Cryptography with chaos” in Physics Letters A (1998) [mentioned online here].
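To make the “twizzling” concrete, here is a short Python sketch of the mechanism, following Moshe Rubin’s 2010 description of Byrne’s algorithm. The two starting alphabets are the ones usually quoted for Byrne’s Exhibit 1, but treat them (and the exact permutation offsets) as illustrative rather than authoritative:

```python
def _rotate(alpha, n):
    """Rotate an alphabet string left by n positions."""
    return alpha[n:] + alpha[:n]

def _permute_left(left, ct_letter):
    # Bring the ciphertext letter to the zenith (position 1), extract the
    # letter at position 2, slide positions 3..14 left one place, and
    # reinsert the extracted letter at the nadir (position 14).
    left = _rotate(left, left.index(ct_letter))
    return left[0] + left[2:14] + left[1] + left[14:]

def _permute_right(right, pt_letter):
    # Bring the plaintext letter to the zenith, step one position further,
    # extract the letter at position 3, slide positions 4..14 left one
    # place, and reinsert the extracted letter at the nadir.
    right = _rotate(right, right.index(pt_letter) + 1)
    return right[:2] + right[3:14] + right[2] + right[14:]

def chaocipher(text, left, right, decipher=False):
    """Encipher (or decipher) text; both alphabets mutate as we go."""
    out = []
    for ch in text:
        if decipher:
            pt, ct = right[left.index(ch)], ch
        else:
            pt, ct = ch, left[right.index(ch)]
        out.append(pt if decipher else ct)
        left = _permute_left(left, ct)    # state now depends on every prior letter
        right = _permute_right(right, pt)
    return "".join(out)

LEFT  = "HXUCZVAMDSLKPEFJRIGTWOBNYQ"  # ciphertext disk
RIGHT = "PTLNBQDEOYSFAVZKGJRIHWXUMC"  # plaintext disk

pt = "WELLDONEISBETTERTHANWELLSAID"
ct = chaocipher(pt, LEFT, RIGHT)
assert chaocipher(ct, LEFT, RIGHT, decipher=True) == pt  # round-trips cleanly
```

Because each step’s permutation depends on the letters just processed, a one-letter change early in the plaintext scrambles everything downstream – which is precisely the sensitivity to initial conditions described above.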

So, it might be that as the full story behind the Chaocipher emerges from Byrne’s papers, we’ll discover that he cleverly applied Poincaré’s and Hadamard’s ideas to cryptography: but – between you and me – I somehow doubt that this is what really happened. To my mind, there’s something both ham-fistedly mathematical and deviously mechanical about the Chaocipher that makes its mongrelly combination of Alberti’s cipher wheel and movable circular type something that could (in principle, at least) have been devised any time since about 1465. All the same, I think that the single aspect of the Chaocipher that most makes it resemble an out-of-place artifact is that it is a pure algorithm made solid – a bit like a programming hack devised by someone who had never seen a computer. Perhaps programming is closer to carpentry than we think!

Without doubt, the Chaocipher lies just outside the rigid mathematical confines of the cipher development path laid down by the sequence of cryptographers since Alberti: and so for me, the most inspiring lesson to be learned from it is that genius need take only a single step sideways to become utterly unrecognizable to the mainstream. Thinking again about the Voynich Manuscript’s cipher, might that too merely stand a single conceptual step beyond our tightly-blinkered mental range? Furthermore, might that also ultimately turn out to be part of the same secret history of applied chaos? It’s certainly an interesting thought…

Edith Sherwood very kindly left an interesting comment on my “Voynich Manuscript – the State of Play” post, which I thought was far too good to leave dangling in a mere margin. She wrote:-

If you read the 14C dating of the Vinland Map by the U of Arizona, you will find that they calculate the SD of individual results from the scatter of separate runs about their average, or from the counting statistical error, whichever was larger. They report their average ‘fraction modern’ F value together with a SD for each measurement:

  • 0.9588 ± 0.014
  • 0.9507 ± 0.0035
  • 0.9353 ± 0.006
  • 0.9412 ± 0.003
  • 0.9310 ± 0.008

F (weighted average) = 0.9434 ± 0.0033, or a 2SD range of 0.9368 – 0.9500

Radiocarbon age = 467 ± 27 BP.

You will note that 4 of the 5 F values that were used to compute the mean, from which the final age of the parchment was calculated, lie outside this 2SD range!

The U of A states: The error is a standard deviation deduced from the scatter of the five individual measurements from their mean value.

According to the Wikipedia radiocarbon article:
‘Radiocarbon dating laboratories generally report an uncertainty for each date. Traditionally this included only the statistical counting uncertainty. However, some laboratories supplied an “error multiplier” that could be multiplied by the uncertainty to account for other sources of error in the measuring process.’

The U of A quotes this Wikipedia article on their web site.

It appears that the U of Arizona used only the statistical counting error to compute the SD for the Vinland Map. They may have treated their measurements on the Voynich Manuscript the same way. As their SD represents only their counting error and not the overall error associated with the totality of the data, a realistic SD could be substantially larger.

An SD for the Vinland Map that is a reasonable fit to all their data is:

F (weighted average) = 0.9434 ± 0.011 (the SD computed from the 5 F values).

Or a radiocarbon age = 467 ± 90 BP instead of 467 ± 27 BP.

I appreciate that the U of A adjust their errors in processing the samples from their 13C/12C measurements, but this approach does not appear to be adequate. It would be nice if they had supplied their results with an “error multiplier”. They are performing a complex series of operations on minute samples that may be easily contaminated.

I suggest that this modified interpretation of the U of A’s results for the Vinland Map be confirmed because a similar analysis for the Voynich Manuscript might yield a SD significantly larger than they quote. I would also suggest that your bloggers read the results obtained for 14C dating by the U of A for samples of parchment of known age from Florence. These results are given at the very end of their article, after the references. You and your bloggers should have something concrete to discuss.
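Before responding, it’s worth checking the arithmetic directly. Here’s a minimal sketch, using only the five F values and per-run errors quoted above, standard inverse-variance weighting, and the usual Libby-age conversion t = −8033·ln(F):

```python
import math

# The five "fraction modern" F values and per-run errors quoted above.
F = [0.9588, 0.9507, 0.9353, 0.9412, 0.9310]
s = [0.014, 0.0035, 0.006, 0.003, 0.008]

# Inverse-variance weighted mean: the standard way of pooling such runs.
w = [1.0 / si ** 2 for si in s]
f_mean = sum(wi * fi for wi, fi in zip(w, F)) / sum(w)   # ~0.9434, as reported

# Counting-statistics error of that pooled mean (quoted errors only).
internal_sd = (1.0 / sum(w)) ** 0.5                      # ~0.0020

# Scatter of the five runs about their simple mean: Edith's ±0.011.
m = sum(F) / len(F)
scatter_sd = math.sqrt(sum((fi - m) ** 2 for fi in F) / (len(F) - 1))

# Libby radiocarbon age t = -8033 * ln(F), in years BP.
age = -8033 * math.log(f_mean)                           # ~468 BP, vs the reported 467 ± 27

print(round(f_mean, 4), round(internal_sd, 4), round(scatter_sd, 3), round(age))
```

The quoted ±0.0033 sits between the counting-only figure (~0.0020) and the scatter-based one (~0.011), which is exactly the gap Edith is pointing at.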

So… what do I think?

The reason that this is provocative is that if Edith’s statistical reasoning is right, then there would be a substantial widening of the date range – far more (because of the turbulence in the calibration curve’s coverage of the late fifteenth and sixteenth centuries) than merely the (90/27) ≈ 3.33x widening suggested by the raw numbers.

All the same, I’d say that what the U of A researchers did with the Vinland Map wasn’t so much statistical sampling (for which the errors would indeed accumulate, if not actually multiply) as cross-procedural calibration – by which I mean they experimentally tried out different treatment/processing regimes on what was essentially the same sample. That is, they seem to have been using the test not only as a means to date the Vinland Map but also as an opportunity to validate that their own brand of processing and radiocarbon dating could ever be a pragmatically useful means to date similar objects.

However, pretty much as Edith points out with their calibrating-the-calibration appendix, the central problem with relying solely on radiocarbon results to date any one-off object remains: it is subject to contamination or systematic uncertainties which may (as in Table 2’s sample #4) move it far out of the proposed date ranges, even when it falls (as the Vinland Map and the VMs apparently do) in one of the less wiggly ranges on the calibration curve. Had the Vinland Map actually been made 50 years later, it would have been a particularly problematic poster (session) child: luckily for them, though, the pin landed in a spot not too far from the date suggested by the history.

By comparison, the Voynich Manuscript presents a quite different sampling challenge. Its four samples were taken from a document which (a) was probably written in several phases over a period of time (as implied by the subtle evolution in the handwriting and cipher system), and (b) subsequently had its bifolios reordered, whether deliberately by the author (as Glen Claston believes) or by someone unable to make sense of it (as I believe). This provides an historical superstructure within which the statistical reasoning would need to be performed: even though Rene Zandbergen tends to disagree with me over this, my position is that unless you have demonstrably special sampling circumstances, the statistical reasoning involved in radiocarbon dating is not independent of the historical reasoning… the two logical structures interact. I’m a logician by training (many years ago), so I try to stay alert to the limits of any given logical system – and I think dating the VMs sits astride that fuzzy edge.

For the Vinland Map, I suspect that the real answer lies in between the two: that while 467 ± 27 BP may well be slightly too optimistic (relative to the amount of experience the U of A had with this kind of test at that time), 467 ± 90 BP is probably far too pessimistic – they used multiple processes specifically to try to reduce the overall error, not to increase it. For the Voynich Manuscript, though, I really can’t say: a lot of radiocarbon has flowed under their bridge since the Vinland Map test was carried out, so the U of A’s processual expertise has doubtless increased significantly – yet I suspect it isn’t as straightforward a sampling problem as some might think. We shall see (hopefully soon!)… =:-o

It’s been an interesting day: Edith Sherwood’s Voynich website got Slashdotted – given that Cipher Mysteries picked up 4900 visitors from that tsunami of geeky clicks, edithsherwood.com itself must have had (say) 30000 or more.

And then (just now), ORF released a teaser press release for next week’s “DAS VOYNICH-RÄTSEL” documentary to their (German-language) website. So, the real big news of the day is that the Austrian film-makers are certain that the VMS is even older than previously thought (though they don’t say by how much, or in comparison to which theory). The page says that they also took a number of ink and paint samples for analysis, and examined a number of key sections under UV light for erasures / emendations (all of which is good, exactly the kind of thing I hoped they’d do).

And here’s a site which is even more specific as to the date range and place revealed by the documentary: that it was made between 1404 and 1438 (in the “flat” part of the radiocarbon dating curve, hence the tight range), and in Northern Italy (probably or certainly?). Prepare yourself for the massed onslaught of Voynich doubters to disagree…

So, might the VMs actually turn out to be by Cicco Simonetta in his early days in the Sforza roving Chancellery? Marcello Simonetta would be pleased, but it’s still early days (I thought I’d be the first to mention it)…

PS: is it just me, or can anyone else still hear Steve Ekwall saying “It’S oLdEr ThAn YoU tHiNk“?

UPDATE: see the follow-up post “Voynich Manuscript – the state of play” for more on the Austrian documentary

Here’s a quick Voynich Manuscript palaeographic puzzle for you. A couple of months ago, I discussed Edith Sherwood’s suggestion that the third letter in the piece of marginalia on f116v was a Florentine “x”, as per Leonardo da Vinci’s quasi-shorthand. I also proposed that the topmost line there might have read “por le bon simon s…”

Going over this again just now, I did a bit of cut-and-paste-and-contrast-enhance in a graphics editor to see if I could read the next few letters:-


OK, I’m still reasonably happy with “por le bon simon s…“, but what then? Right now, I suspect that this last word begins “sint…” (and is possibly “sintpeter“?) – could it be that this is the surname of the intended recipient? Of course, in the Bible, St Peter’s name was originally Simon, so “simon sintpeter” may or may not be particularly informative – but it could be a start, all the same.

But then again, the “n” and/or “t” of the “sint” could equally well have been emended by a well-meaning later owner: and the last few letters could be read as “ifer“, depending on whether or not the mark above the word is in the same ink. Where are those multispectral scans when you need them? Bah!

Feel free to add your own alternate readings below! 🙂

A new day dawns, bringing with it a nice email from Augusto Buonafalce in response to my post on Leonardo da Vinci’s ‘x’-like abbreviation for ‘ver’ (as recently mentioned by Edith Sherwood).


Augusto points out that if you remove the plain diagonal line in the reflected version, what remains appears to be similar to a ‘b’… but isn’t. In the 15th century “mercantesca” script, this particular ‘b’-like shape was used to denote ‘v’: Leon Battista Alberti suggested (in his Tuscan grammar) that this shape should be more widely employed to help tell ‘v’ apart from ‘u’. Specifically, Alberti’s ‘b’-like shape looked like this:-


Now, mercantesca hasn’t really been discussed in the context of the Voynich Manuscript before (Google returns no useful hits, while even the old VMs mailing list archives appear to be silent), which is something of a shame – for if arch-Florentines such as Leonardo, Alberti, and even Michelangelo used it, mercantesca must surely have been as close to the beating heart of the Quattrocento project as the much-touted (but very different) ‘humanist hand’.

(The ‘humanist hand’, you may recall, is an upright, formal script that was a conscious revival of an earlier script – which is why dating the Voynich Manuscript based on supposed similarities with the humanist hand alone is so contentious.)

While the formal humanist hand was used mainly for writing in Latin, the informal mercantesca (which flourished from 1350 to 1550, peaking around 1450-1500) was used mainly for writing in the vulgar tongue: when written well, it is sometimes called ‘bella mercantesca’.

There’s a reasonable literature on this which a Voynich researcher with palaeographic leanings ought to have at least a look through:-

  • Orlandelli, G. ‘Osservazioni sulla scrittura mercantesca nei secoli XIV e XV’, in Studi in onore di Riccardo Filangieri (Naples 1959) I, pp.445-460
  • Irene Ceccherini (Firenze): La Genesi della Scrittura Mercantesca. (summary of 2005 poster session)
  • Albert Derolez, The palaeography of Gothic manuscript books (2003)

Having said that, it is perhaps the 45 volumes of the CMD (the Catalogue of Dated Manuscripts) produced over the last 50 years that need checking here, particularly the CMDIt (the Italian section). A proper palaeography research challenge is something I’ve been meaning to post about for a while: but that’s definitely a job for another day…

Stuff to be thinking about! 🙂

A new day brings a new Google Adwords campaign from Edith Sherwood (Edith, please just email me instead, it’ll get the word out far quicker), though this time not promoting another angle on her Leonardo-made-the-Voynich-Manuscript hypothesis… but rather a transposition cipher Voynichese hypothesis. Specifically, she proposes that the Voynich Manuscript may well be Italian written in a simple (i.e. ‘monoalphabetic’) substitution cipher, but also anagrammed to make it difficult to read.

Anagram ciphers have a long (though usually fairly marginal) history: Roger Bacon is widely believed to have used one to hide the recipe for gunpowder (here’s a 2002 post I made on it), though it’s not quite as clear an example as is sometimes claimed. And if you scale that up by a factor of 100, you get the arbitrary horrors of William Romaine Newbold’s anagrammed Voynich ‘decipherment’ *shudder*.
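Just how arbitrary anagram ‘decipherments’ are can be shown in a few lines of Python – a toy sketch (the word list is my own, purely for illustration): every word in an anagram class decodes from the same jumbled ciphertext, so the claimed plaintext is largely the decipherer’s choice.

```python
from collections import defaultdict
from math import factorial

# Group a tiny illustrative word list by its sorted-letter key: words
# sharing a key are anagrams of one another.
words = ["stop", "pots", "tops", "spot", "post", "opts"]
groups = defaultdict(list)
for w in words:
    groups["".join(sorted(w))].append(w)

# All six words collapse to the one key "opst", so an anagrammed
# ciphertext of any one of them "deciphers" equally well to all six.
assert len(groups) == 1
assert groups["opst"] == ["stop", "pots", "tops", "spot", "post", "opts"]

# And the search space explodes with word length: a 7-letter word has
# 7! = 5040 possible orderings to choose among.
assert factorial(7) == 5040
```

Scale that freedom up to a whole manuscript and you get Newbold: with enough letters to shuffle, you can ‘read out’ almost anything you like.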

More recently, Philip Neal has wondered whether there might be some kind of letter-sorting anagram cipher at play in the VMs: but acknowledges that this suggestion does suffer from various practical problems. I also pointed out in my book that Leonardo da Vinci and Antonio Averlino (‘Filarete’) both used syllable transposition ciphers, and that in 1467 Alberti mentioned other (now lost) kinds of transposition ciphers: a recent post here discussed the history of transposition ciphers in a little more detail.

So: let’s now look at what Edith Sherwood proposes (which is, at least, a type of cryptography consistent with the VMs’ mid-Quattrocento art history dating, unlike many of the more exotic ciphering systems that have been put forward in the past), and see how far we get…

Though her starting point was the EVA letter assignments (with a few Currier glyphs thrown in), she then finessed the letter-choices slightly to fit in with the pharma plant label examples she picked: and there you have it (apart from H, J, K, Q, X, Y, Z and possibly F, which are all missing). All you’d have to do, then, is to anagram the rest of the text for yourself, sell the book rights, and retire to a sea-breezy Caribbean island.


Might Edith Sherwood be onto something with all this? No, not a hope: for example, the letter instance distribution is just plain wrong for Italian, never mind the eight or so missing letters. As with Brumbaugh’s wobbly label-driven decipherment attempts, I somehow doubt you would ever find two plausible adjacent words in the main body of the text. Also: what would a sensible Italian anagram of “qoteedy” (“volteebg”) be?

Her plants are also a little wobbly: soy beans, for example, were only introduced into Europe in the eighteenth century… “galioss” is a bit of a loose fit for galiopsi (not “galiospi”, according to “The Botanical Garden of Padua” on my bookshelf), etc.

As an aside, I rather doubt that she has managed to crack the top line of f116v: “povere leter rimon mist(e) ispero”, “Plain letter reassemble mixed inspire” (in rather crinkly Italian).

All the same, it is a positive step forward, insofar as it indicates that people are now starting to think in terms of Quattrocento dating and the likely presence of non-substitution-cipher mechanisms, both of which are key first steps without which you’ll very probably get nowhere.

Edith Sherwood recently flagged the “sun-face” at the middle of f68v1 as being a representation of Apollo, and that this “could indicate an association with Roman mythology“. Certainly, the face is tilted slightly upward and is linked with the sun, both features you might (naively, iconologically) expect to point to Apollo. If only Voynich research were that simple! Let’s start by taking a look at the sun-face in context, in particular the paints…


Here, the red-coloured contact transfer (from f69r) at the bottom left clearly happened after the pages were rebound in the wrong order [f68v1’s “sun-face” initially sat beside f67r1’s “moon-face”], bringing to my mind the bloodstain imagined on the Sarajevo Haggadah by Geraldine Brooks in her novel “People of the Book” (which I’ll review here shortly). There are also “blue-edge” paint transfers (also from f69r) at 11.30, 12.00, 3.00, and 3.30, as well as some contact-transferred green “pipe-ends” at 10.30, 11, and 1 o’clock.

Given that the dirty black-blue paint on f68v1 appears to be identical to the one used on f69r, it seems extremely likely to me that the blue and green paints on both pages were later additions, whereas f68v1’s far paler yellow paint (which is covered over by the blue in a number of places) gives the distinct impression of being original. The ‘alpha’ (i.e. original) state of the page was therefore very likely to be just the drawings and the yellow paint only. If you snip away all the distracting blue paint in a picture editor, you’d get something like this:-


With all the distracting blue paint removed, we can start to see more clearly what was being drawn. For instance, we can see the lines marking the front and back of the neck: and once we see those, we can see the wobbly line marking the back of the head (inside the circle). However, this appears to me to go over the dotted “headband” – and so the headband was apparently drawn first.

There is also a curious small loop where the head’s left ear would be, partially disguised by the rays, which I find reminiscent of the kind of stubby metal loops you see on astrolabes.

I therefore argue that this codicological evidence suggests that the alpha state of the image was probably a circle with a dotted arc that has been made to look as though it is a headband (when a face was added) – and so I would say that any resemblance to Apollo is very probably incidental to the real meaning of the page.

Dotted lines seem to have a particular resonance for the VMs’ author in several other places, and I have long suggested that these might very well indicate that meaningful information has been visually encoded. My guess here is that this was the briefest of sketches to allude to some kind of 15th century solar instrument – not an astrolabe, but something broadly similar.

To me, all this exemplifies the problem with looking for iconographic matches on the VMs’ sleek surface: in most cases, the basic codicological study (that ought to precede any searching for meaning) seems not to have been done – far too often, people skip to the chase without really looking at the page first.

Oh well! 😮

Edith Sherwood, everyone’s favourite Leonardo-wrote-the-Voynich-so-he-did theorist, has posted up an extensive (and fascinating) new article focusing mainly on the depictions of the sun, moon and stars in the Voynich Manuscript: the starting point of her journey is the striking similarity between suns and moons in the VMs’ “astronomical” Quire 9 and a sun/moon pair on a particular Afro-Portuguese ivory horn (#101) carved between 1495 and 1521. Essentially, the question she tries to tackle is: what on earth connects these two very disparate objects?

Afro-Portuguese Horn #101 (from Edith Sherwood’s site)

Unsurprisingly, she starts by linking the sun with the Visconti raza symbol (as per p.61 of my “The Curse of the Voynich”): but, even better, continues by connecting the sun/moon pair to two copies of Dante’s Commedia, as posted up by long-time Tarot researcher Robert V. O’Neill in Chapter 14 of his online article “Dante’s Commedia and the Tarot”.  O’Neill suggests connections between the Commedia manuscript illustrations (Sherwood describes these as 14th century “woodcuts”, probably a typo) and the designs found on early Tarot cards, in particular his Figure 37 (“late 14th century”) and Figure 39 (“mid 14th century”), though unfortunately he doesn’t give MS references for them. To all of which I would also add the probable connection between the circular arrays of VMs zodiac nymphs and Dante’s description of concentric rows of angels in Heaven (as per pp.36-37 of “The Curse”).

At first glance, Sherwood’s proposed iconographic connection between the Visconti-Sforza Tarot sun/moon, the carved ivory sun/moon, and the VMs sun/moon (essentially, though the carved ivory and the VMs were unlikely to be directly connected, they both had the Visconti-Sforza Tarot as a shared ancestor) seems perfectly reasonable. In fact, it almost amounts to an excellent example of the kind of “Voynich Research 2.0” 14th-century-centred art history I blogged about recently.


The problem with this is that it presupposes a circa 1500 (basically, Leonardo-friendly) date for the VMs, without noting that there is an alternative (and, given the 15th century quire numbers, I would say more likely) diffusion sequence that doesn’t rely on the Tarot at all. Remember, the similarities noted were between the VMs and the Commedia illustrations, not the Visconti-Sforza Tarot per se:-


In her article, Edith Sherwood also makes a number of other fascinating observations and comparisons (to do with Apollo, with the water nymphs, and with the parallel hatching) which I’d really like to blog about in more detail, but quite frankly those will have to wait for another day.

Finally, Leonardo was anything but a child when he reached Milan in 1481 (when Sherwood suggests he probably first saw the Tarot), so her parallel claim that Leonardo can only have made the VMs as a (brilliant) child doesn’t really seem to stack up with her proposed Tarot connection anyway.

If you look at the VMs with truly open art historical eyes (as Sherwood set out to do), I think you will almost inevitably reach a certain position: it’s mid-Quattrocento Northern Italian, with its cryptographic roots in Milan, its intellectual roots in Florence, its stylistic roots in Venice, and its philosophical roots in Dante. Oh, and it was written by a secrets-obsessed right-hander with a far greater command of cryptography than Leonardo da Vinci ever had (Chapter 6 of “The Curse” has a detailed critique of Leonardo’s limited cryptography).

PS: I found Sherwood’s article through Google Adwords “Voynich written by a lefty?“: but if you want me to look at your Voynich site, please just email a link to me, it’s much cheaper (and quicker). 🙂