The Cipher Mysteries stats to date: 568 posts – 16 static pages – 100,155 visits – roughly 190,000 page loads – 1,775 comments – 115 subscribers. Thank you all very much for your continued interest, support, comments (both appreciative and snarky), tweets, and off-line posts and notes (always interesting). Just so you know, most referrals continue to come from Google with only one Slashdot-style traffic superspike (2nd-3rd December 2009, 5,000 visits in one day), while the Akismet anti-spam plugin has caught 16,151 spam messages with only 9 false positives (mainly comments from hotmail.com).

It’s also nice to see more cipher mystery bloggers popping up: a big tip of the hat to Elmar Vogt, Elias Schwerdtfeger, Julian Bunn, Diane O’Donovan, Rich SantaColoma, Moshe Rubin, etc (in no particular order). Good luck to all of them (blogging is surprisingly hard work).

As for Cipher Mysteries itself, there’s plenty I’d like to fix: the front page needs a makeover to make it more useful for first-time visitors; the web hosting performance has slowly worsened (but moving such a large database to a new host is proving time-consuming and difficult); and I have a slow-burn plan to get the visit count to 1,000,000 and beyond. But I’m busier than ever in my real-world work, so all in good time, eh?

I have several weeks’ worth of draft posts to finish, and am still working away on an entirely new historical cipher mystery that I suspect will ultimately be more surprising and revealing than the Voynich Manuscript: but I’ll leave that as a nice surprise for another day. Thank you all again! 🙂

Self-professed Voynich skeptic Elmar Vogt has been fairly quiet of late: it turns out that he has been preparing, on his “Voynich Thoughts” website, a substantial analysis of the Voynich Manuscript’s teasingly hard-to-read marginalia (with Elias Schwerdtfeger’s notes on the zodiac marginalia appended). Given that Voynich marginalia are pretty much my specialist subject, the question I’m sure you want answered is: how did the boy Vogt do?

Well… it’s immediately clear he’s thorough, insofar as he stepped sequentially through all the word-like groups of letters in the major Voynich marginalia to try to work out what each letter could feasibly be; and from that built up a kind of Brumbaugh-like matrix of combinatorial possibilities for each one, which readers can shuffle through in search of sensible-looking readings. However, it also has to be said that for all of this careful (and obviously prolonged) effort, he managed to get… precisely nowhere at all.

You see, we’ve endured nearly a century’s worth of careful, rational people looking at these few lines of text and being unable to read them, from Newbold’s “michiton oladabas“, through Marcin Ciura’s mirrored “sa b’adalo No Tich’im“, and all the way to my own [top line] “por le bon simon sint…“. Worse still, nobody has even been able to convincingly argue the case for what the author(s) was/were trying to achieve with these confused-looking marginalia, which can easily be read as containing fragments of French, Occitan, German, Latin, Voynichese (and indeed of pretty much any other language you can think of).

And the explanation for this? Well, we Voynich researchers simply love explanations… which is why we have so many of them to choose from (even if none of them stands up to close scrutiny):-

  1. Pen trials?
  2. A joke (oh, and by the way, the joke is on us)?
  3. A hoax?
  4. A cipher key?
  5. Enciphered text?
  6. Some kind of vaguely polyglot text in an otherwise unknown language?

How can we escape this analysis paralysis? Where are those pesky intellectual historians when you actually need them?

I suspect that what is at play here is an implicit palaeographic fallacy: specifically, the long-standing (but false) notion that palaeographers try to read individual words (when actually they don’t). Individual word and letter instances suffer from accidents, smudges, blurs, deletions, transfers, rubbing off, corrections, and emendations: however, a person’s hand (the way that they construct letters) is surprisingly constant, and can normally be located within a reasonably well-defined space of historic hands – Gothic, semi-Gothic, hybrida, mercantesca, Humanist, etc. Hence, the real problem here is arguably that this palaeographic starting point has never been properly determined.

Hence, I would say that looking at individual words is arguably the last thing you should be doing: instead, you should be trying to understand (a) how individual letters are formed, and (b) which particular letter instances are most reliable. From there, you should try to categorise the hand, which should additionally give you some clue as to where it is from and what language it is: and only then should you pass the challenge off from palaeography to historical linguistics (i.e. try to read it). And so I would say that attempting to read the marginalia without first understanding the marginalia hand is like trying to do a triple-jump but omitting both the hop and the skip parts, i.e. you’ll fall well short of where you want to get to.

So let’s buck a hundred years’ worth of trend and try instead to do this properly: let’s simply concentrate on the letter ‘a’ and see where it takes us.

f116v-letter-a

To my eyes, I think that a[5], a[6], and a[7] show no obvious signs of emendation and are also consistently formed as if by the same hand. Furthermore, it seems to me that these are each formed from two continuous strokes, both starting from the middle of the top arch of the ‘a’. That is, the writer first executes a heavy c-like down-and-around curved stroke (below, red), lifts up the pen, places it back on the starting point, and then writes a ‘Z’-like up-down-up zigzaggy stroke (below, blue) to complete the whole ‘a’ shape. You can see from the thickness and shape of the blue stroke that the writer is right-handed: while you can see from the weight discontinuity and slight pooling of ink in the middle of the top line exactly where the two strokes join up. I think this gives us a reasonable basis for believing what the writer’s core stroke technique is (and, just as importantly, what it probably isn’t).

f116v-letter-a-analysis

What this tells us (I think) is that we should be a little uncertain about a[4] (which doesn’t have an obviously well-formed “pointy head”) and very uncertain about a[1], a[2], and a[3] (none of which really rings true).

My take on all this is that I think a well-meaning VMs owner tried hard to read the (by then very faded) marginalia, but probably did not know the language it was written in, leaving the page in a worse mess than it was before they started. Specifically: though “maria” shouts original to me, “oladaba8” shouts emendation just as strongly. Moreover, the former also looks to my eyes like “iron gall ink + quill”, while the latter looks like “carbon ink + metal nib”.

Refining this just a little, I’d also point out that if you look at the two ascender loops in “oladaba8”, I would argue that the first (‘l’) loop is probably original, while the second (‘b’) loop is structured quite wrongly, and is therefore probably an emendation. And that’s within the same word!

The corollary is simply that I think it highly likely that no amount of careful reading would untie this pervasively tangled skein if taken at face value: and hence that, for all his persistence and careful application of logic, Elmar has fallen victim to the oldest intellectual trap in the book – of pointing his powerful critical apparatus in quite the wrong direction. Sorry, Elmar my old mate, but you’ve got to be dead careful with these ancient curses, really you have. 🙂

As promised (though a little later than planned), here’s the transcript of the second IM session I ran at the 2009 Voynich Summer Camp in Budapest. Not quite as meaty as the first IM session, but some OK stuff in there all the same. Enjoy!

[11:56:09] NP: Okeydokey, ready when you are
[11:56:18] vc: Okedykokedy
[11:56:27] NP: 🙂
[11:56:35] vc: We are.
[11:56:35] NP: I think that’s on f113r
[11:56:40] vc: …
[11:56:45] NP: 🙂
[11:56:55] NP: So… how has it all gone?
[11:57:12] NP: Tell me what you now think about the VMs that you didn’t before?
[11:57:27] vc: It should be simple.
[11:57:36] vc: The solution should be simple.
[11:57:41] NP: but…
[11:58:07] vc: But …
[11:58:33] vc: The verbose cipher still permits us a lot of possibilities.
[11:58:52] NP: Verbose cipher only gets you halfway there
[11:59:03] NP: But that’s still halfway more than anything else
[11:59:28] vc: We could synthesize a coding which is capable to produce the same statistical properties as the MS
[11:59:48] NP: Yup, that was (basically) Gordon Rugg’s 2004 paper
[11:59:58] vc: simple enough to do manually of course
[12:00:31] NP: The problem is one of duplicating all the local structural rules
[12:00:40] vc: Gordon’s generating gibberish by encoding gibberish
[12:01:06] NP: Basically
[12:01:25] vc: Yes, we suspect that the text contains real information in a natural language.
[12:01:30] vc: We tried this.
[12:02:06] NP: Rugg’s work requires a clever (pseudo-random) daemon to drive his grille thing… but he never specified how someone 500 years ago could generate random numbers (or even conceive of them)
[12:02:07] vc: We tried to encode for example the vulgata with our method
[12:02:10] NP: ok
[12:02:23] NP: into A or B?
[12:02:24] vc: throw dices I guess?
[12:02:26] vc: lol
[12:02:37] NP: only gives you 1-6 random
[12:02:48] vc: 3 dices
[12:02:52] vc: ect
[12:02:52] NP: two dice give you a probability curve
[12:02:56] NP: not flat
[12:03:02] vc: hmm
[12:03:06] vc: roulette wheel
[12:03:11] NP: Anachronistic
[12:03:19] vc: Ok. We use no random.
[12:03:23] NP: 🙂
[12:03:25] vc: our encoder is deterministic
[12:03:33] NP: Good!
[12:03:35] vc: that’s the point
[12:04:28] vc: We suspect that the “user” added some randomness in some of the aspects of the encoding, but this is not overwhelming
[12:04:49] NP: That’s right
[12:05:21] vc: We also picked out the A and B languages
[12:05:23] NP: Though some aspects (like space insertion into ororor-type strings) were more tactical and visual than random
[12:05:27] NP: Good!
[12:05:33] vc: with different methods
[12:05:52] vc: so we basically verified a lot of past results
[12:06:17] NP: Do you have a synthetic A paragraph you can cut and paste here?
[12:06:17] vc: After that, we decided to concentrate on the first 20 pages
[12:06:22] NP: Good!
[12:07:17] vc: for example, A languages uses ey or y at the end of the words, while B language uses edy instead
[12:07:51] vc: Synthetic sample… ok, just a minute
[12:08:29] NP: ey/y vs edy – Mark Perakh pointed this out too, and suggested that it meant B was less contracted than A. It also forms the core of Elias Schwerdtfeger’s “Biological Paradox”
[12:09:25] vc: Our results are largely independent – the guys didn’t know the past results
[12:09:54] NP: That’s ok. 🙂
[12:10:41] vc: nu stom huhoicpeey strifihuicom ristngngpeet pept suhors periet pescet sticpescom ichoey pt om icpeript
[12:11:17] NP: I hope that’s not EVA
[12:11:41] vc: Y, of course not
[12:12:08] vc: not close, but the whole thing started here when some of us tried out a method which produced some non-trivial statistics very similar to VMS
[12:12:43] NP: I’m certainly getting a partially-verbose vibe off this
[12:12:52] vc: the original:
[12:13:17] vc: haec sunt verba que locutus est
[12:13:18] vc: Moses
[12:13:40] NP: Ummm… that’s pretty verbose, then. 🙂
[12:14:04] vc: Again, a deterministic, static automaton.
[12:14:15] NP: Fair enough!
[12:15:09] NP: Sorry for asking a lecturer-style question, 🙂 but how has doing that affected how you look at Voynichese?
[12:16:03] vc: Sec
[12:16:49] vc: discussing 🙂
[12:17:38] vc: it’s a coded natural language text. We suspect that the language is Italian – from measured results.
[12:18:00] vc: That’s why we are very curious about your news!
[12:18:21] NP: Let’s finish your news first!
[12:18:38] vc: ok. Was that an answer for your question?
[12:19:02] NP: Pretty much – would you like to write it up informally to publish on the blog?
[12:19:55] NP: 1000 words should cover it 🙂
[12:21:18] NP: (you don’t need to write it now!)
[12:21:25] vc: We admit that we would like to work on our theory and method a bit before publishing it, because one of the important statistical feature doesn’t match
[12:21:31] vc: yet
[12:21:35] NP: 🙂
[12:21:52] NP: ok
[12:22:06] NP: that’s good
[12:22:23] NP: what else have you been thinking about and discussing during the week?
[12:22:35] NP: VMs-wise, that is 🙂
[12:22:42] vc: 🙂
[12:22:54] vc: haha, you got the point…
[12:23:02] NP: 🙂
[12:23:56] vc: We toyed with the idea that the astrological diagrams are so poorly rendered that they aren’t astrological diagrams. They are coder tools.
[12:24:10] NP: cipher wheels?
[12:24:22] vc: Kind of. Yes.
[12:24:35] NP: (that’s been suggested many times, though never with any rigour)
[12:24:36] vc: we also tried to identify some of the star names.
[12:24:47] NP: No chance at all
[12:25:01] NP: That is a cliff with a huge pile of broken ships beneath it
[12:25:21] NP: sadly
[12:25:27] vc: been there, done that, yes
[12:25:30] NP: 🙂
[12:26:22] vc: We also observed that the takeshi transcription becomes less reliable when the text is rotated or tilted.
[12:26:36] vc: The other places – it is quite good.
[12:26:45] NP: Yes, that’s a fair enough comment
[12:27:08] NP: A complete transcription has been done, but it hasn’t been released – very frustrating
[12:27:25] NP: (by the EVMT people, Gabriel Landini mainly)
[12:27:17] vc: Also we are not contented with some of the EVA transcription’s choices of the alphabet
[12:27:34] NP: the “sh” really sucks
[12:27:39] vc: YES
[12:27:45] NP: 🙁
[12:28:53] NP: Glen Claston’s transcription added stuff in, many people use that instead purely for its better “sh” handling
[12:29:26] vc: hmm, ok
[12:29:53] NP: In a lot of ways, though, who’s to say? A single ambiguous letter shouldn’t really be enough to destroy an entire decipherment attack
[12:30:04] NP: given that it’s not a pure polyalpha
[12:30:37] vc: of course
[12:30:54] NP: But analyses still don’t seem to get particularly close
[12:31:03] NP: Oh well
[12:31:23] vc: Analyses of whom
[12:31:24] vc: 🙂
[12:31:25] vc: ?
[12:31:29] vc: 😉
[12:31:35] NP: not yours, of course 😉
[12:32:32] NP: is that your week summarized, then?
[12:32:53] vc: Yes.
[12:33:16] NP: has it been fun? worthwhile? frustrating? dull?
[12:33:32] vc: All of them.
[12:33:34] NP: and would you do another next summer?
[12:33:57] vc: No need of it. Maybe with the rohonc codex
[12:34:00] vc: lol, of course
[12:34:13] NP: 🙂
[12:35:06] NP: I’m really pleased for you all – it sounds like you have managed to get a fairly clearheaded view of the VMs out of the whole process, and have had a bit of fun as well
[12:35:51] NP: Most VMs researchers get very tied up to a particular theory or evidence or way of looking at it – you have to keep a broader perspective to make progress here
[12:35:53] vc: let’s say two bits
[12:36:14] NP: “two bits of fun” 🙂
[12:36:21] NP: good

[I then went into a long digression about the “Antonio of Florence”, about which I’ve already posted far too much to the blog… so –SNIP–]

[12:51:50] vc: ooo wait a sec…
[12:52:16] vc: Can we ask Philip Neal to post some some pages of a reference book he uses?
[12:52:42] vc: sorry about the redundancy
[12:53:02] NP: He’s a medieval Latin scholar by training, what kind of thing would you want?
[12:53:39] vc: about the alchemical herbals. Can we manage it later?
[12:53:45] vc: Please go on
[12:53:51] NP: Well.. that’s about it
[12:54:10] NP: Obviously I typed faster than I thought 🙂

[13:00:11] vc: What do you know? How much people is working on a voynich-deciphering automaton based on markov thingies and such?
[13:00:37] vc: So basically with the same hypotheses like ours?
[13:00:57] NP: The problem with markov models is that they will choke on verbose ciphers, where letters are polyvalent
[13:01:08] NP: Nobody in the literature seems to have picked this up
[13:01:24] vc: bad for them
[13:01:50] NP: Unless you pre-tokenize the stream, Markov model finders will just get very confused
[13:02:03] NP: and give you a linguist-friendly CVCV-style model
[13:02:11] NP: that is cryptographically wrong
[13:03:04] NP: perhaps “multi-functional” rather than “polyvalent”, I’m not sure :O
[13:04:23] NP: So, I’m not convinced that anyone who has applied Markov model-style analysis to the VMs has yet got anywhere
[13:04:29] NP: Which is a shame
[13:05:04] NP: But there you go
[13:05:25] vc: We hope.
[13:05:47] NP: 🙂

[13:06:24] NP: Right – I’ve got to go now (sadly)
[13:06:48] NP: I hope I’ve been a positive influence on your week and not too dogmatic
[13:07:09] vc: Why, of course
[13:07:16] NP: And that I’ve helped steer you in generally positive / constructive directions
[13:07:30] vc: Yes, indeed.
[13:07:35] NP: (Because there are plenty of blind alleys to explore)
[13:07:41] NP: (and to avoid)
[13:07:52] vc: VBI…
[13:07:52] vc: 🙂
[13:08:07] NP: Plenty of that to step in, yes
[13:08:14] NP: 🙂
[13:08:24] NP: And I don’t mean puddles
[13:09:42] vc: Well, thank you again for the ideas and the lots of information 🙂
[13:11:18] vc: Unfortunately semester starts in weeks, so we can’t keep working on this project
[13:12:04] vc: but as soon as we earn some results, we will definitely contact you
[13:12:15] NP: Excellent, looking forward to that
[13:12:54] NP: Well, it was very nice to meet you all – please feel free to subscribe to Cipher Mysteries by email or RSS (it’s free) so you can keep up with all the latest happenings.
[13:13:23] vc: ok 🙂
[13:13:57] NP: Best wishes, and see you all for the Rohonc week next summer 🙂
[13:14:04] NP: !!!!!
[13:14:11] vc: lol 🙂
[13:14:21] vc: that’s right! 😉
[13:15:16] NP: Excellent – gotta fly, ciao!
[13:15:36] vc: Best!
[13:15:37] vc: bye
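
[A quick aside on the Markov point above. Here’s a minimal Python sketch – my own toy illustration, not anything the camp actually wrote – of why pre-tokenization matters when a verbose cipher is in play. The letter-to-glyph-group table below is invented purely for illustration (it is emphatically not a proposed Voynichese key): the point is just that a character-level digraph/Markov model sees pairs that straddle the token boundaries (and so mostly learns the cipher table), whereas tokenizing first lets the statistics reflect the underlying plaintext again.]

from collections import Counter

# Invented verbose-cipher table: each plaintext letter maps to a
# multi-character glyph group (illustrative only, not a real claim).
VERBOSE = {'a': 'qo', 'b': 'ke', 'c': 'dy', 'd': 'ol', 'e': 'ch', 'f': 'ey'}

def encode(plaintext):
    """Encode a plaintext string into a list of verbose tokens."""
    return [VERBOSE[ch] for ch in plaintext if ch in VERBOSE]

def bigrams(seq):
    """Count adjacent pairs in any sequence (characters or tokens)."""
    return Counter(zip(seq, seq[1:]))

plaintext = "abcade"            # toy plaintext
tokens = encode(plaintext)      # ['qo', 'ke', 'dy', 'qo', 'ol', 'ch']
ciphertext = "".join(tokens)    # 'qokedyqoolch'

# A raw character-level model sees boundary-straddling pairs like
# ('o', 'k'), ('y', 'q'), ... and so recovers a CVCV-like surface structure...
print(bigrams(ciphertext))

# ...whereas pre-tokenizing first means the model works on the real cipher
# units, so its statistics can track the underlying plaintext.
print(bigrams(tokens))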

His eyes stinging from all the Google Translate hits popping up on his server logs (what did I tell you?), Elias Schwerdtfeger posted up an English translation of his Voynich Bullshit Index post from a couple of days ago, no doubt cursing me through gritted teeth as he typed. 🙂

By my reckoning, my Averlino theory in “The Curse of the Voynich” gets:

  • 10 points initial float (i.e. for having any kind of theory)
  • 2 points x 8 years wasted
  • 9 points x 2 identified nine-rosette buildings – St Mark’s Basilica in Venice, Castello Sforzesco in Milan
  • 9 points x 2 buildings identified after visiting them

…that is, a grand total of 62 points. Not bad: but in the words of most school reports, Could Do Better. 🙂
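
(And yes, the arithmetic checks out – here’s a throwaway Python snippet to tally it up, where the rule names and weights are just my reading of Elias’s checklist as applied above, not his actual scoring code:)

# Throwaway tally of my VBI score above; weights follow my bullet list.
scores = [
    ("initial float (for having any kind of theory)", 10),
    ("years wasted", 2 * 8),
    ("identified nine-rosette buildings", 9 * 2),
    ("buildings identified after visiting them", 9 * 2),
]
print(sum(points for _, points in scores))   # 62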

Here’s a Voynich page that made me laugh, and I hope it will do broadly the same for you too. 🙂

Elias Schwerdtfeger has posted a new meta-theoretical analysis tool to his blog, called VBI – short for the “Voynich Bullshit Index“. By carefully testing your pet Voynich theory against his long checklist of questions (each with its own VBI point rating), you can work out how high your overall VBI rating is… that is, how close to the perfect bullshit Voynich theory you have reached.

For maximum satirical effect, Elias includes a number of questions designed specifically to penalize various well-known Voynich theories: and yes, my Averlino theory is one of them, but I’d probably have been vastly disappointed if he hadn’t. 🙂

Sadly for Anglophones, Elias’ VBI post is only in German – but if you nag him enough (leaving a comment should do the trick) to add an English translation, I guess he probably will. Alternatively, you can annoy him by clicking here to look at his page via Google Translate: he’s forever trawling through his server logs for Google Translate entries and then moaning at me about my inability to read German, which isn’t strictly true – I actually sometimes use it because of my inability to read his German. Which, having just seen what a dog’s dinner Google Translate makes of his post, I don’t feel quite so bad about any more. 😉

Enjoy! 🙂

For a couple of weeks, I’ve been meaning to post about German Voynich blogger Elias Schwerdtfeger and what he calls the VMs’ “biological paradox”. His question is simple: why is it that the Voynich’s “biological” Quire 13 has both (a) complicated pictures of nymphs, tubes and baths, and (b) longwinded, redundant text? Surely, he asks, isn’t this combination somewhat paradoxical?

(To be honest, Elias’ post then goes off on a bit of a wild tangent: but given that it’s a good starting point and the whole issue of Q13 is a favourite of mine, I thought I’d step up to the line.)

Page f78r (one of the few that Leonell Strong was able to examine) has a number of good examples of this redundancy, in particular para 1 line 5’s “qokedy qokedy dal qokedy qokedy“, for which Strong’s 1945 worksheet #2 suggests the decryption “DUCTLE ROULLS THE GRAOTH COEMLI”.

This is the same piece of ciphertext about which Gordon Rugg asserted that “this degree of repetition is not found in any known language” (Sci Am, 2004). Of course, linguist Jacques Guy ferociously responded to this Ruggish in sci.lang, firing off real-life counter-examples such as the Indonesian “di mana-mana ada barang-barang. Barang-barang itu…” (roughly, “everywhere there are things. Those things…”). As always, there’s a fair degree of truth in what both are saying: but the fact (as Elias points out) that only some parts of the Voynichese corpus read like “qokedy qokedy” is a pretty good indication that we can’t reduce this debate to an either-or between these two opposing poles. Essentially, it can’t be just a simple repetitive language if it’s not consistent throughout (and it isn’t): and beneath all the cryptographic window-dressing, there probably is some kind of meaningful language thing going on.

I’d say that Mark Perakh’s (1999) tentative conclusion on the language differences probably yields the most useful key to Elias’ paradoxical door. Mark wondered about the internal structural differences (i.e. within words) between Voynich Manuscript A and B language pages (and all the text that shades between A & B) and so carried out some tests: ultimately, his favoured explanation is that the A language is a more abbreviated & contracted version of the B language, but that beneath it all, they are still both expressions of the same thing. (Though Mark points to contraction probably being the main mechanism used).

So the text in Q13 – as a B language object – exhibits redundancy probably because it is more verbose. This suggests that we should be looking to decipher the B text, simply because we stand less chance of being distracted by the A text’s arbitrary contractions.

My own take is a little more nuanced (though still hypothetical, lest I raise the hackles of the hypothesis police once more). Firstly, I suspect that the A pages were written first, and that these were trying to duplicate an existing document using a verbose cipher – meaning that a ciphertext line wouldn’t map to the same physical space as a plaintext line. The only way to fit it in was to aggressively abbreviate & contract… but this helped make the ciphertext more opaque.

Then, I suspect that the B pages were added, using smaller quills (say, eagle’s feather?) – because the smaller letter sizes took the pressure off the overall line lengths, the need for contraction and abbreviation was reduced. However, I think some aspects of the coding system changed (specifically the steganographic numbering scheme, but that’s another story!), making the B pages harder to break in a different way.

That is, I suspect that we have two types of ciphertext present in the VMs: a simpler cipher system A (but with a significant amount of contraction and abbreviation) or a more complex cipher system B (but with less contraction and abbreviation to distract us). And just to make things really difficult, there are probably system B pages that are also heavily contracted (i.e. the worst of both worlds).
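
To make that A-vs-B contrast a bit more concrete, here’s a small Python sketch (the letter-to-group table and the contraction rule are invented for illustration only – this is emphatically not a proposed decryption): a ‘System B’-like pipeline (verbose encipherment with little contraction) comes out long and visibly redundant, while a ‘System A’-like pipeline (contract first, then encipher) comes out shorter – it fits the line – but lossier and more opaque.

import re

# Invented letter-to-glyph-group table, purely for illustration –
# not a proposed Voynichese key.
VERBOSE = {'a': 'qo', 'd': 'dy', 'n': 'ke', 'o': 'ol', ' ': ' '}

def verbose_encipher(plaintext):
    """Map each plaintext letter to its (multi-character) verbose group."""
    return "".join(VERBOSE.get(ch, '') for ch in plaintext)

def contract(plaintext):
    """Toy stand-in for scribal contraction: drop vowels after a word's first letter."""
    return " ".join(w[0] + re.sub(r'[aeiou]', '', w[1:]) for w in plaintext.split())

plaintext = "dona dona ad dona"

system_b = verbose_encipher(plaintext)             # verbose, little contraction
system_a = verbose_encipher(contract(plaintext))   # contract first, then verbose

print(system_b)   # 'dyolkeqo dyolkeqo qody dyolkeqo' – long, visibly redundant
print(system_a)   # 'dyke dyke qody dyke' – shorter, but lossy and more opaque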

And some people still wonder why computers can’t break the VMs! *sigh*

Apart from Cipher Mysteries, the Voynich blogosphere has been far too quiet of late. Even Elias Schwerdtfeger’s “Das Voynich Blog” is, despite some intriguing posts in the past, fairly subdued.

And so it is a breath of fresh air to see a new blog from an old friend: long-time Voynich mailing list member Elmar Vogt has recently started up his Voynich Thoughts blog. Elmar has already posted a whole heap of nice snippets, such as the German Wikipedia entry’s comparison of the plant on f56r with drosera intermedia (which I mentioned here and here), a nice comparison of the Sagittarius archer with a drawing in a 15th century woodcut, as well as a circa-1450 head-dress comparison with a zodiac nymph.

Part of me really wants him to put these fragments into context – for the Sagittarius page, for example, how it was suggested long ago that the zodiac motifs might well have largely been copied from a (probably 14th century?) German woodcut calendar; a discussion of the Sagittarius archer’s (probably 14th century and fairly rustic) crossbow; plus a wider comparison of the crossbowman’s headwear with (say) the 15th century “turban” / chaperon as depicted by Robert Campin and Van Eyck.

Yet another part of me simply wants Elmar to fill his blog with that thing he does so very well – which is to use his keen logical eye and pleasantly acid German wit to be entertainingly tart about Voynichological nonsense. Wherever contemporary haruspicators pop up to read their imagined stories into the VMs’ well-scanned entrails, I’ll always be delighted to read Elmar’s commentary.

Trivia time: it’s no great secret that software developer Elmar has long contributed text edits to Wikipedia (such as its VMs page) under the monicker “Syzygy“: but what is perhaps less known is that, as a fan of the Atari ST, he chose this as a homage to the company Atari – Nolan Bushnell and Ted Dabney used “Syzygy Engineering” for their original company name.

Hmmm… I’m not sure he’d be much impressed by the two computer games I wrote for the ST: 3D Pool and Loopz. Oh well!  🙂

A few errata and notes on the virtual pinboard – tacks don’t have to be taxing…

(1) Warburg librarian Francois Quiviger kindly points out that my description of the layout of the Warburg Institute (in the Day Two blog entry) wasn’t totally precise: though the overall layout matches Warburg’s arbitrary Mnemosyne plan, books within a section are arranged chronologically (or rather, by date of author’s death). Hmmm… hopefully it’ll be 60+ years before his successors are able to place my book in its final order… 😮

Re-reading my blog entry with Francois’ other comments in mind, I think its emphasis (on madness) somewhat diverged from what I originally planned to say. In computer programming, you can “over-optimize” your solution by tailoring it too exactly to the problem: and this is how I felt about the Warburg. One tiny architectural detail at the Institute tells this story: the oddly hinged doors in the men’s toilets, which appeared to have been mathematically designed to yield the most effective use of floor space. For me, this is no different to the filing cabinets full of deities, all laid out in alphabetical order: and so the Institute is like an iconological Swiss Army Knife, optimally hand-crafted for Aby Warburg and the keepers of his meme. But the cost of keeping it functioning in broadly the same way goes up each year: programming managers would call it a “brittle” or “fragile” solution, one with a high hidden cost of maintenance.

But am I still a fan of the Warburg? Yes, definitely: it’s a fabulous treasure-house that only a particularly hard-hearted historian could even dream of bracketing. And in those terms, I think I’m actually a bit of a softy.

Finally, Francois very kindly offered to put in a reference for me (thank you very much indeed!!): so there should be a happy ending to the whole rollercoaster story after all. I will, of course, post updates and developments here as they happen. 🙂

(2) Thanks to a flood of HASTRO-L subscribers dropping by to read my review of Eileen Reeves’ “Galileo’s Glassworks”, Voynich News has just broken through the 1000 visitor mark (and well past the 2000 page-view mark). Admittedly, it’s not a huge milestone… but it’s a start, right? And though Google seems to like it, only Elias Schwerdtfeger and Early Modern Notes link to it: and nobody has yet rated it on Technorati etc, bah!

(3) Though in the end I was unable to get to the recent CRASSH mini-conference on books of secrets (which was a huge shame), I’m still up for the Treadwell’s evening on Magic Circles at 7.15pm on 19th March 2008 (which I mentioned here about ten Internet years ago). Should be fascinating, perhaps see you there! 😉

Incidentally, you’ve got to love Elias when he types (26th Jan 2007): “Ein Königreich für eine Zeitmaschine” – [my] kingdom for a time-machine.

To me, these brief words speak volumes for the frustration (and Renaissance-like itch for knowledge) Voynichologists suffer from (while deriving a vaguely masochistic mental enjoyment from the same thing). What keeps you awake at night, then? Too much caffeine?

Here’s a link to Elias Schwerdtfeger’s very interesting “Das Voynich Blog”.

Elias has worked really hard behind the scenes to find ways of visualising the statistics expressing what “old hand” Voynichologists (such as, say, Philip Neal & I) see when we look at the Voynich – you know, the highly bonded, multi-level internal structure that exists at the stroke, character, glyph, digraph, word, line, paragraph, page and section levels.

As an aside, I’ve long disagreed with Renaissance encipherment hypotheses for the VMs based on moving alphabets, specifically because they fundamentally destroy these kinds of internal structure: the only way to keep such hypotheses alive is then to argue (as, for example, my old friend GC does) that these structures are part of the “surface language”, i.e. that the encipherer is dynamically stretching his plaintext to mimic these structures in the ciphertext. Yes, it’s possible, but… put all the pieces together and it’s a bit too much of a stretch for me.
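
To illustrate the point, here’s a minimal Python sketch (a toy of my own, not anyone’s published test): encipher some text with a moving-alphabet (Vigenère-style) cipher and the strongly peaked digraph statistics of the plaintext get smeared flat, because the same plaintext pair lands on different ciphertext pairs at each key offset. For the VMs’ highly structured text to have come out of such a system, all that structure would have had to be deliberately engineered back in.

from collections import Counter
from itertools import cycle

def vigenere(text, key):
    """Moving-alphabet (Vigenère-style) encipherment of a lowercase a-z string."""
    return "".join(
        chr((ord(c) - 97 + ord(k) - 97) % 26 + 97)
        for c, k in zip(text, cycle(key))
    )

def top_digraphs(s, n=5):
    """The n most common adjacent letter pairs in s."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1)).most_common(n)

# Toy English-ish plaintext with plenty of repeated digraphs ('th', 'he', 'at', ...).
plaintext = "thecatssatonthematandthedogatethehat" * 14

print(top_digraphs(plaintext))
# a handful of digraphs dominate the plaintext counts...

print(top_digraphs(vigenere(plaintext, "voynich")))
# ...but after the moving alphabet the same pairs are spread across many
# different ciphertext digraphs, so the distribution comes out much flatter.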

Incidentally, I’ve been looking at f2v recently, specifically because of the “fa” marginalia there (one of the very few marginalia I didn’t really cover in my book, “The Curse of the Voynich”). Elias discusses f2v at some length, proposing the eminently sensible (and testable) hypothesis that the same pe(rso)n that/who made the dubious (o)ish(i) emendation to the last line of f2v also added the “fa” marking above the second paragraph. They’re both in similar darker ink (which is a good start): but I think that the Beinecke’s scans – though fantastic for most purposes – fall just short of being able to resolve this kind of question definitively.

Actually, I’ve got a list of about 50 similar cross-indexing questions that I’d like to address (say, by multispectral or Raman imaging) in the future. But for now, that project is stalled (because the Beinecke turned my proposals down). Oh well: maybe next year…