As promised (though a little later than planned), here’s the transcript of the second IM session I ran at the 2009 Voynich Summer Camp in Budapest. Not quite as meaty as the first IM session, but some OK stuff in there all the same. Enjoy!

[11:56:09] NP: Okeydokey, ready when you are
[11:56:18] vc: Okedykokedy
[11:56:27] NP: 🙂
[11:56:35] vc: We are.
[11:56:35] NP: I think that’s on f113r
[11:56:40] vc: …
[11:56:45] NP: 🙂
[11:56:55] NP: So… how has it all gone?
[11:57:12] NP: Tell me what you now think about the VMs that you didn’t before?
[11:57:27] vc: It should be simple.
[11:57:36] vc: The solution should be simple.
[11:57:41] NP: but…
[11:58:07] vc: But …
[11:58:33] vc: The verbose cipher still permits us a lot of possibilities.
[11:58:52] NP: Verbose cipher only gets you halfway there
[11:59:03] NP: But that’s still halfway more than anything else
[11:59:28] vc: We could synthesize a coding capable of producing the same statistical properties as the MS
[11:59:48] NP: Yup, that was (basically) Gordon Rugg’s 2004 paper
[11:59:58] vc: simple enough to do manually of course
[12:00:31] NP: The problem is one of duplicating all the local structural rules
[12:00:40] vc: Gordon’s generating gibberish by encoding gibberish
[12:01:06] NP: Basically
[12:01:25] vc: Yes, we suspect that the text contains real information in a natural language.
[12:01:30] vc: We tried this.
[12:02:06] NP: Rugg’s work requires a clever (pseudo-random) daemon to drive his grille thing… but he never specified how someone 500 years ago could generate random numbers (or even conceive of them)
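[Editor’s note: for readers who haven’t seen Rugg’s 2004 paper, here is a minimal Python sketch of the table-and-grille idea under discussion. The syllable tables are hypothetical stand-ins, not Rugg’s actual ones; the thing to notice is the random driver, which is exactly the “daemon” NP is querying.]

```python
# A sketch (not Rugg's actual tables) of his table-and-grille idea:
# words are assembled from prefix/midfix/suffix columns, and some driver
# must pick the table cells -- here a PRNG, which is NP's objection above.
import random

PREFIXES = ['qo', 'o', 'ch', 'd', '']
MIDFIXES = ['ke', 'ol', 'ai', 'e']
SUFFIXES = ['dy', 'y', 'aiin', '']

def rugg_word(rng: random.Random) -> str:
    # Deterministic tables, but a random daemon choosing the cells.
    return rng.choice(PREFIXES) + rng.choice(MIDFIXES) + rng.choice(SUFFIXES)

rng = random.Random(0)
print(' '.join(rugg_word(rng) for _ in range(10)))
```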
[12:02:07] vc: We tried to encode, for example, the Vulgata with our method
[12:02:10] NP: ok
[12:02:23] NP: into A or B?
[12:02:24] vc: throw dice I guess?
[12:02:26] vc: lol
[12:02:37] NP: only gives you 1-6 random
[12:02:48] vc: 3 dice
[12:02:52] vc: etc
[12:02:52] NP: two dice give you a probability curve
[12:02:56] NP: not flat
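[Editor’s note: NP’s dice point is quick to verify – a minimal sketch, enumerating all outcomes:]

```python
# Enumerate every outcome for one, two and three dice and count each total:
# a single die is flat, but sums of dice are peaked, so they cannot supply
# the uniform random choices a cipher daemon would need.
from collections import Counter
from itertools import product

for n_dice in (1, 2, 3):
    totals = Counter(sum(roll) for roll in product(range(1, 7), repeat=n_dice))
    print(f"{n_dice}:", dict(sorted(totals.items())))
# 1 die : every total appears once (flat)
# 2 dice: 7 appears 6/36 times, 2 and 12 only 1/36 (triangular)
# 3 dice: peaked even more sharply around 10 and 11
```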
[12:03:02] vc: hmm
[12:03:06] vc: roulette wheel
[12:03:11] NP: Anachronistic
[12:03:19] vc: Ok. We use no random.
[12:03:23] NP: 🙂
[12:03:25] vc: our encoder is deterministic
[12:03:33] NP: Good!
[12:03:35] vc: that’s the point
[12:04:28] vc: We suspect that the “user” added some randomness in some of the aspects of the encoding, but this is not overwhelming
[12:04:49] NP: That’s right
[12:05:21] vc: We also picked out the A and B languages
[12:05:23] NP: Though some aspects (like space insertion into ororor-type strings) were more tactical and visual than random
[12:05:27] NP: Good!
[12:05:33] vc: with different methods
[12:05:52] vc: so we basically verified a lot of past results
[12:06:17] NP: Do you have a synthetic A paragraph you can cut and paste here?
[12:06:17] vc: After that, we decided to concentrate on the first 20 pages
[12:06:22] NP: Good!
[12:07:17] vc: for example, the A language uses ey or y at the end of words, while the B language uses edy instead
[12:07:51] vc: Synthetic sample… ok, just a minute
[12:08:29] NP: ey/y vs edy – Mark Perakh pointed this out too, and suggested that it meant B was less contracted than A. It also forms the core of Elias Schwerdtfeger’s “Biological Paradox”
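[Editor’s note: the ey/y vs edy split is easy to reproduce from any transcription. A minimal sketch, assuming a plain-text whitespace-separated transcription file – the filename is a placeholder:]

```python
# Count word-final groups in a whitespace-separated transcription. The
# suffix list follows the camp's observation (A pages favour -ey/-y
# endings, B pages favour -edy).
from collections import Counter

def suffix_counts(words, suffixes=('edy', 'ey', 'y')):
    counts = Counter()
    for w in words:
        for s in suffixes:          # longest match first, count once
            if w.endswith(s):
                counts[s] += 1
                break
    return counts

with open('transcription.txt') as f:   # hypothetical input file
    print(suffix_counts(f.read().split()))
```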
[12:09:25] vc: Our results are largely independent – the guys didn’t know the past results
[12:09:54] NP: That’s ok. 🙂
[12:10:41] vc: nu stom huhoicpeey strifihuicom ristngngpeet pept suhors periet pescet sticpescom ichoey pt om icpeript
[12:11:17] NP: I hope that’s not EVA
[12:11:41] vc: Y, of course not
[12:12:08] vc: not even close – the whole thing started here when some of us tried out a method which produced some non-trivial statistics very similar to the VMS
[12:12:43] NP: I’m certainly getting a partially-verbose vibe off this
[12:12:52] vc: the original:
[12:13:17] vc: haec sunt verba que locutus est
[12:13:18] vc: Moses
[12:13:40] NP: Ummm… that’s pretty verbose, then. 🙂
[12:14:04] vc: Again, a deterministic, static automaton.
[12:14:15] NP: Fair enough!
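[Editor’s note: the campers’ actual automaton was never published, so the following is only a sketch of what a deterministic verbose cipher of this general kind might look like – the letter table is entirely invented:]

```python
# A deterministic verbose cipher: each plaintext letter always expands to
# the same multi-letter group, so no randomness is needed, yet the output
# statistics differ sharply from the plaintext's. The table is invented.
VERBOSE_TABLE = {
    'a': 'ot', 'e': 'ey', 'h': 'ch', 's': 'st', 'u': 'uh',
    # ...one fixed group per plaintext letter...
}

def verbose_encode(plaintext: str) -> str:
    return ''.join(VERBOSE_TABLE.get(c, c) for c in plaintext.lower())

print(verbose_encode('haec sunt verba'))   # -> 'choteyc stuhnt veyrbot'
```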
[12:15:09] NP: Sorry for asking a lecturer-style question, 🙂 but how has doing that affected how you look at Voynichese?
[12:16:03] vc: Sec
[12:16:49] vc: discussing 🙂
[12:17:38] vc: it’s a coded natural language text. We suspect that the language is Italian – from measured results.
[12:18:00] vc: That’s why we are very curious about your news!
[12:18:21] NP: Let’s finish your news first!
[12:18:38] vc: ok. Was that an answer for your question?
[12:19:02] NP: Pretty much – would you like to write it up informally to publish on the blog?
[12:19:55] NP: 1000 words should cover it 🙂
[12:21:18] NP: (you don’t need to write it now!)
[12:21:25] vc: We admit that we would like to work on our theory and method a bit before publishing it, because one of the important statistical features doesn’t match
[12:21:31] vc: yet
[12:21:35] NP: 🙂
[12:21:52] NP: ok
[12:22:06] NP: that’s good
[12:22:23] NP: what else have you been thinking about and discussing during the week?
[12:22:35] NP: VMs-wise, that is 🙂
[12:22:42] vc: 🙂
[12:22:54] vc: haha, you got the point…
[12:23:02] NP: 🙂
[12:23:56] vc: We toyed with the idea that the astrological diagrams are so poorly rendered that they aren’t astrological diagrams. They are coder tools.
[12:24:10] NP: cipher wheels?
[12:24:22] vc: Kind of. Yes.
[12:24:35] NP: (that’s been suggested many times, though never with any rigour)
[12:24:36] vc: we also tried to identify some of the star names.
[12:24:47] NP: No chance at all
[12:25:01] NP: That is a cliff with a huge pile of broken ships beneath it
[12:25:21] NP: sadly
[12:25:27] vc: been there, done that, yes
[12:25:30] NP: 🙂
[12:26:22] vc: We also observed that the Takeshi transcription becomes less reliable where the text is rotated or tilted.
[12:26:36] vc: In other places it is quite good.
[12:26:45] NP: Yes, that’s a fair enough comment
[12:27:08] NP: A complete transcription has been done, but it hasn’t been released – very frustrating
[12:27:17] vc: Also, we are not content with some of the EVA transcription’s alphabet choices
[12:27:25] NP: (by the EVMT people, Gabriel Landini mainly)
[12:27:34] NP: the “sh” really sucks
[12:27:39] vc: YES
[12:27:45] NP: 🙁
[12:28:53] NP: Glen Claston’s transcription added stuff in; many people use it instead purely for its better “sh” handling
[12:29:26] vc: hmm, ok
[12:29:53] NP: In a lot of ways, though, who’s to say? A single ambiguous letter shouldn’t really be enough to destroy an entire decipherment attack
[12:30:04] NP: given that it’s not a pure polyalpha
[12:30:37] vc: of course
[12:30:54] NP: But analyses still don’t seem to get particularly close
[12:31:03] NP: Oh well
[12:31:23] vc: Analyses of whom
[12:31:24] vc: 🙂
[12:31:25] vc: ?
[12:31:29] vc: 😉
[12:31:35] NP: not yours, of course 😉
[12:32:32] NP: is that your week summarized, then?
[12:32:53] vc: Yes.
[12:33:16] NP: has it been fun? worthwhile? frustrating? dull?
[12:33:32] vc: All of them.
[12:33:34] NP: and would you do another next summer?
[12:33:57] vc: No need for it. Maybe with the Rohonc Codex
[12:34:00] vc: lol, of course
[12:34:13] NP: 🙂
[12:35:06] NP: I’m really pleased for you all – it sounds like you have managed to get a fairly clearheaded view of the VMs out of the whole process, and have had a bit of fun as well
[12:35:51] NP: Most VMs researchers get very tied up in a particular theory, piece of evidence, or way of looking at it – you have to keep a broader perspective to make progress here
[12:35:53] vc: let’s say two bits
[12:36:14] NP: “two bits of fun” 🙂
[12:36:21] NP: good

[I then went into a long digression about the “Antonio of Florence”, about which I’ve already posted far too much to the blog… so –SNIP–]

[12:51:50] vc: ooo wait a sec…
[12:52:16] vc: Can we ask Philip Neal to post some some pages of a reference book he uses?
[12:52:42] vc: sorry about the redundancy
[12:53:02] NP: He’s a medieval Latin scholar by training, what kind of thing would you want?
[12:53:39] vc: about the alchemical herbals. Can we manage it later?
[12:53:45] vc: Please go on
[12:53:51] NP: Well.. that’s about it
[12:54:10] NP: Obviously I typed faster than I thought 🙂

[13:00:11] vc: What do you know? How many people are working on a Voynich-deciphering automaton based on Markov thingies and such?
[13:00:37] vc: So basically with the same hypotheses like ours?
[13:00:57] NP: The problem with Markov models is that they will choke on verbose ciphers, where letters are polyvalent
[13:01:08] NP: Nobody in the literature seems to have picked this up
[13:01:24] vc: bad for them
[13:01:50] NP: Unless you pre-tokenize the stream, Markov model finders will just get very confused
[13:02:03] NP: and give you a linguist-friendly CVCV-style model
[13:02:11] NP: that is cryptographically wrong
[13:03:04] NP: perhaps “multi-functional” rather than “polyvalent”, I’m not sure :O
[13:04:23] NP: So, I’m not convinced that anyone who has applied Markov model-style analysis to the VMs has yet got anywhere
[13:04:29] NP: Which is a shame
[13:05:04] NP: But there you go
[13:05:25] vc: We hope.
[13:05:47] NP: 🙂
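[Editor’s note: to make the pre-tokenization point concrete, here is a minimal sketch contrasting letter-level bigram counts with counts over verbose-cipher tokens; the token inventory is hypothetical:]

```python
# Bigram counts over raw letters vs. over verbose-cipher tokens. Without
# pre-tokenization the model mostly learns within-group transitions (the
# misleading CVCV structure); after tokenization it sees plaintext-level
# structure. The token inventory is hypothetical.
from collections import Counter

TOKENS = ['ot', 'ey', 'ch', 'st', 'uh']

def tokenize(text: str) -> list:
    out, i = [], 0
    while i < len(text):
        for t in sorted(TOKENS, key=len, reverse=True):  # longest match
            if text.startswith(t, i):
                out.append(t)
                i += len(t)
                break
        else:
            out.append(text[i])
            i += 1
    return out

cipher = 'choteycstuhnt'
tokens = tokenize(cipher)
print(Counter(zip(cipher, cipher[1:])).most_common(3))   # letter bigrams
print(Counter(zip(tokens, tokens[1:])).most_common(3))   # token bigrams
```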

[13:06:24] NP: Right – I’ve got to go now (sadly)
[13:06:48] NP: I hope I’ve been a positive influence on your week and not too dogmatic
[13:07:09] vc: Why, of course
[13:07:16] NP: And that I’ve helped steer you in generally positive / constructive directions
[13:07:30] vc: Yes, indeed.
[13:07:35] NP: (Because there are plenty of blind alleys to explore)
[13:07:41] NP: (and to avoid)
[13:07:52] vc: VBI…
[13:07:52] vc: 🙂
[13:08:07] NP: Plenty of that to step in, yes
[13:08:14] NP: 🙂
[13:08:24] NP: And I don’t mean puddles
[13:09:42] vc: Well, thank you again for the ideas and the lots of information 🙂
[13:11:18] vc: Unfortunately the semester starts in a few weeks, so we can’t keep working on this project
[13:12:04] vc: but as soon as we get some results, we will definitely contact you
[13:12:15] NP: Excellent, looking forward to that
[13:12:54] NP: Well, it was very nice to meet you all – please feel free to subscribe to Cipher Mysteries by email or RSS (it’s free) so you can keep up with all the latest happenings.
[13:13:23] vc: ok 🙂
[13:13:57] NP: Best wishes, and see you all for the Rohonc week next summer 🙂
[13:14:04] NP: !!!!!
[13:14:11] vc: lol 🙂
[13:14:21] vc: that’s right! 😉
[13:15:16] NP: Excellent – gotta fly, ciao!
[13:15:36] vc: Best!
[13:15:37] vc: bye

One thought on “Voynich Summer Camp IM, transcript of session #2…”

  1. D.N.O'Donovan on January 25, 2023 at 1:18 am said:

    Nick,
    For me, and for others who came later…

Can you recall who you meant by “the EVMT people” here?

    quote
    [12:27:08] NP: A complete transcription has been done, but it hasn’t been released – very frustrating
    [12:27:25] NP: (by the EVMT people, Gabriel Landini mainly)
    unquote

    also – I have to say this next item is new to me. Can you give more details of the statistical studies that produced these measured results at or by “the Voynich camp”?

    [12:17:38] vc: …. We suspect that the language is Italian – from measured results.
