I’ve blogged a few times about trying to crack the Scorpion Ciphers (a series of apparently homophonic ciphers sent to American crime TV host John Walsh). Most of my effort has been spent on the Scorpion S5 cipher, which (despite having 12 columns) appears to cycle rigidly through 16 cipher alphabets.

However, it struck me a few days ago that this might also give us a way in to the Scorpion S1 cipher. This is because all the repeats there seem to be at column distances of 0, 1, 4, 5, and 6, with the overwhelming majority at column distances 0 and 5. (The only exception is the “backwards L” glyph, which appears in two pairs: one pair at column distance 0, the other at column distance 5.)

## The Slippy S1 Five-Alphabet Hypothesis

Putting the 16-alphabet-cycle from S5 together with the mostly-0-or-5-column-distance observation from S1 yields my “Slippy S1 Five-Alphabet Hypothesis”: that Scorpion S1 was constructed from a cycle of 5 cipher alphabets, where the encipherer always reset to alphabet #1 at the beginning of a line, and usually (but not always) stepped to the next alphabet along with each new column.

So whereas a rigid 5-alphabet cycle (i.e. with no slips) would have a fixed alphabet “ownership” of 1234512345 for each ten-glyph line, I suspect that we can make a “slippy” guess for S1’s cycle ownership, to try to reconstruct where the encipherer slipped from one cycle into the next. My best current set of guesses for S1 is therefore:

```1234512235 1234512344 1234412345 1234512345 1234112345 1234551245 2234512345```

(Note that I suspect that the “backwards L” shape appears on two alphabets, i.e. once in alphabet #2 and once in alphabet #4, but that this is the only exception to the rule.)

What this means is that each of the five alphabets contains only 26 glyphs (one for each letter of the alphabet): and so we can tell that if two shapes are numbered as being in the same alphabet, they very probably stand for two different letters.

## Can We Solve This?

53 of S1’s 10 x 7 = 70 glyphs are unique, yielding a high multiplicity of 75.7%. By way of comparison, it would seem that normal (unstructured) homophonic ciphers are only solvable when their multiplicity is around the 20%-25% mark.
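(For clarity: multiplicity here is just the proportion of distinct glyphs in a ciphertext. A minimal sketch of the arithmetic, using a dummy 70-symbol sequence with 53 distinct values in place of an actual S1 transcription:)

```python
def multiplicity(glyphs):
    """Fraction of ciphertext symbols that are distinct."""
    return len(set(glyphs)) / len(glyphs)

# Dummy stand-in for S1: 70 symbols, 53 of them distinct
dummy_s1 = list(range(53)) + list(range(17))
print(round(multiplicity(dummy_s1) * 100, 1))  # → 75.7
```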

However, the question here is whether being able to group the letters into five distinct alphabets (even probabilistically) reduces the number of combinations enough to make this genuinely solvable. As usual, pencil-and-paper solvers can make some pretty good guesses, e.g. the “S Λ” pair on lines #3 and #6 probably enciphers “TH”, while any repeated letter stands a good chance of being a high-frequency letter such as E, T, A, O, I, N or S: but computers would do this much better.

My instinct is that this should be a good candidate for hill-climbing: and that the one-glyph-per-letter-per-alphabet constraint will prove reasonably effective. But effective enough? We’ll have to wait and see…
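As a hedged sketch of what such a hill-climber might look like (the fitness function below is a toy stand-in, since a real solver would score candidate decryptions with English n-gram statistics, and all the names here are mine rather than from any actual solver):

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def swap_within_alphabet(keys, rng):
    """Copy `keys` (one glyph->letter map per cipher alphabet) and swap the
    letters assigned to two glyphs inside a single alphabet. This move
    preserves the one-glyph-per-letter-per-alphabet constraint."""
    keys = [dict(k) for k in keys]
    k = keys[rng.randrange(len(keys))]
    a, b = rng.sample(sorted(k), 2)
    k[a], k[b] = k[b], k[a]
    return keys

def hill_climb(keys, fitness, iters=2000, seed=1):
    """Greedy hill-climb: keep a candidate key only if it strictly improves."""
    rng = random.Random(seed)
    best, best_score = keys, fitness(keys)
    for _ in range(iters):
        cand = swap_within_alphabet(best, rng)
        score = fitness(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# Toy demo: try to recover five Caesar-shifted alphabets from an identity start
target = [{c: ALPHABET[(i + j) % 26] for j, c in enumerate(ALPHABET)}
          for i in range(5)]
toy_fitness = lambda keys: sum(k[c] == t[c]
                               for k, t in zip(keys, target) for c in ALPHABET)
start = [{c: c for c in ALPHABET} for _ in range(5)]
keys, score = hill_climb(start, toy_fitness)
```

By construction the score never decreases; the open question is whether the roughly 14 glyphs per alphabet (70 / 5) would give a real n-gram scorer enough signal to climb all the way to the true key.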

Incidentally, a good sanity check for this Scorpion S1 hypothesis would be to run some “forward simulations” (which is the kind of thing Dave Oranchak has done so much of with the Zodiac Killer Ciphers). By which I mean: if we feed a variety of 70-letter English texts into my best guess set of slippy cycles (i.e. “ITWASTHEBESTOFTIMESI” fed into 1234512235 / 1234512344 would become: “I1 T2 W3 A4 S5 T1 H2 E2 B3 E5 S1 T2 O3 F4 T5 I1 M2 E3 S4 I4”), I predict that the final average multiplicity of the texts will be close to 75%. But I might be wrong!
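That forward simulation is easy to sketch (the cycle strings are copied from my guess set above; the 70-letter Dickens sample is just an illustration, and any 70-letter English text would do):

```python
# "Slippy" alphabet-ownership guesses for S1's seven ten-glyph lines
CYCLES = ("1234512235 1234512344 1234412345 1234512345 "
          "1234112345 1234551245 2234512345").split()

def encipher(text, cycles):
    """Pair each plaintext letter with the alphabet number that owns its
    column; each distinct (letter, alphabet) pair counts as one glyph."""
    ownership = "".join(cycles)
    return [(c, int(a)) for c, a in zip(text, ownership)]

def multiplicity(glyphs):
    """Fraction of ciphertext symbols that are distinct."""
    return len(set(glyphs)) / len(glyphs)

# Opening of A Tale of Two Cities, truncated to S1's 70 glyphs
sample = ("ITWASTHEBESTOFTIMESITWASTHEWORSTOFTIMES"
          "ITWASTHEAGEOFWISDOMITWASTHEAGEOFFOOLISHNESS")[:70]
glyphs = encipher(sample, CYCLES)
print(glyphs[:10])  # → [('I', 1), ('T', 2), ('W', 3), ('A', 4), ('S', 5), ...]
print(round(multiplicity(glyphs) * 100, 1))
```

Running this over a large sample of 70-letter texts and averaging the multiplicities would be the actual sanity check: a mean close to 75% would support the hypothesis, anything much lower would undermine it.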

Apart from the case of the Somerton Man, has any other police investigation ever revolved around a book left in a complete stranger’s car? Personally, I’d be surprised: this seems to be a unique feature of the whole Somerton Man narrative.

But what, then, of the obvious alternate explanation, i.e. that the Rubaiyat was in the car already? For all the persuasive bulk the dominant explanation has gained from being parroted so heavily for nearly seven decades, I think it’s time to examine this (I think major) alternative and explore its logical consequences…

## Gerry Feltus’s Account

To the best of my knowledge, Gerry Feltus is the only person who has actually talked with the (still anonymous) man who handed the Rubaiyat in. So let us first look at Feltus’ account (“The Unknown Man”, p.105) of what happened at the time of the Somerton Man’s first inquest when the police search for the Rubaiyat was mentioned in the press:

Francis [note: this was Feltus’ codename for the man] immediately recalled that his brother-in-law had left a copy of that book in the glove box of his little Hillman Minx [note: not the car’s actual make] which he normally parked in Jetty Road. He could not recall him collecting it, and so it was probably there. He went to the car and looked in the glove box – yes, the book was still there. To his amazement a section had been torn out of the rear page, in the position described by past newspaper reports.

“Ronald Francis” then telephoned his brother-in-law:

Do you recall late last year when we all went for a drive in my car, just after that man was found dead on the beach at Somerton? You were sitting in the back with your wife and we all got out of the car, the book you were reading, you put in the glove box of my car, and you left it there.

To which the brother-in-law replied:

No it wasn’t mine. When I got in the back seat, the book was on the floor; I fanned through some pages and thought it was yours, so when I got out of the car I put it in the glove box for you.

A while back, I pressed Gerry Feltus for more specific details on this: though he wouldn’t say what make of car the “Hillman Minx” actually was, he said that the man told him that the book turned up “a day or two after the body was found on the beach, and during daylight hours”. Gerry added that “Francis” was now very elderly and suffering from severe memory loss. Even so, he said that “I have spoken to Francis, his family and others and I am more than satisfied with what he has told me”.

Finally: when “Francis” handed the Rubaiyat to the police, he “requested that his identity not be disclosed”, for fear that he would be perpetually hounded by the curious. Even today (2017) it seems that only Gerry Feltus knows his identity for sure: though a list of possible names would include Dr Malcolm Glen Sarre and numerous others.

## Newspaper Accounts

All the same, when I was trying to put everything into a timeline a while back, I couldn’t help but notice that Gerry’s account didn’t quite match the details that appeared in the newspapers at the time:

[…] an Adelaide businessman read of the search in “The News” and recalled that in November he had found a copy of the book which had been thrown on the back seat of his car while it was parked in Jetty road, Glenelg.

A new lead to the identity of the Somerton body may have been discovered on Saturday when Det.Sgt. R. L. Leane received from a city business man a torn copy of Fitzgerald’s translation of the Rubaiyat of Omar Khayyam said to have been found in his car at Glenelg about last November, a week or two before the body was found.
The last few lines of the poem, including the words “Tamam shud” (meaning “the end”) have been torn out of the book.
When the body was searched some time ago a scrap of paper bearing the words “Tamam shud” was found in a pocket.
Scrawled in pencilled block letters on the back of the cover of the book are groups of letters which appear to be foreign words and some numbers.
These, it is hoped, may be of assistance in tracing the dead man’s identity.
The business man told Det.Sgt. Leane that he found the copy of the Rubaiyat in the rear of his car while it was parked in Jetty road Glenelg, about the time of the RAAF air pageant in November.
He said he had known nothing about the much-publicised words “Tamam shud” until he saw a reference to them on Friday.

The book had been thrown into the back seat of a motor car in Jetty road, Glenelg, shortly before the victim’s body was found on the beach at Somerton on December 1.
[…]
Although the lettering was faint, police managed to read it by using ultra-violet light. In the belief that the lettering might be a code, a copy has been sent to decoding experts at Army Headquarters, Melbourne.

## Why Do These Accounts Differ?

The Parafield air pageant mentioned unequivocally in the above newspaper accounts was held on 20th November 1948, ten days or so before the Somerton Man was found dead on Somerton Beach. Yet Gerry Feltus was told by “Ronald Francis” himself that the book turned up “a day or two after the body was found on the beach”. Clearly, these two accounts can’t both be right at the same time.

I of course asked Gerry directly about this last year: by way of reply, he said “Don’t believe everything you read in the media, eg; ‘The business man told Det. Leane… etc…’”. Moreover, he suggested that I was beginning “to sound like [Derek] Abbott”, who had “nominated the same things as you”.

This is, of course, polite Feltusese for “with respect, you’re talking out your arse, mate”: but at the same time, all he has to back up this aspect of his account – i.e. that the book turned up after the Somerton Man was found, not ten days before – is “Ronald Francis”’s word, given half a century after the event.

Hence this is the point where I have to temporarily bid adieu to Gerry Feltus’s account, because something right at the core of it seems to be broken… and when you trace the non-fitting pieces, they all seem to me to lead back to the Rubaiyat and the car.

So… what really happened with the Rubaiyat and the car? Specifically, what would it mean if the Rubaiyat had been in the car all along?

## The Rubaiyat Car Theory

If the Rubaiyat was already in the back of the “little Hillman Minx”, it would seem to be the case that:

* Ronald Francis had no idea what it was or why it was there
* Ronald Francis’ brother-in-law had no idea what it was or why it was there
* …and yet the Rubaiyat was connected to that car in some non-random way
* …or, rather, it was connected to someone who was connected to the car

Given that one of the phone numbers on its back was that of Prosper McTaggart Thomson – a person who lived a quarter of a mile away from where “Ronald Francis” lived or worked, and who (as the Daphne Page court case from five months earlier demonstrated beyond all doubt) helped people sell cars on the black market by providing fake “pegged-price” documentation – it would seem reasonable at this point to hypothesize that Prosper Thomson may well have been the person who had sold “Ronald Francis” that specific car.

There was also a very good reason why many people might well have been looking to sell their cars in November 1948: the Holden 48-215 – the first properly Australian car – was just then about to be launched. Note that the “little Hillman Minx” could not have been a Holden if it had been driven to the Parafield air pageant, as the very first Holden was not sold until the beginning of December 1948.

If “Ronald Francis” had just bought a car in (say) mid-November 1948, I can quite imagine him proudly taking his wife, his brother-in-law and his brother-in-law’s wife off to the Parafield air pageant for a nice day out.

If Prosper Thomson’s behaviour in the Daphne Page court case was anything to go by, I can also easily imagine that the person who had sold that car might have wondered if he was being swindled by the middle man. In his summing up, the judge said that “[t]he defendant [Thomson] had not paid the £400 balance, and had never intended to do so“: so who’s to say that Thomson was not above repeating that same trick, perhaps with someone from out of town?

Perhaps, then, the person whose Rubaiyat it was was not Prosper Thomson himself, but the person from whom Prosper Thomson had just bought the car in order to sell it to “Ronald Francis”.

Perhaps it was this person’s distrust of Thomson’s financial attitude that had led him to hide the Rubaiyat under the back seat of the car, with the “Tamam Shud” specifically ripped out so that he could prove that it was he who had sold the car to Thomson in the first place.

And so perhaps it was the car’s previous owner who was the Somerton Man, visiting Glenelg to track down the owner of his newly sold car, simply to make sure he hadn’t been ripped off by Prosper Thomson.

## The Awkward Silence

I’ve previously written about how social the Somerton Man seemed to have been, and how that jarred with the lack of helpful response the police received. For all its physical size, Australia still had a relatively small population back then.

So perhaps the silence surrounding the Somerton Man cold case will turn out to be nothing more than that of jittery people buying and selling cars not through dealers – people whom the Price Commission’s pegged prices had effectively turned into white-collar criminals – for how many professionals were so well-off in post-war Australia that they could afford to be principled about losing £400 or more in the sale of their shiny American car?

Incidentally, it has been reported that on the back of the Rubaiyat were written two phone numbers: one of which was the (now-famous) phone number for the nurse Jo Thomson (which her soon-to-be-husband Prosper Thomson was also using for small ads in the newspapers), while the other was allegedly for a local bank.

These are the two things people selling black market cars need: the number of the middle man who was laundering the transaction, and the number of a bank to make sure cheques clear (remember that a dud cheque to pay for a car was ultimately what triggered the Daphne Page court case).

But the other thing such people need is an absence: an absence of discussion about the transaction. And if “Ronald Francis” had only just bought his car on the black market through Prosper Thomson (thanks to Price Commission pegging, only about 10% of car sales back then went through official car dealer channels), he would surely have had a very specific reason not to want the details of his sale explored and made public.

And so I wonder whether this was the real reason why Ronald Francis didn’t want his name revealed: because if the police were to understand the web of dealings that had brought the Somerton Man to Glenelg, that would inevitably make it clear that the two men were the participants in a black market car sale, one which – though widely practised – was still a Price Commission offence with stiff penalties.

Along those same lines, I also wonder whether it was Ronald Francis himself who erased the pencil writing from the Rubaiyat’s back cover, to try to cover at least some of the tracks that might lead police in his direction. Of course, we now know that SAPOL’s photographers were able to use ultra-violet photography to (mostly) reconstruct the letters: but this may well not have been known to him at the time.

Please note that I’m not saying this is the only plausible explanation for everything. However, insofar as it tackles (and indeed resolves) a large number of the trickiest aspects of the case, it’s at least worth considering, right?

## A Final Note

To be clear, when I ran this whole Rubaiyat Car suggestion past Gerry Feltus (admittedly in an earlier iteration), he dismissed it out of hand (though without any actual evidence to back up his position):

“I will not go into the possibility that the man purchased his car from Prosper. It is an absolutely rubbish suggestion that has no credibility. Poor old Prosper. He must have been the only ‘black market’ racketeer in Adelaide. From my knowledge of the climate during that relevant period he was a ‘nothing’.”

Well, Gerry was absolutely right insofar as that in 1948 Prosper was a small-time black marketeer, a mere minnow in the Melbourne-dominated black market car pool: but all the same, he was a minnow that lived extremely close by.

I suspect the real problem here is that if the mainstream story is wrong – that is, if Ronald Francis’ car had not long before (like so many others at the time) been bought at a premium on the black market, and if Francis had told white[-collar] lies to try to cover up his part in an illegal transaction once he realized what had happened – then people have been concealing their true involvement with what happened for nearly 70 years, not because of murder but because the price control legislation made criminals of nearly everyone selling their car.

And so it might well be that Gerry Feltus (and indeed just about everyone else) has been viewing the Somerton Man as entirely the wrong kind of mystery: not a police cold case, but a Price Commission cold case. How boringly middle class!

During and immediately after World War II, governments everywhere looked with dismay at their non-functioning factories, empty warehouses, and depleted male workforce. Even though the normal economic response to such shortages would be for prices to go up, it was politically vital under the circumstances to prevent profiteering, exploitation, and inflationary pressure from disrupting domestic marketplaces yet further.

In Australia, legislation was brought in during 1939 to control the prices of many key goods, commodities and supplies, administered by the Commonwealth Prices Branch. This was implemented by appointing a Deputy Price Commissioner for each state, who was tasked with assessing the correct level at which specific prices should be set. These commissioners were also given the power to investigate and enforce those “pegged” prices (quite independently of the police): the price controls continued until the 1950s.

(Archive material on price control in South Australia is indexed here. For what it’s worth, I’m most interested in D5480.)

## Black Markets

While this legislation did (broadly) have the desired effect, the mismatches it introduced between the price and the value of things opened up numerous opportunities for short term black markets to form. One well-known black market was for cars:

HINT OF FALL IN USED CAR PRICES
Melbourne.- If control were lifted, prices of used cars would fall and the black market would disappear, men in the trade said today.
Popular American cars would settle to slightly below the former black market price and expensive English cars to below the pegged price, they said.
The pegged price for a 1938 Ford has been £235, and the black market price £450. Buicks, Oldsmobiles, Chevrolets, and Pontiacs might sell for 75 per cent more than the pegged price.
There was no shortage of English cars, so a 1937 Alvis, now £697, could go down to about £495. The classic English cars of the late 20’s and early 30’s, pegged at about £300, would probably sell at less than £100.
Every car would then find its level. Drivers who had kept their cars in good condition would be able to sell them in direct relation to their values.
Men in the trade said honest secondhand car dealers had almost been forced out of business during the war. Records showed that 90 per cent of all used car sales were on a friend-to-friend basis and they never passed through the trade.

But because you could be fined or go to prison if you bought or sold a car for significantly more than its pegged price, to sell your (say) fancy American car on the black market you would need two separate things: (1) a buyer willing to pay more than the pegged price, and also (2) someone who could supply nice clean paperwork to make the sale appear legitimate if the State Deputy Price Commissioner just happened to come knocking at your door.

And because back then cars were both aspirational and hugely expensive (in fact, a car cost as much as a small house), so much money was at stake that it was absolutely inevitable the black market in cars would not only exist, but, well, prosper.

So this is the point where Daphne Page and Prosper Thomson enter the room: specifically, Judge Haslam’s courtroom… I offer the remainder of the post without comment, simply because the judge was able to read the situation quite clearly, even if he didn’t much like what he saw:

## Daphne Page vs Prosper Thomson

Sequel To Alleged Loan. — Claiming £400, alleged to be the amount of a loan not repaid, Daphne Page, married, of South terrace, Adelaide, sued Prosper McTaggart Thomson, hire car proprietor, of Moseley street, Glenelg.
Plaintiff alleged that the sum had been lent to defendant on or about November 27 last year so that he could purchase a new car and then go to Melbourne to sell another car.
Defendant appeared to answer the claim.
In reply to Mr. R. H. Ward, for defendant, the witness denied that anything had ever been said about £900 being paid for the car. She had never told Thomson that she wanted that sum for it. The pegged price of the car was £442.
Miss R. P. Mitchell for plaintiff.

BEFORE JUDGE HASLAM:—
Alleged Loan.— The hearing was further adjourned until today of a case in which Daphne Page, married, of South terrace, Adelaide, sued Prosper McTaggart Thomson, hire car proprietor, of Moseley street, Glenelg, for £400, alleged to have been a loan by her to him which he had not repaid.
Page alleged that the loan had been made on or about November 27 last year so that he could purchase a new car and then go to Melbourne to sell another car.
Thomson said that in answer to an advertisement Page had approached him on October 39 with a car to sell. She wanted £900 for it. On November 11 she accepted £850 as the price for the car and said that the RAA had told her that the pegged price was £442.
He drew a cheque for £450 and gave it to Page, who told him she had made out a receipt for £442, the pegged price. Early in December he went to Melbourne to sell a car for another man. On his return to Adelaide he found many messages from Page requesting that he would telephone her. He did not do so, but about a week later met her and told her that he could not pay her the £400 “black market balance” on the car because he had had a cheque returned from a bank.
Page had said she wanted the money urgently, as she had bought a business. Witness “put her off.”
Later, just before a summons was delivered to him, Page had telephoned and asked when he intended to pay the £400. She had spoken affably, but when he told her that he had had advice that he was not required to pay more than the pegged price of the car and did not intend to do so, she had said she would summons him and “make out that the money was a loan.” She had said that she would bring forward “all her family as witnesses.” He hung up the telephone receiver. He had never borrowed money from Page.
Thomson was cross-examined at length by Miss R. F. Mitchell, for Page. Mr. R. H. Ward for Thomson.

BEFORE JUDGE HASLAM:
Claim Over Car Transaction.
Judgment was reserved yesterday in a case in which Daphne Page, married, of South terrace, Adelaide, sued Prosper McTaggart Thomson, hire car proprietor, of Moseley street, Glenelg, for £400, alleged to have been a loan by her to him which he had not repaid.
It was alleged by Mrs. Page that the loan had been made on or about November 27 last year so that Thomson could purchase a new car and then go to Melbourne to sell another car.
Mr. [sic] R. P. Mitchell for plaintiff: Mr. R. H. Ward for defendant.

NEW Olds sedan taxi, radio equipped, available weddings, country trips, race meetings, &c.; careful ex-A.I.F. driver. lowest rates. Phone X3239.

WON CASE BUT NO COSTS ALLOWED

While he gave judgment for defendant in a £400 loan claim in Adelaide Local Court today in a case in which black-marketing of a motor car was mentioned, Judge Haslam refused costs because of defendant’s conduct in the transaction.
Mrs. Daphne Page, of South terrace, City, sued Prosper McTaggart Thomson, hire car proprietor, of Moseley street, Glenelg, for £400 alleged to be the amount of a loan not repaid.
His Honor said if it were not that the Crown would be faced with evidence of plaintiff in the case, he would send the papers to the Attorney-General’s Department with a suggestion that action be taken against defendant for the part he claimed to have taken in an illegal transaction.

“Direct conflict”

His Honor said there was a direct conflict between an account which alleged a simple contract loan of £400, made without security and not in writing, and one which set up that the £400 represented the unpaid balance of a black-market transaction.
Evidence was that in November last Mrs. Page had agreed to sell a Packard car for £442, but accepted a cheque for £450, defendant explaining the extra would cover the wireless in the car. Plaintiff gave a receipt for £442, the pegged price.
Plaintiff claimed that in November she lent £400 cash to defendant with which to buy another car in Melbourne. Defendant’s account was that Mrs. Page said her lowest price for her car was £900 and that she afterwards accepted his offer of £850. He said he would give her £450 next day and would want a receipt for the fixed price of £442.
When he gave her the cheque, plaintiff said she did not want a cheque for £450 when the pegged price was £442. He told her not to worry as the unexpired registration and insurance would cover the £8 difference.

Borrowing denied

Defendant said in evidence he did not pay the £400 balance and never intended to. He was advised of a new car being ready for delivery in November, but denied having borrowed £400 or any amount from Mrs. Page.
His Honor said there was little support for Mrs. Page’s account as to the terms on which her car was sold. He was of opinion plaintiff had not shown on the balance of probabilities that any amount was lent to defendant.
Miss R. F. Mitchell appeared for plaintiff, and Mr. R. H. Ward for defendant.

Black Market Sale Alleged
BEFORE JUDGE HASLAM:
In a case arising from the sale of a motor car, in which his Honor yesterday gave Judgment for the purchaser, he refused him costs because of his conduct in the transaction.
The evidence, he said, had produced a direct conflict between an account alleging that a simple contract loan of £400 had been made without writing or security, and one which set up that the money represented the unpaid balance of a black market deal.
The plaintiff, Daphne Page, married woman, of South terrace, Adelaide, claimed £400 from Prosper McTaggart Thomson, hire car proprietor, of Moseley street, Glenelg, alleging the sum to be the amount of a loan not repaid.
It was alleged by the plaintiff that the money had been lent to the defendant on or about November 27 last year, so that he could purchase a new car, and then go to Melbourne to sell another car.
His Honor said he was of opinion that the plaintiff had not shown upon the balance of probabilities that any sum had been lent to the defendant. Were it not for the fact that the Crown would necessarily be faced with the evidence given by plaintiff in the case, he would send the papers relating to the proceedings on to the Attorney-General’s Department, with a suggestion that action should be taken against the defendant for the part he had claimed to have taken in an illegal transaction.
There was little to support the plaintiff’s account regarding the terms upon which the car had been sold by her to the defendant, his Honor said. According to her, the price had not been specifically agreed upon, but left to be ascertained by reference to the pegged price, which was £442.
The defendant’s account, his Honor continued, was that the plaintiff, after having first told him that £900 was the lowest price she would take for the car, had later accepted his offer of £850 for it. He had paid her £450 by cheque, telling her that he would have to borrow the remaining £400 from a finance company, and adding that he would want a receipt for the pegged price, and the registration to transfer the car into his name. The plaintiff had given him a receipt for £442. The defendant had not paid the £400 balance, and had never intended to do so.
Miss R. F. Mitchell for plaintiff: Mr. R. H. Ward for defendant.

It would be fair to say that the title of George Edmunds’ hefty book “Anson’s Gold and the Secret to Captain Kidd’s Charts” somewhat undersells its scope. Edmunds claims – as, perhaps unsurprisingly, does his former research partner Ron Justron’s ‘Great Lost Treasure’ – to have solved just about every treasure-related story going, including Ubilla’s treasure, Kidd’s (supposed) maps, the Loot of Lima, the Bosun Bird Treasure, Oak Island, Rennes-le-Chateau, Shugborough Hall, etc.

Even though Edmunds pulls his horse up in front of Becher’s Brook (i.e. Justron’s final assertions regarding Tintin and the Secret of the Unicorn, *sigh*), the two theorists’ oeuvres are otherwise difficult to slide a fag paper between, no matter how hard you sand it down. Perhaps experts at telling the People’s Front of Judea apart from the Judean People’s Front would find this easy: I struggled in many places.

Putting the issue of Ron Justron to one side, what is Edmunds’ actual argument that manages to take up his book’s whopping 585 pages?

## Part 1 – Identifying Killorain

Edmunds starts by taking the Ubilla treasure story completely at face value: he then trawls through a large number of similar-sounding buried treasure stories, before identifying (or, rather, offering an identification for) the character Killorain.

To do this, he uses what he calls “Story DNA”, i.e. by tracing the fragments of narrative shared, copied and re-used between different buried-treasure stories, Edmunds tries to deduce the relationships between those stories, and to reach out towards the Ur-story buried beneath.

Even though there’s a half-germ of a research idea in what he’s attempting here, at no point in his (actually quite large) book does this ever translate into a research methodology (or even an approach to complex reasoning) that anyone could follow, reproduce or use, on this subject or on anything else.

For sure, Part 1 is the clearest of all his sections: but at the end of it all, it’s still clear as mud to me why Edmunds thinks there can only be a single way of interpreting all the slabs of text he has copied over from numerous different sources to yield his particular conclusion. Yes, I can see how Killorain might be the person Edmunds thinks he is: but it’s a weak, sprawling, unfocused argument that carries him there, and it’s just not written in a way that acknowledges other possibilities or helps readers to eliminate those other possibilities.

Edmunds writes with enthusiasm (and not a little bombast at times): but it would need a significantly sharper knife than his “Story DNA” to pierce these historical veils. Has he managed to identify the treasure Ur-story’s paternity here? No, not really, sorry. 🙁

## Part 2 – Identifying the Band of Pirates

Here Edmunds again tries to use Story DNA to strip down the ‘Bosun Bird’, the Loot of Lima, Cocos Island Treasure, Mururoa Atoll Treasure (the same one that excited Ron Justron so much), and the Palmyra Island Treasure stories into their overlapping DNA fragments to identify the band of pirates behind the single (supposed) pirate treasure event from which all these stories were derived.

However, his argument here is terrifically speculative (and noticeably fuzzier and weaker than Part 1’s): and right at the end, Edmunds expands his scope yet further – he now also wants his argument to encompass “Masonic DNA”. By this he means things which sound as though they link to Masonic practices or Masonic history, if you (again) strip them down to their fragmentary parts.

Unfortunately, this latter half makes his argument sound exactly like the kind of paranoid Masonic delusions that have plagued just about every piece of writing on treasure maps for the last century. To the best of my knowledge, there is no historical evidence whatsoever that links Speculative Freemasonry to anything remotely like a genuine conspiracy involving treasure: everything written on the subject has been little more than a giant house of cards (sans Frank Underwood, of course) that a single committed sneeze would blow to the floor.

Hence this for me is where Edmunds’ book “jumps the shark”, i.e. the point where the reader’s sympathies towards the kind of thing Edmunds was attempting (however imperfectly) in Part 1 quickly drop to zero. “Story DNA” was already only as strong as the execution (and this itself was noticeably lacking): but his “Masonic DNA” is just wrong-headed, and on many different levels.

## Part 3 – H. T. Wilkins Joins The Party

Here, Edmunds recaps some of his previous book on Captain Kidd’s treasure maps (“Kidd: The Search For His Treasure”), but links his conclusions with Juan Fernandez Island, Oak Island, Plum Island, and a convoluted account of how he believes prolific author Wilkins was the mastermind behind it all.

Errmmm… really? Really truly honestly? Wilkins-as-Svengali is the conceit that enables both Edmunds and Ron Justron to make anything they want to be true sound true (i.e. where Wilkins can only have genuinely copied document X from an original source) or anything that doesn’t fit their chosen narrative sound false (i.e. Wilkins must have cleverly concocted document Y to leave a trail of clues that only the Wisest of the Wise can recognize and see past).

This is, of course, hyper-selective wishful thinking (as opposed to anything that might approach critical evaluation, or indeed critical thought). What makes this even clearer is Geoff Bath’s very interesting series of books, for which Geoff managed to uncover a whole lot of Wilkins’ correspondence. In my opinion, Bath offered up a picture of Wilkins that was radically different from (and, I believe, a lot more accurate and evidentially grounded than) the one in either of Edmunds’ books.

Yes, Wilkins surely did personally create many of the maps that appear in his books, complete with Alle Ye Olde-Fashionned Nonne-Sense Texte He Couldde Comme Uppen Wyth: but it beggars belief that Wilkins was such a genius that he caused everything to fall into place for Edmunds, by leaving a faint trail of breadcrumb clues to The Real Treasure that only someone who just happened to cross-reference all his different books might possibly notice.

## Part 4 – Latcham and Guayacan

This is where Edmunds looks (somewhat cursorily, it has to be said) at the Guayacan treasure story written about by Richard Latcham (and yes, I do have a copy of the original book in Spanish).

I’m sorry, though: as a piece of supposed history, this story really sucks. And the extra letter (supposedly by Captain Cornelius Patrick Webb of the Unicorn) is enough to put anyone right off their soup.

To start to explain away the problems with this, Edmunds (or rather Ron Justron’s Latin teacher acquaintance) translated the Cornelius Webb letter back into Latin (from Wilkins’ supposed mistranslation) and then back into English: and then talks about star codes, alchemy, celestial navigation, and yet more Masonic DNA. All of which is then brought together in the kind of numeric over-wrangling typically employed by conspiracy nutters to prove whatever thing they wanted to prove in the first place. Not that I’m saying that Edmunds is one of those: but the problem here is that his argument doesn’t make it easy to tell the two apart.

Perhaps others will find themselves convinced by this, but it left me as stone cold as Stone Cold Steve Austin. In Antarctica. Eating cold soup.

## Part 5 – Rennes-le-Chateau

In which Edmunds recaps Pierre Plantard’s Rennes-le-Chateau story: he concludes that it is nonsense, but based on a genuine document connected to Lord Anson. Which is like asking the reader to slip their brain into neutral before turning the page. *sigh*

## Part 6 – Anson’s Monument

By this point I was finding it extremely difficult to find the will to turn the pages. Good luck if you want to try summarizing this.

## Part 7 – Mathematical-Sounding Stuff

This part covers the Golden Ratio, spirals, hidden geometry, and all the other gee-whizz crop circle stuff they don’t teach you on a Maths degree. If it had any redeeming features, I didn’t manage to pick up on them: by now, the nausea was really quite overwhelming.

Incidentally, a short section on Spanish Treasure Codes reproduces some drawings from a 65-page 2004 book called “The Spanish Code to Treasure” by Lou Layton (now deceased): however, it’s extraordinarily hard to tell whether these are genuine or just wishful thinking.

Errmmm… no, it wasn’t. Next!

## Part 9 – “Well, That About Wraps It Up For God”

Fans of Hitchhiker’s Guide To The Galaxy will probably recognize the above as the title of one of Oolon Colluphid’s books. These were all characterized by foolish self-referential logic that purported to use the existence of God to prove His non-existence, e.g.:

“I refuse to prove that I exist,” says God, “for proof denies faith, and without faith I am nothing.”

“But,” says Man, “the Babel fish is a dead giveaway, isn’t it? It could not have evolved by chance. It proves that You exist, and so therefore, by Your own arguments, You don’t. QED”

Suffice it to say that, to my mind, this final part of Edmunds’ book – that applies Story DNA, Masonic DNA, star codes, numerology and abstruse numerical calculations to the Shugborough Hall Shepherds’ Monument to supposedly yield the precise longitude and latitude of a buried pirate treasure – reminds me strongly of Oolon Colluphid. And not in a flattering way.

But feel free to read “Anson’s Gold” for yourself and make up your own mind: for what do I know?

As I wrote before, I think we have four foundational challenges to tackle before we can get ourselves into a position where we can understand Voynichese properly, regardless of what Voynichese actually is:

* Task #1: Transcribing Voynichese into a reliable raw transcription e.g. EVA qokeedy
* Task #2: Parsing the raw transcription to determine the fundamental units (its tokens) e.g. [qo][k][ee][dy]
* Task #3: Clustering the pages / folios into groups that behave differently e.g. Currier A vs Currier B
* Task #4: Normalizing the clusters i.e. understanding how to map text in one cluster onto text in another cluster

This post relates to Task #2, parsing Voynichese.

## Parsing Voynichese

Many recent Voynichese researchers seem to have forgotten (or, rather, perhaps never even knew) that the point of the EVA transcription alphabet wasn’t to define the actual / only / perfect alphabet for Voynichese. Rather, it was designed to break the deadlock that had occurred: circa 1995, just about every Voynich researcher had a different idea about how Voynichese should be parsed.

Twenty years on, and we still haven’t got any consensus (let alone proof) about even a single one of the many parsing issues:
* Is EVA qo two characters or one?
* Is EVA ee two characters or one?
* Is EVA ii two characters or one?
* Is EVA iin three characters or two or one?
* Is EVA aiin four characters or three or two or one?
…and so forth.

And so the big point of EVA was to try to provide a parse-neutral stroke transcription that everyone could work on and agree on, even if they happened to disagree about just about everything else. (Which, as it happens, they tend to do.)
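To make the ambiguity concrete, here is a minimal greedy longest-match tokenizer sketch. The candidate token list is purely illustrative (it assumes answers to exactly the open questions above, rather than claiming them):

```python
# A greedy longest-match parser sketch, ASSUMING (for illustration only) that
# the multi-glyph groups below behave as single tokens. A different candidate
# list yields a different parse — and hence different downstream statistics.
CANDIDATE_TOKENS = sorted(
    ["qo", "ee", "dy", "ch", "sh", "aiin", "ain", "air", "iin"],
    key=len, reverse=True,  # try longer candidates first
)

def parse(word):
    tokens, i = [], 0
    while i < len(word):
        for t in CANDIDATE_TOKENS:
            if word.startswith(t, i):
                tokens.append(t)
                i += len(t)
                break
        else:
            tokens.append(word[i])  # no multi-glyph match: emit a single glyph
            i += 1
    return tokens
```

Under this (assumed) token list, EVA “qokeedy” parses to [qo][k][ee][dy]: swap the list and the parse changes, which is precisely why the parsing debate matters.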

## The Wrong Kind Of Success

What happened next was that as far as meeting the challenge of getting people to talk a common ‘research language’ together, EVA succeeded wildly. It even became the de facto standard when writing up papers on the subject: few technical Voynich Manuscript articles have been published since that don’t mention (for example) “daiin daiin” or “qotedy qotedy”.

However, the long-hoped-for debate about trying to settle the numerous parsing-related questions simply never happened, leaving Voynichese even more talked about than before but just as unresolved as ever. And so I think it is fair to say that EVA achieved quite the wrong kind of success.

By which I mean: the right kind of success would be where we could say anything definitive (however small) about the way that Voynichese works. And just about the smallest proof would be something tangible about what groups of letters constitute a functional token.

For example, it would be easy to assert that EVA ‘qo’ acts as a functional token, and that all the instances of (for example) ‘qa’ are very likely copying mistakes or transcription mistakes. (Admittedly, a good few o/a instances are ambiguous to the point that you just can’t reasonably decide based on the scans we have). To my eyes, this qo-is-a-token proposition seems extremely likely. But nobody has ever proved it: in fact, it almost seems that nobody has got round to trying to prove anything that ‘simple’ (or, rather, ‘simple-sounding’).

## Proof And Puddings

What almost nobody seems to want to say is that it is extremely difficult to construct a really sound statistical argument for even something as basic as this. The old saying goes that “the proof of the pudding is in the eating” (though the word ‘proof’ here is actually a linguistic fossil, meaning ‘test’): but in statistics, the normal case is that most attempts at proof quickly make a right pudding out of it.

As a reasonably-sized community of often-vocal researchers, it is surely a sad admission that we haven’t yet put together a proper statistical testing framework for questions about parsing. Perhaps what we all need to do with Voynichese is to construct a template for statistical tests for testing basic – and when I say ‘basic’ I really do mean unbelievably basic – propositions. What would this look like?

For example: for the qo-is-a-token proposition, the null hypothesis could be that q and o are weakly dependent (and hence the differences are deliberate and not due to copying errors), while the alternative hypothesis could be that q and o are strongly dependent (and hence the differences are instead due to copying errors): but what is the p-value in this case? Incidentally:

* For A pages, the counts are: (qo 1063) (qk 14) (qe 7) (q 5) (qch 1) (qp 1) (qckh 1), i.e. 29/1092 = 2.66% non-qo cases.
* For B pages, the counts are: (qo 4049) (qe 55) (qckh 8) (qcth 8) (q 8) (qa 6) (qch 3) (qk 3) (qt 2) (qcph 2) (ql 1) (qp 1) (qf 1), i.e. 98/4147 = 2.36% non-qo cases.

But in order to calculate the p-value here, we would need to be able to estimate the Voynich Manuscript’s copying error rate…
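To show roughly what such a test might look like, here is a stdlib-only sketch: it ASSUMES a 1% per-glyph copying error rate (one value within the range discussed in the next section) and treats each q-initial group as an independent trial, neither of which is established:

```python
from math import comb

def binom_sf(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed as the complement of the
    lower tail (this avoids overflowing floats with huge upper-tail
    binomial coefficients)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# A-page counts quoted above: 1092 q-initial groups, 29 of them not 'qo'.
# ASSUMING a 1% copying error rate, how surprising would 29 deviations be?
p_value = binom_sf(1092, 29, 0.01)
```

With these (assumed) numbers the tail probability comes out tiny, i.e. 29 deviations would be hard to explain by a 1% error rate alone; the whole exercise, of course, hinges on being able to justify that error rate in the first place.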

## Voynichese Copying Error Rate

In the past, I’ve estimated Voynichese error rates (whether in the original copying or in the transcription to EVA) at between 1% and 2% (i.e. a mistake every 50-100 glyphs). This was based on a number of different metrics, such as the qo-to-q[^o] ratio, the ain-to-oin ratio, the aiin-to-oiin ratio, the air-to-oir ratio, e.g.:

A pages:
* (aiin 1238) (oiin 110) i.e. 8.2% (I suspect that Takeshi Takahashi may have systematically over-reported these, but that’s a matter for another blog post).
* (ain 241) (oin 5) i.e. 2.0% error rate if o is incorrect there
* (air 114) (oir 3) i.e. 2.6% error rate

B pages:
* (aiin 2304) (oiin 69) i.e. 2.9% error rate
* (ain 1403) (oin 18) i.e. 1.2% error rate
* (air 376) (oir 6) i.e. 1.6% error rate
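For transparency, these percentages can be recomputed directly from the counts listed above, taking each rate as o-count / (a-count + o-count); tiny rounding differences from the figures quoted in the text are possible:

```python
# Deviation rates from the EVA counts quoted above: the rate is the fraction
# of the (presumed-erroneous) o-form among all a-form + o-form instances.
counts = {
    "A": {"aiin:oiin": (1238, 110), "ain:oin": (241, 5), "air:oir": (114, 3)},
    "B": {"aiin:oiin": (2304, 69), "ain:oin": (1403, 18), "air:oir": (376, 6)},
}

rates = {
    (pages, pair): o / (a + o)
    for pages, pairs in counts.items()
    for pair, (a, o) in pairs.items()
}

for (pages, pair), rate in sorted(rates.items()):
    print(f"{pages} {pair}: {100 * rate:.1f}%")
```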

It’s a fact of life that ciphertexts get miscopied (even printed ciphers suffer from this, as Tony Gaffney has reported in the past), so it seems unlikely that the Voynich Manuscript’s text would have a copying error rate as low as 0.1% (i.e. a mistake every 1000 glyphs). At the same time, an error rate as high as 5% (i.e. every 20 glyphs) would arguably seem too high. But if the answer is somewhere in the middle, where is it? And is it different for Hand 1 and Hand 2 etc?

More generally, is there any better way for us to estimate Voynichese’s error rate? Why isn’t this something that researchers are actively debating? How can we make progress with this?

## (Structure + Errors) or (Natural Variation)?

This is arguably the core of a big debate that nobody is (yet) having. Is it the case that (a) Voynichese is actually strongly structured but most of the deviations we see are copying and/or transcription errors, or that (b) Voynichese is weakly structured, with the bulk of the deviations arising from other, more natural and “language-like” processes? I think this cuts far deeper to the real issue than the typical is-it-a-language-or-a-cipher superficial bun-fight that normally passes for debate.

Incidentally, a big problem with entropy studies (and indeed with statistical studies in general) is that they tend to over-report the exceptions to the rule: for something like qo, it is easy to look at the instances of qa and conclude that these are ‘obviously’ strongly-meaningful alternatives to the linguistically-conventional qo. But from the strongly-structured point of view, they look well-nigh indistinguishable from copying errors. How can we test these two ideas?

Perhaps we might consider a statistical study that uses this kind of p-value analysis to assess the likeliest level of copying error? Or alternatively, we might consider whether linguistic hypotheses necessarily imply a lower practical bound for the error rate (and whether we can calculate this lower bound). Something to think about, anyway.

All in all, EVA has been a huge support for us all, but I do suspect that more recently it may have closed some people’s eyes to the difficulties both with the process of transcription and with the nature of a document that (there is very strong evidence indeed) was itself copied. Alfred Korzybski famously wrote, “A map is not the territory it represents”: similarly, we must not let possession of a transcription give us false confidence that we fully understand the processes by which the original shapes ended up on the page.

After my last post proposing a possible link between the Silk Dress Cipher and Orphan Trains, I widened my search a little to take in 19th Century Baltimore orphanages. What kind of archival sources might still exist, in a town where 1,500 buildings were destroyed by fire in 1904?

But rather than look directly, I decided to instead first try to find any books or studies on 19th century Baltimore orphanages. And it turns out that (unless you know otherwise) there are only really two of those to consider…

## Baltimore orphanages #1 – Marcy Kay Wilson

The first is “Dear Little Living Arguments”: Orphans and Other Poor Children, Their Families and Orphanages, Baltimore and Liverpool, 1840-1910, a freely downloadable 2009 dissertation by Marcy Kay Wilson at the University of Maryland:

The two Baltimore orphanages that I examine are the Home of the Friendless of Baltimore City (HOF), which was established in 1854, and the Baltimore Orphan Asylum (BOA). The latter was known as the Female Humane Association Charity School (FHACS) at the time of its incorporation in 1801. Six years later (1807), it was reincorporated as the Orphaline Charity School (OCS). It was renamed the Baltimore Female Orphan Asylum (BFOA) in 1826, and finally became known as the BOA in 1849. [pp.10-11]

Her primary sources for the Baltimore Orphan Asylum (in the Woodbourne Collection of the Maryland State Archives) include:
* Board Minutes (1881-1897, 1905-1921)
* Monthly Reports (1893-1917)
* Annual Reports (1860-1930)

For the Home of the Friendless of Baltimore City, the same Woodbourne Collection holds:
* Annual Reports (1854-1914)
* Constitution and By-Laws, 1859.
* Charter and By-Laws, revised 1904.
* Board Minutes (1901-1913)

Also (even though I’m not initially looking at Catholic orphanages):

The female-religious order known as the Oblate Sisters of Providence (OSP) granted me access to its records, which are housed at Our Lady of Mount Providence Convent in Baltimore. The OSP has the distinction of being the oldest Catholic religious order for African-American women in the United States, and was created in 1829. [p.13]

I’ve started to patiently work my way through its 402 pages, but I’ll be a little while. It turns out that orphanages sprang up all over America during the 19th century, initially triggered by the family-destroying ravages of cholera epidemics… so best not hold your breath, then. 🙂

## Baltimore orphanages #2 – Nurith Zmora

Marcy Kay Wilson refers early on to Nurith Zmora’s Orphanages Reconsidered: Child Care Institutions in Progressive Era Baltimore (Philadelphia: Temple University Press, 1994).

Zmora used the records of the Samuel Ready School for Orphan Girls (which opened in 1887, and whose archives are in the special collections of the Langsdale Library at the University of Baltimore), the Hebrew Orphan Asylum (whose records are now held by the Maryland Jewish Historical Society), and the Dolan Children’s Aid Society (whose records are in the Children’s Bureau archive of the Associated Catholic Charities of Baltimore).

Though Wilson and (the far more revisionist, it has to be said) Zmora both offer fascinating insights into the social and political dynamics underpinning Baltimore’s orphanages, it’s hard not to conclude that their efforts sit somewhat at right-angles to our present enquiry: and it also has to be said that there is not a hint of the whole Orphan Trains narrative emerging from the various archives so far. But… perhaps this is all just the tip of an evidential iceberg. 😉

## Other sources

There were a number of other books that kept coming up during my literature trawl, that I thought I ought to mention:

Hacsi, Timothy A. “Second Home: Orphan Asylums and Poor Families in America”. Harvard University Press.

Clement, Priscilla Ferguson. “Growing Pains: Children in the Industrial Age, 1850-1890”. New York: Twayne Publishers, 1997. [Wilson points to p.200]

Holt, Marilyn. “The Orphan Trains: Placing Out in America”. Lincoln: University of Nebraska Press, 1992. [Wilson points to pp.80-117]

O’Connor, Stephen. “Orphan Trains: The Story of Charles Loring Brace and the Children He Saved and Failed”. New York: Houghton Mifflin Company, 2001.

Crooks, James B. “Politics and Progress; The Rise of Progressivism in Baltimore, 1895 to 1911”. Baton Rouge: Louisiana State University Press, 1968.

Since my recent post on the silk dress cipher, Jim Shilliday left an extremely helpful comment, in which he suggested specific readings for many of its various codewords.

So here’s a link to a Microsoft Word document containing a tabbed transcription of the Silk Dress Cipher.

## The Locations

The first two columns contain a large number of codewords that seem almost certain to be American / Canadian place-names:

```
-----Sheet 1-----
Smith nostrum Antonio rubric == San Antonio, Texas
Make Indpls == Indianapolis, Indiana
Spring wilderness Vicksbg rough-rack == Vicksburg, Mississippi
Saints west Leavwth merry == Leavenworth, Kansas
Cairo rural == Cairo, Illinois (or perhaps Cairo, Georgia?)
Missouri windy == Missouri / Chicago?
Elliott memorise == Elliot, Maine [though this is not hugely convincing]
Concordia mammon == Concordia, Kansas
Concordia merraccous == Concordia, Kansas / Americus?

-----Sheet 2-----
Bismark Omit == Bismarck, North Dakota
Paul Ramify == ?
Helena Onus == Helena, Montana
Green Bay == Green Bay, Wisconsin
Assin Onaga == Onaga, Kansas
Custer Down == Custer, South Dakota
Garry [Noun] Lentil = Gary, Indiana?
Minnedos [Noun] Jammy = Minnedosa, Manitoba
Calgarry Cuba == Calgary, Alberta / Cuba
Grit wrongful Calgarry [Noun] Signor == Calgary, Alberta
Landing [Noun] Regina == Regina, Saskatchewan
```

I put all these locations onto Google Maps to see if any patterns emerged:

## So… What Links These Places?

In a comment here, bdid1dr suggested that these towns might possibly be connected with the “Underground Railroad”, a route a large number of runaway slaves followed to get them from the South to Canada (where slavery was illegal). All the same, even though this is an interesting slice of American history, it is almost certainly not the explanation for the Silk Dress Cipher because (a) the dates are wrong (slavery had been made illegal in the US by the mid-1880s, and so the Underground Railroad was not still in operation), and (b) the locations are wrong (the Underground Railroad largely ran up the Eastern side of the US, quite different to the pattern we see here).

In a further comment, however, Jim Shilliday points instead to a quite different American history: the Orphan Trains. These ran from 1854 until as late as 1929, shifting East Coast orphans (though in fact a large number of them had one or even two parents) out to farms, many in the mid-West. What particularly triggered Jim’s memory was that (as he noted in his comment) “Concordia, Kansas (mentioned twice in the text) is the site of the National Orphan Train Complex, housed in a restored Union Pacific Railroad Depot“.

It is certainly striking that for a piece of paper found in Maryland, everywhere (apparently) listed seems to be a long way away: and that there appear to be three locations in a line in Kansas – Leavenworth, Onaga, and Concordia. (When I checked, all three had railroad stations: from Leavenworth Junction, trains ran to Onaga [rails laid 1877 by the Leavenworth, Kansas & Western Railway] and separately to Concordia [via Miltonvale on the Atchison, Topeka & Santa Fe Line].)

The New York Historical Society holds MS 111 (The Victor Remer Historical Archives of The Children’s Aid Society), which is so large that it’s hard to know where to begin. Portions have been digitized and placed on flickr, but these seem to be mainly photographs: individual case files may only be examined at the archives.

If there is some kind of guide to the Orphan Trains’ destinations (whether as a book or online), I haven’t yet found it. However, given that somewhere between 120,000 and 270,000 children (depending on which source you believe) were placed, it would perhaps be unsurprising if almost all destinations were covered at one time or another: and it would also be unsurprising if the placement or travel records that remain are far from complete.

Incidentally, the National Orphan Train Complex in Concordia is holding its 2017 Annual Orphan Train Riders Celebration from 1st to the 4th June 2017, if anyone not too far away is interested to find out more.

## Orphan Trains and Maryland

Probably the most usefully skeptical resource is Orphan Train Myths and Legal Reality: the author (R. S. Trammell) argues that, though well-intentioned, the Orphan Trains in practice offered only a quick fix for a much deeper problem, and helped delay the kinds of deeper reforms and changes in attitude that were needed at the time.

Trammell also notes: “Orphan train trips were also sponsored and financed by charitable contributions and wealthy philanthropists such as Mrs. John Jacob Astor III who, by 1884, had sent 1,113 children west on the trains.” And also that New York wasn’t the only starting point: “[s]imilar institutions were created in Baltimore, Maryland and Boston, Massachusetts”.

Trammell’s source for this last point was the 1902 book by Homer Folks: “The care of destitute, neglected, and delinquent children“. This talks (p.49) about the 1807 foundation of the Baltimore orphan asylum, which had originally been the “female orphaline charity school”, and then the Baltimore female orphan asylum managed by “nine discreet female characters”, and where “[t]he directors were also given power to bind out children placed in the school”. Folks also mentions “St. Mary’s female orphan asylum”, a Catholic asylum in Baltimore founded in 1817.

But can we find any records of these orphan asylums? Hmmm…

OK, so there’s like another Zodiac film coming out this summer (2017), and it’s like called Awakening The Zodiac. And if that’s not just like totally thrilling enough for you kerrrazy cipher people already, there’s also a trailer on YouTube long enough to eat a couple of mouthfuls of popcorn (maybe three tops):

I know, I know, some haters are gonna say that it’s disrespectful to the memory of the dead, given that the Zodiac claimed to have killed 37 people, and that the film makers are just building cruddy entertainment on top of their families’ suffering. But it’s just Hollllllllllywood, people, or rather about as Hollywood as you can get when you film it on the cheap in Canada. Though if the pitch was much more elaborate than “Storage Hunters meets serial killer”, you can like paint my face orange and call me Veronica.

Seriously, though, I’d be a little surprised if anyone who knows even 1% more than squat about ciphers was involved: if my eyes don’t deceive me, there certainly ain’t no “Oranchak” in the credits. Maybe there’ll turn out to be hidden depths here: but – like the Z340 – if there are, they’re very well hidden indeed.

This, you may be a little surprised to read, is a story about a “two-piece bustle dress of bronze silk with striped rust velvet accents and lace cuffs“, with original Ophelia-motif buttons. Maryland-based curator and antique dress collector Sara Rivers-Cofield bought it for a Benjamin from an antique mall around Christmas 2013: but it turned out – marvel of marvels – to have an odd-looking ciphertext concealed in a secret inside pocket.

In early 2014, German crypto-blogger Klaus Schmeh threw this puzzle to his readers to solve, not unlike a juicy bone to a pack of wolves. However, their voracious code-breaking teeth – normally reliable enough for enciphered postcards and the like – seemed not to gain any grip on this silk dress cipher, even when he revisited it a few days ago.

So… what is going on here? Why can’t we just shatter its cryptographic shell, like the brittle antique walnut it ought by all rights to be? And what might be the cipher’s secret history?

## First, The Dress

It’s made of nice quality silk (and has been looked after well over its 130-odd year lifetime), so would have been a pricey item. The buttonholes are hand-stitched (and nicely finished), yet much of the other stitching was done by machine.

This alone would date the item to after 1850 or so (when Isaac Singer’s sewing machines began to be sold in any quantity). However, Sara Rivers-Cofield dates it (on purely stylistic grounds) to “the mid-1880s”, which I find particularly interesting, for reasons I’ll explain later.

All we know about its original owner, apart from a penchant for hidden ciphers, is her surname (“Bennett”) and her dress size. We might reasonably speculate (from the cost and quality of her silk two-piece) that she was somewhere between well-to-do and very well off; and perhaps from a larger city in Maryland (such as Baltimore) where silk would be more de rigueur; and possibly she wasn’t much beyond her mid-20s (because life expectancy wasn’t that good back then).

## Who Might She Be?

It doesn’t take much web searching to come up with a plausible-sounding candidate: Margaret J. Bennett, “a dowager grand dame of Baltimore society” (according to the Baltimore Sun) who died childless in 1900, leaving $150,000 to endow a local trust to provide accommodation for homeless women.

Among Baltimore architectural historians, she is also remembered for the Bennett House at 17 West Mulberry Street: there, the land was purchased by F.W. Bennett (who was the head of his own Auction House in town), while the house was erected in 1880.

Anyway, if anyone here has access to American newspapers archives or Ancestry.com (though I have in the past, I don’t at the moment), I’d be very interested to know if they have anything on Margaret J. Bennett. I didn’t manage to find any family archives or photographs online, but hopefully you cunning people can do much better.

Of course, there may well be many other Mrs Bennetts who also match the same basic profile: but I think Margaret J. is too good a catch not to have at least a quick look. 🙂

## Now, The Silk Dress Cipher Itself

What Sara Rivers-Cofield (and her mother) found hidden inside the silk dress’s secret inner pocket were two balled-up sheets of paper (she called them “The Bustle Code”):

Within a few seconds of looking at these, it was clear to me that what we have here is a genuine cipher mystery: that is, something where the cryptography and the history are so tangled that each obscures the other.

Curiously, the writing on the sheets is very structured: each line consists of between two and seven words, and all bar three of these have the number of words written in just below the first word. So even when text wraps round, it appears that we can treat that whole (wrapped) line as a single unit.

Also oddly, the writing is constrained well within the margins of the paper, to the point that there almost seems to be an invisible right-hand margin beyond which the writer did not (or could not) go. It therefore seems as though these sheets might be a copy of a document that was originally written on much narrower pieces of paper, but where the original formatting was retained.

Another point that’s worth making is that the idea of using word lists for telegraphy (and indeed cryptography) is to keep the words dissimilar to each other, to prevent messages getting scrambled. Yet here we appear to have words very similar to each other (such as “leafage” and “leakage”), along with words that seem to have been misheard or misspelt (“Rugina” for “Regina”, “Calgarry” for “Calgary”, etc).

To me, this suggests that part of the process involved somebody reading words out loud to someone writing them down. Hence I’ve attempted to correct parts of my transcription to try to bring some semblance of uniformity to it. (But feel free to disagree, I don’t mind).

Interestingly, if you lay out all the words in columns (having unwrapped the word wrapping), a number of striking patterns emerge…

## The Column Patterns

Where the codetext’s words repeat, they do so in one of three groups: within the first column (e.g. “Calgarry”), within the second column (e.g. “Noun”), or within the remainder (e.g. “event”). In the following image, I’ve highlighted in different colours where words starting with the same letter repeat from column three onwards:

Moreover, the words in the first column are dominated by American and Canadian place names: although (just to be difficult) “egypt” and “malay” both appear elsewhere in the lines.

The third column is overwhelmingly dominated by l-words (legacy, loamy, etc): generally, words in the third to seventh columns start with a very limited range of letters, one quite unlike normal language initial letter distributions.

Indeed, this strongly suggests to me that the four instances of “Noun” in the second column are all nulls, because if you shift the remainder of those words across by one column, “laubul” / “leakage” / “loamy” / “legacy” all slide from column #4 back into the l-initial-heavy column #3.
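This shift test is easy to express in code. The rows below are HYPOTHETICAL, made up purely to illustrate the mechanism (the real rows are in the linked transcription): if “Noun” is a null, dropping it should slide later words back so that column three recovers its heavy bias towards l-initial words.

```python
# Toy version of the Noun-as-null test, on HYPOTHETICAL rows.
rows = [
    ["Concordia", "mammon", "legacy", "event"],  # no null: l-word already in col 3
    ["Garry", "Noun", "grit", "lentil"],         # hypothetical: null pushes l-word to col 4
    ["Helena", "Noun", "onus", "loamy"],         # hypothetical
]

def l_fraction_in_col3(rows, drop_nulls):
    """Fraction of third-column words starting with 'l', optionally after
    deleting the hypothesised "Noun" nulls."""
    col3 = []
    for row in rows:
        cells = [w for w in row if w != "Noun"] if drop_nulls else row
        if len(cells) >= 3:
            col3.append(cells[2])
    return sum(w.startswith("l") for w in col3) / len(col3)
```

On this toy data, dropping the nulls lifts the l-initial fraction in column three from one-third to 100%: running the same comparison on the full transcription would be a simple way to quantify the Noun-as-null hypothesis.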

It seems almost impossible at this point not to draw the conclusion that these words are drawn from lists of arbitrary words, arranged by first letter: and that without access to those same lists, we stand no real chance of making progress.

All the same, a commenter on Sara Rivers-Cofield’s blog (John McVey, who collects historical telegraph codes, and who famously – “famously” around here anyway – helped decode a 1948 Israeli telegram recently) proposed that what was in play might be not so much a telegraphic code as a telegraphic cipher.

These (though rare) included long lists of words to yield numerical equivalents, which could then be used to index into different lists (or sometimes the same list, but three words onwards). Here’s a link to an 1870 telegraphic cypher from McVey’s blog.

However, from the highly-structured nature of the word usage and repetitions here, I think we can rule out any kind of formal telegraphic code, i.e. this is not in any way a “flat” words-in-words-out code substitution.

Rather, I think that we are looking at something similar to the semi-improvised (yet complex) rum-runner codes that Elizebeth Friedman won acclaim for breaking in the 1920s and 1930s: strongly reliant on code lists, yet also highly specialized around the precise nature of the contents of the communication, and using amateur code-making cunning.

That is, the first two columns seem to be encoding a quite different type of content to the other columns: the l-list words seem to be signalling the start of the second half’s contents.

## Were Other People Involved?

I’ve already suggested that the words on the two sheets were copied from smaller (or at least narrower) pieces of paper, and that as part of this someone may well have read words out for someone else to copy down (because spelling mistakes and/or mishearing mistakes seem to have crept in).

However, someone (very possibly a third person) has also apparently checked these, ticking each numbered line off with a rough green pencil. There are also underlinings under some words (such as “Lental”), not unlike a schoolteacher marking corrections on an exercise book.

Yet once you start to get secret writing with as many as three people involved, the chances of this being an individual’s private code would seem to be sharply reduced – that is, I think we can rule out the possibility that this was the delusional product of a “lone gunman”. Moreover, there must surely have been a good-sized pie involved to warrant the effort of buying (or, perhaps more likely given the idiosyncratic nature of the words, assembling) code books: by which I mean there was enough benefit to be divided into at least three slices and still be worth everyone’s while.

What I’m trying to get at here is that, from the number of people involved, the tangledness of the code books, and the curious rigid codetext structure, this seems to have been an amateur code system constructed to enable some kind of organized behaviour.

Betting springs obviously to mind here: and possibly horse-racing, given that “dobbin” and “onager” appear in the codewords. But there’s another possibility…

## Numbers and policies?

With its Puritan historical backdrop, America has long had an ambivalent attitude towards both gambling and alcohol: the history of casinos, inter-state gambling, and even Prohibition all attest strongly to this.

By the 1880s, the kind of state or local lotteries that had flourished at the start of that same century had almost all been shut down, victims of corruption and scandals. The one that remained (the Louisiana Lottery) was arguably even more corrupt than the others, but remained afloat thanks to the number of politicians benefiting from it: in modern political argot, it was (for a while, at least) “too big to fail”.

What stepped into the place of the state lotteries were illegal local lotteries, better known as the “numbers game”, or the numbers racket. Initially, these were unofficial lotteries run from private residences: but later (after the 1920s, I believe), they began instead to use numbers printed in newspapers that were believed to be random, such as the last three digits of some economic indicator (say, the total amount of money taken at a given racetrack). This was because of – surprise, surprise – the same kinds of corruption and rigging that had plagued the early official state lotteries.

Though the numbers racket became known as the scourge of Harlem in the first half of the twentieth century (there’s a very good book on this, “Playing the Numbers: Gambling in Harlem between the Wars”), modern state lotteries and interstate sports betting all but killed it off, though a few numbers joints do still exist (“You’re too late to play!”).

Back in the second half of the 19th century, ‘policy shops’ (where the question “do you want to buy a policy?” drew a parallel between insurance and gambling) started to flourish, eventually becoming a central feature of the American urban landscape. With more and more state lotteries being shut down as the century progressed, numbers were arguably the face of small-stake betting: in terms of accessibility, they were the equivalent of scratch cards, available nearly everywhere.

For a long time, though, information was king: if you were organized enough to get access to the numbers before the policy shop did, you could (theoretically) beat the odds. Winning numbers were even smuggled out by carrier pigeon: yet policy shops (who liked to take bets right up until the last possible moment) were suspicious of “pigeon numbers”, and would often not pay out if they caught so much as a sniff of subterfuge. It’s not as if you could complain to the police, right?

At the same time, a whole hoodoo culture grew up around numbers, where superstitious players were sold incense sticks, bath crystals, and books linking elements in your dreams to numbers. First published in 1889, one well-known example was “Aunt Sally’s Policy Player’s Dream Book”:

This contained lists linking dream-items to suggestions of matching number sequences to back, with two numbers being a “saddle”, three numbers a “gig”, and four numbers a “horse”: on the book’s cover, Aunt Sally is shown holding up “the washerwoman’s gig” (i.e. 4.11.44). There’s much more about this on Cat Yronwode’s excellent Aunt Sally page.

Might it be that these two Silk Dress Cipher sheets are somehow numbers betting slips that have been encoded? Could it be that each line somehow encodes a name (say, the first two columns), the size of the bet, and a set of numbers to bet on? There were certainly illegal lotteries and policy shops in Baltimore, so this is far from impossible.

Right now, I don’t know: but I’d be very interested to know of any books that cover the history of “policy shops” in the 19th century. Perhaps the clues will turn out to be somewhere under The Baltimore Sun…

As I see it, there are four foundational tasks that need to be done to wrangle Voynichese into a properly usable form:

* Task #1: Transcribing Voynichese text into a reliable computer-readable raw transcription e.g. EVA qokeedy
* Task #2: Parsing the raw transcription to determine Voynichese’s fundamental units (its tokens) e.g. [qo][k][ee][dy]
* Task #3: Clustering the pages / folios into groups where the text shares distinct features e.g. Currier A vs Currier B
* Task #4: Normalizing the clusters e.g. how A tokens / patterns map to B tokens / patterns, etc

I plan to tackle these four areas in separate posts, to try to build up a substantive conversation on each topic in turn.
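To make Task #2 concrete, here is a minimal sketch of one way a parser might work: greedy longest-match tokenization of a raw EVA string. To be clear, the candidate token inventory below is purely illustrative (settling on the real inventory is exactly what Task #2 is about), so treat this as a demonstration of the mechanism, not a claim about Voynichese’s actual units.

```python
# Sketch of Task #2: greedily split a raw EVA string into its longest
# matching candidate tokens. CANDIDATE_TOKENS is an illustrative guess,
# NOT a settled inventory of Voynichese's fundamental units.

CANDIDATE_TOKENS = ["qo", "ch", "sh", "ee", "dy", "ai",
                    "k", "t", "e", "d", "y", "o", "l", "r", "s", "n", "i", "q"]

def parse_eva(word: str) -> list[str]:
    """Greedily split an EVA word into the longest matching candidate tokens."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest candidates first (longest-match-wins).
        for tok in sorted(CANDIDATE_TOKENS, key=len, reverse=True):
            if word.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:
            # No candidate matched: fall back to a single-character token.
            tokens.append(word[i])
            i += 1
    return tokens

print(parse_eva("qokeedy"))  # -> ['qo', 'k', 'ee', 'dy']
```

Note that greedy matching is itself a modelling choice: a different parse (e.g. preferring [e][e] over [ee] in some contexts) would yield quite different token statistics, which is why the parsing question deserves its own post.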

## Takahashi’s EVA transcription

Rene Zandbergen points out that, of all the different “EVA” transcriptions that appear interleaved in the EVA interlinear file, “the only one that was really done in EVA was the one from Takeshi. He did not use the fully extended EVA, which was probably not yet available at that time. All other transcriptions have been translated from Currier, FSG etc to EVA.”

This is very true, and is the main reason why Takeshi Takahashi’s transcription is the one most researchers tend to use. Yet aside from not using extended EVA, there are a fair few idiosyncratic things Takeshi did that reduce its reliability: e.g. as Torsten Timm points out, “Takahashi reads sometimes ikh where other transcriptions read ckh”.

So the first thing to note is that the EVA interlinear transcription file’s interlinearity arguably doesn’t help us much at all. In fact, until such time as multiple genuinely EVA transcriptions get put in there, its interlinearity is more of an archaeological burden than something that gives researchers any kind of noticeable statistical gain.

What this suggests to me is that, given the high quality of the scans we now have, we really should be able to collectively determine a single ‘omega’ stroke transcription: and even where any ambiguity remains (see below), we really ought to be able to capture that ambiguity within the EVA 2.0 transcription itself.

## EVA, Voyn-101, and NEVA

The Voyn-101 transcription used a glyph-based Voynichese transcription alphabet derived by the late Glen Claston, who invested an enormous amount of his time to produce a far more all-encompassing transcription style than EVA did. GC was convinced that many (apparently incidental) differences in the ways letter shapes were put on the page might encipher different meanings or tokens in the plaintext, and so ought to be captured in a transcription.

So in many ways we already have a better transcription, even if it is one very much tied to the glyph-based frame of reference that GC was convinced Voynichese used (he firmly believed in Leonell Strong’s attempted decryption).

Yet some aspects of Voynichese writing slipped through the holes in GC’s otherwise finely-meshed net, e.g. the scribal flourishes on word-final EVA n shapes, a feature that I flagged in Curse back in 2006. And I would be unsurprised if the same were to hold true for word-final -ir shapes.

All the same, GC’s work on v101 could very well be a better starting point for EVA 2.0 than Takeshi’s EVA. Philip Neal writes: “if people are interested in collaborating on a next generation transcription scheme, I think v101/NEVA could fairly easily be transformed into a fully stroke-based transcription which could serve as the starting point.”

## EVA, spaces, and spatiality

For Philip Neal, one key aspect of Voynichese that EVA neglects is measurements of “the space above and below the characters – text above, blank space above etc.”

To which Rene adds that “for every character (or stroke) its coordinates need to be recorded separately”, for the reason that “we have a lot of data to do ‘language’ statistics, but no numerical data to do ‘hand’ statistics. This would, however, be solved by […having] the locations of all symbols recorded plus, of course their sizes. Where possible also slant angles.”

The issue of what constitutes a space (EVA .) or a half-space (EVA ,) has also not been properly defined. To get around this, Rene suggests that we should physically measure all spaces in our transcription and then use a software filter to transform that (perhaps relative to the size of the glyphs around it) into a space (or indeed half-space) as we think fit.
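Rene’s suggested measure-then-filter approach could be sketched very simply: classify each measured gap relative to the average glyph width on that line. The threshold ratios below are pure guesses of mine (tuning them against the scans is precisely the experiment this scheme would enable), so this is a sketch of the idea rather than a definitive filter.

```python
# Sketch of a space/half-space filter: classify a measured inter-glyph gap
# as a word space (EVA '.'), a half-space (EVA ','), or no space at all,
# relative to the mean glyph width on that line. The threshold ratios are
# illustrative assumptions, not measured values.

def classify_gap(gap_width: float, mean_glyph_width: float,
                 space_ratio: float = 0.6, half_ratio: float = 0.3) -> str:
    """Return '.', ',' or '' for a measured inter-glyph gap."""
    ratio = gap_width / mean_glyph_width
    if ratio >= space_ratio:
        return "."   # wide enough to call a confident word space
    if ratio >= half_ratio:
        return ","   # ambiguous: record as a half-space
    return ""        # glyphs judged to be adjacent within a word

# e.g. with a mean glyph width of 10 pixels:
print(classify_gap(8.0, 10.0))   # -> .
print(classify_gap(4.0, 10.0))   # -> ,
```

The nice property of storing raw measurements is that the thresholds stop being baked into the transcription: anyone who disagrees with a cut-off just reruns the filter with different parameters.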

To which I’d point out that there are also many places where spaces and/or half-spaces seem suspect for other reasons. For example, it would not surprise me if spaces around many free-standing ‘or’ groups (such as the famous “space transposition” sequence “or or oro r”) are not actually spaces at all. So it could well be that there would be context-dependent space-recognition algorithms / filters that we might very well want to use.

Though this at first sounds like a great deal of work to be contemplating, Rene is undaunted. To make it work, he thinks that “[a] number of basics should be agreed, including the use of a consistent ‘coordinate system’. Again, there is a solution by Jason Davies [i.e. voynichese.com], but I think that it should be based on the latest series of scans at the Beinecke (they are much flatter). My proposal would be to base it on the pixel coordinates.”

For me, even though a lot of these would be nice things to have (and I will be very interested to see Philip’s analysis of tall gallows, long-tailed characters and space between lines), the #1 frustration about EVA is still the inconsistencies and problems of the raw transcription itself.

Though it would be good to find a way of redesigning EVA 2.0 to take these into account, perhaps it would be better to find a way to stage delivery of these features (hopefully via OCR!), just so we don’t end up designing something so complicated that it never actually gets done. 🙁

## EVA and Neal Keys

One interesting (if arguably somewhat disconcerting) feature of Voynichese was pointed out by Philip Neal some years ago. He noted that where Voynichese words end in a gallows character, they almost always appear on the top line of a page (sometimes the top line of a paragraph). Moreover, these had a strong preference for being single-leg gallows (EVA p and EVA f); and also for appearing in nearby pairs with a short, often anomalous-looking stretch of text between them. And they also tend to occur about 2/3rds of the way across the line in which they fall.

Rather than call these “top-line-preferring-single-leg-gallows-preferring-2/3rd-along-the-top-line-preferring-anomalous-text-fragments”, I called these “Neal Keys”. This is a term that other researchers (particularly linguists) have taken exception to ever since, because it superficially sounds as though it is presupposing that this is a cryptographic mechanism. From my point of view, those same researchers didn’t object too loudly when cryptologist Prescott Currier called his Voynichese text clusters “languages”: so perhaps on balance we’re even, OK?

I only mention this because I think that EVA 2.0 ought to include a way of flagging likely Neal Keys, so that researchers can filter them in or out when they carry out their analyses.
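A first-pass flagger for likely Neal Keys could be as simple as the sketch below: pick out words on the top line of a paragraph that end in a single-leg gallows (EVA p or f). The pair-spacing and 2/3rds-along heuristics are deliberately left out for brevity, and the example words are made up, so this is a starting point rather than a definitive detector.

```python
# Sketch of a Neal Key flagger: from a paragraph (a list of lines, each a
# list of EVA words), return the top-line words ending in a single-leg
# gallows (EVA 'p' or 'f'). Deliberately ignores the pair-spacing and
# 2/3rds-along-the-line heuristics.

def flag_neal_keys(paragraph_lines: list[list[str]]) -> list[str]:
    """Return words from the top line of a paragraph that end in EVA p/f."""
    if not paragraph_lines:
        return []
    top_line = paragraph_lines[0]
    return [w for w in top_line if w.endswith(("p", "f"))]

# Invented example words, purely to show the mechanism:
para = [["polchedy", "okeey", "qotchf", "daiin"],
        ["chedy", "qokain"]]
print(flag_neal_keys(para))  # -> ['qotchf']
```

An EVA 2.0 metadata layer could then carry these flags alongside the text, letting each researcher decide whether to filter them in or out.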

## EVA and ambiguity

As I discussed previously, one problem with EVA is that it doesn’t admit of any uncertainty: by which I mean that once a Voynichese word has been transcribed into EVA, it is (almost always) then assumed to be 100% correct by all the people and programmes that subsequently read it. Yet we now have good enough scans to be able to tell that this is simply not true, insofar as there are a good number of words that do not conform to EVA’s model for Voynichese text, and for which just about any transcription attempt will probably be unsatisfactory.

For example, the word at the start of the fourth line on f2r:

Here, the first part could possibly be “sh” or “sho”, while the second part could possibly be “aiidy” or “aiily”: in both cases, however, any transcriber attempting to reduce it to EVA would be far from certain.

Currently, the most honest way to transcribe this in EVA would be “sh*,aii*y” (where ‘*’ indicates “don’t know / illegible”). But this is an option that isn’t taken as often as it should be.

I suspect that in cases like this, EVA should be extended to try to capture the uncertainty. One possible way would be to include a percentage estimate that an alternate reading is correct. In this example, the EVA transcription could be “sh!{40%=o},aiid{40%=*}y”, where “!{40%=o}” would mean “the most likely reading is that there is no character there (i.e. ‘!’), but there is a 40% chance that the character should be ‘o’”.

For those cases where two or more EVA characters are involved (e.g. where there is ambiguity between EVA ch and EVA ee), the EVA string would instead look like “ee{30%=ch}”. And on those occasions where there is a choice between a single letter and a letter pair, this could be transcribed as “!e{30%=ch}”.

For me, the point about transcribing with ambiguity is that it allows people doing modelling experiments to filter out words that are ambiguous (i.e. by including a [discard words containing any ambiguous glyphs] check box). Whatever’s going on in those words, it would almost always be better to ignore them rather than to include them.
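To show that this notation stays machine-friendly, here is a sketch of how downstream code might handle it: one function implementing the “discard words containing any ambiguous glyphs” checkbox, and one stripping a word down to its primary reading. The {NN%=x} syntax follows the examples above; the helper names are my own invention.

```python
import re

# Sketch of downstream handling for the proposed {NN%=x} ambiguity markers.
# Matches either an embedded alternate-reading group or a bare '*' (the
# existing "don't know / illegible" marker).
AMBIG = re.compile(r"\{\d+%=[^}]+\}|\*")

def is_ambiguous(word: str) -> bool:
    """True if the transcribed word contains any ambiguity marker."""
    return bool(AMBIG.search(word))

def strip_to_best_reading(word: str) -> str:
    """Drop all alternate readings, keeping only the primary transcription."""
    # Remove {NN%=x} groups and '*', then drop the '!' placeholders.
    return AMBIG.sub("", word).replace("!", "")

words = ["sh!{40%=o}", "aiid{40%=*}y", "qokeedy"]
# The "[discard words containing any ambiguous glyphs]" checkbox:
print([w for w in words if not is_ambiguous(w)])   # -> ['qokeedy']
print(strip_to_best_reading("sh!{40%=o}"))         # -> sh
```

The key design point is that the ambiguity markers are purely additive: a filter like this can always recover a plain EVA string for tools that don’t understand them.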

Rene points out that the metadata “were added to the interlinear file, but this is indeed independent from EVA. It is part of the file format, and could equally be used in files using Currier, v101 etc.” So we shouldn’t confuse the usefulness of EVA with its metadata.

In many ways, though, what we would really like to have in the EVA metadata is some really definitive clustering information: though the pages currently have A and B, there are (without any real doubt) numerous more finely-grained clusters that have yet to be determined in a completely rigorous and transparent (open-sourced) way. However, that is Task #3, which I hope to return to shortly.

In some ways, the kind of useful clustering I’m describing here is a kind of high-level “final transcription” feature, i.e. of how the transcription might well look much further down the line. So perhaps any talk of transcription at that level is getting a little ahead of ourselves for now.

## How to deliver EVA 2.0?

Rene Zandbergen is in no doubt that EVA 2.0 should not be in an interlinear file, but in a shared online database. There is indeed a lot to be said for having a cloud database containing a definitive transcription that we all share, extend, mutually review, and write programmes to access (say, via RESTful commands).

It would be particularly good if the accessors to it included a large number of basic filtering options: by page, folio, quire, recto/verso, Currier language, [not] first words, [not] last words, [not] first lines, [not] labels, [not] key-like texts, [not] Neal Keys, regexps, and so forth – a bit like voynichese.com on steroids. 🙂

It would also be sensible if this included open-source (and peer-reviewed) code for calculating statistics – raw instance counts, post-parse statistics, per-section percentages, 1st and 2nd order entropy calculations, etc.
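As a sketch of the entropy statistics mentioned above, the functions below compute 1st-order entropy and 2nd-order (conditional) entropy over a transcription string, using the standard Shannon definitions. The function names and the idea of stripping spaces first are my own assumptions about how such a toolkit might be organized.

```python
import math
from collections import Counter

# Sketch of 1st- and 2nd-order entropy statistics over a transcription
# string. h1 is plain Shannon entropy of single characters; h2 is the
# conditional entropy of a character given its predecessor, computed as
# H(bigrams) - H(first characters).

def h1(text: str) -> float:
    """Shannon entropy (bits/char) of the character distribution."""
    n = len(text)
    counts = Counter(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def h2(text: str) -> float:
    """Conditional entropy H(X2|X1) = H(X1,X2) - H(X1)."""
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    n = len(bigrams)
    counts = Counter(bigrams)
    h_bigram = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h_bigram - h1(text[:-1])

# Usage on a (space-stripped) stretch of transcription:
sample = "qokeedyqokedychedy"
print(h1(sample), h2(sample))
```

Having these implemented once, open-sourced, and peer-reviewed would avoid the long-standing problem of different researchers quoting subtly incompatible entropy figures.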

Many of these I built into my JavaScript Voynichese state machine from 2003: there, I wrote a simple script to convert the interlinear file into JavaScript (developers now would typically use JSON or I-JSON).

However, this brings into play the questions of boundaries (how far should this database go?), collaboration (who should make this database?), methodology (what language or platform should it use?), and also of resources (who should pay for it?).

One of the strongest reasons for EVA’s success was its simplicity: and given the long (and complex) shopping list we appear to have, it’s very hard to see how EVA 2.0 will be able to compete with that. But perhaps we collectively have no choice now.