Some more thoughts on the curious “key” sequence in the Beale Papers

Back in 1980, Jim Gillogly applied the Declaration of Independence codebook for the second Beale Paper (“B2”) to the first Beale Paper (“B1”), and discovered a very unlikely sequence in the resulting text: ABFDEFGHIIJKLMMNOHPP. The chance of the middle section alone (“DEFGHIIJKLMMNO”) occurring at random is about one in a million million, and what is even spookier is that the two aberrant letters in the longer sequence (“F” near the beginning, and “H” near the end) are one entry off from correct letters in the codebook (195 = “F” while 194 = “C”, and 301 = “H” while 302 = “O”).
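The mechanics of Gillogly's decode can be sketched in a few lines of Python. This is a hedged sketch: `doi_words` stands in for whichever numbered word list of the Declaration of Independence transcription is in play (transcriptions differ, which is the whole point of this post), and the five-word codebook below is made up purely for illustration:

```python
# Sketch of the basic Beale decoding step: each cipher number n is replaced
# by the first letter of the n-th word (1-indexed) of the codebook text.
def decode(cipher_numbers, doi_words):
    """Map each code number n to the initial of the n-th word (1-indexed)."""
    return "".join(doi_words[n - 1][0].upper() for n in cipher_numbers)

# Toy five-word "codebook", made up for illustration:
words = ["when", "in", "the", "course", "of"]
print(decode([1, 2, 3], words))  # -> "WIT"
```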

Gillogly attributed these to encoding slips: but given that I’m wondering whether this string is perhaps a code-sequence of some sort, could it be that the encoder used a slightly different transcription of the Declaration of Independence from the one he/she used for B2? This would yield systematic single-number shifts: so let’s look again at the key-sequence and the adjacent letters in the B2 codebook:

112 T R G A I
18  P B W H C
147 T A O T A
436 L B A P U
195 C F L A T  <-- Gillogly's first apparently offset code
320 I D O T E
37  A E S T W
122 P F S T W
113 R G A I A
6   O H E I B
140 I I T R O  <-- this code might possibly be offset too?!
8   E I B N F
120 T J P F T
305 P K O G B
42  T L O N A
58  O M R R T
461 H M H H D
44  O N A O N
106 P O H T T
301 T H O T P  <-- Gillogly's second apparently offset code
13  O P T D T
408 O P U T P
680 C A U B O
93  C W C U R
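A table like the one above can be generated mechanically: for each cipher code, list the codebook initials at positions −1 through +3 relative to it, so that a single-number transcription slip shows up as the expected letter sitting one column to the left or right of centre. A sketch, assuming `initials` holds the string of word-initials of the B2 codebook transcription (1-indexed):

```python
# For a cipher code n, return the codebook initials at positions n-1 .. n+3,
# padding with "-" where a position falls outside the codebook. Offset slips
# then appear as the expected letter one column left or right of the code's own.
def neighbours(code, initials, lo=-1, hi=3):
    row = []
    for off in range(lo, hi + 1):
        i = code + off  # 1-indexed position in the codebook
        row.append(initials[i - 1] if 1 <= i <= len(initials) else "-")
    return row

# Toy codebook of initials "ABCDEFGHIJ": code 5 shows D E F G H.
print(neighbours(5, "ABCDEFGHIJ"))  # -> ['D', 'E', 'F', 'G', 'H']
```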

Today’s observation, then, is this: if the errors in the Gillogly key sequence arose from the encoder having used a slightly different codebook transcription of the Declaration of Independence, and if the key string should have been ABCDEFGHIIJKLMMNOOPP (as seems to have been intended), then we have two definite (and possibly even three) places where the B1 codebook transcription may have slipped out of registration with the B2 codebook transcription. The third: the code used for the first “I” (140) could equally well have been 139, because that also codes for “I”.

Yet because the sequence is long enough to contain codes that seem correct on either side of these errors, we have the possibility of determining the bounds of those stretches in the B1 transcription where the variations (in this scenario) would have occurred. Specifically:-

122 P F S T W
 ?? -1
140 I I T R O
 ?? +1
147 T A O T A
147 T A O T A
 ?? -1
195 C F L A T
 ?? +1
 ?? +1
301 T H O T P
 ?? -1
305 P K O G B

So, if this scenario is correct, it would imply that (relative to the B2 codebook) the B1 codebook transcription dropped a character somewhere between #147 and #195, gained two somewhere between #195 and #301, and then lost another one between #301 and #305. There’s also the possibility that a character was dropped between #122 and #140 and then regained between #140 and #147… not very likely, but worth keeping in mind.
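This scenario is easy to test mechanically: shift every B1 code that falls inside a hypothesised mis-registration range by the corresponding offset before looking it up in the B2 codebook. A sketch — note that the range boundaries below (170/240/303) are purely illustrative placeholders chosen to sit inside the brackets just described; the actual slip points are exactly what we don't yet know:

```python
# Apply hypothesised per-range offsets to a list of B1 codes before lookup.
# ranges: list of ((lo, hi), delta) pairs - add delta to any code c with
# lo <= c <= hi. Ranges here are assumed non-overlapping.
def apply_offsets(codes, ranges):
    out = []
    for c in codes:
        shift = sum(delta for (lo, hi), delta in ranges if lo <= c <= hi)
        out.append(c + shift)
    return out

# Illustrative boundaries only: map 195 back to 194 ("C") and 301 up to 302 ("O").
ranges = [((170, 240), -1), ((241, 303), 1)]
print(apply_offsets([195, 301], ranges))  # -> [194, 302]
```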

Between #147 and #195, the B1 code usage table looks like this (20 instances):-

148
150 150 150 150 – 154
160 – 162
170 – 172 – 176 176
181 – 184 – 189
191 – 193 – 194 194 194

Between #195 and #301, the B1 code usage table looks like this (64 instances):-

200 200 – 201 201 – 202 – 203 – 206 – 207 – 208 208
210 – 211 211 212 212 – 213 213 – 214 – 216 216 216 216 216 216 216 – 218 218 – 219 219 219 219
221 221 – 224 – 225 – 227
230 230 – 231 – 232 232 – 233 – 234 234 234 – 236
242 – 246 – 247
251
261 – 263 – 264
275 275
280 280 – 283 283 – 284 284 – 286
290 – 294

So, this proposed mechanism would offset up to 84 codes from B1, which may be sufficiently disruptive to have caused B1 to appear undecodable to cryptological luminaries such as Jim Gillogly. It is also entirely possible that (just as with the B2 codebook) there are other paired insertions and deletions to contend with here.

There’s an interesting observation here that many of the transcription errors in the B2 codebook fell close to 10-character (line) boundaries: if this is also the case for some of these (putative) B1 codebook transcription errors, then we should be able to reduce the number of possible variations to check.
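If slips do cluster at line boundaries, the candidate slip positions inside each bracketing range collapse to the multiples of the line length that range contains. A small sketch, assuming (as with the B2 codebook) 10-entry lines, and using the #147–#195 bracket from above as the example:

```python
# Enumerate the positions strictly between lo and hi that fall on an assumed
# line boundary (every line_len entries) - the reduced set of places to check
# for a transcription slip.
def line_boundary_candidates(lo, hi, line_len=10):
    return [n for n in range(lo + 1, hi) if n % line_len == 0]

print(line_boundary_candidates(147, 195))  # -> [150, 160, 170, 180, 190]
```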

4 thoughts on “Jim Gillogly’s Beale sequence revisited…”

  1. On my website I give an alternate explanation for the monotonic increasing letter strings discovered by Gillogly, and why I believe they were not created as a result of an encryptor becoming bored, as Gillogly suggested. The strings have the appearance of being created by a process of double encipherment in which the enciphering process did not give the encryptor enough freedom to create perfectly formed strings. I also offer a purpose for the strings and how Beale intended to use them.

  2. Stanley Clayton on September 11, 2012 at 9:04 pm said:

    Part of the Gillogly string appears decoded in my book BEALE TREASURE MAP TO CIPHER SUCCESS, along with proof that E. A. Poe wrote the Beale Pamphlets. The keys are embedded in the codes; a sample starts at position 194:
    122 = 5.
    113 = 5 + 10 = 15 = O.
    6 = 6 + 10 = 16 = P.
    140 = 5 + 12 = 17 = Q.
    8 = 8 + 10 = 18 = R.
    120 = 3 + 16 = 19 = S.

    113 added = 5 plus twice the added sum of the previous number; read the book and tell me all my decodes are a coincidence. ISBN 9781780353470. Stan Clayton

  3. Stanley Clayton on September 29, 2012 at 7:59 pm said:

    I FORGOT TO MENTION THE NAME LOVE OLD POE CROPPED UP IN MY DECODES. IF THIS IS FOUND TO BE CORRECT IT'S GOING TO MAKE ALL THE EXPERTS LOOK SILLY. POE HAD A HISTORY OF HOAXES; THE GREAT BALLOON HOAX COMES SECOND. AND MORRISS'S LETTER WAS NEVER SENT, IT'S BEEN WITH US ALL THE TIME. THAT'S IN MY BOOK. www.fast-print.net ISBN 978 1780353470 Stan Clayton
