A recent post on voynich.ninja brought up the subject of differences / similarities between Voynichese words starting with EVA ch and those starting with EVA sh. But this got me thinking more generally about the difference between ch and sh in Voynichese (i.e. in any position), and even more generally about letter contact tables.

Problems With Letter Contact Tables

For ciphertexts where the frequency instance distribution has been flattened, a normal first test is William Friedman’s Index of Coincidence (IoC). This often helps determine the period of the cryptographic mechanism used to flatten it (e.g. the length of a cyclic keyword). But this is not the case with the Voynich Manuscript.

For ciphertexts where the frequency instance graph is normal but the letter to letter adjacency has been disrupted, the IoC is one of the tests that can help determine the period of any structured transposition (e.g. picket fence etc) that has been carried out. But the Voynich is also not like this.
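To make the IoC test concrete, here is a minimal sketch in Python (purely illustrative, not anyone’s production code): the period-finding trick is to compute the IoC over every k-th symbol and look for the k where it jumps towards plaintext-like values.

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two symbols drawn at random (without
    replacement) from the text are identical."""
    counts = Counter(text)
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def periodic_ioc(text: str, k: int) -> float:
    """Average IoC of the k interleaved columns of the text: when k
    matches the cyclic keyword length, this jumps towards plaintext-like
    values (~0.066 for English) rather than the flat ~0.038."""
    return sum(index_of_coincidence(text[i::k]) for i in range(k)) / k
```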

So, when cryptologists are faced with a structured ciphertext (i.e. one where the frequency instance graph more closely resembles a natural language, and where the letter adjacency also seems to follow language-like rules), the primary tool they rely on is letter contact tables. These are tables of counts (or percentages) that show how often given letters are followed by other given letters.
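As a minimal sketch (assuming the text has already been tokenised, which for Voynichese is precisely the hard part, as discussed below):

```python
from collections import defaultdict

def contact_table(tokens: list[str]) -> dict:
    """Forward contact table: table[a][b] = how often token a is
    immediately followed by token b."""
    table = defaultdict(lambda: defaultdict(int))
    for a, b in zip(tokens, tokens[1:]):
        table[a][b] += 1
    return table

def as_percentages(row: dict) -> dict:
    """Convert one row of counts into percentages of the row total."""
    total = sum(row.values())
    return {b: 100.0 * n / total for b, n in row.items()}
```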

But for Voynichese there’s a catch: in order to build up letter contact tables, you first have to know what the letters of the underlying text are. And whatever they might be, the one thing they definitely are not is the letters of the EVA transcription.

Problems With EVA

The good thing about EVA was that it was designed to help Voynich researchers collaborate on the problems of Voynichese. This was because it offered a way for them to talk about Voynichese online that was (to a large degree) independent of all their competing theories about what specific combinations of Voynichese shapes or strokes genuinely made up a Voynichese letter. And there were a lot of these theories back then, a lot.

To achieve this, EVA was constructed as a clever hybrid stroke transcription alphabet, one designed to capture in a practical ‘atomic’ (i.e. stroke-oriented) way many of the more troublesome composite letter shapes you find in Voynichese. Examples of these are the four “strikethrough gallows” (EVA ckh / cth / cfh / cph), written as an ornate, tall character (a “gallows character”) but with an odd curly-legged bench character struck through it.

However, the big problem with EVA is arguably that it was too successful. Once researchers had EVA transcriptions to work with, almost all (with a few heroic exceptions) seem to have largely stopped wondering about how the letters fit together, i.e. how to parse Voynichese into tokens.

In fact, we have had a long series of Voynich theorists and analysts who look solely at Voynich ‘words’ written in EVA, because it can seem that you can work with EVA Voynichese words while ignoring the difficult business of having to parse Voynichese. So the presence of EVA transcriptions has allowed many people to write a great deal while bracketing out exactly the difficulties that motivated the complicated transcription decisions behind EVA’s design in the first place.

As a result, few active Voynich researchers now know (or indeed seem to care much) about how Voynichese should be parsed. This is despite the fact that, thanks to the (I think somewhat less than positive) influence of the late Stephen Bax, the Voynich community now contains many linguists, for whom you might think the issue of parsing would be central.

But it turns out that parsing is typically close to the least of their concerns, in that (following Bax’s example) they typically see linguistic takes and cryptographic takes as mutually exclusive. Which is, of course, practically nonsensical: indeed, many of the best cryptologists were (and are) also linguists. Not least of these was Prescott Currier: I would in fact go so far as to say that everyone else’s analyses of Voynichese have amounted to little more than a series of minor extensions and clarifications to Currier’s deeply insightful 1970s contributions to the study of Voynichese.

Problems With Parsing

Even so, there is a further problem with parsing, one which I tried to foreground in my book “The Curse of the Voynich” (2006). This is because I think there is strong evidence that certain pairs of letters may have been used as verbose cipher pairs, i.e. pairs of glyphs used to encipher a single underlying token. These include EVA qo / ee / or / ar / ol / al / am / an / ain / aiin / aiiin / air / aiir (the jury is out on dy). However, if you follow this reasoning through, this also means that we should be highly suspicious of anywhere else the ‘o’ and ‘a’ glyphs appear, e.g. EVA ot / ok / op / of / eo etc.
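Purely to illustrate what such a parsing would involve (a sketch built on the hypothesised groupings above, not a settled scheme), a greedy longest-match tokenizer might look like this:

```python
# Candidate verbose groups from the list above - a hypothesis, not a
# settled parsing; matched longest-first so e.g. 'aiin' beats 'ai'+'in'.
VERBOSE_GROUPS = ["aiiin", "aiin", "aiir", "ain", "air", "qo", "ee",
                  "or", "ar", "ol", "al", "am", "an"]

def tokenize(word: str, groups=VERBOSE_GROUPS) -> list[str]:
    """Greedily split an EVA word into candidate tokens, preferring the
    longest matching verbose group at each position."""
    ordered = sorted(groups, key=len, reverse=True)
    tokens, i = [], 0
    while i < len(word):
        for g in ordered:
            if word.startswith(g, i):
                tokens.append(g)
                i += len(g)
                break
        else:
            tokens.append(word[i])  # fall back to a single glyph
            i += 1
    return tokens

# e.g. tokenize("qokaiin") -> ['qo', 'k', 'aiin']
```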

If this is even partially correct, then any letter contact tables built on the component glyphs (i.e. the letter-like shapes that such verbose pairs are made up of) would be analysing not the (real underlying) text but what is known as the covertext (i.e. the outward appearance of the text). As a result, covertext glyph contact tables would be almost entirely useless.

So I would say that there is a strong case to be made that almost all Voynichese parsing analyses to date have found themselves entangled by the covertext (i.e. they have been misdirected by steganographic tricks).

All the same, without a parsing scheme we have no letter contact tables: and without letter contact tables we can have no worthwhile cryptology of what is manifestly a structured text. Moreover, arguably the biggest absence from Mary D’Imperio’s “An Elegant Enigma” is letter contact tables, an omission which I think sent out the wrong kind of message to readers.

Letter Contact Tables: v0.1

Despite this long list of provisos and problems, I still think it is a worthwhile exercise to try to construct letter contact tables for Voynichese: we just have to be extraordinarily wary when we do this, that’s all.

One further reason to be wary is that many of the contact tables are significantly different for Currier A and Currier B pages. So, because I contend that it makes no sense at all to try to build up letter contact tables that merge A and B pages together, I present A and B separately here.

The practical problem is that doing this properly will require a much better set of scripts than I currently have: what I’m presenting here is only a small corner of the dataset (forward contacts for ch and sh), executed very imperfectly (partly by hand). But hopefully it’s a step in the right direction and others will take it as an encouragement to go much further.

Note that I used Takahashi’s transcription, and got a number of unmatched results which I counted as ??? values. These may well be errors in the transcription or errors in my conversion of the transcription to JavaScript (which I did a decade ago). Or indeed just bit-rot in my server, I don’t know.
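For what it’s worth, the counting behind the tables below amounts to something like the following sketch (illustrative Python rather than my old JavaScript; the single-glyph list is simplified, and anything unrecognised gets bucketed as ???):

```python
from collections import Counter

SINGLE_GLYPHS = set("odeysaktlrpmfgincq")  # simplified, for illustration

def glyph_after(word: str, j: int) -> str:
    """Glyph starting at position j, preferring the two-character
    benches ch/sh; anything unrecognised is bucketed as '???'."""
    for g in ("ch", "sh"):
        if word.startswith(g, j):
            return g
    return word[j] if word[j] in SINGLE_GLYPHS else "???"

def forward_contacts(words: list[str], prefix: str = "ch") -> Counter:
    """Count what immediately follows each occurrence of prefix in each
    word; '.' marks a following word break."""
    counts = Counter()
    for word in words:
        i = word.find(prefix)
        while i != -1:
            j = i + len(prefix)
            counts["." if j >= len(word) else glyph_after(word, j)] += 1
            i = word.find(prefix, i + 1)
    return counts
```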

A ch vs B ch, Forward Contacts

A ch
(cho 1713)
—– of which (chol 531, chor 400, chod 196, chok 130, cho. 113, chot 94, chos 50, chom 28, choi 20, choy 18, chop 14, chof 12, choe 11, choa 7, choc 6, choo 5, cho- 4, chon 4, chog 2, cho??? 69)
(che 918)
—– of which (cheo 380, chey 229, chee 156, chea 64, chek 30, chet 19, ched 12, ches 8, chep 7, cher 2, cheg 1, chef 1, che. 1, che* 1, che??? 7)
(chy 544)
(cha 255)

(ch. 112)
(chk 60)
(chd 35)
(cht 31)
(chs 21)

(chch 5) (chp 5) (chsh 4) (chm 2) (chi 2) (chc 2) (chf 1) (chl 0) (chn 0) (chr 0) (ch- 0) (ch= 0)

B ch
(che 3640)
—– of which (ched 1482, chey 597, cheo 565, chee 537, chek 119, chea 82, ches 55, chet 42, chep 25, chef 15, cher 4, cheg 4, che. 2, chel 1, che??? 117)
(chd 725)
(cho 633)
—– of which (chol 200, chod 123, chor 83, chok 65, chot 44, cho. 34, chos 22, choa 10, chop 7, choy 4, choe 4, chof 4, choo 4, choi 2, cho= 1, cho??? 26)
(ch. 403)
(chy 331)
(cha 185)

(chk 84)
(chs 50)
(cht 38)
(chp 20)

(chch 6) (chc 6) (chsh 5) (chf 2) (chi 0) (chm 0) (chl 0) (chn 0) (chr 0) (ch- 0) (ch= 0)

Observations of interest here:

  • A:cho = 1713, while B:cho = 633
  • A:chol = 531, while B:chol = 200
  • A:chor = 397, while B:chor = 83
  • A:che = 918, while B:che = 3640
  • A:ched = 12, while B:ched = 1482
  • A:chedy = 7, while B:chedy = 1193
  • A:chd = 35, while B:chd = 725
  • A:chdy = 21, while B:chdy = 504

As an aside:

  • dy appears 765 times in A, 5574 times in B

A sh vs B sh, Forward Contacts

A sh

(sho 625)
—– of which (shol 174, sho. 143, shor 105, shod 77, shok 32, shot 22, shos 11, shoi 9, shoa 6, shoy 5, shoe 4, shom 4, shop 4, sho- 1, shof 1, shoo 1, sho??? 26)
(she 407)
—– of which (sheo 174, shee 84, shey 81, shea 20, she. 19, shek 12, shes 8, shed 3, shet 2, shep 1, sheq 1, sher 1, she??? 1)
(shy 153)
(sha 58)

(sh. 39)
(shk 13)
(shd 7)
(shch 6)
(sht 5)
(shs 3)
(shsh 1)
(shf 1) (everything else 0)

B sh

(she 1997)
—– of which (shed 734, shee 386, shey 334, sheo 286, shek 78, shea 37, shet 18, shes 15, she. 13, shep 6, shef 5, shec 2, sheg 2, she* 1, shel 1, sher 1, she??? 79)
(sho 284)
—– of which (shol 89, shod 59, shor 43, shok 24, sho. 23, shot 8, shos 8, shoa 5, shoi 5, shoe 3, shof 2, shoo 2, shoy 1, shop 1, sho??? 11)
(shd 161)
(sh. 136)
(shy 104)
(sha 67)

(shk 35)
(sht 13)
(shs 12)
(shch 6)
(shsh 1)
(shf 1) (everything else 0)

Observations of interest here:

  • A:sho = 625, while B:sho = 284
  • A:shol = 174, while B:shol = 89
  • A:shor = 105, while B:shor = 43
  • A:she = 406, while B:she = 1997
  • A:shed = 3, while B:shed = 734
  • A:shedy = 2, while B:shedy = 629
  • A:shd = 7, while B:shd = 161
  • A:shdy = 3, while B:shdy = 100

Final Thoughts

The above is no more than a brief snapshot of a corner of a much larger dataset. Even here, a good number of the features of this corner have been discussed and debated for decades (some most notably by Prescott Currier).

But given that there is no shortage of EVA ch, sh, e, d in both A and B, why are EVA ched, chd, shed, and shd so sparse in A and so numerous in B?

It’s true that dy appears 7.3x more in B than in A: but even so, the ratios for ched, chedy, shed, shedy, chd, chdy, shd and shdy are even higher (123x, 170x, 244x, 314x, 20x, 24x, 23x, and 33x respectively).
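(For anyone checking the arithmetic, those ratios are just the B-counts divided by the A-counts from the tables above:)

```python
# (A-count, B-count) pairs taken from the tables above; ratio = B / A
pairs = {"dy": (765, 5574), "ched": (12, 1482), "chedy": (7, 1193),
         "shed": (3, 734), "shedy": (2, 629), "chd": (35, 725),
         "chdy": (21, 504), "shd": (7, 161), "shdy": (3, 100)}
for w, (a, b) in pairs.items():
    print(f"{w}: {b / a:.1f}x")  # dy: 7.3x ... shed: 244.7x, shedy: 314.5x
```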

Something to think about…

28 thoughts on “Voynich Manuscript: Letter Contact Tables”

  1. Koen Gheuens on November 24, 2019 at 10:42 am said:

    Interesting, Nick. I obviously agree on both points: the importance of parsing, and of treating A and B separately. (For some tests I’d even say you have to separate sections as well.)

    So.. do you think there may be equivalents for words like shedy in A? Like, the amount is similar to B but they are just written differently?

  2. Koen: I’ve been saying for some time that I think the next big “step up” in Voynichese study will come when some clever person finds a way to map between A patterns and B patterns, i.e. to normalize the two (errrm… actually several) parts into a single thing.

    But to do this properly, you need to parse A and B, build letter contact tables for them, and then build state machine ‘grammars’ that capture how the two behave – the stuff that’s the same is probably the same, but the stuff that’s different probably involves something that was written as XXX in A being written as YYY in B. Normalizing A/B would involve being able to say “XXX == YYY”. However, this rests on the back of parsing, letter contact tables, and state machines, all of which (I think) steganographic tricks are disrupting. So I’m still not at all sure how we get over all the technical hurdles to reach a state where we can approach this in a rigorous enough way.
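    To sketch what I mean (hypothetical code, with the A and B token sequences assumed as already-parsed inputs): estimate first-order Markov transition probabilities for A and B separately, then look for token pairs whose outgoing transition profiles are most alike across the two machines: those become candidate XXX == YYY equivalences.

    ```python
    from collections import defaultdict

    def transition_probs(token_seqs):
        """First-order Markov estimates over token sequences, with
        '^'/'$' marking word start/end."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in token_seqs:
            padded = ["^"] + seq + ["$"]
            for a, b in zip(padded, padded[1:]):
                counts[a][b] += 1
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def tv_distance(p, q):
        """Total-variation distance between two transition profiles;
        a small distance across the A- and B-machines suggests a
        candidate XXX == YYY equivalence."""
        return 0.5 * sum(abs(p.get(s, 0) - q.get(s, 0)) for s in set(p) | set(q))
    ```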

    But perhaps some of these XXX == YYY equivalences can be worked out even without all that machinery. For example, I have long strongly wondered whether daiin daiin patterns in A reappear (in some way) as qotedy qokedy patterns in B. Clearly, both involve repetitive “bla-bla-bla” word sequences that are hard to reconcile with either linguistic readings or crypto theories. And given that I’ve previously speculated whether daiin daiin might be enciphering Arab numerals, it would be logical for me to speculate whether qotedy qokedy might be doing the same (but in a different way). Just a thought.

  3. Koen: also, I do wonder whether the internal structure (I struggle to say ‘grammar’ without wincing) of ‘pure’ A pages would explicitly prohibit these chd/ched/shd/shed patterns, but the reason we’re seeing any coming up at all is because some ‘impure’ (hybridized) A/B pages have been miscategorized (or arguably misinterpreted by us) as ‘pure’ A. I haven’t looked at the places in A where chd/ched/shd/shed appear, but it might be a revealing exercise, i.e. it might highlight ‘late’ A pages with B page influences.

  4. Koen Gheuens on November 24, 2019 at 7:20 pm said:

    Nick: I agree that if you want a deep understanding of A-B correspondences, the method you propose might be effective.

    But if these equivalences exist, shouldn’t it be possible to formulate some initial hypotheses based on A vs B word frequencies? This would be entirely on the level of “vocabulary” so internal parsing is not an issue.

  5. Koen: ah, the bit you’re missing is that you can build up a Markov finite state machine describing A’s behaviour or B’s behaviour even if it isn’t strictly a classical linguistic grammar – and, as you probably have guessed, I think there are plenty of reasons to suspect that what we’re looking at in both A and B is not a classical linguistic grammar.

  6. Nick,
    Your speaking of characteristics which are difficult to reconcile with linguistic or with crypto theories reminded me of Julian Bunn – do you think this is the sort of project which might interest him?

    I say it reminded me of him because he once said – though he may have revised his opinion since – that the results of certain tests had convinced him that Voynichese was not made up of words at all.

  7. Koen, despite the overall statistical differences in A/B, the set of glyph pairs which occur with high frequency straddling spaces but very low frequency word-medially is similar — if spaces are inserted mechanically inside certain glyph pairs, A “words” won’t necessarily map to B “words”.
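    As a rough sketch of how one might measure this (treating each character as a single glyph, which understates benches like ch/sh):

    ```python
    from collections import Counter

    def straddle_vs_medial(words):
        """Compare glyph-pair frequencies across word breaks vs inside
        words; words is the transcribed text in reading order."""
        medial, straddle = Counter(), Counter()
        for w in words:
            medial.update(zip(w, w[1:]))          # pairs inside a word
        for w1, w2 in zip(words, words[1:]):
            if w1 and w2:
                straddle[(w1[-1], w2[0])] += 1    # pairs across a space
        return medial, straddle
    ```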

    Nick, why Takahashi’s transcription rather than Landini-Zandbergen? While I haven’t analyzed Takahashi’s transcription, based on my comparison of L-Z with v101 (converting both to Currier where there are unambiguous Currier equivalents), we can be fairly confident that L-Z (at least WRT glyphs — spaces are more ambiguous) has a high degree of accuracy.

  8. Mark Knowles on November 25, 2019 at 9:19 pm said:

    As far as EVA goes, if I am honest, I can’t say that I am a huge fan. Nevertheless, it is better than nothing. The fact that it is a hybrid, and therefore neither wholly stroke based nor wholly character based, I find frustrating. I appreciate that the relatively easy way of pronouncing words with EVA is attractive to people, and I can see why on that basis people would find words easier to remember; I am not sure how much difference it makes to me, though. Anyway, I can see that EVA has its place. I have proposed two other forms of representation operating in parallel:

    1) A stroke based form of transliteration, which would necessarily be a longer form, and one that I think should contain within it the variations in the transliteration on the basis of different opinions as to what strokes are written. This will be long and laborious to work through, but also precise, and it would include within it different interpretations. This might well use the same characters as EVA where letters represent strokes in EVA, such that there would be a one-to-one mapping, but necessarily be different in other instances. So this would mean that one could have an SVA (Stroke based Voynich Alphabet) which would in some instances be the same as EVA and in others quite different. So this would be akin to a “lower level language” where EVA is an “intermediate level language”, if you find these terms useful. An EVA transcription would be generated by parsing an SVA transcription.

    2) A form of transliteration based on Voynich characters, glyphs or tokens (if these are the different terms that other people use). Again, in some instances this would share letters with EVA; in other instances it would diverge. How this parsing is done could and probably would be subject to significant dispute. However, there is no reason that one could not have something like GVA(1), GVA(2) etc. depending on how different people interpret the symbols; GVA standing for something like Glyph Voynich Alphabet (I am not a fan of the term Glyph, but anyway). Obviously, one would encourage people to make their own form of GVA as similar to other people’s as is possible. This GVA, to use my previous terms, would be a “high level language”. A GVA transcription would be produced either by parsing an EVA transcription or possibly by parsing an SVA transcription.

    Introducing two new forms of transcription seems like an awful lot of extra baggage, especially with scope for variants on the Glyph representations; however, I can’t see an alternative, and it seems to me EVA on its own cannot and never will be able to serve all needs. Of course, having alphabets that are as distinct as possible, where necessary to avoid confusion between each form of representation, is a question to be approached with considerable care.

    So I think introducing at least one new form of representation seems essential. Of course, a process of introducing this and agreeing on a consensus for each form of representation sounds like a big job. I think if done properly this would be very valuable, but I daresay others would disagree, as is always the way with Voynich research.

  9. Mark: the point of EVA was explicitly not to move towards what you would think of as a ‘realistic’ transcription, but to allow people to collaborate on a transcription that could be transformed relatively easily into what each researcher thought was their own preferred transcription, without having to grind their way through 200+ pages of tiny writing to do it.

    If you try transcribing even one page (or even half a page) for yourself, you’ll rapidly appreciate what legends the people who did this work are. Restecp, as Ali G would say.

  10. Karl: I built a JavaScript tool many years ago around Takahashi’s transcription, and used that for this experiment. What I learned from the experience is that I really need to find a better way of doing this. 😉

  11. Diane: the project of trying to model A and B via token contact tables and Markov finite state machines is logical and sensible, but massive and fraught with difficulty. I’m sure that Julian Bunn has already considered it, and my guess is that he rejected tackling it because it was way *way* too much work.

    But ultimately a research programme built around this may be just about the only hope we have of working out how Voynichese is parsed. Not ‘might be’ or ‘should be’ or ‘could be’, but is.

  12. Mark Knowles on November 25, 2019 at 10:23 pm said:

    Nick: I didn’t use the word ‘realistic’ precisely because what one person regards as being ‘realistic’ is clearly not the same as what another does, so I wasn’t really talking about collectively moving towards anything in terms of a specific transcription, but rather about having different forms of transcription for different requirements, as I question the adequacy of EVA in certain circumstances. I am sufficiently aware that my own personal final transliteration is almost certainly going to be at variance with probably many other people’s, as seems normal with Voynich research, but nevertheless there would be room for overlap, I think, from one individual to the next.

    I am sure the process of doing the transliteration is arduous. I am likely going to have to do some transliteration of a not sizeable quantity of text, as I don’t know whether it is wise to trust the EVA transcription, and I am not sure I want to take the chance. I have noticed issues relating to whether we have one word or two in a given instance, i.e. whether we can be confident that a specific word has a space/gap in it, and so I have formed my own judgement or flagged it as uncertain where necessary.

  13. Mark: as I understand it, early EVA transcriptions (such as the Takahashi one) were derived from the CopyFlo (a horrible black-and-white facsimile the Beinecke used to sell back when I was getting started), making them even more of a Stakhanovite triumph against the odds. But as Karl just mentioned, more recent ones are a lot closer, well worth using.

  14. Nick,
    I daresay this is a kindergarten-level question for you and others, but I’m curious about how contact tables might cope with possibilities in the range of “not-quite-standard-but-not-enciphered” text: text which doesn’t require full grammatical structures and can do without pronouns and/or verbs, or efforts to reproduce words heard in another language (as with the quasi-Arabic forms in medieval astronomical works). The range of orthography and variations (as sh or ck) can be so great. I’ve recently had reason (not Voynich related) to look at how names and other words from languages spoken around the Black Sea were rendered in Greek or in Roman letters, from the Hellenistic to the early modern period, and that’s what prompts me to ask whether and how contact tables could help.

  15. Diane: the beauty of Markovian FSMs (Finite State Machines) is that they model the behaviour, not the language. That is, FSMs form a network of states where the probabilistic transitions between states generate something close to the language that you are presented with. What I present as the primary challenge is to work out how Currier A and Currier B map to each other: the hope would be that this could be reduced to understanding how the two corresponding FSMs map to each other (if indeed they do map to each other – they seem to be close, but that might just be an illusion).

    The core problem here is that our inability to parse Voynichese into tokens is disrupting this otherwise logical cryptological process. 🙁

  16. Oh I see… a sort of ‘teleportation’ game, except that you go in a giraffe and come out an okapi … it must be pretty to watch.

    Whether it will help understand my raft of ‘nonsense’ inscriptions (of which we know some aren’t nonsense) I’m still not sure. ntw. I’ll read and ask around a bit more.

  17. Diane: not quite – the tokens are letters (e.g. EVA t, k, etc) or groups of letters (e.g. perhaps EVA qo, or, ol, etc) rather than words.

  18. Nick,
    Oh yes, I realise that. In the inscriptions, an attempt to render a word which properly begins with a guttural might find the inscription has ‘k+h’ or ‘k+ho’ or even an ‘ak+h+,,’, with the difficulty compounding the longer the word is. Come to think of it, I bet it was only the Greeks’ sense of being the ideal which prevented them from inventing an all-purpose phonetic alphabet. Smart enough to; no incentive for it. 🙂

  19. I am not sure how many are aware that there are some quite nice (I think 🙂 ) contact tables at my web site (see link at the bottom).

    These are not (yet) addressing the difference between the two main Currier languages, or the various ‘dialects’, but they do address the impact of the choice of transliteration alphabet.

    I know that among the contributors writing here, Karl is one (and perhaps the only one) who predates the emergence of EVA, when the ‘going’ system was Currier’s. On the old mailing list, Currier’s system was still in use for quite a long time.

    The FSG system was hardly known to anyone because of the Friedman couple’s confidential attitude to their Voynich work. It became known through the efforts of Jim Reeds. In some respects I find it more elegant than Currier’s and Currier seems also not to have been aware of the FSG system.

    All these are compared using colour-coded contact tables at this page:
    http://www.voynich.nu/extra/sol_ent.html
    The emphasis is on seeing just how very different the Voynich text is from known plain texts, for all transliteration systems.

    This could be repeated for sub-sections of the text, but to do it right would require a small change to the normalisation approach. Let me think about this.

    Mary D’Imperio used an approach based on Markov-like finite states, but as far as I remember she also did not look at the Currier languages separately. A link to her (undated) paper: “An Application of PTAH to the Voynich MS” may be found at the references page of my Web site.

  20. Rene: thanks for that. Yes, I was previously aware of the tables on that page, but the point of the post is that I think there’s still an entire research project’s worth of work that needs to be built on top of them (e.g. A/B/subgroup tables, token parsing, FSMs, normalization between A/B/subgroup, etc) before they become something genuinely useful to us all. :-/

    Incidentally, is there a page on voynich.nu specifically relating to the evolution of Voynichese? It’s something I’ve been meaning to blog about for months.

  21. Hi Nick,

    I have nothing new on this topic beside minor updates to the two pages from around the year 2000 that you have already seen. Earlier this year I re-discovered my old software to do these stats. I re-ran it on the newer ZL transcription file, with basically identical results.

    There is indeed still a lot to be discovered in this area. However, a systematic approach will be a huge task, because of all the combinations and permutations of possibilities that one would need to look at.

  22. davidsch on November 28, 2019 at 4:12 pm said:

    Aaargh, not again William Friedman’s Index of Coincidence (IoC)!!

    Since his death in 1969, a few small things have changed: for example, the first personal computers appeared, and your BBC Micro or ZX Spectrum saw the light in 1982.

    Why anyone should work with this old and obsolete pencil-and-paper method is perhaps even a bigger mystery than the Voynich manuscript itself!

  23. davidsch: what’s so bad about the IoC? It’s not as if anyone has happily used pencil and paper to calculate an IoC in the last 50 years, but it’s still a simple and reliable test for detecting polyalpha keyword cycles etc.

  24. davidsch: Yes, this list seems to have issues with old and “obsolete” ideas, doesn’t it? Take Nick’s criticism of the CopyFlo: back then it certainly offered more information than would easily be obtained at other notable institutions (even currently) on many, many things. Whether decipherable or not, it gave those of us who were driven to obtain it a view of the full nature of the manuscript in all its weirdness (and if you look at it in black and white, it seems much weirder). Also, if you are into looking at old “obsolete” ideas, the year 1999 of the Gillogly/Reeds Voynich list seems to be arbitrarily missing. Is it contained elsewhere I am unaware of?

    (ah it seems to have serendipitously manifested itself, good)

    I do hope one of you manages to crack it. In say… the next 400 years?

    Matt Lewis

  25. Matt: if we could learn one or two solid facts about the Voynich in the next decade that manage to creep beyond the merely physical, that would be a tolerably good start. 😉

  26. Nick,

    I held back my reply a little to let things perhaps cool down a bit. I do generally think hard about whether I should post an idea, whether that is apparent or not. Anyway, I think the group (and especially you) have certainly made far more than a tolerably good start on finding the answers we need. I think we have made an exceptionally(!) good start. I really appreciated your recent posts about Ethel Voynich, and also particularly “the seagull group” messages (not connected that I know of), and can say they helped me more than a little bit in finding my own answers about things related, perhaps distantly, to the VMs. Whether serendipity or mind reading, it was quite helpful. It did put me in the mindset of wondering at what point we call attention breathlessly to our findings, and when we hold back on reporting. Not sure I have a solid answer on this, though I do hope you and your readers have an excellent holiday season and get lots of good bookes in your stockings and such! A pretty good year for me, already.

  27. Matt: it is becoming very difficult to notice genuinely new things about the Voynich that haven’t been noticed before, and perhaps (after more than a century of people taptapping at every window) that should be expected.

    Having said that, there is still plenty of room for good observation, clear thinking, and good communication. So keep at it! 🙂
