
Analysis Section ( 2/5 ) - Character statistics

2.1 Introduction

This page will first list a number of observations about the Voynich MS character statistics that may be found in the printed literature, and then concentrate on more quantitative analysis results.

2.2 Observations in the printed literature

Tiltman (1967) (1)

(Note: Tiltman treats f as a variant form of k and p as a variant form of t. In the following, characters or sequences in parentheses represent such variant forms).

Currier (1976) (3)

Currier's first observation has been noted independently by several people, and was taken up recently by Brian Cham, who developed the >>curve-line system out of it.

D'Imperio (1978) (4)

2.3 Character frequencies

Oddly enough, no consolidated set of this most basic statistic exists, due to the use of different transcription alphabets and different transcription sources. Several examples may be found in the literature.

One example is found in D'Imperio (1978) (see note 4), Fig. 28 on p.106, from several sources but none covering the entire MS text.

As a very short summary, the single-character frequency distribution in the most important transcription alphabets is qualitatively similar to that of texts in normal European languages.
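Computing such a single-character frequency distribution is straightforward. The following is a minimal sketch; the function name and the short EVA-style sample string are my own, for illustration only, and do not come from any real transcription file:

```python
from collections import Counter

def char_frequencies(text):
    """Relative frequency of each character, ignoring spaces."""
    counts = Counter(c for c in text if not c.isspace())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# Illustrative EVA-style snippet (not a real transcription source):
sample = "fachys ykal ar ataiin shol shory"
freqs = char_frequencies(sample)
for c, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
    print(c, round(f, 3))
```

Applied to a full transcription file, the same counting immediately exposes the dependence on the chosen transcription alphabet: what counts as one character in one alphabet may be two in another.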

2.4 Vowel/consonant detection

An algorithm for the detection of vowels and consonants was designed by B.V. Sukhotin, and Jacques Guy experimented with it in the 1990s. He published a first English summary of the algorithm in Cryptologia (see note 5). Results indicated that the characters that look like vowels (a, o, y) also behave statistically like vowels, though the confidence of the result was not very high. There is also a recent >>blog entry related to running Sukhotin's algorithm on individual pages of the MS.

2.5 Entropy

The concept of entropy is explained on the introductory page, and the reader is encouraged to read that introduction first in order to get the most out of the following.

The entropy of the Voynich MS text was first analysed in detail by the Yale professor William Ralph Bennett Jr. (6). He develops the concept in many easy steps and in more detail than in the above-mentioned introductory page. He first analyses texts in common European languages and then addresses the Voynich MS with his own transcription alphabet (7). He writes (8):

[...] the statistical properties of the Voynich Manuscript are quite remarkable. The writing exhibits fantastically low values of the entropy per character over that found in any normal text written in any of the possible source languages (see Table 5). The values of h1 [i.e. first order entropy - RZ] are comparable to those encountered earlier in this chapter with tables of numbers. Yet the ratio h1/h2 is much more representative of European languages than of a table of numbers alone.

His computed values are as follows (9):

Entropy order   Normal languages   Voynich MS
First           3.91 - 4.14        3.66
Second          3.01 - 3.37        2.22
Third           2.12 - 2.62        1.86
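The first- and second-order values can be computed directly from the standard definitions: h1 is the entropy of the single-character distribution, and h2 is the conditional entropy of a character given its predecessor, i.e. the entropy of the digraph distribution minus h1. A minimal sketch (function names are my own; this is not Bennett's code, and his exact figures also depend on his transcription alphabet):

```python
import math
from collections import Counter

def h1(text):
    """First-order entropy in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def h2(text):
    """Second-order (conditional) entropy: H(pairs) - H(singles)."""
    pairs = [text[i:i + 2] for i in range(len(text) - 1)]
    counts = Counter(pairs)
    n = len(pairs)
    joint = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return joint - h1(text)
```

For a strictly alternating text such as "ababab...", h1 is 1 bit while h2 is essentially zero: each character fully determines the next. It is this conditional measure that is so anomalously low for the Voynich MS text.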

He finally identifies one language with a set of similarly low entropy values, namely Hawaiian, but is quick to point out that this does not make much sense.

Many more statistics related to entropy may be found in an on-line paper by >>Dennis Stallings: understanding the second-order entropies of Voynich text. This basically confirms the results of Bennett, and may be useful to those who have no access to a copy of Bennett's book.

An alternative method to compute entropy is the so-called 'commas' method, which has been used by Jim Reeds and later by Gabriel Landini. This will be included here shortly.

Jorge Stolfi has set up a tool to visualise the number of bits of entropy per character in the following location: >>Jorge Stolfi: where are the bits?

Finally, I wondered how it is possible that the character and digraph entropy of the Voynich MS text is so much lower than that of, say, Latin, while the word entropy (about which more will be said later) is similar. This is addressed on this page: From digraph entropy to word entropy, which may be a bit hard to understand by itself (and I should re-do it). The short summary is that, counting from the start of each word, the entropy per character is higher for normal languages, but also drops much faster.
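One simple way to look at this "entropy per character position" is to compute, for each position within a word, the entropy of the characters occurring there. This is a simplified (unconditional) sketch of the idea, not the exact computation on the page referred to above; the function name is my own:

```python
import math
from collections import Counter

def positional_entropy(words, max_pos=5):
    """Entropy (bits) of the character at each position in a word."""
    result = []
    for i in range(max_pos):
        chars = [w[i] for w in words if len(w) > i]
        if not chars:
            break
        counts = Counter(chars)
        n = len(chars)
        result.append(-sum((c / n) * math.log2(c / n)
                           for c in counts.values()))
    return result
```

Fed with word lists from a Latin text and from a Voynich MS transcription, such a profile makes the qualitative difference visible: the Latin curve starts higher and falls off more steeply.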

There is a critically important conclusion to be drawn from the first- and second-order entropy values reported by various authors. As already mentioned in the analysis section introduction, the entropy values do not change when one consistently replaces characters by others, i.e. in a simple substitution cipher. This tells us something about the possible plain text of the Voynich MS.

  1. It could be that the text is meaningless, i.e. there is no plain text language, and the anomalously low entropy is the result of whatever process was used to generate the strings of characters.
  2. If there is a plaintext that was encoded using a simple substitution, then this plaintext must have the same anomalously low entropy values. This then excludes most of the typical languages that might be assumed for a European MS of the 15th Century. In fact, no candidate plaintext language has yet been identified. Hawaiian, the one identified by Bennett, does not match for other reasons (as will become apparent in later pages). Some languages such as Hebrew, the various Arabic languages, Persian, Armenian etc. have not yet been tested quantitatively, to the best of my knowledge.
  3. If there is a plaintext in one of the known languages used in European MSs of the 15th Century, then this text must have been modified by some process changing the statistics quite drastically. This change is indeed so drastic that it is no longer possible to identify the plaintext language from the Voynich MS text, and any attempts of this nature will be invalid.

In general, and quite briefly, any attempt to translate the Voynich MS into something meaningful in Greek, Latin, English, etc. using a simple substitution must fail. As this is the first thing most people will try, we can begin to understand how the MS has resisted all translation attempts.
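The invariance claimed above is easy to verify directly: a simple substitution is a bijection on the alphabet, so it permutes the character frequencies without changing them, and the entropy is identical. A self-contained demonstration (the example sentence and substitution key are arbitrary):

```python
import math
from collections import Counter

def entropy(text):
    """First-order entropy in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A simple substitution: each letter consistently replaced by another.
plain = "the quick brown fox jumps over the lazy dog"
key = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                    "qwertyuiopasdfghjklzxcvbnm")
cipher = plain.translate(key)

print(abs(entropy(plain) - entropy(cipher)) < 1e-9)  # → True
```

The same argument applies to the second- and higher-order entropies: the digraph counts are permuted, not altered. This is why the anomalous values must already be present in any simple-substitution plaintext.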

However, there is much more to this, as we shall see in the following (10).

2.6 Other material

2.6.1 Line-initial/final and word initial/final character properties

The following observations are paraphrased from Currier's papers (see note 3).

The observation of Currier that the line appears to be a functional unit was further analysed in 2012 by Elmar Vogt, for which >>see here. One of the most obvious features he shows is that, when using the Eva alphabet, the first word of a line tends to be on average 1 character longer than the second and following words.
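Vogt's word-length observation is simple to reproduce on any transcription. A minimal sketch (the function name is my own; the statistic is the average word length at each position within a line):

```python
def mean_length_by_position(lines, positions=3):
    """Average word length at each position within a line."""
    sums = [0] * positions
    counts = [0] * positions
    for line in lines:
        words = line.split()
        for i in range(min(positions, len(words))):
            sums[i] += len(words[i])
            counts[i] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

Run over an Eva transcription, position 0 (the line-initial word) should come out roughly one character longer than the later positions, in line with Vogt's finding.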

2.6.2 Location of gallows (and other) characters

Julian Bunn highlighted the positions of the gallows characters on each folio of the MS in >>a page at his blog, in colour-coded graphics. They show a peculiar vertical pattern, which may be related to the observations of Andreas Schinner in his 2007 Cryptologia paper (13), which is discussed in a later page.

The following page by Sean V. Palmer gives a very visual representation of the feature that many characters have very preferential positions inside the words of the MS: >>Voynich MS glyph position stacks.

Notes

1. See Tiltman (1967).
2. For Tiltman's roots and suffixes, see here. Additional observations are listed there.
3. See here or here.
4. See D'Imperio (1978).
5. See Guy (1991), also >>online here.
6. See Bennett (1976), Chapter 4, pp. 103-198.
7. For which see here.
8. On p.193.
9. Combining Table 5 on p.140 and Table 12 on p.193.
10. For the curious reader: here.
11. This statement is not fully clear, and it seems worthwhile to work out what he means. The only character that typically occurs at line ends is mentioned separately, so perhaps he means character combinations or groups.
12. Currier almost certainly means m.
13. See Schinner (2007).

Copyright René Zandbergen, 2016
Latest update: 26/01/2016