I’ve been calculating the most frequently recurring sequences of words in the c. 60,000 books of the Early English Books Online collection, in preparation for a joint paper (with Michael Burke and Dan McIntyre) for the conference Formulaic Language in Historical Research and Data Extraction in Amsterdam in February 2024.
What do you think will be the most frequent 4-, 5- and 6-word sequences? These clusters are also known as n-grams, or more specifically in this case, 4-, 5- and 6-grams.
It turns out that there is more than one answer in each category, depending on how you calculate them and what you filter out.
The EEBO texts on which this work is based are available with open access from the Oxford Text Archive collections, and you can find out more about them at the EEBO-TCP website.
I used the orthographically normalized forms of words from the SAMUELS Project, which are in the format shown in the fragment in Figure 1 below:
I calculated 4-, 5- and 6-grams for words, and also for dictionary headwords (lemmas), parts of speech (POS), lempos (lemma-POS pairs), and phrase-grams (with one wild card for ‘any word’). I used NLTK tools in a simple Python script to generate all of the n-grams, then sorted, counted, filtered and tidied them up using Linux command-line tools. I made some variant lists with different parameters applied, to deal with things like upper and lower case, and with non-lexical or borderline-lexical items like numbers and punctuation.
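As a rough sketch of how the generation step works (the file name and the in-Python counting are my own illustrative choices, not the project’s actual script), something along these lines with NLTK will produce and count 4-grams from a normalized, whitespace-tokenized text:

```python
from collections import Counter
from nltk.util import ngrams

# Minimal sketch: read a whitespace-tokenized, normalized text file.
# "eebo_normalized.txt" is an illustrative file name, not the real input.
with open("eebo_normalized.txt", encoding="utf-8") as f:
    tokens = f.read().split()

# Generate 4-grams with NLTK and count them.
four_grams = Counter(ngrams(tokens, 4))

# Print the 20 most frequent 4-grams, most frequent first.
for gram, freq in four_grams.most_common(20):
    print(f"{freq}\t{' '.join(gram)}")
```

In the actual pipeline the sorting, counting and filtering were done with Linux command-line tools rather than in Python; the Counter here just keeps the sketch self-contained.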
So, what are the most frequent clusters of words? It turns out that there are at least 27 different answers to that question.
After calculating clusters in various ways, my favourite, which perhaps gives the best insight into the most frequently repeated sequences of words in the corpus, is this one:
As you can probably quickly see, the most striking thing about this, and most of the lists, is that there is a strong bias towards formulaic language from Christian theology and ritual. This certainly tells us something about the sorts of texts that are in the corpus!
To explore all the options, with lists created using the different parameters, take your pick from the table below. Each cell contains a link to an image showing the top 20 clusters in that category.
| List | 4-grams | 5-grams | 6-grams |
| --- | --- | --- | --- |
| Step 1 (all the raw words, including punctuation) | step01-4grams | step01-5grams | step01-6grams |
| Step 2 (all the raw words, minus punctuation) | step02-4grams | step02-5grams | step02-6grams |
| Step 2i (all the raw words, minus punctuation, case-insensitive) | step02i-4grams | step02i-5grams | step02i-6grams |
| Step 3 (lemmas, with all numbers tagged as ‘NUM’) | step03-4grams | step03-5grams | step03-6grams |
| Step 3nonum (lemmas, without numbers) | step03nn-4grams | step03nn-5grams | step03nn-6grams |
| Step 4 (lempos-grams) | step04-4grams | step04-5grams | step04-6grams |
| Step 5 (POS-grams) | step05-4grams | step05-5grams | step05-6grams |
| Step 6 (phrase-grams, with an ‘any word’ wild card in one position; sketched below) | step06-4grams | step06-5grams | step06-6grams |
| Step 6nonum (phrase-grams, omitting those that include numbers) | step06nn-4grams | step06nn-5grams | step06nn-6grams |
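For the phrase-grams in Step 6, the idea is that each position in an n-gram is replaced in turn with a wild card standing for ‘any word’. A minimal sketch of that idea (the wild-card symbol and function name are my own, not taken from the project scripts):

```python
from collections import Counter
from nltk.util import ngrams

WILDCARD = "*"  # illustrative symbol for the 'any word' slot

def phrase_grams(tokens, n):
    """Yield each n-gram with every single position replaced, in turn, by the wild card."""
    for gram in ngrams(tokens, n):
        for i in range(n):
            yield gram[:i] + (WILDCARD,) + gram[i + 1:]

# Tiny usage example
tokens = "in the name of the father and of the son".split()
counts = Counter(phrase_grams(tokens, 4))
for gram, freq in counts.most_common(5):
    print(freq, " ".join(gram))
```

Counting these wild-carded patterns rather than the literal word sequences lets near-identical formulae (differing only in one slot) group together in the frequency lists.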
Thanks to the EEBO Text Creation Partnership, the Oxford Text Archive, the SAMUELS Project and UCREL at Lancaster University for creating the digital texts and making them available.
If you want to investigate how any of these phrases are used in texts, you could start with the CQPWeb interface to the EEBO texts at Lancaster University, where you can query 44,422 of the texts used in this study. A somewhat smaller selection of the texts can also be queried in Sketch Engine, if you are a subscriber.
The full lists of the clusters are available to researchers under the terms of an open access licence from the Oxford Text Archive at the following persistent online location:
http://hdl.handle.net/20.500.14106/2570
Researchers are welcome to take the files to explore, investigate and find out more!