Originally posted at blogs.it.ox.ac.uk on November 6, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.
The following is a transcript of a contribution to a panel discussion at the TEI Members’ Meeting in Rome in October 2013 on the topic ‘How could the TEI community benefit from TEI-specific query solutions? What should they look like?’
I think that there is a problem: too many of the people working on text encoding and on tools for querying encoded texts are contributing to a proliferation of complex, mutually incompatible platforms, each of which the user has to adopt before they can submit even the simplest query. We’re just building digital silos, and getting further away from a solution to some key problems.
There is potential for transformation in the way that we do research in the humanities. Recent discussions about distant reading, and combining it with close and scalable reading, revolve around how we can exploit the marvelous opportunity with which we are presented today to ask new questions in the study of languages, literature, history, and other disciplines. I’d like to get to a situation where we can ask new and big questions, and I’d like us to be in a position to accumulate knowledge about language, and from texts, by investigating more features, more genres, more languages etc. I want us to lower the barriers to digital research so that more people can do it, and so that more outputs can be compared and connected. I don’t want to see more diversity: more alternatives to the TEI, more frameworks, more annotation schemes. Building new tools is a computer science project, and has no place in the humanities. There are three barriers to progress: (i) “not invented here”, (ii) reinventing the wheel, and (iii) the search for the perfect metalanguage.
Converting texts to a common format has been suggested, but it is not an option when it comes to exploiting big data. The texts that we want to query live on different parts of the network. Persuading data repositories (including publishers, Google, Amazon) to provide TEI XML output is feasible, and would be a step forward. Converting all texts to formats optimized for linguistic search is impossible.
Now is the time to act, and action is overdue. Ever since I first heard about the TEI more than 20 years ago, I have thought, “that’s a good idea – where are the tools?”. I could still ask the same question. There are now some tools for editing and for operations on individual small documents, but where are the tools that can easily be used or deployed for cross-searching collections of texts and corpora?
The vision for new forms of digital research requires not only tools but interoperable resources. Linguists say we can’t agree on “what is a phrase”. Well then, you can’t have interoperable resources for the study of grammar, or resources which use grammatical analysis as a basis for analysis at other levels. Now, I have to admit that I have spent quite a lot of my career up until now arguing with computer scientists that we don’t have agreed basic concepts, and that this is the correct state of affairs, because humanistic research is basically about discussing and problematizing the way that we conceptualize and discuss things. But if we can’t agree on the representation and categorization of linguistic features, then there isn’t much we can do by way of digital scholarship. There is more to be gained from accepting imperfect models than from trying to perfect them.
The TEI and (probably) ISOcat offer us the basic technical preconditions for moving forward, for creating and sharing interoperable resources. Let’s take the opportunity and develop a culture of sharing tools, resources, categorizations and methods. Here is a challenge for the participants here and for the community. At the OTA we have all of the texts from ECCO-TCP which are in the public domain, freely available at persistent URLs in TEI P5. I’m happy to make the British National Corpus (BNC) available in this way as well, although it is interesting that no-one has ever asked us to do this. The texts are out there, so please deploy tools to search them as reliable and persistent services. I’ll come back next year and see what’s available!
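To make the challenge concrete, here is a minimal sketch of the kind of query a service could run over texts published this way: fetch a TEI P5 document from a persistent URL and search its paragraphs for a term. The URL, the search term, and the function names are hypothetical illustrations, not real OTA addresses or an existing tool; only the TEI P5 namespace is the standard one.

```python
# A minimal sketch, assuming TEI P5 texts are served at stable URLs.
# The URL below is a placeholder, not a real OTA address.
import urllib.request
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}  # the standard TEI P5 namespace


def fetch_tei(url):
    """Download a TEI P5 document and return its parsed root element."""
    with urllib.request.urlopen(url) as response:
        return ET.fromstring(response.read())


def search_paragraphs(root, term):
    """Return the text of every <p> in the document body containing the term."""
    body = root.find(".//tei:text/tei:body", TEI_NS)
    hits = []
    for p in body.findall(".//tei:p", TEI_NS):
        text = "".join(p.itertext())
        if term.lower() in text.lower():
            hits.append(" ".join(text.split()))
    return hits


if __name__ == "__main__":
    # Placeholder persistent URL, for illustration only.
    root = fetch_tei("https://example.org/ota/ecco-tcp/K000001.000.xml")
    for hit in search_paragraphs(root, "liberty"):
        print(hit[:80])
```

The point of the sketch is how little machinery is needed once texts sit at persistent URLs in a shared encoding: a reliable service would wrap exactly this kind of fetch-and-query loop across whole collections.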
There are different research agendas: problematizing the notion of the phrase, or of the book, is a perfectly legitimate research question, and something that theoreticians might quite legitimately decide to do with their time, but it should not be raised as a barrier to developing digital tools. We need to decide our priorities, and I think that more resources should be devoted to exploiting the current opportunity to pose new large-scale research questions, rather than to re-posing fundamental questions about categorizations and models.
We can see what the average researcher wants. We can see a multitude of relevant use cases. It is perfectly possible to examine published research in the humanities for statements and questions which are susceptible to empirical study in digital text collections. If we concentrate our efforts on making available for cross-searching all the digital texts which we can lay our hands on, along with the tools to query them and analyse the results, then there are research topics for PhDs in many literary, linguistic and historical disciplines for the next 20 years or so, and that can reinvigorate the humanities. I’m excited by that prospect. We are on the cusp of making it possible, and we can be the people who do it.