
Analyzing the Language of Social Media

Originally posted at blogs.it.ox.ac.uk on March 25, 2015 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The eighth one-day event Corpus Linguistics in the South was held at the University of Reading on Saturday 15th November 2014, and focussed on research analysing the language of social media.

Dawn Knight (Newcastle University) spoke about the ‘spoken-ness of e-language’, exploring the positioning of online discourse in relation to the norms of spoken and written language. In short, is language online more like speech or writing?

There are several interesting aspects to Dawn’s project. Explicit permission to re-use and share the data was obtained from all contributors, who were active and popular participants in online discourses. Anonymization of private personal data was carried out, but the data set does not seem to be available for other scholars to use. Funding for the project came from Cambridge University Press, who, it appears, are not willing to share the data.

One starting hypothesis is that there is a continuum of formality for interactive language, with writing at one end and speech at the other (see, for example, David Crystal, English as a Global Language, 2003). Can we map different forms (blogs, tweets, email, discussion forums, SMS) onto this continuum?

Pronouns and deictic markers are interesting. In spoken interaction there are typically references to people, actions and things in the shared immediate context, and (probably as a result of this) pronouns and deictic markers are typically more common in speech. Corpora also show that personal pronouns, adverbs and interjections are more common in speech. In this sense, the e-language corpus looks more like speech than writing, despite asynchronicity of the discourse and the lack of shared space. Dawn suggests that there might be an over-compensation, since in online forums we are more reliant, almost exclusively reliant, on language for interactional aspects of communication.
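
To make this kind of comparison concrete, here is a minimal sketch of how relative frequencies of personal pronouns might be computed across corpora, normalised per 1,000 tokens. The corpus files and the pronoun list are assumptions for illustration only, not Dawn’s data or method.

```python
import re
from collections import Counter

# A crude illustrative pronoun list; a real study would use a POS-tagged corpus.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "me", "him", "her", "us", "them"}

def tokenise(text):
    # Very rough word tokeniser, adequate for a sketch.
    return re.findall(r"[a-z']+", text.lower())

def pronouns_per_thousand(path):
    with open(path, encoding="utf-8") as f:
        tokens = tokenise(f.read())
    counts = Counter(tokens)
    return 1000 * sum(counts[p] for p in PRONOUNS) / len(tokens)

# Hypothetical files standing in for spoken, written and e-language corpora.
for name, path in [("spoken", "spoken.txt"),
                   ("written", "written.txt"),
                   ("e-language", "elang.txt")]:
    print(name, round(pronouns_per_thousand(path), 2))
```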

Some results went against expectations – ‘shall’ and ‘must’ are thought to be generally in decline, particularly in informal registers, but proved to be more frequent than expected in SMS language. Discussion forums proved to be most like speech in many ways, despite their low interactivity and asynchronicity.

Dawn’s approach looks promising, and the initial results are suggestive. Further research could involve visualization of the multidimensional comparisons between corpora, for example, to explore more fine-grained identifications of similarities between different e-language and language types in a reference corpus.
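
One way such a visualisation might work – sketched here purely as an illustration, not as the project’s own analysis – is to reduce each corpus to a vector of normalised feature frequencies (pronouns, adverbs, interjections, modals, and so on) and project the corpora into two dimensions with principal component analysis. The feature values below are invented.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

labels = ["speech", "writing", "SMS", "tweets", "forums", "email"]
# Invented per-1,000-token frequencies: pronouns, adverbs, interjections, modals.
features = np.array([
    [52.1, 18.3, 9.7, 4.2],
    [28.4, 12.1, 0.9, 5.8],
    [47.6, 15.2, 7.1, 6.9],
    [41.3, 14.8, 5.5, 4.0],
    [45.0, 16.0, 6.2, 4.4],
    [36.2, 13.5, 2.8, 5.1],
])

coords = PCA(n_components=2).fit_transform(features)
plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), label in zip(coords, labels):
    plt.annotate(label, (x, y))
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```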

The next two papers in the morning focussed on the analysis of particular online forums. Daniel Hunt (QMU) explored the language of an online forum for sufferers of anorexia. An approach based on keywords showed ways in which participants in the forum present their illness as an entity external to themselves, thus presenting themselves as passive and unaccountable. Amanda Potts (Lancaster University) presented an exploration of ‘queer’ sexual innuendo in an area of discourse and human activity that was new to me – commentaries accompanying highly popular videos of Minecraft games. It wasn’t clear to me that either project was able to draw conclusions that they couldn’t have drawn from simply reading the texts in their relatively small datasets, nor that there was anything particularly interesting or significant about the particular issues, themes and areas of social media which they had chosen.
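
For readers unfamiliar with the keywords technique, the sketch below shows the log-likelihood (G2) keyness statistic that corpus tools commonly use to compare a word’s frequency in a study corpus against a reference corpus; the counts are invented for illustration and are not drawn from either project.

```python
import math

def log_likelihood(freq_study, total_study, freq_ref, total_ref):
    """Dunning's log-likelihood (G2) keyness statistic for a single word."""
    expected_study = total_study * (freq_study + freq_ref) / (total_study + total_ref)
    expected_ref = total_ref * (freq_study + freq_ref) / (total_study + total_ref)
    g2 = 0.0
    if freq_study:
        g2 += freq_study * math.log(freq_study / expected_study)
    if freq_ref:
        g2 += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * g2

# Invented figures: a word occurring 120 times in a 50,000-word forum corpus
# and 800 times in a 1,000,000-word reference corpus.
print(round(log_likelihood(120, 50_000, 800, 1_000_000), 2))
```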

Amy Aisha Brown (Open University) set out to examine English in Japan via Twitter, but the analysis seemed to be of tweets in Japanese which included references to the English language, so I’m afraid that I was a bit lost. She seemed to draw the conclusion that fluency in English is generally regarded positively in the Japanese twittersphere. Amy used a Windows desktop application, Tweet Archivist Desktop, and a program called KH Coder for cleaning and analysis. Again, the data is not being shared, and there is no indication of whether this might ever be possible.

Alison Sealey and Chris Pak (Lancaster University) reported on a small-scale analysis of references to animals on Twitter, part of a larger project examining discourse about animals. The project used an online service called Topsy to find tweets, but it wasn’t clear to me how the analysis was carried out, how the results were arrived at, or what the research questions were.

Rachelle Vessey (Newcastle University) took a more theoretical tack, characterising mainstream corpus linguistics as being mainly concerned with, and focussed on, notions of stability and normativity, and on standard languages. The idea of ‘superdiversity’ was presented as a cultural successor to multiculturalism, with an assumption of more diverse and fast-changing cultural formations. She has pursued this issue in the context of Canadian language politics, examining tweets relating to a recent controversy known as ‘pastagate’. The data in this project, like others presented today, was somewhat complicated by the large number of retweets. The somewhat underwhelming conclusion was that the largely separate English and French language communities also operate separately on Twitter.

Yin Yin Lu (Oxford Internet Institute, University of Oxford) is investigating the linguistics of the Twitter hashtag. She has used the Streaming API, which offers access to a restricted number of tweets according to filters. She used a keyword filter with the hundred most frequently used words in English, sampled in one-hour slots over a two-week period. (Interestingly, she didn’t have access to a server to do this on in Oxford, and used a server at a different University thanks to a family connection.) Her analysis focussed on a few examples of how hashtags were used in activist campaigns such as #bringbackourgirls.
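
As an illustration of the kind of processing such a study involves, the sketch below assumes the streamed tweets have been saved as one JSON object per line (a common way of storing Streaming API output) and counts hashtag frequencies. The file name and the use of the ‘text’ field are assumptions for the example, not details of Yin Yin Lu’s pipeline.

```python
import json
import re
from collections import Counter

HASHTAG = re.compile(r"#\w+")
hashtags = Counter()

# Hypothetical dump of streamed tweets, one JSON object per line.
with open("tweets.jsonl", encoding="utf-8") as f:
    for line in f:
        try:
            tweet = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip keep-alive lines or truncated records
        hashtags.update(tag.lower() for tag in HASHTAG.findall(tweet.get("text", "")))

for tag, count in hashtags.most_common(20):
    print(count, tag)
```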

The final talk from Diana Maynard (University of Sheffield) introduced a research project, ‘Decarbonet’, which aims not only to investigate what people think about climate change, but to ‘raise awareness’ and foster ‘behavioural change’. Diana’s part in this is to analyse discourse about climate change in social media.

Overall, it was clear that there are still some serious hurdles to accessing social media data, processing it for use with standard text analysis tools, and sharing the results and datasets. I was hoping to find examples of workflows which could access big datasets and analyze them in close to real time, but I haven’t found that yet. I’ll keep looking!

Collective Intelligence

This post was originally composed February 23, 2015, in the wake of the event ‘Digital Humanities Collective Intelligence: a workshop to foster international cooperation’ held in the Anatomy Theatre and Museum at King’s College London on the 21st and 22nd February 2011. It was posted on arts-humanities.net, a site which disappeared, then posted on the IT Services blog at the University of Oxford, where it was deleted sometime around 2019. It is the blog post that will not die.

A two-day workshop at King’s College, London in February explored the idea of ‘Collective Intelligence’ in relation to DARIAH and the Digital Humanities. Two dozen participants, representing numerous countries, organisations, domains and backgrounds, were in attendance, including DARIAH partners from London, Oxford and Dublin. The workshop kicked off with the presentation of position papers from Jan Christoph Meister (participating remotely), Andrew Prescott and Susan Schreibman.

Jan Christoph Meister (Hamburg University) outlined the plans of the Association for Literary and Linguistic Computing (ALLC) to relaunch its website with three major functions: to provide a moderated Digital Humanities information platform for the association’s members and affiliates, to offer a “one-stop” overview of current DH activities, funding opportunities and services, and to link to more detailed external repositories.

As a precondition to the wider sharing of such data, Meister emphasised the need “to define a data curation protocol stipulating standards for the moderation and validation of DH information by information gatherers and providers”, warning that without such a protocol there would be too much variation in the shared information, making it obsolete and “creating ‘white noise’ that will frustrate information seekers.”

Meister therefore proposed “the definition of a DH atlas or a DH taxonomy enabling us to systematize DH information”.

Andrew Prescott (University of Glasgow) proposed that we need a new generation of tools that will work with publishers and other content providers, and enable new perspectives on data and humanities research questions.

Susan Schreibman (Digital Humanities Observatory, Ireland) outlined the detailed and extensive work done in DRAPIer (Digital Research And Projects in Ireland) to scope and describe digital humanities work and to act as a collaboration space for sharing expertise. Susan proposed greater use of Web 2.0 technologies in future initiatives in this area.

There was also a presentation of the arts-humanities.net portal, and a discussion of the lessons to be learned from its six years of existence. The possibility of preferring to follow a design path more oriented towards ‘apps’, ‘gadgets’ or ‘toolkits’ was considered.

Group discussions considered how to move forward to create more interoperable metadata. Do we already have adequate standards and procedures for sharing information? Do we need the carrot or the stick to encourage data creators to follow them? Do we need to link communities and expect the metadata to follow, or vice versa? Some concrete suggestions emerged for potential ways forward to capture, disseminate and use the potential knowledge that is embedded in our current and past activities. An aggregation of information about events was strongly promoted, and the idea of a service for mining the collected knowledge of past discussions on relevant email lists and forums was mooted. There are plenty of organisations and initiatives producing useful information which there is a general willingness to share, but various factors create an inertia that tends to block efforts to do so. Measures to overcome this inertia and to make it easier to exploit our collective intelligence should be a key guiding principle of our next steps.

Advising DigHumLab

Originally posted at blogs.it.ox.ac.uk on September 30, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

University of Oxford researchers from IT Services and the Oxford Internet Institute are playing a key role in advising an important national project in Denmark, and learning a lot about different ways of building and sustaining research infrastructure along the way.

DigHumLab is a national initiative in Denmark to set up a collaboration to advance digital research in the arts and humanities. Starting in 2010 with the drawing up of a roadmap of research infrastructures for Denmark, DigHumLab was awarded €4 million for five years in 2011. DigHumLab encompasses the Danish contribution to the CLARIN and DARIAH European research infrastructures. I was asked to join the small international Advisory Board for the project, and to attend a mid-term meeting in Copenhagen in September 2014, to offer advice to the project.

The vision for DigHumLab is to take actions to strengthen research in the humanities and humanistic social sciences, to improve access to data, develop methods and tools, promote collaboration and support emerging areas of digital research. The project goals are to:

  • create a virtual portal, access point, and potential partner for international collaborations
  • create a knowledge hub
  • become a provider of software and technical solutions
  • act as a national political advisor on matters relating to digital research in the humanities.

As well as activities to establish these outputs and services, the project includes a significant amount of effort spent on three research themes:

  1. language resources and technologies
  2. media tools
  3. interaction and design

The project has kicked off with participation from four Danish universities, but the intention is not to create a club closed to other universities or research bodies. The project also aims to build links and to coordinate activities with national services for high performance computing, research data management, e-Science, as well as with the National Library and the European research infrastructures. It wasn’t possible in this meeting to find out what measures are being taken to achieve these goals, but it was encouraging that the meeting was hosted by the National Library, in their impressive modernist ‘Black Diamond’ building, with participation by senior staff from the library.

The Black Diamond building housing the Danish Royal Library

After spending an initial period establishing the working groups and themes, the project is now moving into a phase focussed on building generic services such as online research environments, alongside awareness raising, a survey of requirements, outreach activities to various research communities, the establishment of teaching programmes and increased student involvement.

The first theme, language resources and tools, was presented by Lene Offersgard from the University of Copenhagen, who outlined the key activities, including the establishment of a data repository, now certified with the Data Seal of Approval and CLARIN ‘B’ Centre status, with an accompanying helpdesk, tools for the analysis and annotation of data, and a user engagement programme. There are also PhD teaching modules for students at the University of Copenhagen.

The second theme, audio-visual data and tools in various media, was presented by Niels Brügger and Per Jauert from Aarhus University. Work on this theme acknowledges that “the digital comes in a variety of forms”, which they sub-divide into:

  • Digitized
  • Born-digital
  • Reborn-digital

The enhanced web archive is an example of the latter, where digital materials have been collected, reassembled and made available with metadata as research data. The focus of this work is on web archives, but it occurred to me that it is a characterization which fits the modern linguistic corpus as well. The team have developed the Digital Footprints software, which is still in beta, but is in use for studying online material. As well as developing ways to examine and to improve access to Netarkivet, the national web archive, researchers are working together in international collaborations, including with the British Library and the Oxford Internet Institute, and establishing a transnational European research infrastructure for the study of archived web materials. The NetLab Forum provides wiki space for research projects using the tools so that they can communicate and share experiences, expertise and results. It was pointed out that DigHumLab has been crucial in providing the funding for an IT developer, without whom this work would not have been possible, and on whom ongoing work is reliant. Another risk to the viability of ongoing work was flagged – independent legal advice is needed on the risks associated with access, use and redistribution of online materials.

The theme also encompasses work on audio-visual data and tools. Following on from research projects such as LARM, a research infrastructure has already been established for the use of audio materials, and the challenge is to integrate it with other DigHumLab services. This work has been built on the national library media collections of radio and television programmes. Advanced services already offer streaming access, and ongoing research projects are using these services for research.

Johannes Wagner of the University of Southern Denmark introduced the third theme, the “little brother” of the DigHumLab siblings, focussing on “experiential research”, or the analysis of human interactions and activities via digital capture and analysis. An example is the VELUX project on non-verbal communication. The experience of the researchers in this area is that “if you build it they will come” doesn’t work in this context. Face-to-face and hands-on bespoke support are needed to engage with researchers and to meet their requirements.

In the discussion with the Advisory Board, Eric Meyer (Oxford Internet Institute) asked the penetrating question of how the success stories of flagship projects are disseminated to other researchers who could potentially engage with DigHumLab. Demonstrators are much more compelling and convincing when they have been used for real research that has been finished and can be shared. Too many e-science case studies have been based on toy data or invented problems, making it difficult for the people who might want to use these tools to envisage real uses, or to deploy the solutions. A variety of instruments are currently used to involve researchers, including travelling workshops, PhD courses, journal articles, lectures, and short courses. The question of how, or whether, to attempt to address all disciplines and all communities in the humanities remains an open one. It was agreed that robust showcases modelled from the user point of view were vital to promote uptake.

The afternoon session focussed on the thorny question of possible business models for the sustainability of DigHumLab beyond its current phase of funding. From 2017 DigHumLab aims to focus on the refinement and improvement of services, including the prioritization of research areas, the marketing of services and the recruitment of users, and the development of a viable financial model for sustainability.

One model would be for DigHumLab to be based on a core of generic services, with research themes changing over time. Eric Meyer offered a cautionary tale: the generic services and service centres developed as part of the e-Social Science programme in the UK no longer exist. I added the further example of the Arts and Humanities Data Service.

There was also some discussion of how to enter into collaborations with computer scientists. It was agreed that it was important not to try to treat computer scientists as “code monkeys”. Computer scientists need to address research questions and to publish in high-impact journals relevant to their discipline. We need to approach collaboration as an inter-disciplinary research project with equal academic standing for all partners. Sometimes we just want to build a website or an interface or install some software, and then we need to find a developer, but this is different to an inter-disciplinary collaboration.

Sten Runar Ludvigsen from the University of Oslo made the interesting point that although distributed services can have a certain robustness, a centralized lab means that you only need to change the culture in one place, not in every lab, to run services for the community in a collaborative spirit, and might therefore be more realistic. He also made the crucial point that, as a small country, the Danish humanities community could benefit from focussing on a small number of areas. Clearly they have already done this with the three themes in the current phase of DigHumLab. It would be useful to have further reflection on whether these are the right areas, and then to communicate clearly to stakeholders how the scope of the project will be constrained in future.

To summarize the day, I proposed the following three points for the project, after discussion with, and in agreement with, the other members of the Advisory Board.

1. DigHumLab should articulate a vision and a mission relating to the use of digital data, tools and methods, situated firmly within the wider project of the mission(s) of humanistic research. A strategic vision should set out what and who should be included, what the priorities are and why, and what is not included. A decision needs to be made on whether it would make sense to focus on a small number of strategic areas, or to try to engage with all areas of the humanities; the former seems likely to be more successful. These statements about vision, mission and scope can be informed by asking where you want to be in ten years’ time. The project is nicely focussed already on specific themes – do you plan to continue to restrict the scope to these, or to expand to other areas of research?

2. A flexible and robust business model needs to be able to survive the withdrawal of a funder, institution, partner, academic domain, key individuals, and so on. Staking everything on the support of a ministry or a national funding body is a risky, all-or-nothing strategy. Flexibility means that a range of funders can be accommodated (e.g. national and local funders, programmes for libraries, research data management, research grants, e-science, network/conference funds, etc.). The key to this is that various institutions and people want to buy into and sustain the mission, and are prepared to align the local strategies of sustainable institutions with the common aims. This way, there is the opportunity to repurpose existing resources and funding streams to fulfil the aims of DigHumLab, rather than the more difficult task of seeking additional funding on a long-term basis.

3. It would be useful to clarify and define how DigHumLab supports digital research at the various stages of the research life-cycle (initiating, carrying out, connecting, disseminating and sustaining research). Do you want to be involved in some or all of these? How are you adding value to these activities?

You can see and read more about DigHumLab at http://dighumlab.com/.

The Oxford Text Archive and the British National Corpus: an annual report (2014)

Originally posted at blogs.it.ox.ac.uk on September 22, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The Oxford Text Archive continues to deliver open access to language resources to the academic community, via the website at http://ota.ox.ac.uk/. This year there were 5278 downloads of datasets from the OTA. An exciting development in this period was the arrival of the British National Corpus (BNC) in the OTA collection. This major reference work for the English language is now available from the OTA website, and was downloaded 397 times by researchers from around the world after it went online in January 2014. Two subsets of the corpus, BNC Baby and the BNC Sampler, are also available. Thousands of texts created as part of the Eighteenth Century Collections Online Text Creation Partnership (ECCO-TCP) are available via the OTA in high-quality XML format, and many thousands more will be available in 2015 from the Early English Books Online Text Creation Partnership (EEBO-TCP).

Two new services, introduced as the result of a collaboration with the Oxford e-Research Centre, offer new ways for users to access and use the literary and linguistic texts in the OTA. Users can download certain texts (including the BNC) without waiting for manual authorization of their requests by using their institutional single sign-on, thanks to Shibboleth federated access and identity management. At the moment, only users who are members of an institution which is part of the UK Access Management Federation can use this facility, but we are working to open it up to more users throughout Europe via cross-border access through the CLARIN and eduGAIN federations. More than 300 instant downloads have been made already using this facility.

The second new service is BNCweb, a sophisticated online interface to the BNC, which allows researchers, teachers and language learners across the University to submit queries to identify and analyse distributions and patterns of usage in this large dataset of English speech and writing. In the coming year, we will start to implement an enhanced service offering access to more datasets via a common interface.

The OTA obtained certification as a CLARIN Centre in 2014, which confirms and strengthens its role as a key hub in the European research infrastructure. As a result of the collaboration with CLARIN, OTA resources can now be found via the Virtual Language Observatory, an online research portal, which offers access to electronic language resources held in repositories worldwide.

The development of these services, and the expertise in these areas, has enabled staff from IT Services to offer specialized teaching and support in digital methods to members of the University, including teaching on the Masters course in English Language, induction sessions for new postgraduate students, and a course open to all in the IT Learning Programme on corpus linguistics.

Corpus Linguistics, Context and Culture

Originally posted at blogs.it.ox.ac.uk on May 22, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is an edited transcript of a very short introduction to a panel discussion on Corpus Linguistics, Context and Culture, in which I participated with Bas Aarts, Stefan Gries, Andrew Hardie, Christian Mair and Peter Stockwell on 2nd May 2014.

We are now tantalisingly close to being able to process and analyse very large-scale textual resources with relative ease; these resources represent significant sections of the human cultural record; the opportunities for digital transformations of research in many disciplines are enormous.

Researchers are starting to use these resources to find and ask new research questions, as well as to address some old questions with more and new data, on a bigger scale, more authoritatively, more systematically; this is starting to happen and will happen with or without corpus linguists. To engage more effectively in these new forms of interdisciplinary research, we should focus more of our resources and attention on overcoming some important technical and methodological barriers; the main technical barriers are a lack of professional, reliable, persistent and sustainable services open to all – this is what CLARIN is trying to achieve; in terms of methodology, humanities scholars need to take a step towards working on some connected and common research programmes, addressing questions susceptible to big data approaches – this is down to us, the research community.

Popular Representations of Development

Originally posted at blogs.it.ox.ac.uk on May 22, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

A few weeks ago, I was invited to join a panel discussion at Wolfson College, Oxford, to discuss the new book Popular Representations of Development: Insights from Novels, Films, Television and Social Media, edited by David Lewis, Dennis Rodgers and Michael Woolcock. The book aims to open up a new method of analysis for development studies by treating popular representations of development issues as a data source, and engaging in interdisciplinary research with various disciplines in the humanities to analyse these representations. See more about the event at http://www.fljs.org/popular-representations-of-development. Below is an edited transcript of what I said, or meant to say.

‘Popular Representations of Development’ is convincing on the key point of argument, that artistic and fictional representations can be useful and important data resources, since they can influence, shape or reflect public perceptions and debates. It’s fairly straightforward to see how a popular novel or film might have rather more impact than social scientific scholarship, certainly as far as popular discourses and perceptions are concerned.

Through the study of representations of various aspects relevant to Development Studies, in various media and from various time periods, the research papers in this volume make illuminating, and sometimes contentious, points about development, about the representations themselves, about the relationship between the two, and about methodologies for pursuing this question. For me, they also raise some questions about methodology in an emerging interdisciplinary field. I have some experience of participating in and studying the ways in which new methods emerge, and are contested, particularly in interdisciplinary areas which are transformed by the introduction of digital methods, such as corpus linguistics and digital humanities. I will try to bring some of this experience to bear and make some observations about how this new field might grow.

At the risk of some oversimplification and crudeness, I would say that the research showcased in this volume demonstrates a methodology whereby the researcher hand-picks an example and analyses it according to their chosen methodology, throwing in an overlay of their chosen ideological approach. This raises questions about these choices: questions about bias, representativeness, balance, scope, sampling, and the importance and impact on perceptions and debates of the representations chosen for study. What is an important film or novel? High relevance, popularity, critical acclaim, artistic merit, sociological integrity? Any given representation is unlikely to tick all of these boxes. What is more, we are now at a time when we can exploit the opportunities presented by the large amounts of available texts and media, applying approaches currently characterized by buzzwords like big data, linked data and smart data, enabling us to ask different questions, develop new methods, and engage in different types of interdisciplinary collaboration.

I will now examine in a little more depth what I mean by these points.

How representative are the works examined? Aren’t they just hand-picked examples to back up your points? To take one example from the book, Missing, Under Fire, The Year of Living Dangerously, and Salvador do clearly constitute an interesting tendency, a new sub-genre starring the investigative journalist or war reporter amid political turmoil in the Third World. The chapter in question convincingly relates the emergence of this sub-genre to the early stages of the break-up of cold war certainties in the Third World. But do they help us to understand these processes better, or are they just crude popularizations of certain aspects (from the point of view of the Western media)? What about the influence of other mainstream adventure films with more conventional (and maybe more misleading) narratives? How representative are these films of Hollywood output, and what are the norms that they diverge from, and what are the dominant forms of discourse and representation that they react against?

A further question raised by this new approach relates to the scope of representations of development. There are many places and time periods to examine, various different media, different artistic forms, and many theoretical approaches. It’s possible to construct a powerful argument to justify the inclusion of outliers like The Wire by defining urban decay in the USA as a development issue, but it might be difficult to connect the debate about that with studies of popular film in India, with poster campaigns in 1930s Britain, and then also with representations of genocide and war in central Africa. With such diversity, and particularly when there is a lot of focus on outliers, marginal and non-prototypical cases, no coherent picture of the central representations and discourses is built up, and the findings of each research project or paper don’t necessarily relate to each other in any way. It’s difficult to build an academic discipline on the basis of a series of largely unconnected research studies. This is a problem shared with the digital humanities, where the objects of study and the methods are so wide and varied that there is little possibility of moving forward in understanding in any useful way.

An interesting question is raised, either explicitly or implicitly, by a number of the chapters. Do artistic representations and other narratives merely reflect opinions and debates, or can they somehow provide special insights? Do they inevitably just reflect dominant (and occasionally minority or marginalised) discourses? And if not, how can they do that? Do novelists and film-makers have better insights than social scientists? As noted by the authors, representations emanating from the developing countries in question might be based on more local knowledge of everyday life than the social scientists can muster. One could add that writers of fiction have better story-telling skills. So you can argue that representations can easily be more popular, and more engaging, but can they be more right?

There is an interesting parallel with debates around the nineteenth-century realist novel. One popular theory is that the great realist novelists, such as Balzac, wove stories that told in narrative form how industrial capitalist society worked, dramatizing the interplay of economic and social forces, their effects on people’s lives, and the role of the human subject and their ability to shape their own destiny and that of society. And in an era before sociology, such fictions are often seen as key texts for understanding society. I think that The Wire aims to do something similar (although now also partly informed by sociological literature), and it partially achieves this, although I would argue that the wrong conclusions can be drawn if you treat it as a data source. The chapter on this topic asserts several times that it is the withdrawal of the state from US post-industrial inner cities that is the problem. An alternative narrative, with a wider scope and drawing different conclusions, could draw attention to a longer historical trend which includes the story of the state’s attempts to overcome the exclusion of the black population by intervention, with the effect of undermining traditional forms of civil society. This alternative story would point to the eradication, through state intervention, of the effective mechanisms for the exercise of autonomous action by ordinary people in the inner cities, and draw attention to this as the more fundamental problem. And is this not a possible reading of The Wire in any case?

A further general problem with interdisciplinary studies is the difficulty of engaging with the cutting edge of research methods in all fields, and the danger of adopting a rather conservative or simplified method in the field in which one is dabbling. In some cases, the studies in this volume could be said to be a little conservative in their methods. The humanities are now grappling with new methods of investigation and interpretation which are being made possible by the availability of massively larger amounts of the human cultural record now in digital form, and the possibilities of searching and analysing these records with computational tools. So there is the danger here of only using the microscope to look at tiny details when we have the opportunity to use new instruments which can show us big pictures, and significant patterns and tendencies which appear when we look at lots of representations at the same time.

I am not suggesting a return to a quantitative approach, which is partly what this new approach is trying to get away from. In my view there seems to be rather too rigid a divide between qualitative and quantitative approaches in the social sciences. In the humanities, I think that it is rather taken for granted that all research needs to be qualitative in some sense, to be soundly based on a firm understanding of sources, of their provenance, context, value and meaning. Digital and quantitative methods contribute an additional set of tools and approaches, not a replacement. Digital humanities, at its best, is developing techniques which can blend qualitative and quantitative approaches. Scalable reading is now much discussed as an approach which makes use of tools to analyse the big picture, patterns and trends (distant reading), with the ability to zoom in and examine meaning in texts in detail (close reading). In linguistics we have found the need for instruments that can count frequencies and spot trends, but also support close analysis and the interpretation of meaning.
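
As a small illustration of the kind of instrument meant here – one that counts frequencies for the distant view but also supports close inspection – the sketch below combines a frequency list with a simple keyword-in-context concordance. It is a toy example, not any particular tool, and the file name and node word are placeholders.

```python
import re
from collections import Counter

def tokens(text):
    return re.findall(r"\w+|[^\w\s]", text.lower())

def kwic(toks, node, context=5):
    """Yield keyword-in-context lines for every occurrence of the node word."""
    for i, tok in enumerate(toks):
        if tok == node:
            left = " ".join(toks[max(0, i - context):i])
            right = " ".join(toks[i + 1:i + 1 + context])
            yield f"{left:>40} | {tok} | {right}"

text = open("novel.txt", encoding="utf-8").read()   # hypothetical text file
toks = tokens(text)
print(Counter(toks).most_common(10))                # distant view: frequencies
for line in kwic(toks, "development"):              # close view: occurrences in context
    print(line)
```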

We should also be open to the opportunities to trace present debates in the past, via digital records now becoming available online. Apart from anything else, historical data is often more easily available, thanks to lapsed licensing restrictions and intellectual property rights. But there are new possibilities which are not being properly exploited now. Culturomics using Google N-grams is bad social science (and bad linguistics). New, more scholarly initiatives such as the Red Hen Lab might show some possibilities for media studies. CLAROS shows how you can do different studies of ancient art when you have all of the data in one place online; CLARIN is starting to show how literary, linguistic and historical studies are being transformed by the possibility of asking new questions of large datasets, and linking data in new ways, as well as asking old questions in new ways, more systematically, more authoritatively. This opens up the possibility of asking important and central questions – not just the hidden voices, the unusual cases, the margins – and this is necessary if we are to build a research community where it is possible to accumulate knowledge, to contest and debate issues, and to conduct research which builds on earlier findings.

We might look back at this in a few years’ time and say, “Hey, do you remember, that was when we used to look at one film, or one novel, or just a handful of posters at a time, in order to try to understand development?”.

CLARIN for Beginners

Originally posted at blogs.it.ox.ac.uk on April 11, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

What is CLARIN?

CLARIN is a network of people, centres and research activities which support advanced digital research based on language data and tools. Formally, it is the Common Language Resources and Technology Infrastructure, and it exists as a legal entity, a European Research Infrastructure Consortium, with its base in Utrecht in the Netherlands, but CLARIN is really built on important national initiatives in a growing number of countries across Europe. These are building up data centres, connecting resources together and with online tools, creating advisory and support services, and promoting research programmes which make use of them.

Who is CLARIN for?

It’s primarily for anyone interested in digital research in the humanities and social sciences who wants to make use of linguistic data and tools. We’re also very open to scholars from other disciplines, and interested in supporting the use of the infrastructure in teaching and by the general public. In fact, we’re pretty sure that there are lots of cool uses that this stuff could be put to which we haven’t even thought of yet. The funding comes from national and European sources to provide services across the EU, but we’re also keen to make international alliances, and to make as much as possible free for anyone to use. We know that research communities cross many boundaries, and we want to break down barriers, not build them.

What can CLARIN do for me?

It depends who you are and what you’re looking for. If you are a researcher, you could use CLARIN to find services or people to help you to use language resources and tools more effectively, or to ask new research questions. If you create language resources, you might like to deposit them with one of the CLARIN centres so that they can be curated by professionals, and then found and used by many more researchers. If you run a repository, think about registering as a CLARIN centre and making your resources discoverable and usable via CLARIN services like the Virtual Language Observatory, or the Federated Content Search. If you develop or work with language software, you might want to try to get it integrated into the CLARIN architecture.

What can I do for CLARIN?

That’s more like it. CLARIN needs people to build the tools, services and infrastructure that we need. We also need to hear from researchers what they want from the infrastructure. You could also let us know if CLARIN has helped you, so that we can tell our funders about that. If you are in a country which hasn’t joined CLARIN, such as the UK, ask the funding agencies and policymakers why not!

Why should I be interested?

Whatever you do, you probably write, read or otherwise manipulate language in your job, and some of the resources and tools in CLARIN might be useful for you. Want to see how particular words are usually used in English (or French or German or Estonian)? Need to identify the language of a text? Need to identify all of the people and places in a text? Want to get hold of an expert in Dutch dialects? CLARIN is building a one-stop shop for solutions to these sorts of questions. Furthermore, you might not be interested right now in language technology, but you might be interested in how we are trying out novel approaches to building a virtual infrastructure to support research in the humanities and social sciences. This involves cutting edge technologies for authorizing access to resources, expertise in digital curation, new ways to describe, find and share electronic resources online, overcoming legal, administrative and financial barriers to build cross-border infrastructure services, lobbying for more access rights to copyright material for research, and lots more. Visit www.clarin.eu regularly and watch the story unfold.

What has CLARIN actually achieved?

Here are a few examples: the Virtual Language Observatory, the Federated Content Search, a service provider federation allowing cross-border log-ins to resources, numerous training events (like this), research projects facilitating collaborations (like these) and really cool websites like this: http://www.dwds.de/.

CLARIN.eu website

What has it got to do with you, Martin?

I’m Director for User Involvement for CLARIN at the European level, on a 3-year part-time secondment 2013-2016, as well as having been one of the founders and architects back when it first started. So explaining what CLARIN is and encouraging people to get involved is part of my job. Get in touch if you want to know more.

Using Large-scale Text Collections for Research

Originally posted at blogs.it.ox.ac.uk on April 10, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

I participated in a recent workshop in Würzburg on using large-scale text collections for research. The workshop was organised as part of the activities of NeDiMAH, the Network for Digital Methods in the Arts and Humanities.

I had the opportunity to give a short introduction on some aspects of my interest in this topic. I outlined how the current problems include the fragmentation of currently available resources in different digital silos, with a variety of barriers to their combination and use, plus a lack of easily available tools for the textual analysis of standardized online resources, and I briefly referred to the plans of the CLARIN research infrastructure to address some of these problems.

Christian Thomas explained how the Deutsches Textarchiv (DTA) is facilitating and making possible research with large-scale historical German text collections. The DTA has funding from 2007 to 2015, and now includes resources of more than 200 million words from the period 1600 to 1900. There are images and text, and automatic linguistic analysis is possible. The DTA is a CLARIN-D service centre. Integration in the CLARIN infrastructure means that resources can be discovered via the Virtual Language Observatory (VLO), can be searched via the Federated Content Search (FCS), and can be analysed and processed via WebLicht workflows. The DTA also contributes to discipline-specific working groups as part of its outreach and dissemination strategy. The majority of texts are keyed in (see more at http://www.deutschestextarchiv.de/dtaq/stat/digimethod). The workflow for OCR texts is interesting: structural markup is added to the electronic text (using a subset of TEI P5), and then OCR errors are corrected. They find that it is easier to identify and correct errors in structured text. The Cascaded Analysis Broker provides a normalization of historical forms to allow for orthography-independent and lemma-based corpus searches, and this is integrated into the DTAQ quality assurance platform. Christian’s slides can be found here.
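
As an illustration of what orthography-independent searching involves, here is a minimal sketch in which a lookup table maps historical spellings to modern forms before a query is matched. The mapping and example sentence are invented, and this is a simplification, not the DTA’s actual Cascaded Analysis Broker.

```python
# Invented mapping from historical German spellings to modernised forms.
NORMALISE = {
    "seyn": "sein",
    "thun": "tun",
    "theil": "teil",
    "vnnd": "und",
}

def normalise(token):
    return NORMALISE.get(token.lower(), token.lower())

def search(corpus_tokens, query):
    """Orthography-independent search: corpus and query are both normalised."""
    target = normalise(query)
    return [i for i, tok in enumerate(corpus_tokens) if normalise(tok) == target]

tokens = "Das mag wohl seyn vnnd kan auch anders seyn".split()
print(search(tokens, "sein"))   # finds both historical spellings of 'sein'
```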

The DTA is also a key partner in the Digitales Wörterbuch der deutschen Sprache (DWDS), an excellent concept allowing cross-searching of resources in different centres, and very well implemented. This offers a view of the future of corpus linguistics and the study of historical texts online.

Jan Rybicki from the Jagiellonian University in Kraków told us about a benchmark English corpus to compare the success or failure of stylometric tools. There was a very interesting discussion of the idea of how to build representative and comparable literary corpora, which put me in mind of the work of Gideon Toury in descriptive translation studies. There was also discussion of a possible project to build comparable benchmark corpora for multiple European literary traditions.

Rene van Stipirian (Nederlab) outlined the background: the study of history in the Netherlands is characterised by a fragmented environment of improvised resources. The project Nederlab will be funded by the NWO from 2013 to 2017 to address the integration of historical textual resources for research. Some very interesting statistics were presented: for the period to the end of the twentieth century there are 500 million surviving pages printed in Dutch, and 70 million of these are digitized, but only 5-10 million have good quality text – most are rather poor quality OCR. Nederlab brings together linguists, literary scholars and historians, and integrated access to resources will go online in the summer of 2015.

Allen Riddell from Dartmouth College in the US took an interesting and highly principled approach to building a representative literary corpus. He randomly selected works from bibliographic indexes, then went out and found the works and scanned them if necessary. This seems to me to be a positive step, in contrast to the usual, rather more opportunistic approach of basing the corpus composition on the more easily available texts. The approach to correcting the OCR text was also innovative and interesting – he used Amazon Mechanical Turk. Allen also referred to a paper on this topic at http://journal.code4lib.org/articles/6004.
This also raised an interesting question – can a randomly selected corpus be representative, or do we need more manual intervention in selection (at the risk of personal bias)?
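
Whatever the answer, the random selection step itself is simple to picture; below is a minimal sketch, assuming the bibliographic index is available as a CSV file with author and title columns (an assumption for illustration, not Riddell’s actual workflow).

```python
import csv
import random

random.seed(42)  # fix the seed so the sample can be documented and reproduced

with open("bibliography.csv", newline="", encoding="utf-8") as f:  # hypothetical index
    works = list(csv.DictReader(f))

sample = random.sample(works, k=100)  # 100 works drawn without replacement
for work in sample[:5]:
    print(work["author"], "-", work["title"])
```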

Tom van Nuenen from Tilburg University described how he scraped professional travel blogs from a Dutch site and started to analyse the language. Puck Wildschut from the University of Nijmegen described the early stages of her PhD work on comparing Nabokov novels using a mixture of corpus and cognitive stylistic approaches.

The discussion at the end of the first day focussed on an interesting and important question: how do we make corpus-building more professional? Reusability was seen to be key, and dependent on making sure that data was released in an orderly way, with clear documentation, and under a licence allowing reuse. And since what we are increasingly dealing with is large collections of entire texts (rather than the sampled and truncated smaller corpora of the past), then we should ensure that the texts that make up corpora should be reusable, so that others can take them to make different ad hoc corpora. This requires metadata at the level of the individual texts, and would be enhanced by the standardization of textual formats.
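
To illustrate the point about text-level metadata, here is a minimal sketch of what composing an ad hoc corpus from a documented collection might look like; the metadata fields and values are invented for the example.

```python
# Invented text-level metadata records for a collection of reusable texts.
collection = [
    {"id": "A01", "year": 1843, "genre": "fiction", "words": 85000},
    {"id": "A02", "year": 1791, "genre": "sermon", "words": 12000},
    {"id": "A03", "year": 1867, "genre": "fiction", "words": 110000},
    {"id": "A04", "year": 1850, "genre": "news", "words": 4000},
]

def ad_hoc_corpus(records, genre=None, year_from=None, year_to=None):
    """Compose a sub-corpus by filtering on text-level metadata."""
    return [r for r in records
            if (genre is None or r["genre"] == genre)
            and (year_from is None or r["year"] >= year_from)
            and (year_to is None or r["year"] <= year_to)]

fiction_1840_1870 = ad_hoc_corpus(collection, genre="fiction", year_from=1840, year_to=1870)
print([r["id"] for r in fiction_1840_1870],
      sum(r["words"] for r in fiction_1840_1870), "words")
```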

Maciej Eder from the Institute of Polish Studies at the Pedagogical University of Kraków introduced and demonstrated Stylo, a tool for stylometric analysis of texts. In this presentation, and one on the following day, I found some of the assumptions underlying stylometric research difficult to reconcile with what I think of as interesting and valid research questions in the humanities. How many literary scholars are comfortable with notions that the frequencies of word tokens, and the co-occurrence of these tokens give an insight into style? And the conclusion of a stylometric study always seems to be about testing and refining the methods. Conclusions like “stylometric methods are too sensitive to be applied to any big dataset” don’t actually engage with anyone outside of stylometry. Until someone comes up with a conclusion more relevant to textual studies, this is likely to remain a marginal activity, but maybe I’ve missed the point.
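
For readers unfamiliar with what such tools compute, the sketch below illustrates the core idea behind many stylometric methods, including the Burrows’ Delta measure implemented in Stylo: represent each text as a vector of relative frequencies of the most frequent words, standardise those frequencies, and measure distances between the vectors. It is a toy Python illustration of the general technique, not the Stylo code itself (which is an R package), and the file names are placeholders.

```python
import re
from collections import Counter
import numpy as np

def relative_freqs(text, vocabulary):
    toks = re.findall(r"[a-z']+", text.lower())
    counts = Counter(toks)
    return np.array([counts[w] / len(toks) for w in vocabulary])

def burrows_delta(freq_matrix):
    """Mean absolute difference of z-scored MFW frequencies between each pair of texts."""
    z = (freq_matrix - freq_matrix.mean(axis=0)) / (freq_matrix.std(axis=0) + 1e-12)
    n = len(z)
    return np.array([[np.abs(z[i] - z[j]).mean() for j in range(n)] for i in range(n)])

# Hypothetical text files to compare.
texts = [open(p, encoding="utf-8").read() for p in ["a.txt", "b.txt", "c.txt"]]
all_tokens = Counter(re.findall(r"[a-z']+", " ".join(texts).lower()))
mfw = [w for w, _ in all_tokens.most_common(100)]   # the 100 most frequent words
matrix = np.vstack([relative_freqs(t, mfw) for t in texts])
print(burrows_delta(matrix).round(3))
```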

The focus on looking for and trying to prove the differences between the writing of men and women also strikes me as a little odd, and certainly contentious. Why prioritize this particular aspect of variation in the writers? Why try to essentialize the differences between men and women, and why not other factors? I’d be more interested in an approach which identified stylistic differences and then tried to find what the relevant variables might be, rather than an initial starting point assuming that men and women write differently, and trying to “prove” that by looking for differences.

On the second day of the workshop, Florentina Armaselu from the Centre Virtuel de la Connaissance de l’Europe (CVCE) described how they are making TEI versions of official documents on EU integration for research use. I suggested that there might be interesting connections with the Talk of Europe project, which will be seeking to connect together datasets of this type for research use with language technologies and tools.

Karina van Dalen-Oskam from the Huygens Institute in the Netherlands, one of the workshop organisers, introduced the project entitled The Riddle of Literary Quality, which is investigating whether literariness can be identified in distributions of surface linguistic features. The current phase is focussing on lexical and syntactic features which can be identified automatically, although a later phase might investigate harder-to-identify stylistic features, such as speech presentation. In the discussion Maciej Eder suggested that the traces of literariness might reside not in absolute or relative frequencies of features, but in variation from norms (either up or down).

Gordan Ravancic (Institute of History in Zagreb) joined us via Skype to introduce his project on crime records in Dubrovnik, “Town in Croatian Middle Ages”, which was fascinating, although not clearly linked to the topic of the workshop, as far as I could tell.

Some interesting notions and terminological distinctions were raised in discussions. Maciej Eder suggested that “big data” in textual studies is data where the files can’t be downloaded, examined or verified in any systematic way. This seems like a useful definition, and it immediately raised questions in the following talk. Emma Clarke from Trinity College Dublin presented work on topic modelling. This approach to distant reading can only be used on a corpus that can be downloaded, normalized and categorized, and would be difficult to use on the type of big data defined by Eder, although it could potentially be used as a discovery tool to explore indeterminate datasets. Christof Schöch from the Computerphilologie group in Würzburg differentiated “smart data” from “big data”, and suggested that smart data was what we mostly wanted to be working with. Smart data is cleaned up and normalized to a certain extent, and is of known provenance, quality and extent.
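
By way of illustration of the topic modelling approach Emma described, here is a minimal sketch of a latent Dirichlet allocation workflow using the gensim library on a toy set of pre-tokenised documents; the documents, the number of topics and the parameter values are placeholders, not details of her study.

```python
from gensim import corpora, models

# Toy pre-tokenised documents; a real corpus would be cleaned, lemmatised and
# stripped of stopwords before this step.
documents = [
    ["climate", "change", "policy", "carbon", "emissions"],
    ["novel", "character", "narrative", "style", "author"],
    ["carbon", "energy", "policy", "renewable", "emissions"],
    ["author", "style", "corpus", "frequency", "novel"],
]

dictionary = corpora.Dictionary(documents)
bow_corpus = [dictionary.doc2bow(doc) for doc in documents]
lda = models.LdaModel(bow_corpus, id2word=dictionary, num_topics=2,
                      passes=20, random_state=1)

for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```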

The workshop concluded with discussions about potential outcomes of this and a previous NeDiMAH workshop. A possible stylometry project to build benchmark text collections and to promote the use of stylometric tools for genre analysis and attribution was outlined, with perhaps the ultimate goal of an ambitious European atlas of the history of the style of fiction. We also discussed the possible publication of a companion to the creation and use of large-scale text collections.

Read more about the workshop on the NeDiMAH webpages at http://www.nedimah.eu/call-for-papers/using-large-scale-text-collections-research-workshop-university-wurzburg-1st-and-2nd

Changes to the distribution of the British National Corpus

Originally posted at blogs.it.ox.ac.uk on January 13, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

In January 2014 there will be some changes in the way that the British National Corpus (BNC) is distributed.

It is now possible to download the British National Corpus at no cost from the Oxford Text Archive at the following URL:

http://www.ota.ox.ac.uk/desc/2554 [now http://hdl.handle.net/20.500.12024/2554 – updated October 2020]

BNC Baby, a 4-million word sample of the BNC is also available:

http://www.ota.ox.ac.uk/desc/2553 [now http://hdl.handle.net/20.500.12024/2553 – updated October 2020]

Click on the ‘apply for approval’ link to request a copy. The BNC continues to be subject to the same user licence conditions, which can be viewed at http://www.natcorp.ox.ac.uk/docs/licence.html. If you have already paid for permission to use the BNC, you should consider that this continues to be valid in perpetuity.

There is an even simpler download option if you have a login ID from a UK or eduGAIN Shibboleth identity provider (usually, this applies to all members of UK universities, and many European institutions). You can follow the links at the locations above to download the corpus directly without applying for approval. We hope that this facility will soon be extended to users from other countries who participate in the CLARIN Federation.

It will remain possible to order the BNC on disks from the University of Oxford until the end of March 2014, with the current administrative charges still applying, from the following URL:

http://www.oxforduniversitystores.co.uk/browse/category.asp?compid=1&modid=1&catid=1049

As part of this process, I have to announce that the University of Oxford can no longer offer any support for the XAIRA software, which has for many years been made available with the corpus. We have tried to offer support on a ‘best efforts’ basis in recent years, but we do not have the resources or expertise to help with the installation or use of XAIRA on the latest hardware and software. Users of XAIRA are encouraged to visit http://xaira.sourceforge.net/ and check out the forums and mailing lists which you will find there. The future of XAIRA depends on a committed user community, so please get involved if you have questions or can contribute expertise.

There are excellent services offering instant online access to the BNC, such as those listed at http://www.natcorp.ox.ac.uk/. I am convinced that there is still further potential for the integration and use of the corpus in online services and web applications. There are plans to integrate access to the BNC with the emerging CLARIN infrastructure, enabling a range of applications and web services to be used in conjunction with this and many other corpora. See https://www.clarin.eu/ for more details.

If you know of other ways of using the BNC, or have any more ideas about its future, I would welcome a discussion on this email list, or email me.

The Oxford Text Archive in 2013

Originally posted at blogs.it.ox.ac.uk on January 3, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The New Year promises to be an exciting one for the Oxford Text Archive. As well as new accessions to the archive, new services and new collaborations, we plan to integrate the archive further into the new research data management services at the University of Oxford. This will involve working more closely with the Bodleian Libraries, who are embarking on a number of ambitious projects to serve the requirements of researchers for working with digital data.

The last year has seen the biggest ever expansion in the archive, with the accession of more than 2,000 texts from the Eighteenth Century Collections Online text creation partnership. These are made available under Creative Commons licences, another new venture for the OTA, and we plan to release future accessions with the relevant CC licence. These texts, along with all other XML resources, are now made available in a variety of formats, including popular ebook formats, converted automatically by the Oxgarage web service. We are planning future releases of Early English Books Online (EEBO) texts as they come into the public domain.

The Oxford Text Archive has taken over the management and distribution of the British National Corpus. We are not able to give support for the Xaira software, which continues as an open source project, but we continue to distribute copies of the corpus. In 2013 we will open a consultation on how to open access to the corpus with the corpus linguistics community and other stakeholders. We aim to make more widely available a BNCWeb service hosted by the National e-Infrastructure Service with secure authentication for users in educational establishments. The excellent online services listed at http://www.natcorp.ox.ac.uk/ continue to be available online.

The OTA also hopes that in 2013 we will be able to make more links with CLARIN infrastructure services and projects. OTA resources are already visible via the CLARIN Virtual Language Observatory, and we hope to participate in the federated content search demonstrator which is being built now. However, proper participation for service centres like the OTA, and for other institutions and individual researchers, does require that the UK funders and policymakers finally acknowledge the importance of the emerging European research infrastructure. Regrettably, attempts to engage research councils, JISC and the UK Access Management Federation in these processes continue to founder. Let’s hope for more progress in 2013, and that policy-makers start to act on their promises about building and promoting digital research infrastructure in the UK.

Text encoding, text collections, and the potential to transform the Humanities

Originally posted at blogs.it.ox.ac.uk on November 6, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a transcript of a contribution to a panel discussion at the TEI Members’ Meeting in Rome in October 2013 on the topic ‘How could the TEI community benefit from TEI-specific query solutions? What should they look like?’

I think that there is a problem: too many of the people working on text encoding and tools for querying encoded texts are contributing to a proliferation of different complex platforms, each of which the user has to adopt before they can submit the simplest query, and which are all mutually incompatible. We’re just building digital silos, and getting further away from a solution to some key problems.

There is potential for transformation in the way that we do research in the humanities. Recent discussions about distant reading, and combining it with close and scalable reading, revolve around how we can exploit the marvellous opportunity with which we are presented today to ask new questions in the study of languages, literature, history, and other disciplines. I’d like to get to a situation where we can ask new and big questions, and I’d like us to be in a position to accumulate knowledge about language, and from texts, by investigating more features, more genres, more languages, etc. I want us to lower the barriers to digital research so that more people can do it, and more outputs can be compared and connected. I don’t want to see more diversity: more alternatives to the TEI, more frameworks, more annotation schemes. Building new tools is a computer science project, and has no place in the humanities. There are three barriers to progress: (i) “not invented here”, (ii) reinventing the wheel, and (iii) the search for the perfect metalanguage.

Converting texts to a common format has been suggested, but it is not an option when it comes to exploiting big data. The texts that we want to query live on different parts of the network. Persuading data repositories (including publishers, Google, Amazon) to provide TEI XML output is feasible, and would be a step forward. Converting all texts to formats optimized for linguistic search is impossible.

Now is the time to act, and action is overdue. Since I first heard about the TEI more than 20 years ago, I have thought, “that’s a good idea – where are the tools?”. I could still ask the same question. There are now some tools for editing and for operations on individual small documents, but where are the tools that can easily be used or deployed for cross-searching collections of texts and corpora?

The vision for new forms of digital research requires not only tools but interoperable resources. Linguists say we can’t agree on “what is a phrase”. Well, then you can’t have interoperable resources for the study of grammar, or resources which make use of grammatical analysis as a basis for analysis at other levels. Now, I have to admit that I have spent quite a lot of my career up until now arguing with computer scientists that we don’t have agreed basic concepts, and that this is the correct state of affairs, because humanistic research is basically about discussing and problematizing the way that we conceptualize and discuss things. But if we can’t agree on the representation and categorization of linguistic features, then there isn’t much we can do by way of digital scholarship. There is more to be gained from accepting imperfect models than from trying to perfect them.

The TEI (and probably ISOcat) offers us the basic technical preconditions for moving forward, for creating and sharing interoperable resources. Let’s take the opportunity and develop a culture of sharing tools, resources, categorizations and methods. Here is a challenge. At the OTA we have all of the texts from ECCO-TCP which are in the public domain freely available at persistent URLs in TEI P5. I’m happy to make the British National Corpus (BNC) available in this way as well, although it is interesting that no-one has ever asked us to do this. I have a challenge for the participants here and for the community – the texts are out there, so please deploy tools to search them as reliable and persistent services. I’ll come back next year and see what’s available!
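By way of illustration of how little is needed to start meeting this challenge, here is a minimal sketch that fetches a TEI P5 text from a persistent URL and runs a crude concordance search over its body; the URL and the search term are placeholders rather than real OTA details, and a reliable, persistent service would of course need caching, error handling and proper indexing on top of this.

```python
# A minimal sketch: fetch a TEI P5 text and run a keyword-in-context search.
# The URL is a hypothetical placeholder, not a real OTA handle.
import re
import urllib.request
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}           # the TEI P5 namespace
URL = "https://example.org/ota/some-ecco-tcp-text.xml"    # hypothetical persistent URL

def fetch_body_text(url):
    # parse the TEI document and join all text inside the <body>
    with urllib.request.urlopen(url) as response:
        tree = etree.parse(response)
    body = tree.find(".//tei:text/tei:body", namespaces=TEI_NS)
    return " ".join(body.itertext())

def concordance(text, keyword, width=40):
    # yield a keyword-in-context line for every match
    for m in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
        start, end = m.start(), m.end()
        yield text[max(0, start - width):start] + "[" + m.group(0) + "]" + text[end:end + width]

text = fetch_body_text(URL)
for line in concordance(text, "liberty"):   # arbitrary example keyword
    print(line)
```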

There are different research agendas: problematizing the notion of the phrase, or the book, are perfectly legitimate research questions, and things that theoreticians might quite legitimately decide to do with their time, but they should not be raised as barriers to developing digital tools. We need to decide our priorities, and I think that more resources should be devoted to exploiting the current opportunity for posing new large-scale research questions, rather than re-posing fundamental questions about categorizations and models.

We can see what the average researcher wants. We can see a multitude of relevant use cases. It is perfectly possible to examine published research in the humanities for statements and questions which are susceptible to empirical study in digital text collections. If we concentrate our efforts on making available for cross-searching all the digital texts which we can lay our hands on, and the tools to query them and analyse the results, then there are enough research topics for PhDs in many literary, linguistic and historical disciplines for the next 20 years or so, and it can reinvigorate the humanities. I’m excited by that prospect. We are on the cusp of making it possible, and we can be the people who do it.

Places in Literature

Originally posted at blogs.it.ox.ac.uk on July 29, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

I note with interest that there are still attempts to kick-start the effective use of geo-spatial technologies and methods in the study of literature. Some years ago, I attended a very interesting workshop at the University of Nottingham on ‘Places in Literature’. The event brought together researchers from various fields to investigate the feasibility of an interdisciplinary project to use geospatial technology to enhance literary research. The event featured presentations, discussion and some hands-on encoding and analysis, and was held in the Centre for Geospatial Science at the University of Nottingham (in a brand new building on the site of the old Raleigh bike factory). Most of the participants were from Nottingham, with a couple of us from Oxford, and two people from Glasgow. The Nottingham participants included specialists in geospatial science, English literature, English language, place names, cultural geography and history, and computational linguistics.

It was an excellent opportunity for us to push forward our understanding of the challenges and barriers to developing useful applications which could be used in research. The findings included the following:

  • there is an unbounded number of ways in which narrative texts refer to places; probably most of the time the reference to place in a text is not a reference to a stage in a journey, or a description of the location of an event (e.g. “You’re not in London now!”);
  • references are often non-explicit – Heart of Darkness by Conrad does not say where the action is taking place, although the reader is likely to infer London, Brussels and the Congo;
  • places in fictional worlds may or may not relate in a reliable way to geography in the real world;
  • all texts are historical; mapping places in texts is never the same as mapping in contemporary real-world applications;
  • place names are subject to variation in languages, spelling, change over time, movement of borders, etc.;
  • variations in granularity, and fuzziness, are inherent in place name references in texts;
  • existing geo-coders appear not to be very good at recognizing place names in literary texts (the poverty of named entity recognition);
  • existing geo-coders appear not to be very good at reliably assigning locations to place names (due to the ambiguity of place name references – Lancaster in Lancashire or Lancaster in Pennsylvania?).

It was my conclusion that, with the current state of the art, it was not easy, and probably not possible, for literary scholars to use geoparsing and mapping tools to improve their research or to ask new research questions.

I concluded that future work to improve this situation might involve:

  • applying state of the art named entity recognition software to texts in order to more reliably identify place name references in texts;
  • investigating heuristics to improve the geocoding (or the ranking of possible hits) of place names (a toy example of such a heuristic is sketched after this list);
  • producing tools that allow users to investigate, correct and examine the outputs at each stage, as these outputs are likely to require human intervention to improve accuracy to an acceptable level, and because these outputs are likely to be interesting to various types of research;
  • investigating means of applying the techniques to large text collections, or combining them with web searches;
  • developing a focussed, in-depth research topic to make use of these tools in respect of a specific set of texts;
  • combining geocoding tools with historical place name gazetteers and maps;
  • allowing geospatial information to be combined with linguistic information in the text, e.g. locations linked to concordances and collocations of textual references to the location.
  • embedding the relevant tools in language resources infrastructure such as CLARIN, so that the geographical tools can be combined with tailored NLP tools.
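As a toy illustration of the geocoding heuristics mentioned above, here is a minimal sketch that ranks candidate locations for an ambiguous place name (‘Lancaster in Lancashire or Lancaster in Pennsylvania?’) against a tiny hand-made gazetteer; the gazetteer entries and the population-plus-context scoring rule are invented for illustration and do not describe any existing geoparser.

```python
# A toy gazetteer-ranking heuristic for ambiguous place names.
# The gazetteer and the scoring weights are invented for illustration only.
GAZETTEER = {
    "Lancaster": [
        {"country": "GB", "admin": "Lancashire", "lat": 54.05, "lon": -2.80, "population": 52000},
        {"country": "US", "admin": "Pennsylvania", "lat": 40.04, "lon": -76.31, "population": 59000},
    ],
}

def rank_candidates(place, context_tokens):
    """Score each candidate by population, with bonuses for contextual clues
    (e.g. an administrative region or country mentioned nearby in the text)."""
    scored = []
    for c in GAZETTEER.get(place, []):
        score = c["population"] / 1000.0
        if c["admin"].lower() in context_tokens:
            score += 100          # the admin region is mentioned nearby
        if c["country"] == "GB" and "england" in context_tokens:
            score += 50           # crude country-level clue
        scored.append((score, c))
    return [c for _, c in sorted(scored, key=lambda pair: pair[0], reverse=True)]

# nearby tokens from an (invented) passage of text
context = {"the", "train", "north", "from", "preston", "into", "lancashire"}
best = rank_candidates("Lancaster", context)[0]
print(best["admin"], best["lat"], best["lon"])
```

Even this toy example shows why human inspection and correction of the intermediate outputs matters: the ranking depends entirely on which clues happen to occur near the reference.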

Current exemplars which I have seen still use very specific text types based on travel itineraries, such as travel writing and guide books. I still haven’t seen an example which has effectively supported research in narrative fiction with automated or semi-automated geographical analysis, but I would be happy to be proved wrong!

CLARIN Germany: Happy First Birthday!

Originally posted at blogs.it.ox.ac.uk on July 10, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

A workshop was held in Leipzig last month to mark the end of the first year of CLARIN-D, the national initiative in Germany to build a research infrastructure as part of the Common Language Resources and Technology Infrastructure. The wider CLARIN effort is Europe-wide and aims to link up repositories, services and researchers in the social sciences and humanities who are making use of the wide range of digital datasets and tools for processing human language. More details of the workshop, including all of the presentations, are available here [update 2019 – previously linked to http://clarin.informatik.uni-leipzig.de/, no longer available].

Greg Crane, the newly appointed Professor of Digital Humanities at the University of Leipzig, kicked off the event with a stimulating presentation which situated CLARIN in the wider context of the evolution of the humanities and, more recently, the digital humanities. Greg suggested that we should provide platforms and tools for students and citizen scholars to contribute to research and to the accumulation of knowledge, culminating in the challenge: “How can we foster a new global Republic of Letters?”.

Erhard Hinrichs (Tübingen), the coordinator of CLARIN-D, introduced the overall initiative as a “web and centre-based research infrastructure for the social sciences and humanities”. CLARIN aims to build an integrated, interoperable, scalable and sustainable research infrastructure via a network of centres. Language resources and tools (LRTs) will be deployed as services for researchers in the social sciences and humanities. CLARIN-D has 9 centres: BAS, University of Munich; BBAW, Berlin; IDS, Mannheim; MPI, Nijmegen; University of Hamburg; University of Leipzig; Saarland University; University of Stuttgart; and University of Tübingen.

Erhard reassured us that CLARIN-D has taken to heart the words of John Wood from the Knowledge Exchange Workshop in Berlin in September 2009:

Research infrastructures that do not take user needs into account from the very start run the risk of becoming empty infrastructures.

There are working groups for 9 humanities and social science disciplines. These discipline-specific working groups act as catalysts, linking CLARIN-D to the research communities. They choose key resources and tools from their communities and advise and supervise their integration into the CLARIN-D infrastructure (in the so-called “curation projects”). CLARIN-D is also working with many of the BMBF-funded eHumanities projects. CLARIN-D also has work packages which are devoted to liaison with the CLARIN-ERIC and with DARIAH, an emerging humanities e-infrastructure, and also on legal and ethical issues, support and helpdesk, and training and education.

Dieter van Uytvanck (MPI Nijmegen) introduced the distributed technical architecture of CLARIN in the context of an infrastructure to support researchers throughout the life-cycle of their work. He also situated CLARIN in the context of a (European) ecosystem of infrastructures:

  • Community Services – CLARIN
  • Cross Community Services – DASISH
  • Compute Services – DEISA
  • Data Services – EUdat
  • Grid Services – EGI
  • Network Services – GEANT

Dieter outlined the services which are available now:

  • WebLicht for resource processing and workflow management;
  • the Virtual Language Observatory for resource discovery;
  • tools to support resource creation and enhancement;
  • European Persistent Identifier Consortium (EPIC) service;
  • repository services in the centres for archiving, preservation and sharing;
  • federated identity management (including a CLARIN Identity Provider, a service provider federation and cross-federation)


Services that will be available in the future include:

  • Federated Content Search (in development)
  • Monitoring (currently in alpha)
  • Center Registry (alpha)
  • Virtual Collection Registry (alpha)
  • Workspaces + SimpleStore (alpha)
  • Safe Replication (alpha)

The workshop then moved on to consider the various projects associated with CLARIN-D. Angelika Storrer (TU Dortmund) spoke about her experiences in corpus-based language analysis in research and teaching. The requirements which she identified were of particular interest:

  • One common interface with a German language version and German online tutorials
  • Tools to further work with the results of search queries (clean up and search again; manually annotate and search again; interface to statistical tools)
  • Word sense disambiguation / semantic clustering tools
  • Orthographic variation tools: an important issue when dealing with historical corpora or with computer-mediated communication, e.g. Stress / Streß (a minimal normalization sketch follows this list)
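As a minimal sketch of the orthographic variation requirement, and assuming nothing about the actual CLARIN-D tools, the following query-side normalization lets a search for ‘Stress’ also match ‘Streß’; real historical or computer-mediated-communication corpora would need much richer variant handling than this simple character folding.

```python
# A minimal sketch of query-side orthographic normalization, so that a search
# for "Stress" also matches "Streß" (and vice versa). The mapping is
# deliberately simplistic and for illustration only.
import re

def normalize(token):
    # fold a few German spelling variants onto a single search key
    return (token.lower()
                 .replace("ß", "ss")
                 .replace("ä", "ae").replace("ö", "oe").replace("ü", "ue"))

def find_variants(query, tokens):
    key = normalize(query)
    return [t for t in tokens if normalize(t) == key]

text = "Der Streß im Büro ist grösser als der Stress zu Hause."
tokens = re.findall(r"\w+", text)
print(find_variants("Stress", tokens))   # ['Streß', 'Stress']
```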

Annette Hautli (Konstanz) is part of a team aiming to tackle the automatic pragmatic annotation of naturally occurring speech data with an innovative combination of methods from three disciplines: Linguistics, Visual Analytics and Political Science. The proposed process is ambitious, and it is not yet clear that effective results can be obtained. Furthermore, the data set used, which seemed to be transcripts of interviews carried out by the political scientists in the project, is not really the sort of “naturally occurring” speech event that the linguistic methods were developed to deal with, and the eradication of biases and the formulation of appropriate interpretations of the data will be difficult. In this sense, it will be an interesting collaboration between the social sciences and other disciplines. On a technical note, it was noted that a multi-layered annotation approach would be useful, although the team does not have the tools for this at present.

Eva-Maria Wunder (Augsburg) introduced her PhD work on searching for evidence of second-to-third language interference in language learners (e.g. if a Chinese speaker learns English and then German, how does this affect their German pronunciation?). While she didn’t address the methodological problem that looking for English influence in pronunciation is difficult when “English” is not one accent, this probably wasn’t the place for such discussions, and she introduced the CLARIN tools Wikispeech and WebMAUS which are supporting her work.

Kirsten Bergmann spoke about the challenges of integrating multimodal resources into the CLARIN infrastructure, such as the SaGa speech and gesture corpus, sign language materials, and “sociable machines” under development in Bielefeld.

Ingmar Schuster (Leipzig) described one of the curation projects, which aims to build a “reproducible research platform” to support “reproducible data-driven linguistics”. The platform is a development of the Potsdam Mind Research Repository (PMR2), and incorporates pre-prints via Open Journal Systems (including OAI-PMH and a CMDI plug-in); an author submission system (reducing the administrative load on the centre supporting the system); data publication; “non-significant” (presumably negative) results; and R integration, with a web application variant, since most researchers in this field use R.

Christian Mair (Freiburg) described the integration of the Virtual Linguistics Campus (VLC, a suite of online distance learning resources) into CLARIN. The aim is to create an accessible digital resource for a mass market, expanded by a large number of users (to build a web-based community of practice). This could evolve from an e-learning resource into a multi-functional digital language resource: from teaching, through research-based teaching, to research. There are ongoing issues of quality control, and as yet unexplored potential and obvious synergies with other CLARIN ventures, e.g. the integration of distributed corpora.

Thomas Gloning (Gießen) described another curation project, on the integration of German historical philological resources, ultimately aiming to integrate the textual resources of the 15th to the 19th centuries into a reference corpus of historical German, and including a workflow for the future integration of further resources. Integrating various textual resources will not provide a corpus in the strict sense but rather a huge repository, from which users can use metadata to build up subcorpora according to relevant criteria, e.g. text type (newspaper reports, plant descriptions), decade (texts from the 1680s), or topic (texts on alchemy, cookery, medicine, etc.). Anticipated outcomes of making such a resource available include a new historical dictionary of New High German from the 17th to the 21st centuries based on corpus principles. Innumerable projects on more specific themes would also result, for example investigating the history of foreign words, the emergence of specialized vocabulary, and evidence of language change, leading to new models and theories.

Alexander Geyken (BBAW) announced plans to write a user manual or handbook (Benutzerhandbuch) for CLARIN-D services. The target audience sectors will be:

  • researchers who have/want to develop Language Resources, Tools and Services (LRTS) and want to make them CLARIN-D compatible;
  • researchers who want to learn more about the solutions adopted in CLARIN-D;
  • technical staff supporting researchers in resource development and migration.

The manual will aid the migration of LRTS to the CLARIN-D infrastructure, with the following benefits:

  • linking to a larger community / visibility of resources;
  • interoperability;
  • long-term preservation by CLARIN-D service centres.

Among the challenges presented by the plan are the relations of this manual with the emerging standards and procedures of the CLARIN ERIC, which are intended to be Europe-wide in their application. Also, centres and resource creation projects will need to make decisions at particular points in time regarding standards, which might be made difficult by the nature of the handbook as a “living document” with constant updates and changes. Nevertheless, this work should provide an excellent foundation for future work in documenting CLARIN procedures.

Frank Wiegand (BBAW) explained the project to build the Deutsches Textarchiv (DTA), which will identify and integrate distributed text resources into a large reference corpus for German (1650-1900). Some of the work to produce editions for the corpus is being done in de.wikisource.org.

Thomas Eckart (Leipzig) reported on infrastructural and CLARIN-related aspects of the eAqua project, which is working on the extraction of structured knowledge from ancient sources. The project aims to develop tools as small independent components available as services via SOAP and REST, to support the reuse of data and algorithms, to promote interaction and interchange with existing projects in the Digital Humanities, and to allow the integration of existing data resources. They aim to use existing standards, so plain text and TEI have been selected as input formats for the CLARIN workspace. They have built a TEI text integrator which automatically ingests texts into a repository, allocates a PID, and generates CMDI metadata (which is then pushed to the Virtual Language Observatory aggregator); the full text will be offered to the CLARIN Federated Content Search, with output in TCF, plain text, XML and HTML.

After the presentation of this impressive array of projects, Erhard Hinrichs returned to the stage to introduce the CLARIN ERIC, the new legal and organisational framework underpinning the Europe-wide CLARIN research infrastructure, and its relationship with CLARIN-D. In short, ERICs are reliant on national funding and national infrastructure initiatives. The challenges will be to integrate numerous national infrastructures of varying size, scope and maturity into a coherent European infrastructure. The CLARIN ERIC started operation in the Spring of 2012, with nine founding members – Austria, Bulgaria, Czech Republic, Denmark, Estonia, Germany, The Netherlands, Nederlandse Taalunie (the Dutch Language Union, an international organization based in Flanders and the Netherlands), and Poland. Six additional members are expected by the end of 2012: Croatia, Finland, Latvia, Lithuania, Norway, Slovenia.

Thomas Zastrow (Tübingen) introduced the EUDAT project, which brings together a consortium of research communities and national data and high-performance computing centres, aiming to contribute to the production of a collaborative data infrastructure (CDI) to support Europe’s scientific and research data requirements, and to deal with the “data tsunami” – note that it is no longer merely a deluge! As well as CLARIN, there are participants from Earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH).

Erik Ketzan and Pawel Kamocki (IDS, Mannheim) introduced the CLARIN-D legal helpdesk and “Three Important Legal Concepts for Language Scientists in Germany”.
The first two of these concepts represented encouraging news about the relatively liberal provisions of German law for personal scientific use and implied licences. However, we should note that services built on these exceptions will pose problems for the CLARIN infrastructure, the boundaries of which are EU-wide (at least). It remains to be seen how we can deal with problems of identifying the relevant legal jurisdictions for complex workflows involving cross-national collaborations and distributed architectures. It might prove necessary to base services on the assumption of the lowest common denominator of EU-wide legal principles, rather than on those of the most liberal country. (By the way, the third concept was the potential landmine of database rights!)

In summary, it was extremely encouraging to see the plans of CLARIN, first conceived many years ago, start to come to fruition. The connections now being made in Germany with key communities of academic researchers are of paramount importance, and this work will need to be carried on in other countries. There were a few niggling doubts in this respect – it would have been good to find out more about connections with literary scholars, and with TextGrid and DARIAH. But overall, CLARIN-D shows a remarkable level of maturity, at both technical and organisational levels. There are numerous key challenges ahead, but this community seems well-equipped to address them. We have seen the future of language resources, tools and services, and it works!

Silos or fishtanks?

Originally posted at blogs.it.ox.ac.uk on April 6, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a partial summary of a presentation given at the Interedition Symposium in the Hague in March 2012 on the topic of Scholarly Digital Editions, Tools and Infrastructure.

People are often talking about digital silos in the context of digital resources in the humanities. The problem is that resources, although valuable in themselves, are located in different locations on the web, where they might be difficult to find, and they all have their own individual interfaces and registration procedures, and are not connected with similar or related resources. So you can’t easily search the Old English Corpus (available either for download with no software from the OTA, or online via numerous university library portals to local users). Some resources, like the ARCHER corpus, you can’t access at all unless you’re friends with someone at the University of Manchester.

Silo image from Doc Searls (dsearls)

This is clearly far from ideal. But what alternative, more connected, architectures are most appropriate to achieving interoperability and sustainability of the arena of digital textual scholarship? The emergence of fast and high capacity networks, a deluge of data, and web service APIs mean that it is increasingly possible to imagine and build distributed architectures for scholarly services, where data, tools, computing resources, and the outputs of annotation and analysis live in different parts of the network but can be brought together virtually in the user’s desktop environment. The current concerns about ‘digital silos’, in which the outputs of digital humanities projects are deployed online unconnected to other resources, and with limited sustainability, are directly addressed by this vision.

I want to put forward the argument for distributed architectures, while reviewing some of the risks and problems, and surveying some current moves towards such an infrastructure. I also want to suggest another metaphor as an alternative to the ‘silo’.

An open and fully distributed architecture where the resources are located in different places can have the advantages of allowing the following services to be created:

  • potentially unlimited functionality, since developers can deploy content and tools that they want to use, and which can interoperate with other data, tools and infrastructure services;
  • building ad hoc collections and corpora across different repositories;
  • complex workflows, for example piping together web services from different locations (a minimal sketch of such a workflow follows this list);
  • protected resources (e.g. works in copyright, sensitive data) curated in situ yet still analysed online via web applications which access the data via a secure infrastructure.
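As a minimal sketch of what piping together web services from different locations might look like for an end user, here is a toy workflow that sends text to a tokenizer service at one repository and passes the result to a tagger service at another; both endpoints and their JSON formats are hypothetical placeholders, not real CLARIN services.

```python
# A toy pipeline chaining two hypothetical web services hosted at different sites:
# a tokenizer service feeding a part-of-speech tagger service.
import json
import urllib.request

TOKENIZER_URL = "https://repo-a.example.org/api/tokenize"   # hypothetical endpoint
TAGGER_URL = "https://repo-b.example.org/api/tag"           # hypothetical endpoint

def call_service(url, payload):
    # POST a JSON payload and decode the JSON response
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def tokenize_then_tag(text):
    # output of the first service becomes input to the second
    tokens = call_service(TOKENIZER_URL, {"text": text})["tokens"]
    return call_service(TAGGER_URL, {"tokens": tokens})["tagged"]

if __name__ == "__main__":
    print(tokenize_then_tag("The cat sat on the mat."))
```

Even in this toy form, the pipeline makes the infrastructure questions visible: each hop crosses an institutional boundary, and authentication, authorization and monitoring have to work across all of them.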

All of this can happen in a situation with a better division of labour than we typically have now: the repositories don’t have to worry about tools; tool and content developers don’t have to worry about creating entire online environments; tool developers don’t have to worry about data management; users don’t have to install software; and so on. The emergence of an ‘ecosystem’ with numerous actors providing content, tools, computing resources, and other infrastructure services provides a flexibility, resilience and potential for sustainability which are not possible for a single-site or other more closed or monolithic system.

So let’s consider the unconnected, problematic online resource as a fishtank rather than a silo.

Goldfish image from Praveen Gupta (praveengupta)

There are lots of fishtanks out there, and they can be very large, elaborate, pretty, sophisticated, long-standing and sustainable. But they’re all in different places and they are not connected with each other. If you want to see a variety of fish, you have to visit a lot of houses, try to negotiate access to their fishtanks, and make use of whatever facilities they have for viewing or otherwise analysing the fish. Some places are better than others to visit – aquariums might have very good facilities and lots of information, but you still can’t view the fish in one aquarium alongside the fish in another, and it’s hard to compare them.

And if I want to keep a fish I have to build a fishtank and maintain a fishtank, or I could find someone else’s fishtank to put it in, but then it’s difficult for me to get access and control the environment. And who’s going to carry on feeding the fish? We can probably agree that it’s better if we don’t all try to make and look after our own fishtanks, at least not if our main goals are to enable as many people as possible to get into looking after, breeding and sharing fish, and if we want to be able to see a wide variety of fish. Wouldn’t it be better to have an ecosystem where we can all set our fishes free to swim together?

Marine Ecosystem image from www.sciencelearn.org.nz

This way, everyone can access all of the riches of the deep and it’s a lot easier to get into fish research.

Of course, ecosystems can be dangerous places, with predators and diseases, and they can be fragile. You could also argue that what fishkeepers really want is the experience of nurturing their own fish, and the enjoyment of setting up and maintaining their own micro-infrastructure, and that therefore fishtanks are the best solution. But there are limits to the applicability and relevance of any metaphor.

There are potential disadvantages to distributed infrastructures, and many of them relate to the additional complexity that they introduce into access and identity management arrangements. Arranging access to services in one location can be hard enough, but authorization to use, for example, textual data in more than one repository might require passing information between institutions. It is also the case that while there are reasonably well-established technologies, procedures and agreements for controlling access to online content, the authorization of web services is not such a well-established area. Furthermore, authorization to access online content cannot easily be passed on to authorize access to the computer processing power that is necessary to carry out an online textual analysis, if this is being provided by another centre in the distributed infrastructure. In summary, the fact that distributed services are reliant on cross-institutional agreements and arrangements adds an extra hurdle for participation, as data provider or user, and additional layers of complexity and risk to the robustness of services.

Other potential disadvantages of distributed infrastructures include:

  • Registering persistent identifiers with a shared service becomes desirable to sustain the interoperability of content and applications, thus adding another level of complexity to the curation of the data;
  • Monitoring of usage is difficult, since operations are being carried out on remote servers not under the control of the repository;
  • Monitoring of the availability of services is difficult – it might be possible to test the status of individual components but not a complex workflow;
  • Although underlying interoperability is essential, there is no impetus towards consistency in user interfaces, and even a tendency towards heterogeneity, and therefore fragmentation of services is likely to be maintained or even made worse;
  • Various further questions remain (at least partially) unanswered in many cases, relating to where and how the computer processing is carried out, and how usage and services are monitored and logged.

We also need agreement at some level about our categories, formats and concepts. To get to the promised land, we need to agree on some standards. Linking datasets requires interoperability at the levels of the linguistic representations, annotations and metadata. Visualization of large datasets requires a reduction of variables, and deciding what is important and what is not. There is a tendency in the humanities for everyone to think that their way of looking at things and of categorizing things is unique. Annotations do sometimes embody the unique intellectual work of identification, categorization and interpretation of phenomena, and these are vital operations in the humanities, so it is not a surprise that this is problematic.

Another problem is that building infrastructure takes time and involves addressing complex and difficult administrative, legal, financial, political and technical barriers, often by making international agreements. So, usually, it’s easier to make ad hoc work-arounds. And building tools can be more attractive and rewarding. But actually, it’s a false opposition – enhanced infrastructure should help with tool development and deployment. An infrastructure should provide a range of simple solutions for connecting together data and tools, deploying them as reliable services, managing authentication and authorization, licensing, access to computing power, monitoring availability, connection to virtual research environments, and so on.

The mistake would be to try to build the perfect all-purpose tool, or to claim to provide services for end-users which solve all of the infrastructure issues. Or to put it another way, building the biggest and best fishtank in the world doesn’t solve the problem, because you can’t get all the fish in the world in there, or allow everyone to view every kind of configuration and interaction in there. But all too often this is what people try to do, rather than contributing a part of a wider, distributed system. Understandably, people are impatient, and our efforts and resources go into building new fishtanks, which can be fun to make, and which look good when people come to visit.

What are the Digital Humanities?

Originally posted at blogs.it.ox.ac.uk on March 30, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The Day of Digital Humanities on 27th March this year has provoked numerous conversations about the nature of Digital Humanities (DH). Some believe DH is a discipline or community, with its own methods, resources, communities of practice, journals, standards of evidence, etc.

Others prefer simply to use the term as a way of looking at activity across a number of humanities-related disciplines which has a significant digital component, and while it is useful to trace connections in terms of methods, resources and tools, it is preferable for digital research in the humanities to live within the historic academic disciplines. It could be argued, for example, that the work of ‘digital classicists’ should be primarily related to addressing research questions in the mainstream of classics (or relevant sub-discipline), not primarily focussed on interacting with an interdisciplinary ‘digital humanities’.

But this is simplistic: digital research can be transformative, allowing new research questions to be formulated and posed, thus transforming existing communities. DH can enable new forms of inter-disciplinary research. Geographical Information Systems (GIS), together with large historical datasets in digital form, can allow visualizations of spatial data in ways that allow new questions to be asked in, for example, economic history, literature, history of science, linguistics, toponymy, climate studies, etc. New points of contact between these disciplines are created, and also with scientists, social scientists, engineers and technologists in the geographical sciences.

Where are the Digital Humanities?

Digital research in the humanities takes place in a variety of institutional frameworks, from isolated individuals in otherwise non-digital faculties to large specialist centres. There are 22 member organisations in the ‘Network of Expert Centres in the Digital Humanities in Britain and Ireland’, but there is no common template. To give a few partial examples:

  • The Oxford e-Research Centre has a strong DH team and project portfolio, but is not exclusively humanities-focussed, by any means, and the vast majority of DH activity in the university is outside of this department;
  • CRASSH at Cambridge is focussed on the arts and humanities, but is not exclusively digital;
  • The Department of Digital Humanities at KCL is an academic faculty which comes out of a merger of centres and groups who focussed on infrastructure, teaching, and technical development work on research projects;
  • The Institute of Historical Research offers a wide range of facilities and services which assist the researching, teaching, writing and dissemination of history, not all of them digital;
  • The Archaeology Data Service runs a data repository and associated services to support research, learning and teaching in Archaeology.

In fact, while there are strong overlaps in activities and organizational forms between many of the centres, there is no easily discernible common factor which is true for all centres.

This network of ‘centres’ risks failing to connect with the large number and wide range of academics engaged with digital research in the humanities who are not associated with one of these centres. The problem is writ larger at the international scale with the wider centernet network. The answer is not necessarily to create and connect more ‘centres/centers’ to encompass the wide range of activity currently outside of them. There is no consensus on what a center should do and how it should fit into an institution, and the very existence of a centre risks detaching practitioners of digital research from the mainstream of their disciplines.

DH@OX aims to provide a view of the wide range of DH activity across the University, and to support this activity in various ways, including facilitating communication and collaboration between researchers, and building better infrastructure and support services, but without imposing any particular boundaries, organisational models or definitions on the ‘digital humanities’.

It remains to be seen which approaches will prove most fruitful in the long term. The Day of Digital Humanities is likely to be a recurrent catalyst for ongoing reflections and discussions for many years to come.