
Towards a new platform for texts

Currently the Oxford Text Archive (OTA) offers a repository from which texts can be downloaded, and Electronic Enlightenment (EE) offers a platform for exploring and reading historical correspondence. This post describes some of the ways in which we are experimenting with exploring these texts, as we move towards a new common platform for textual resources in the Bodleian Libraries.

Searching for Rhetorical Figures

We might recognize rhetorical figures when we read them, but how do we cross-search the collections looking for examples? We carried out some experiments by putting collections of EE and OTA texts into a variety of corpus search engines. This threw up examples both well-known and unexpected.

Alliteration

Searching for sequences of at least four words starting with the same letter reveals dozens of examples, e.g.:

[ota_ee_english:1636/descreCU0030050b1c] A bow bends back because, when the shape of…
[ota_ee_english:1791/burkedOU0010304a1c] … all others the most horrid; that of betraying by being betrayed.

The second is an example of chiasmus (inverting the order of repeated words or phrases) as well as alliteration. Finding letters with examples of one sort of rhetorical figure is likely to reveal a rich source of other figures. The letter at http://www.e-enlightenment.com/item/burkedOU0010304a1c, for example, also features more alliteration and various types of repetition, e.g. “an academy of cabal and confusion”, “It is dangerous for you fully to trust those by whom you are not fully trusted”, “fear of the French faction”.
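Outside a dedicated corpus engine, this kind of search can be approximated with a few lines of scripting. The sketch below is only illustrative (the simple tokenization and the four-word threshold are assumptions, not the exact query we ran); it scans plain text for runs of four or more consecutive words sharing an initial letter:

```python
import re

def alliterative_runs(text, min_len=4):
    """Yield runs of min_len or more consecutive words sharing an initial letter."""
    words = re.findall(r"[A-Za-z]+", text)
    run = []
    for word in words:
        if run and word[0].lower() == run[-1][0].lower():
            run.append(word)
        else:
            if len(run) >= min_len:
                yield " ".join(run)
            run = [word]
    if len(run) >= min_len:
        yield " ".join(run)

sample = "A bow bends back because, when the shape of the wood is changed..."
for hit in alliterative_runs(sample):
    print(hit)   # -> "bow bends back because"
```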

A similar rich seam was found with the following letter from Hester Lynch Piozzi:

[1798/piozheUD0020481a1c] … seems to have a Feeling of Fear, however high his heart may beat with Indignation.

And this letter also contained a number of parallelisms and repetitions of various sorts, e.g.:

“Another Source of Comfort which opened upon us as we drove along was the encouraging Sight of universal Loyalty, and Ardour in the honourable Cause of defending ourselves against French Threatenings, French Plunder and French Principles.”

Some further examples of alliteration:

[1815/piozheUD0050400a1c] … teaching Children are our boast, yet we have hard hearts here too; and little Tradesmen…

[1698/fitzwiVH0010364a1c] … & some a better price, my own stemmed sweet scented so far beyond, that I…

[1802/piozheUD0030375a1c] Let not a Heart so sensible so sincere want that Support from within…

The last example also includes the following:

“Sick or well, sorry or glad; nobody Sure does write such Letters as our dear Mrs. Pennington — it is because nobody else writes from the Heart.— I suppose. Let not a Heart so sensible so sincere want that Support from within which we are told the Consciousness of Rectitude will always bestow; and let no ill Thoughts steal in to destroy the Operation of virtuous Talents and Conduct.”

[1724/popealOU0020236a1c] … to be turnd into a Line of Wilderness with wild winding walks for the convenience of passing…

[1816/bentjeOU0080506b1c] … haveable for taking first measurements: and then what with weather what with languor and inaptitude,…

So, in summary, while alliteration in and of itself might not be of primary interest, it offers a way into exploring the texts.

Further explorations of rhetorical figures

The tools also allow searching by part-of-speech, for example for repeated instances of one particular word class. In this example, William Cowper uses four consecutive adjectives:

“have received but one Visit since I came — I don’t mean that I have refused any, but that only one has been offered. This was from my Woollen Drapier, a very Healthy Wealthy Sensible Sponsible Man, and extremely Civil. He has a Cold Bath, and has promised me a Key of it, which I shall probably make use of in the Winter. He has undertaken too to get me the St. James’s Chronicle 3 times a Week, & to shew me Hinchinbrook House, and to do every Service for me in his Power; so that I did not exceed the Truth you see when I spoke of his Civility.” (https://doi.org/10.13051/ee:doc/cowpwiOU0010097a1c)

And by the unknown “S.L.”:

“Why soe sullen why soe silent What not a lettre not a word in six long weary wett windy weeks Wher’s the fault?” (https://doi.org/10.13051/ee:doc/lockjoOU0010066a1c)
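Where a corpus has not already been tagged, runs of a single word class can be found with an off-the-shelf tagger. The following is an illustrative sketch, assuming NLTK and its default English models rather than the annotation actually used in the corpus tools above; it simply looks for four or more consecutive adjective-tagged tokens:

```python
import nltk
from itertools import groupby

# assumes the NLTK tokenizer and tagger models are available,
# e.g. via nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

def adjective_runs(text, min_len=4):
    """Return runs of min_len or more consecutive adjective-tagged tokens (JJ, JJR, JJS)."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    runs = []
    for is_adj, group in groupby(tagged, key=lambda pair: pair[1].startswith("JJ")):
        words = [word for word, tag in group]
        if is_adj and len(words) >= min_len:
            runs.append(" ".join(words))
    return runs

print(adjective_runs("This was from my Woollen Drapier, a very Healthy Wealthy Sensible Sponsible Man."))
```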

sermocinatio / hypophora (“the speaker answers the remarks or questions of a pretended interlocutor”, or simply “asking questions and answering them”)

“Now you might say — Why are you against it ? ” and “Were I to say as much Mr White might be in a rage, and Mr Long might say — Mr B. why will you propose such things? — You see it can not be done. The consequence would be, that between Mr White, and Mr Harrison, and Mr Long, possession never would be obtained.” bentjeOU0060303b1c (Bentham, 1800)

“Even in and about London with great deference to the better judgements of Messrs. Pitt and Dundas I cannot bring myself to believe that those whose opinions I value most could reconcile to themselves my signing. My enemies might say, and with propriety too, he has by giving his signature proved Mr. B. ‘not to have been incorrect in his charges.’ In short, exclusive of all this the internal monitor conscience precludes me from signing and this with me must ever be superior to all human opinion.” scotdaRH0020249a1c (David Scott, 1800)

simile

“You must excuse me for giving you a Line of Latin now and then since I find my self in some danger of Losing the Tongue, for I perceive a new Language, like a New Mistress, is apt to make a man forget all his old ones.” addijoOU0010010a1c (Addison 1699)

One way to explore simile is by searching for the pattern ‘like a’. Not only can concordances be generated to find examples, but the tools can also be asked to count instances of repeated patterns, as below:

These patterns can be further explored, e.g. by clicking on ‘like a Philosopher’ above to see concordances of the examples:

and ‘like a mistress’:

and compared to ‘like a Lady’:

and ‘like a Man’:
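The counting of repeated patterns shown above can be imitated with a simple script. The following is a rough sketch (the folder name, tokenization and case-folding are assumptions, not part of the corpus tools themselves) which counts what follows ‘like a’ across a collection of plain-text letters:

```python
import re
from collections import Counter
from pathlib import Path

def like_a_patterns(folder):
    """Count the word following 'like a' across all .txt files in a folder."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for match in re.finditer(r"\blike a (\w+)", text, flags=re.IGNORECASE):
            counts["like a " + match.group(1).lower()] += 1
    return counts

# hypothetical folder of letters exported as plain text
for pattern, freq in like_a_patterns("letters_txt").most_common(10):
    print(f"{freq:4d}  {pattern}")
```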

Grammatical structures in French

The above examples focus on a corpus of all of the letters in EE in English. We also looked at letters in French.

Coordinated adjectives before and after the noun

One brief investigation explored the extent to which certain older French grammatical constructions were present in the letters. The example here is a construction that was current in earlier stages of the language, but which we would expect to have become less widely used, and perhaps to have sounded archaic, by the eighteenth century. We found it was much more likely to occur with ‘homme’ than with other nouns.

[1690/lecljeLO0020039a1c.ddc] … dans paris jusqu’a present composée des plus honnestes gens et habiles de paris: elle est…

[1711/lecljeLO0030359a1c]… santé, et qu’il vous donne une longue vie et heureuse, pour l’avantage de…

[1736/voltfrVF0870332a1c]…a pu avoir une ferme envie d’être honnête homme et sage.

[1744/voltfrVF0930113a1c]… unique, elle était adorée d’un mari honnête homme et aimable.

[1757/voltfrVF1010450b1c] Le conseil de guerre l’a déclaré brave homme et fidèle.

[1763/rousjeVF0170062a1c]… quoique Ministre, ne laissoit pas d’être honnête homme et bien intentionné.

[1760/voltfrVF1060298a1c]… vous pas aussi qu’il vaut mieux être honnête homme et aimable, qu’hypocrite et insolent..

[1693/lockjoOU0040689a1c] Le Sieur Smith passe ici pour un honête homme et fidele dans le commerce, et…
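Without French part-of-speech annotation, a crude surface approximation of this ‘adjective + noun + et + adjective’ pattern is possible around a chosen head noun. The sketch below is only illustrative (the restriction to ‘homme’ and the bare regular expression are assumptions), and it will of course over-generate, which is why tagged corpora and proper query tools are preferable:

```python
import re

PATTERN = re.compile(r"\b(\w+)\s+homme\s+et\s+(\w+)\b", re.IGNORECASE)

def homme_coordinations(text):
    """Find candidate 'ADJ homme et ADJ' sequences around the noun 'homme'."""
    return [m.group(0) for m in PATTERN.finditer(text)]

print(homme_coordinations("Le conseil de guerre l'a déclaré brave homme et fidèle."))
# ['brave homme et fidèle']
```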

Terms in the history of science

We also combined letters (from EE) and works (from the OTA) of certain authors, in order to cross-search them and to compare and contrast usage in the two text types (i.e. their writing in letters versus published works). The following example is a search for the wildcard term ‘volatil*’ in the letters and works of Robert Boyle.

We can see the different word forms matching this wildcard search term (showing that it would be difficult to search for them one by one and to see all the results side by side):

And we can explore collocates (words co-occurring in the proximity of the ‘volatile’ words more frequently than you would expect by chance):
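Collocate extraction of this kind can be sketched very simply: count the words within a window either side of each hit and compare their frequency near the node with their frequency overall. The snippet below is a deliberate simplification (the window size and the plain frequency-ratio score are assumptions; real corpus tools use statistical measures such as log-likelihood or mutual information):

```python
import re
from collections import Counter

def collocates(text, node_pattern=r"volatil\w*", window=5, top=15):
    """Rank words by how much more often they occur near the node than overall."""
    tokens = re.findall(r"\w+", text.lower())
    overall = Counter(tokens)
    near = Counter()
    for i, tok in enumerate(tokens):
        if re.fullmatch(node_pattern, tok):
            for neighbour in tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]:
                near[neighbour] += 1
    total = sum(overall.values())
    near_total = sum(near.values()) or 1
    scored = {w: (c / near_total) / (overall[w] / total) for w, c in near.items() if c > 2}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top]
```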

We can examine the distribution of the occurrences of the ‘volatile’ words, for example by decade, or by comparing letters and works (the latter gives a particularly clear picture!):

Search term ‘volatil*’ in corpus ‘boyle’. Based on classification: Decade of creation

Category   Words in category   Hits in category   Dispersion (no. texts with 1+ hits)   Freq. per million words
1600_9     25,429              17                 1 out of 16                           668.53
1630_9     47                  0                  0 out of 3                            0.00
1640_9     22,392              0                  0 out of 39                           0.00
1650_9     57,311              1                  1 out of 27                           17.45
1660_9     1,008,105           184                8 out of 93                           182.52
1670_9     515,694             333                6 out of 46                           645.73
1680_9     512,548             165                10 out of 60                          321.92
1690_9     753,995             178                8 out of 15                           236.08
Total      2,958,354           878                34 out of 285                         296.79

Search term ‘volatil*’ in corpus ‘boyle’. Based on classification: Type of text

Category   Words in category   Hits in category   Dispersion (no. texts with 1+ hits)   Freq. per million words
Letters    137,371             3                  2 out of 236                          21.84
Works      2,820,983           875                32 out of 49                          310.18
Total      2,958,354           878                34 out of 285                         296.79
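The ‘frequency per million words’ column in these tables is a straightforward normalization, which is what makes the letters and the much larger body of works directly comparable. A quick worked check against two of the rows above, as a sketch:

```python
def per_million(hits, words_in_category):
    """Normalized frequency: hits per million words of running text."""
    return hits / words_in_category * 1_000_000

print(round(per_million(17, 25_429), 2))   # 668.53, as in the 1600_9 row
print(round(per_million(3, 137_371), 2))   # 21.84, as in the Letters row
```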

The ‘DiaCollo’ tool from the Berlin-Brandenburg Academy of Sciences also allows the visualization of collocations over time. Here it has calculated the significant collocates of the word ‘philosophy’ in Boyle’s letters and works at different time periods, and shows them as word clouds in a time sequence. Below is a screenshot of that time sequence from the 1660s (suggesting that this is when the term ‘experimental philosopher’ first comes to the fore):

Exploring a word or concept: la guerre

The use of CQPweb allowed us to search across all of the letters in French for a particular word or phrase, and to explore the results in various ways. First, a concordance for guerre:

This can be sorted alphabetically on the right co-text to reveal patterns of modification, e.g. guerre de X:
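The right-co-text sort that reveals ‘guerre de X’ patterns can itself be sketched in a few lines: generate keyword-in-context lines for the node and sort them on the words immediately following it. Again this is only an illustrative sketch (the window width and tokenization are assumptions), not the CQPweb implementation:

```python
import re

def concordance(text, node="guerre", width=6):
    """Return KWIC lines for the node, sorted alphabetically on the right co-text."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - width): i])
            right = " ".join(tokens[i + 1: i + 1 + width])
            lines.append((right, f"{left:>40}  [{node}]  {right}"))
    return [line for _, line in sorted(lines)]

for line in concordance("On parle de la guerre de sept ans et de la guerre des grenouilles et des rats."):
    print(line)
```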

Having decided that this looks like an interesting way to investigate historical topics in the letters, we can run the search again, restricting it to the patterns ‘guerre de’ and ‘guerre d’’:

Looking at collocates for guerre we can explore other patterns:

And crucially, the tools always allow the user to drill down and read the text and its context, by reading concordances, metadata and links to the full texts themselves.

In the DiaCollo tool, we can get a different visualization of the words co-occurring with guerre. The screenshot below shows evidence of (among other things) references to the ancient Greek fable Batrachomyomachia, also known as la guerre des grenouilles et des rats.

With these experiments, we are starting to see what is possible when annotations are added to the texts, and when more powerful software tools are added to the repositories. The quest is ongoing!

Beyond the Digital Humanities

Originally posted at blogs.it.ox.ac.uk on May 5, 2015 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The NeDiMAH Conference ‘Beyond the Digital Humanities’ was held at the School of Advanced Study, University of London, on Tuesday 5th May 2015. NeDiMAH has run for four years as a project of the European Science Foundation, with the backing of research funders in a large number of European countries. Outputs of the project include the NeDiMAH Methods Ontology (NeMO), to be sustained by DARIAH, and the Methodology Map of DH in Europe.

I have an interest, having organised a joint CLARIN-NeDiMAH workshop in December 2014 in The Hague, together with colleagues from the University of Passau and Huygens ING, on the topic of ‘Exploring Historical Sources with Language Technology: results and perspectives’.

The opening keynote of the day was from Lucy Kimbell (University of Brighton) on Open Policy Making in a Digital World: Opportunities and Possibilities for Academic Research, who took on the difficult task of getting the audience excited about the bureaucratic manoeuvrings of the civil service in relation to academic research and innovation. I didn’t feel that Lucy ever quite got to explaining the relevance of initiatives like open policy making, the Government Digital Service, the Open Data Institute, GovJam, Policy Lab UK, etc. for the digital humanities. She made it clear that data science and social science research were informing the bureaucracy, but struggled to articulate the role of the arts and humanities, or digital variants thereof, except for the rather bizarre assertion that Ed Miliband’s desperate interview with Russell Brand (aka #Milibrand) was a ‘cultural intervention’ in the general election campaign, presumably cited as a model for arts and humanities practitioners.

A roundtable on creativity and cultural heritage explored the aspects of the digital humanities relating to art, architecture and design. Alessio Assonitis suggested that there is too much arrangiare (roughly, making do, or makeshift arrangements) in Italian cultural heritage, and too much reliance on digital projects to prop up ailing institutions, and called for a more radical approach to promoting digital research. Helle Porsdam explored the difficulties of ethical and legal issues relating to the digital surrogates of intangible cultural heritage, focussing on the recent example of the prehistoric Chauvet cave. Jon Pratty of Arts Council England brought some scepticism about the ‘smart cities’ agenda, and, in particular, the aspiration or expectation that city-wide content management systems and centralised data dashboards might lie behind a future data-driven society, and made a plea to reorient towards creativity rather than heritage. Teal Triggs of the Royal College of Art (does everyone who works there have to adopt a colour as a forename?) asserted the importance of ‘design’ in data curation and analysis, and in forming the bridge between the physical and the digital.

Brett Bobley from the National Endowment for the Humanities (a US federal funder of research in the humanities) looked back to ‘Our Cultural Commonwealth’, published almost ten years ago, to see what has changed and what is still relevant. Interestingly, he drew attention to the weirdness of the notion of the ‘digital humanist’, not foreseen by the report and still contested. Brett introduced the Trans-Atlantic Platform, which is building on the success of the Digging into Data challenge to develop more international funding schemes, and now involves 11 countries.

A panel discussion on ‘new forms of data and collaboration’ featured Keri Facer (University of Bristol), who began by appealing for more involvement of the diversity of humans who do ‘digital humanities’, and talked about the AHRC Connected Communities programme. We were treated to the call to ‘check our privilege’ and to start counting the number of women and ethnic minorities in the room. Whatever the digital humanities are, I think they need to be part of the humanities, and the humanities need to be informed by the intellectual traditions of the enlightenment, not the political correctness of the students’ union. If this is what beyond the digital humanities means, you can count me out.

A scientist in the audience, Peter Fletcher from the Science and Technology Facilities Council in the UK, suggested that a lot of discussion was about sharing data and tools, and that this needs infrastructure. Various academic communities have come together and agreed priorities for building central repositories and experimental facilities. Milena Zic-Fuchs, a linguist from the University of Zagreb, supported the call for infrastructure to support digital research, and urged the audience to support initiatives such as CLARIN and DARIAH, but also to look towards not just pan-European but global collaborations.

A final panel  on ‘Genres of scholarly knowledge and production’ featured Andrew Prescott, who offered a clear and useful explanation of the polar positions of (i) empirical, data-driven research and (ii) critiquing, questioning and problematizing the assumptions inherent in data and tools, such as canonicity, and post-colonial and environmental critiques. Barry Smith gave an entertaining presentation of work on smells from the Centre for the Study of the Senses, which engaged the public, neuroscientists and restaurant chefs with a philosopher in a humanities research project. Patrik Svensson made an appeal to the builders of infrastructure to cater not just for data and tools, but for the research processes and methods which humanists employ. Rounding off the day, Milena Zic-Fuchs outlined some of the background to NeDiMAH and the concurrent emergence of research infrastructures in the social sciences and humanities.

My overall impression was that the various suggestions put forward to promote the legitimacy of DH were not convincing, apart from Lorna Hughes’s straightforward presentation of an example of exemplary research (http://eira.llgc.org.uk/). This reinforced my view that what we really need are compelling case studies which demonstrate the possibilities of digital transformations and show a real-life success story (warts and all) which stands on its own as a good piece of research in the humanities.

The discussion on the day may have left some with the impression that we are faced with a choice between, on the one hand, the utopian folly of building Procrustean infrastructure, anti-theoretical and populated with non-contextualized data, and, on the other hand, the development of a critical digital humanities with the goal of exposing the folly, puncturing the hubris, limiting environmental impact, and checking the privilege of the digital humanities. I hope there is a middle way.

Collective Intelligence

This post was originally composed on February 23, 2011, in the wake of the event ‘Digital Humanities Collective Intelligence: a workshop to foster international cooperation’ held in the Anatomy Theatre and Museum at King’s College London on the 21st and 22nd of February 2011. It was posted on arts-humanities.net, a site which has since disappeared, then posted on the IT Services blog at the University of Oxford, where it was deleted sometime around 2019. It is the blog post that will not die.

A two-day workshop at King’s College London in February explored the idea of ‘Collective Intelligence’ in relation to DARIAH and the Digital Humanities. Two dozen participants, representing numerous countries, organisations, domains and backgrounds, were in attendance, including DARIAH partners from London, Oxford and Dublin. The workshop kicked off with the presentation of position papers from Jan Christoph Meister (participating remotely), Andrew Prescott and Susan Schreibman.

Jan Christoph Meister (Hamburg University) outlined the plans of the Association for Literary and Linguistic Computing (ALLC) to relaunch its website with three major functions: providing a moderated Digital Humanities information platform for the association’s members and affiliates; offering a “one-stop” overview of current DH activities, funding opportunities and services; and linking to more detailed external repositories.

As a precondition to the wider sharing of such data, Meister emphasised the need “to define a data curation protocol stipulating standards for the moderation and validation of DH information by information gatherers and providers”, warning that without such a protocol there would be too much variation in the shared information, making it obsolete and “creating ‘white noise’ that will frustrate information seekers.”

Meister therefore proposed “the definition of a DH atlas or a DH taxonomy enabling us to systematize DH information”.

Andrew Prescott (University of Glasgow) proposed that we need a new generation of tools that will work with publishers and other content providers, and enable new perspectives on data and humanities research questions.

Susan Schreibman (Digital Humanities Observatory, Ireland) outlined the detailed and extensive work done in DRAPIer (Digital Research And Projects in Ireland) to scope and describe digital humanities work and to act as a collaboration space for sharing expertise. Susan proposed greater use of Web 2.0 technologies in future initiatives in this area.

There was also a presentation of the arts-humanities.net portal, and a discussion of the lessons to be learned from its six years of existence. The possibility of following a design path more oriented towards ‘apps’, ‘gadgets’ or ‘toolkits’ was considered.

Group discussions considered how to move forward to create more interoperable metadata. Do we already have adequate standards and procedures for sharing information? Do we need the carrot or the stick to encourage data creators to follow them? Do we need to link communities and expect the metadata to follow, or vice versa? Some concrete suggestions emerged for potential ways forward to capture, disseminate and use the knowledge that is embedded in our current and past activities. An aggregation of information about events was strongly promoted, and the idea of a service for mining the collected knowledge of past discussions on relevant email lists and forums was mooted. There are plenty of organisations and initiatives producing useful information which there is a general willingness to share, but various factors create an inertia that tends to block efforts to do so. Measures to overcome this inertia and to make it easier to exploit our collective intelligence should be a key guiding principle of our next steps.

Advising DigHumLab

Originally posted at blogs.it.ox.ac.uk on September 30, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

University of Oxford researchers from IT Services and the Oxford Internet Institute are playing a key role in advising an important national project in Denmark, and learning a lot about different ways of building and sustaining research infrastructure along the way.

DigHumLab is a national initiative in Denmark to set up a collaboration to advance digital research in the arts and humanities. Starting in 2010 with the drawing up of a roadmap of research infrastructures for Denmark, DigHumLab was awarded €4 million for five years in 2011. DigHumLab encompasses the Danish contribution to the CLARIN and DARIAH European research infrastructures. I was asked to join the small international Advisory Board for the project, and to attend a mid-term meeting in Copenhagen in September 2014 to offer advice to the project.

The vision for DigHumLab is to take actions to strengthen research in the humanities and humanistic social sciences, to improve access to data, develop methods and tools, promote collaboration and support emerging areas of digital research. The project goals are to:

  • create a virtual portal, access point, and potential partner for international collaborations
  • create a knowledge hub
  • become a provider of software and technical solutions
  • act as a national political advisor on matters relating to digital research in the humanities.

As well as activities to establish these outputs and services, the project includes a significant amount of effort spent on three research themes:

  1. language resources and technologies
  2. media tools
  3. interaction and design

The project has kicked off with participation from four Danish universities, but the intention is not to create a club closed to other universities or research bodies. The project also aims to build links and to coordinate activities with national services for high performance computing, research data management, e-Science, as well as with the National Library and the European research infrastructures. It wasn’t possible in this meeting to find out what measures are being taken to achieve these goals, but it was encouraging that the meeting was hosted by the National Library, in their impressive modernist ‘Black Diamond’ building, with participation by senior staff from the library.

The Black Diamond building housing the Danish Royal Library

After spending an initial period establishing the working groups and themes, the project is now moving into a period with a focus on building generic services such as online research environments, awareness raising, a survey of requirements, outreach activities to various research communities, establishing teaching programmes and increasing student involvement.

The first theme, language resources and tools, was presented by Lene Offersgard from the University of Copenhagen, who outlined the key activities, including the establishment of a data repository, now certified with the Data Seal of Approval and CLARIN ‘B’ Centre status, with an accompanying helpdesk, tools for the analysis and annotation of data, and a user engagement programme. There are also PhD teaching modules for students at the University of Copenhagen.

The second theme, audio-visual data and tools in various media, was presented by Niels Brügger and Per Jauert from Aarhus University. Work on this theme acknowledges that “the digital comes in a variety of forms”, which they sub-divide into:

  • Digitized
  • Born-digital
  • Reborn-digital

The enhanced web archive is an example of the latter, where digital materials have been collected, reassembled and made available with metadata as research data. The focus of this work is on web archives, but it occurred to me that it is a characterization which fits the modern linguistic corpus as well. The team have developed the Digital Footprints software, which is still in beta, but is in use for studying online material. As well as developing ways to examine and to improve access to Netarkivet, the national web archive, researchers are working together in international collaborations, including with the British Library and the Oxford Internet Institute, and establishing a transnational European research infrastructure for the study of archived web materials. The NetLab Forum provides wiki space for research projects using the tools so that they can communicate and share experiences, expertise and results. It was pointed out that DigHumLab has been crucial in providing the funding for an IT developer, without whom this work would not have been possible, and on whom ongoing work is reliant. Another risk to the viability of ongoing work was flagged: independent legal advice is needed on the risks associated with access, use and redistribution of online materials.

The theme also encompasses work on audio-visual data and tools. Following on from research projects such as LARM, a research infrastructure has already been established for the use of audio materials, and the challenge is to integrate it with other DigHumLab services. This work has been built on the national library’s media collections of radio and television programmes. Advanced services already offer streaming access, and ongoing research projects are using these services for research.

Johannes Wagner of the University of Southern Denmark introduced the third theme, the “little brother” of the DigHumLab siblings, focussing on “experiential research”, or the analysis of human interactions and activities via digital capture and analysis. An example is the VELUX project on non-verbal communication. The experience of the researchers in this area is that “if you build it they will come” doesn’t work in this context. Face-to-face, hands-on, bespoke support is needed to engage with researchers and to meet their requirements.

In the discussion with the Advisory Board, Eric Meyer (Oxford Internet Institute) asked the penetrating question of how the success stories of flagship projects are disseminated to other researchers who could potentially engage with DigHumLab. Demonstrators are much more compelling and convincing when they have been used for real research that has been finished and can be shared. Too many e-science case studies have been based on toy data or invented problems, making it difficult for the people who might want to use these tools to envisage real uses, or to deploy the solutions. A variety of instruments are currently used to involve researchers, including travelling workshops, PhD courses, journal articles, lectures, and short courses. The question of how, or whether, to attempt to address all disciplines and all communities in the humanities remains an open one. It was agreed that robust showcases modelled from the user point of view were vital to promote uptake.

The afternoon session focussed on the thorny question of possible business models for the sustainability of DigHumLab beyond its current phase of funding. From 2017 DigHumLab aims to focus on the refinement and improvement of services, including the prioritization of research areas, the marketing of services and the recruitment of users, and the development of a viable financial model for sustainability.

One model would be for DigHumLab to be based on a core of generic services, with research themes changing over time. Eric Meyer offered a cautionary tale: the generic services and service centres developed as part of the e-Social Science programme in the UK no longer exist. I added the further example of the Arts and Humanities Data Service.

There was also some discussion of how to enter into collaborations with computer scientists. It was agreed that it was important not to try to treat computer scientists as “code monkeys”. Computer scientists need to address research questions and to publish in high-impact journals relevant to their discipline. We need to approach collaboration as an inter-disciplinary research project with equal academic standing for all partners. Sometimes we just want to build a website or an interface or install some software, and then we need to find a developer, but this is different to an inter-disciplinary collaboration.

Sten Runar Ludvigsen from the University of Oslo made the interesting point that although distributed services can have a certain robustness, a centralized lab means that you only need to change the culture in one place, not in every lab, to run services for the community in a collaborative spirit, and might therefore be more realistic. He also made the crucial point that, as a small country, the Danish humanities community could benefit from focussing on a small number of areas. Clearly they have already done this with the three themes in the current phase of DigHumLab. It would be useful to have further reflection on whether these are the right areas, and then to communicate clearly to stakeholders how the scope of the project will be constrained in future.

To summarize the day, I proposed the following three points for the project, after discussion with, and in agreement with, the other members of the Advisory Board.

1. DigHumLab should articulate a vision and a mission relating to the use of digital data, tools and methods, situated firmly within the wider mission(s) of humanistic research. A strategic vision should set out what and who will be included, what the priorities are and why, and what is not included. A decision needs to be made on whether it would make sense to focus on a small number of strategic areas, or to try to engage with all areas of the humanities; the former seems likely to be more successful. These statements about vision, mission and scope can be informed by asking where you want to be in ten years’ time. The project is nicely focussed already on specific themes – do you plan to continue to restrict the scope to these, or to expand to other areas of research?

2. A flexible and robust business model needs to be able to survive the withdrawal of a funder, institution, partner, academic domain, key individuals, etc. Staking everything on the support of a ministry or a national funding body is a risky, all-or-nothing strategy. Flexibility means that a range of funders can be accommodated (e.g. national and local funders, programmes for libraries, research data management, research grants, e-science, network/conference funds, etc.). The key to this is that various institutions and people want to buy into and sustain the mission, and are prepared to align the local strategies of sustainable institutions with the common aims. This way, there is the opportunity to repurpose existing resources and funding streams to fulfil the aims of DigHumLab, rather than the more difficult task of seeking additional funding on a long-term basis.

3. It would be useful to clarify and define how DigHumLab supports digital research at the various stages of the research life-cycle (initiating, carrying out, connecting, disseminating and sustaining research). Do you want to be involved in some or all of these? How are you adding value to these activities?

You can see and read more about DigHumLab at http://dighumlab.com/.

Using Large-scale Text Collections for Research

Originally posted at blogs.it.ox.ac.uk on April 10, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

I participated in a recent workshop in Würzburg on using large-scale text collections for research. The workshop was organised as part of the activities of NeDiMAH, the Network for Digital Methods in the Arts and Humanities.

I had the opportunity to give a short introduction on some aspects of my interest in this topic. I outlined how the current problems include the fragmentation of currently available resources into different digital silos, with a variety of barriers to their combination and use, plus a lack of easily available tools for textual analysis of standardized online resources, and I briefly referred to the plans of the CLARIN research infrastructure to address some of these problems.

Christian Thomas explained how the Deutsches Textarchiv (DTA) is facilitating and making possible research with large-scale historical German text collections. The DTA has funding for 2007–15, and now includes resources with more than 200 million words from the period 1600 to 1900. There are images and text, and automatic linguistic analysis is possible. The DTA is a CLARIN-D service centre. Integration in the CLARIN infrastructure means that resources can be discovered via the Virtual Language Observatory (VLO), can be searched via the Federated Content Search (FCS), and can be analysed and processed via WebLicht workflows. The DTA also contributes to discipline-specific working groups as part of its outreach and dissemination strategy. The majority of texts are keyed in (see more at http://www.deutschestextarchiv.de/dtaq/stat/digimethod). The workflow for OCR texts is interesting – structural markup is added to the electronic text (using a subset of TEI P5), and then OCR errors are corrected. They find that it is easier to identify and correct errors in structured text. The Cascaded Analysis Broker provides a normalization of historical forms to allow for orthography-independent and lemma-based corpus searches, and this is integrated into the DTAQ quality assurance platform. Christian’s slides can be found here.

The DTA is also a key partner in the Digitales Wörterbuch der deutschen Sprache (DWDS), an excellent concept allowing cross-searching of resources in different centres, and very well implemented. This offers a view of the future of corpus linguistics and the study of historical texts online.

Jan Rybicki from the Jagiellonian University in Kraków told us about a benchmark English corpus to compare the success or failure of stylometric tools. There was a very interesting discussion of the idea of how to build representative and comparable literary corpora, which put me in mind of the work of Gideon Toury in descriptive translation studies. There was also discussion of a possible project to build comparable benchmark corpora for multiple European literary traditions.

René van Stipriaan (Nederlab) outlined the background of how the study of history in the Netherlands is characterised by a fragmented environment of improvised resources. The Nederlab project will be funded by the NWO for 2013–17 to address the integration of historical textual resources for research. Some very interesting statistics were presented: for the period to the end of the twentieth century there are 500 million surviving pages printed in Dutch, and 70 million of these are digitized, but only 5–10 million have good quality text – most are rather poor quality OCR. Nederlab brings together linguists, literary scholars and historians, and integrated access to resources will go online in the summer of 2015.

Allen Riddell from Dartmouth College in the US took an interesting and highly principled approach to building a representative literary corpus. He randomly selected works from bibliographic indexes, then went out and found the works and scanned them if necessary. This seems to me to be a positive step, in contrast to the usual rather more opportunistic approach of basing the corpus composition on the more easily available texts. The approach to correcting the OCR text was also innovative and interesting – he used Amazon Mechanical Turk. Allen also referred to a paper on this topic at http://journal.code4lib.org/articles/6004.
This also raised an interesting question – can a randomly selected corpus be representative, or do we need more manual intervention in selection (at the risk of personal bias)?

Tom van Nuenen from Tilburg University described how he scraped professional travel blogs from a Dutch site and started to analyse the language. Puck Wildschut from Radboud University Nijmegen described the early stages of her PhD work on comparing Nabokov novels using a mixture of corpus and cognitive stylistic approaches.

The discussion at the end of the first day focussed on an interesting and important question: how do we make corpus-building more professional? Reusability was seen to be key, and dependent on making sure that data is released in an orderly way, with clear documentation, and under a licence allowing reuse. And since what we are increasingly dealing with is large collections of entire texts (rather than the sampled and truncated smaller corpora of the past), we should ensure that the texts that make up corpora are themselves reusable, so that others can take them to make different ad hoc corpora. This requires metadata at the level of the individual texts, and would be enhanced by the standardization of textual formats.

Maciej Eder from the Institute of Polish Studies at the Pedagogical University of Kraków introduced and demonstrated Stylo, a tool for stylometric analysis of texts. In this presentation, and one on the following day, I found some of the assumptions underlying stylometric research difficult to reconcile with what I think of as interesting and valid research questions in the humanities. How many literary scholars are comfortable with notions that the frequencies of word tokens, and the co-occurrence of these tokens give an insight into style? And the conclusion of a stylometric study always seems to be about testing and refining the methods. Conclusions like “stylometric methods are too sensitive to be applied to any big dataset” don’t actually engage with anyone outside of stylometry. Until someone comes up with a conclusion more relevant to textual studies, this is likely to remain a marginal activity, but maybe I’ve missed the point.

The focus on looking for and trying to prove the differences between the writing of men and women also strikes me as a little odd, and certainly contentious. Why prioritize this particular aspect of variation in the writers? Why try to essentialize the differences between men and women, and why not other factors? I’d be more interested in an approach which identified stylistic differences and then tried to find what the relevant variables might be, rather than an initial starting point assuming that men and women write differently, and trying to “prove” that by looking for differences.

On the second day of the workshop, Florentina Armaselu from the Centre Virtuel de la Connaissance de l’Europe (CVCE) described how they are making TEI versions of official documents on EU integration for research use. I suggested that there might be interesting connections with the Talk of Europe project, which will be seeking to connect together datasets of this type for research use with language technologies and tools.

Karina van Dalen-Oskam from the Huygens Institute in the Netherlands, one of the workshop organisers, introduced the project entitled The Riddle of Literary Quality, which is investigating whether literariness can be identified in distributions of surface linguistic features. The current phase is focussing on lexical and syntactic features which can be identified automatically, although a later phase might investigate harder-to-identify stylistic features, such as speech presentation. In the discussion Maciej Eder suggested that the traces of literariness might reside not in absolute or relative frequencies of features, but in variation from norms (either up or down).

Gordan Ravancic (Institute of History in Zagreb) joined us via Skype to introduce his project on crime records in Dubrovnik, “Town in Croatian Middle Ages”, which was fascinating, although not clearly linked to the topic of the workshop, as far as I could tell.

Some interesting notions and terminological distinctions were raised in discussions. Maciej Eder suggested that “big data” in textual studies is data where the files can’t be downloaded, examined or verified in any systematic way. This seems like a useful definition, and it immediately raised questions in the following talk. Emma Clarke from Trinity College Dublin presented work on topic modelling. This approach to distant reading can only be used on a corpus that can be downloaded, normalized and categorized, and would be difficult to use on the type of big data defined by Eder, although it could potentially be used as a discovery tool to explore indeterminate datasets. Christof Schöch from the Computerphilologie group in Würzburg differentiated “smart data” from “big data”, and suggested that smart data is what we mostly want to be working with. Smart data is cleaned up and normalized to a certain extent, and is of known provenance, quality and extent.

The workshop concluded with discussions about potential outcomes of this and a previous NeDiMAH workshop. A possible stylometry project to build benchmark text collections and to promote the use of stylometric tools for genre analysis and attribution was outlined, with perhaps the ultimate goal of an ambitious European atlas of the history of the style of fiction. We also discussed the possible publication of a companion to the creation and use of large-scale text collections.

Read more about the workshop on the NeDiMAH webpages at http://www.nedimah.eu/call-for-papers/using-large-scale-text-collections-research-workshop-university-wurzburg-1st-and-2nd

Text encoding, text collections, and the potential to transform the Humanities

Originally posted at blogs.it.ox.ac.uk on November 6, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a transcript of a contribution to a panel discussion at the TEI Members’ Meeting in Rome in October 2013 on the topic ‘How could the TEI community benefit from TEI-specific query solutions? What should they look like?’

I think that there is a problem that too many of the people working on text encoding and tools for querying encoded texts are contributing to a proliferation of different complex platforms, each of which the user has to adopt before they can submit the simplest query, and which are all mutually incompatible. We’re just building digital silos, and getting further away from a solution to some key problems.

There is potential for transformation in the way that we do research in the humanities. Recent discussions about distant reading, and combining it with close and scalable reading, revolve around how we can exploit the marvellous opportunity with which we are presented today to ask new questions in the study of languages, literature, history, and in other disciplines. I’d like to get to a situation where we can ask new and big questions, and I’d like us to be in a position to accumulate knowledge about language, and from texts, by investigating more features, more genres, more languages and so on. I want us to lower the barriers to digital research so that more people can do it, and so that more outputs can be compared and connected. I don’t want to see more diversity: more alternatives to the TEI, more frameworks, more annotation schemes. Building new tools is a computer science project, and has no place in the humanities. There are three barriers to progress: (i) “not invented here”, (ii) reinventing the wheel, and (iii) the search for the perfect metalanguage.

Converting texts to a common format has been suggested, but it is not an option when it comes to exploiting big data. The texts that we want to query live on different parts of the network. Persuading data repositories (including publishers, Google, Amazon) to provide TEI XML output is feasible, and would be a step forward. Converting all texts to formats optimized for linguistic search is impossible.

Now is the time to act, and action is overdue. Since I first heard about the TEI more than 20 years ago, I have thought, “that’s a good idea – where are the tools?”. I could still ask the same question. There are now some tools for editing and for operations on individual small documents, but where are the tools that can easily be used or deployed for cross-searching collections of texts and corpora?

The vision for new forms of digital research requires not only tools but interoperable resources. Linguists say we can’t agree “what is a phrase”. Well then you can’t have interoperable resources for the study of grammar, or resources which make use of grammatical analysis as a basis for analysis at other levels. Now, I have to admit that I have spent quite a lot of my time in my career up until now arguing with computer scientists that we don’t have agreed basic concepts, and this is the correct state of affairs because humanistic research is basically about discussing and problematizing the way that we conceptualize and discuss things. If we can’t agree on representation and categorization of linguistic features, then there isn’t much we can do by way of digital scholarship. There is more to be gained from accepting imperfect models than there is to gain from trying to perfect them.

The TEI (and probably ISOcat) offers us the basic technical preconditions for moving forward, for creating and sharing interoperable resources. Let’s take the opportunity and develop a culture of sharing tools, resources, categorizations and methods. Here is a challenge. At the OTA we have all of the texts from ECCO-TCP which are in the public domain freely available at persistent URLs in TEI P5. I’m happy to make the British National Corpus (BNC) available in this way as well, although it is interesting that no-one has ever asked us to do this. I have a challenge for the participants here and for the community – the texts are out there, so please deploy tools to search them as reliable and persistent services. I’ll come back next year and see what’s available!

There are different research agendas: problematizing the notion of the phrase, or the book, are perfectly legitimate research questions, and things that theoreticians might quite legitimately decide to do with their time, but they should not be raised as barriers to developing digital tools. We need to decide our priorities, and I think that more resources should be devoted to exploiting the current opportunity for posing new large-scale research questions, rather than re-posing fundamental questions about categorizations and models.

We can see what the average researcher wants. We can see a multitude of relevant use cases. It is perfectly possible to examine published research in the humanities for statements and questions which are susceptible to empirical study in digital text collections. If we concentrate our efforts on making available for cross-searching all the digital texts which we can lay our hands on, and the tools to query them and analyse the results, then there are research topics for PhDs in many literary, linguistic and historical disciplines for the next 20 years or so, and it can reinvigorate the humanities. I’m excited by that prospect. We are on the cusp of making it possible, and we can be the people who do it.

Places in Literature

Originally posted at blogs.it.ox.ac.uk on July 29, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

I note with interest that there are still attempts to kick-start the effective use of geo-spatial technologies and methods in the study of literature. Some years ago, I attended a very interesting workshop at the University of Nottingham on ‘Places in Literature’. The event brought together researchers from various fields to investigate the feasibility of an interdisciplinary project to use geospatial technology to enhance literary research. The event featured presentations, discussion and some hands-on encoding and analysis, and was held in the Centre for Geospatial Science at the University of Nottingham (in a brand new building on the site of the old Raleigh bike factory). Most of the participants were from Nottingham, with a couple of us from Oxford, and two people from Glasgow. The Nottingham participants included specialists in geospatial science, English literature, English language, place names, cultural geography and history, and computational linguistics.

It was an excellent opportunity for us to push forward our understanding of the challenges and barriers to developing useful applications which could be used in research. The findings included the following:

  • there is an unbounded number of ways in which narrative texts refer to places; probably most of the time the reference to place in a text is not a reference to a stage in a journey, or a description of the location of an event (e.g. “You’re not in London now!”);
  • references are often non-explicit – Heart of Darkness by Conrad does not say where the action is taking place, although the reader is likely to infer London, Brussels and the Congo;
  • places in fictional worlds may or may not relate in a reliable way to geography in the real world;
  • all texts are historical; mapping places in texts is never the same as mapping in contemporary real-world applications;
  • place names are subject to variation in languages, spelling, change over time, movement of borders, etc.;
  • variations in granularity, and fuzziness, are inherent in place name references in texts;
  • existing geo-coders appear not to be very good at recognizing place names in literary texts (the poverty of named entity recognition);
  • existing geo-coders appear not to be very good at reliably assigning locations to place names (due to the ambiguity of place name references – Lancaster in Lancashire or Lancaster in Pennsylvania?).

It was my conclusion that, with the current state of the art, it was not easy, and probably not possible, for literary scholars to use geoparsing and mapping tools to improve their research or to ask new research questions.

I concluded that future work to improve this situation might involve:

  • applying state of the art named entity recognition software to texts in order to more reliably identify place name references in texts;
  • investigating heuristics to improve the geocoding (or the ranking of possible hits) of place names;
  • producing tools that allow users to investigate, correct and examine the outputs at each stage, as these outputs are likely to require human intervention to improve accuracy to an acceptable level, and because these outputs are likely to be interesting to various types of research;
  • investigating means of applying the techniques to large text collections, or combining them with web searches; developing a focussed, in-depth research topic to make use of these tools in respect of a specific set of texts;
  • combining geocoding tools with historical place name gazetteers and maps;
  • allowing geospatial information to be combined with linguistic information in the text, e.g. locations linked to concordances and collocations of textual references to the location.
  • embedding the relevant tools in language resources infrastructure such as CLARIN, so that the geographical tools can be combined with tailored NLP tools.

Current exemplars which I have seen still use very specific text types based on travel itineraries, such as travel writing and guide books. I still haven’t seen an example which has effectively supported research in narrative fiction with automated or semi-automated geographical analysis, but I would be happy to be proved wrong!
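As a small illustration of the first recommendation above, off-the-shelf named entity recognition can at least produce a candidate list of place references for a scholar to inspect and correct, even if it falls well short of reliable literary geoparsing. The following is a hedged sketch, assuming spaCy and its small English model are installed; it is not one of the tools evaluated at the workshop:

```python
import spacy

# assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def candidate_places(text):
    """Return entity spans tagged as geopolitical entities or locations."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents if ent.label_ in ("GPE", "LOC")]

print(candidate_places("The Nellie, a cruising yawl, swung to her anchor without a flutter of the sails, "
                       "and was at rest. The sea-reach of the Thames stretched before us."))
```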

Silos or fishtanks?

Originally posted at blogs.it.ox.ac.uk on April 6, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a partial summary of a presentation given at the Interedition Symposium in the Hague in March 2012 on the topic of Scholarly Digital Editions, Tools and Infrastructure.

People often talk about digital silos in the context of digital resources in the humanities. The problem is that resources, although valuable in themselves, are scattered across different locations on the web, where they might be difficult to find; they all have their own individual interfaces and registration procedures, and they are not connected with similar or related resources. So you can’t easily search the Old English Corpus (available either for download with no software from the OTA, or online via numerous university library portals for local users only). Some resources, like the ARCHER corpus, you can’t access at all unless you’re friends with someone at the University of Manchester.

Silo image from Doc Searls (dsearls)

This is clearly far from ideal. But what alternative, more connected, architectures are most appropriate to achieving interoperability and sustainability of the arena of digital textual scholarship? The emergence of fast and high capacity networks, a deluge of data, and web service APIs mean that it is increasingly possible to imagine and build distributed architectures for scholarly services, where data, tools, computing resources, and the outputs of annotation and analysis live in different parts of the network but can be brought together virtually in the user’s desktop environment. The current concerns about ‘digital silos’, in which the outputs of digital humanities projects are deployed online unconnected to other resources, and with limited sustainability, are directly addressed by this vision.

I want to put forward the argument for distributed architectures, while reviewing some of the risks and problems, and to survey some current moves towards such an infrastructure. I also want to suggest another metaphor as an alternative to the ‘silo’.

An open and fully distributed architecture where the resources are located in different places can have the advantages of allowing the following services to be created:

  • potentially unlimited functionality, since developers can deploy content and tools that they want to use, and which can interoperate with other data, tools and infrastructure services;
  • building ad hoc collections and corpora across different repositories;
  • complex workflows, for example piping together web services from different locations (see the sketch after this list);
  • protected resources (e.g. works in copyright, sensitive data) curated in situ yet still analysed online via web applications which access the data via a secure infrastructure.
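As a concrete illustration of the ‘complex workflows’ point, here is a minimal sketch of piping two web services together: a tokenizer hosted at one site feeding a part-of-speech tagger hosted at another. Both endpoint URLs and their JSON contracts are invented for illustration; the point is the shape of the workflow, not any particular service.

```python
# A minimal sketch of chaining two hypothetical web services:
# a remote tokenizer feeds a remote part-of-speech tagger.
import requests

TOKENIZER_URL = "https://repository-a.example.org/api/tokenize"  # hypothetical endpoint
TAGGER_URL = "https://nlp-centre-b.example.org/api/pos-tag"      # hypothetical endpoint

def annotate(text):
    """Send raw text to a remote tokenizer, then pipe its output to a remote tagger."""
    tokens = requests.post(TOKENIZER_URL, json={"text": text}, timeout=30).json()
    tagged = requests.post(TAGGER_URL, json={"tokens": tokens}, timeout=30).json()
    return tagged

if __name__ == "__main__":
    print(annotate("It is dangerous for you fully to trust those by whom you are not fully trusted."))
```

The user’s environment only needs to know the contract between the services; the data and the processing can live anywhere on the network.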

All of this can happen in a situation with a better division of labour than we typically have now: the repositories don’t have to worry about tools; tool and content developers don’t have to worry about creating entire online environments; tool developers don’t have to worry about data management; users don’t have to install software; and so on. The emergence of an ‘ecosystem’ with numerous actors providing content, tools, computing resources and other infrastructure services provides flexibility, resilience and a potential for sustainability which is not possible for a single-site or other more closed or monolithic system.

So let’s consider the unconnected, problematic online resource as a fishtank rather than a silo.

Goldfish image from Praveen Gupta (praveengupta)

There are lots of fishtanks out there, and they can be very large, elaborate, pretty, sophisticated, long-standing and sustainable. But they’re all in different places and they are not connected with each other. If you want to see a variety of fish, you have to visit a lot of houses, try to negotiate access to their fishtanks, and make use of whatever facilities they have for viewing or otherwise analysing the fish. Some places are better than others to visit – aquariums might have very good facilities and lots of information, but you still can’t view the fish in one aquarium alongside the fish in another, and it’s hard to compare them.

And if I want to keep a fish I have to build and maintain a fishtank, or I could find someone else’s fishtank to put it in, but then it’s difficult for me to get access and to control the environment. And who’s going to carry on feeding the fish? We can probably agree that it’s better if we don’t all try to make and look after our own fishtanks, at least not if our main goals are to enable as many people as possible to get into looking after, breeding and sharing fish, and to see a wide variety of fish. Wouldn’t it be better to have an ecosystem where we can all set our fishes free to swim together?

Marine Ecosystem image from www.sciencelearn.org.nz

This way, everyone can access all of the riches of the deep and it’s a lot easier to get into fish research.

Of course, ecosystems can be dangerous places, with predators and diseases, and they can be fragile. You could also argue that what fishkeepers really want is the experience of nurturing their own fish, and the enjoyment of setting up and maintaining their own micro-infrastructure, and therefore fishtanks are the best solution. But there are limits to the applicability and relevance of any metaphor.

There are potential disadvantages to distributed infrastructures, and many of them relate to the additional complexity that they introduce into the access and identity management arrangements. Arranging access to services in one location can be hard enough, but authorization to use, for example, textual data in more than one repository might require passing information between institutions. It is also the case that while there are reasonably well-established technologies, procedures and agreements for controlling access to online content, the authorization of web services is not such a well-established area. Furthermore, authorization to access online content cannot easily be extended to authorize access to the computer processing power that is necessary to carry out an online textual analysis, if this is being provided by another centre in the distributed infrastructure. In summary, the fact that distributed services rely on cross-institutional agreements and arrangements adds an extra hurdle to participation, as data provider or user, and adds complexity and further risk to the robustness of services.

Other potential disadvantages of distributed infrastructures include:

  • Registering persistent identifiers with a shared service becomes desirable to sustain the interoperability of content and applications, thus adding another level of complexity to the curation of the data;
  • Monitoring of usage is difficult, since operations are being carried out on remote servers not under the control of the repository;
  • Monitoring of the availability of services is difficult – it might be possible to test the status of individual components but not a complex workflow;
  • Although underlying interoperability is essential, there is no impetus towards consistency in user interfaces, and even a tendency towards heterogeneity, and therefore fragmentation of services is likely to be maintained or even made worse;
  • Various further questions also remain (at least partially) unanswered in many cases, relating to where and how the computer processing is carried out, and how usage and services are monitored and logged.

We also need agreement at some level about our categories, formats and concepts. To get to the promised land, we need to agree on some standards. Linking datasets requires interoperability at the levels of the linguistic representations, annotations and metadata. Visualization of large datasets requires a reduction of variables, and deciding what is important and what is not. There is a tendency in the humanities for everyone to think that their way of looking at things and of categorizing things is unique. Annotations do sometimes embody the unique intellectual work of identification, categorization and interpretation of phenomena, and these are vital operations in the humanities, so it is not a surprise that this is problematic.
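To illustrate what agreement at the level of annotations might mean in practice, here is a minimal sketch of a standoff annotation record expressed in a shared, machine-readable shape, so that an annotation produced at one site can be linked to a text held at another. The field names, the document identifier and the creator identifier are invented for illustration and do not correspond to any particular standard.

```python
# A minimal sketch of a standoff annotation record in a shared shape.
# All field names and identifiers are hypothetical, for illustration only.
import json

annotation = {
    "target": {
        "document": "http://repository.example.org/texts/example-letter-001",  # hypothetical persistent identifier
        "start": 112,   # character offset into the canonical text
        "end": 131,
    },
    "layer": "rhetorical-figure",
    "value": "alliteration",
    "creator": "https://orcid.org/0000-0000-0000-0000",  # placeholder researcher identifier
}

print(json.dumps(annotation, indent=2))
```

The interesting (and difficult) agreements are not in the syntax but in the shared reference points: stable identifiers for the texts, agreed character offsets, and at least partially shared category labels.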

Another problem is that building infrastructure takes time and involves addressing complex and difficult administrative, legal, financial, political and technical barriers, often by making international agreements. So, usually, it’s easier to make ad hoc work-arounds. And building tools can be more attractive and rewarding. But actually, it’s a false opposition – enhanced infrastructure should help with tool development and deployment. An infrastructure should provide a range of simple solutions for connecting together data and tools, deploying them as reliable services, managing authentication and authorization, licensing, access to computing power, monitoring availability, connection to virtual research environments, and so on.

The mistake would be to try to build the perfect all-purpose tool, or to claim to provide services for end-users which solve all of the infrastructure issues. Or to put it another way, building the biggest and best fishtank in the world doesn’t solve the problem, because you can’t get all the fish in the world in there, or allow everyone access to view every kind of configuration and interaction. But all too often this is what people try to do, rather than contributing a part of a wider, distributed system. Understandably, people are impatient, and our efforts and resources go into building new fishtanks, which can be fun to make, and which look good when people come to visit.

What are the Digital Humanities?

Originally posted at blogs.it.ox.ac.uk on March 30, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The Day of Digital Humanities on 27th March this year has provoked numerous conversations about the nature of Digital Humanities (DH). Some believe DH is a discipline or community, with its own methods, resources, communities of practice, journals, standards of evidence, etc.

Others prefer simply to use the term as a way of looking at activity across a number of humanities-related disciplines which has a significant digital component, and while it is useful to trace connections in terms of methods, resources and tools, it is preferable for digital research in the humanities to live within the historic academic disciplines. It could be argued, for example, that the work of ‘digital classicists’ should be primarily related to addressing research questions in the mainstream of classics (or relevant sub-discipline), not primarily focussed on interacting with an interdisciplinary ‘digital humanities’.

But this is simplistic: digital research can be transformative, allowing new research questions to be formulated and posed, thus transforming existing communities. DH can enable new forms of inter-disciplinary research. Geographical Information Systems (GIS), together with large historical datasets in digital form, can allow visualizations of spatial data in ways that allow new questions to be asked in, for example, economic history, literature, history of science, linguistics, toponymy, climate studies, and so on. New points of contact between these disciplines are created, and also with scientists, social scientists, engineers and technologists in the geographical sciences.

Where are the Digital Humanities?

Digital research in the humanities takes place in a variety of institutional frameworks, from isolated individuals in otherwise non-digital faculties to large specialist centres. There are 22 member organisations in the ‘Network of Expert Centres in the Digital Humanities in Britain and Ireland’, but there is no common template. To give a few partial examples:

  • The Oxford e-Research Centre has a strong DH team and project portfolio, but is not exclusively humanities-focussed, by any means, and the vast majority of DH activity in the university is outside of this department;
  • CRASSH at Cambridge is focussed on the arts and humanities, but is not exclusively digital;
  • The Department of Digital Humanities at KCL is an academic faculty which comes out of a merger of centres and groups who focussed on infrastructure, teaching, and technical development work on research projects;
  • The Institute for Historical Research offers a wide range of facilities and services which assist the researching, teaching, writing and dissemination of history, not all of them digital;
  • The Archaeology Data Service runs a data repository and associated services to support research, learning and teaching in Archaeology.

In fact, while there are strong overlaps in activities and organizational forms between many of the centres, there is no easily discernible common factor which is true for all centres.

This network of ‘centres’ risks failing to connect with the large number and wide range of academics engaged with digital research in the humanities who are not associated with one of these centres. The problem is writ larger at the international scale with the wider centerNet network. The answer is not necessarily to create and connect more ‘centres/centers’ to encompass the wide range of activity currently outside of them. There is no consensus on what a centre should do and how it should fit into an institution, and the very existence of a centre risks detaching practitioners of digital research from the mainstream of their disciplines.

DH@OX aims to provide a view of the wide range of DH activity across the University, and to support this activity in various ways, including facilitating communication and collaboration between researchers, and building better infrastructure and support services, but without imposing any particular boundaries, organisational models or definitions on the ‘digital humanities’.

It remains to be seen which approaches will prove most fruitful in the long term. The Day of Digital Humanities is likely to be a recurrent catalyst for ongoing reflections and discussions for many years to come.

Summit meeting of Digital Humanities Centres

Originally posted at blogs.oucs.ox.ac.uk on 20 July 2010 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

centerNet had their first international summit at King’s College London on the 3rd and 4th July 2010. The summit was supported by the NEH and organized by Neil Fraistat and Kay Walter. The summit was a chance for directors of centers and funders to talk to each other, to develop collaborations, and to form regional groups.

I was invited as the initiator of CHAIN, as well as wearing my hats as a member of the CLARIN Executive Board, a member of the steering committee of the Network of Expert Centres in Britain and Ireland, and a representative of Oxford University, along with David Robey.

For an overview of the proceedings, I recommend Geoffrey Rockwell’s blog:

http://www.philosophi.ca/pmwiki.php/Main/CenterNet2010

I will focus here on the elements of relevance to Oxford.

I am pleased to say that we are involved in many of the most important initiatives: CLARIN, DARIAH, CHAIN, Bamboo, Network of Centres, centerNet; we certainly seem to be involved in more things than anyone else!

The regional breakout group for Britain and Ireland discussed recommendations to funders. We identified a barrier to collaboration in that institutions are in competition with each other for funding, and we discussed how this could be addressed by financial incentives for collaboration. There are funds for regional collaboration in the devolved countries (e.g. Wales, Scotland) which have produced useful results. One way to foster cooperation would be to give more incentives to share resources and services.

Funders insisting on sustainability plans involving institutional buy-in and embedding (as JISC do, for example) can help to improve institutional policies and develop capacity. Funders could also help with the promotion of infrastructure and standards: they could give a big boost to (bottom-up) initiatives that promote collaboration and cooperation by using grant conditions and recommendations, at least suggesting these as ways to promote re-use and linking of data, and thus obtain impact and value for money. There would be no cost to funders in doing this. But not all institutions can build a DH centre, or a comprehensive institutional repository, or other services. What is the incentive for big centres to collaborate with small ones? What could be a business model for institutions with well-developed infrastructure to support others?

The AHRC have said that they won’t fund or get involved with infrastructure, so there seems little to discuss with them, unless we can suggest small and cheap things to make an impact. Networks and workshops can be useful, but current schemes are directed at new initiatives, and are short term. Short term funding doesn’t help to sustain the outputs of these activities.

It was strongly felt that we, the researchers, should provide evidence of value in terms of improved or transformed research and teaching and other impacts, via “compelling case studies”. And we felt that the current impact agenda, for all of its faults, could be an opportunity, because it may be a route to rewarding reusability and sharing of resources.

Discussions about the mission, structure and business model for centerNet foundered a little on the notion of ‘center’. I argued that it was not necessarily desirable for an institution to organize itself with a digital humanities centre, but rather that computing in the Humanities could be promoted and supported by other means. Furthermore, the promotion of centres, and the promotion of the ‘discipline’ of Digital Humanities, risk ghettoization and a reduced relevance of digital activities to the mainstream of research in the various disciplines. It seems that the experience and outlook of the University of Queensland, at least, is in line with ours.

Invited speaker Jon Orwant from Google tried to be controversial, and succeeded with the provocative assertion that funders should only promote bottom-up initiatives. I pointed out (the “good question” cited in Geoffrey’s blog!) that we have decades of experience of bottom-up creation of tools and data, which has resulted in fragmentation, with a variety of standards, data formats and licensing arrangements, and that this is currently the biggest barrier to progress. So the provision of some infrastructure, or at least the promotion of some shared policies and standards, is the key challenge today. Although I would agree that this should be done in as light-weight a manner as possible, so as not to thwart innovation and bottom-up initiatives.

In fact, successful infrastructure initiatives, such as CLARIN, are bottom-up in the sense that the researchers and technologists identified the problem of fragmentation and went to the funders asking for money to build research infrastructure.

In summary, I believe that centerNet is a very useful vehicle for us here in Oxford as a way to connect with numerous centres, communities, regions and funders. In particular, our ongoing involvement can play an important role in:

  • linking our services and resources to users;
  • building collaborative projects;
  • dissemination of our research and other activities;
  • advocacy for digital humanities to funders and politicians and other bodies;
  • international expansion of research communities and collaborations.

To get a visual flavour of the proceedings, you can see John Unsworth’s photos.

The centerNet website is at:

http://digitalhumanities.org/centernet/

And the new beta site:

http://digitalhumanities.org/centernet_new/ [visited July 2010]