Advising DigHumLab

Originally posted at blogs.it.ox.ac.uk on September 30, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

University of Oxford researchers from IT Services and the Oxford Internet Institute are playing a key role in advising an important national project in Denmark, and learning a lot about different ways of building and sustaining research infrastructure along the way.

DigHumLab is a national initiative in Denmark to set up a collaboration to advance digital research in the arts and humanities. Starting in 2010 with the drawing up of a roadmap of research infrastructures for Denmark, DigHumLab was awarded €4 million for five years in 2011. DigHumLab encompasses the Danish contribution to the CLARIN and DARIAH European research infrastructures. I was asked to join the small international Advisory Board for the project, and to attend a mid-term meeting in Copenhagen in September 2014, to offer advice to the project.

The vision for DigHumLab is to take actions to strengthen research in the humanities and humanistic social sciences, to improve access to data, develop methods and tools, promote collaboration and support emerging areas of digital research. The project goals are to:

  • create a virtual portal, access point, and potential partner for international collaborations
  • create a knowledge hub
  • become a provider of software and technical solutions
  • act as a national political advisor on matters relating to digital research in the humanities.

As well as activities to establish these outputs and services, the project includes a significant amount of effort spent on three research themes:

  1. language resources and technologies
  2. media tools
  3. interaction and design

The project has kicked off with participation from four Danish universities, but the intention is not to create a club closed to other universities or research bodies. The project also aims to build links and to coordinate activities with national services for high performance computing, research data management, e-Science, as well as with the National Library and the European research infrastructures. It wasn’t possible in this meeting to find out what measures are being taken to achieve these goals, but it was encouraging that the meeting was hosted by the National Library, in their impressive modernist ‘Black Diamond’ building, with participation by senior staff from the library.

The Black Diamond building housing the Danish Royal Library

After spending an initial period establishing the working groups and themes, the project is now moving into a period with a focus on building generic services such as online research environments, awareness raising, a survey of requirements, outreach activities to various research communities, establishing teaching programmes and increased student involvement.

The first theme, language resources and tools, was presented by Lene Offersgard from the University of Copenhagen, who outlined the key activities, including the establishment of a data repository, now certified with the Data Seal of Approval and CLARIN ‘B’ Centre status, with an accompanying helpdesk, tools for the analysis and annotation of data, and a user engagement programme. There are also PhD teaching modules for students at the University of Copenhagen.

The second theme, audio-visual data and tools in various media, was presented by Niels Brügger and Per Jauert from Aarhus University. Work on this theme acknowledges that “the digital comes in a variety of forms”, which they sub-divide into:

  • Digitized
  • Born-digital
  • Reborn-digital

The enhanced web archive is an example of the latter, where digital materials have been collected, reassembled and made available with metadata as research data. The focus of this work is on web archives, but it occurred to me that it is a characterization which fits the modern linguistic corpus as well. The team have developed the Digital Footprints software, which is still in beta, but is in use for studying online material. As well as developing ways to examine and to improve access to Netarkivet, the national web archive, researchers are working together in international collaborations, including with the British Library and the Oxford Internet Institute, and are establishing a transnational European research infrastructure for the study of archived web materials. The NetLab Forum provides wiki space for research projects using the tools so that they can communicate and share experiences, expertise and results. It was pointed out that DigHumLab has been crucial in providing the funding for an IT developer, without whom this work would not have been possible, and on whom ongoing work is reliant. Another risk to the viability of ongoing work was flagged – independent legal advice is needed on the risks associated with access, use and redistribution of online materials.

The theme also encompasses work on audio-visual data and tools. Following on from research projects such as LARM, a research infrastructure has already been established for the use of audio materials, and the challenge is to integrate it with other DigHumLab services. This work has been built on the national library media collections of radio and television programmes. Advanced services already offer streaming access, and ongoing research projects are using these services for research.

Johannes Wagner of University of Southern Denmark introduced the third theme, the “little brother” of the DigHumLab siblings, focussing on “experiential research”, or analysis of human interactions and activities via digital capture and analysis. An example is the VELUX project on non-verbal communication. The experience of the researchers in this area is that “if you build it they will come” doesn’t work in this context. Face-to-face and hands-on bespoke support are needed to engage with researchers and to meet their requirements.

In the discussion with the Advisory Board, Eric Meyer (Oxford Internet Institute) asked the penetrating question of how the success stories of flagship projects are disseminated to other researchers who could potentially engage with DigHumLab. Demonstrators are much more compelling and convincing when they have been used for real research that has been finished and can be shared. Too many e-science case studies have been based on toy data or invented problems, making it difficult for the people who might want to use these tools to envisage real uses, or to deploy the solutions. A variety of instruments are currently used to involve researchers, including travelling workshops, PhD courses, journal articles, lectures, and short courses. The question of how, or whether, to attempt to address all disciplines and all communities in the humanities remains an open one. It was agreed that robust showcases modelled from the user point of view were vital to promote uptake.

The afternoon session focussed on the thorny question of possible business models for the sustainability of DigHumLab beyond its current phase of funding. From 2017 DigHumLab aims to focus on the refinement and improvement of services, including prioritization of research areas, marketing of services and the recruitment of users, and the development of a viable financial model for sustainability.

One model would be for DigHumLab to be based on a core of generic services, with research themes changing over time. Eric Meyer offered a cautionary tale: the generic services and service centres developed as part of the e-Social Science programme in the UK no longer exist. I added the further example of the Arts and Humanities Data Service.

There was also some discussion of how to enter into collaborations with computer scientists. It was agreed that it was important not to try to treat computer scientists as “code monkeys”. Computer scientists need to address research questions and to publish in high-impact journals relevant to their discipline. We need to approach collaboration as an inter-disciplinary research project, with equal academic standing for all partners. Sometimes we just want to build a website or an interface or install some software, and then we need to find a developer, but this is different to an inter-disciplinary collaboration.

Sten Runar Ludvigsen from the University of Oslo made the interesting point that although distributed services can have a certain robustness, a centralized lab means that you only need to change the culture in one place, not in every lab, to run services for the community in a collaborative spirit, and might therefore be more realistic. He also made the crucial point that, in a small country like Denmark, the humanities community could benefit from focussing on a small number of areas. Clearly they have already done this with the three themes in the current phase of DigHumLab. It would be useful to have further reflection on whether these are the right areas, and then to communicate clearly to stakeholders how the scope of the project will be constrained in future.

To summarize the day, I proposed the following three points for the project, after discussion with, and in agreement with, the other members of the Advisory Board.

1. DigHumLab should articulate a vision and a mission relating to the use of digital data, tools and methods, situated firmly within the wider mission(s) of humanistic research. A strategic vision should set out what and who are included, what the priorities are and why, and what is not included. A decision needs to be made on whether it would make sense to focus on a small number of strategic areas, or to try to engage with all areas of the humanities, and the former seems likely to be more successful. These statements about vision, mission and scope can be informed by asking where you want to be in ten years’ time. The project is nicely focussed already on specific themes – do you plan to continue to restrict the scope to these, or to expand to other areas of research?

2. A flexible and robust business model needs to be able to survive the withdrawal of a funder, institution, partner, academic domain, key individuals, etc. Staking everything on the support of a ministry or a national funding body is a risky, all-or-nothing strategy. Flexibility means that a range of funders can be accommodated (e.g. national and local funders, programmes for libraries, research data management, research grants, e-science, network/conference funds, etc.). The key to this is that various institutions and people want to buy into and sustain the mission, and are prepared to align the local strategies of sustainable institutions with the common aims. This way, there is the opportunity to repurpose existing resources and funding streams to fulfil the aims of DigHumLab, rather than the more difficult task of seeking additional funding on a long-term basis.

3. It would be useful to clarify and define how DigHumLab supports digital research at the various stages of the research life-cycle (initiating, carrying out, connecting, disseminating and sustaining research). Do you want to be involved in some or all of these? How are you adding value to these activities?

You can see and read more about DigHumLab at http://dighumlab.com/.

Using Large-scale Text Collections for Research

Originally posted at blogs.it.ox.ac.uk on April 10, 2014 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

I participated in a recent workshop in Würzburg on using large-scale text collections for research. The workshop was organised as part of the activities of NeDiMAH, the Network of Digital Methods in the Arts and Humanities.

I had the opportunity to give a short introduction on some aspects of my interest in this topic. I outlined how the current problems include the fragmentation of currently available resources in different digital silos, with a variety of barriers to their combination and use, plus a lack of easily available tools for textual analysis of standardized online resources, and I briefly referred to the plans of the CLARIN research infrastructure to address some of these problems.

Christian Thomas explained how the Deutsches Textarchiv (DTA) is facilitating and making possible research with large-scale historical German text collections. The DTA has funding from 2007 to 2015, and now includes resources with more than 200 million words from the period 1600 to 1900. There are images and text, and automatic linguistic analysis is possible. The DTA is a CLARIN-D service centre. Integration in the CLARIN infrastructure means that resources can be discovered via the Virtual Language Observatory (VLO), can be searched via the Federated Content Search (FCS), and can be analysed and processed via WebLicht workflows. The DTA also contributes to discipline-specific working groups as part of its outreach and dissemination strategy. The majority of texts are keyed in (see more at http://www.deutschestextarchiv.de/dtaq/stat/digimethod). The workflow for OCR texts is interesting – structural markup is added to the electronic text (using a subset of TEI P5), and then OCR errors are corrected. They find that it is easier to identify and correct errors in structured text. The Cascaded Analysis Broker provides a normalization of historical forms to allow for orthography-independent and lemma-based corpus searches, and this is integrated into the DTAQ quality assurance platform. Christian’s slides can be found here.
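
As an aside, the principle behind orthography-independent searching is easy to illustrate. The following is a minimal sketch of my own, not the DTA’s Cascaded Analysis Broker: the mapping table is hypothetical and hand-written, whereas the real service applies cascades of rules and lexical resources to historical German.

```python
# Minimal, illustrative sketch of orthography-independent searching.
# The mapping table is hypothetical; a production system derives
# normalizations from cascaded rules and lexica, not a hand-written dict.
NORMALIZATION = {
    "seyn": "sein",
    "thun": "tun",
    "vnd": "und",
    "theil": "teil",
}

def normalize(token: str) -> str:
    """Return the normalized (modern) form of a historical token."""
    return NORMALIZATION.get(token.lower(), token.lower())

def search(corpus_tokens: list[str], query: str) -> list[int]:
    """Return positions where the normalized token matches the normalized query."""
    target = normalize(query)
    return [i for i, tok in enumerate(corpus_tokens) if normalize(tok) == target]

tokens = "Es kan nicht seyn dass wir nichts thun".split()
print(search(tokens, "sein"))  # [3] -- finds the historical spelling 'seyn'
print(search(tokens, "tun"))   # [7] -- finds 'thun'
```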

The DTA is also a key partner in the Digitales Wörterbuch der deutschen Sprache (DWDS), an excellent concept allowing cross-searching of resources in different centres, and very well implemented. This offers a view of the future of corpus linguistics and the study of historical texts online.

Jan Rybicki from the Jagiellonian University in Kraków told us about a benchmark English corpus to compare the success or failure of stylometric tools. There was a very interesting discussion of the idea of how to build representative and comparable literary corpora, which put me in mind of the work of Gideon Toury in descriptive translation studies. There was also discussion of a possible project to build comparable benchmark corpora for multiple European literary traditions.

Rene van Stipirian (Nederlab) outlined the background of how the study of history in the Netherlands is characterised by a fragmented environment of improvised resources. The Nederlab project will be funded by the NWO from 2013 to 2017 to address the integration of historical textual resources for research. Some very interesting statistics were presented: for the period to the end of the twentieth century there are 500 million surviving pages printed in Dutch, and 70 million of these are digitized, but only 5-10 million have good quality text – most are rather poor quality OCR. Nederlab brings together linguists, literary scholars and historians, and integrated access to resources will go online in the summer of 2015.

Allen Riddell from Dartmouth College in the US took an interesting and highly principled approach to building a representative literary corpus. He randomly selected works from bibliographic indexes, then went out and found the works and scanned them if necessary. This seems to me to be a positive step, in contrast to the usual, rather more opportunistic approach of basing the corpus composition on the more easily available texts. The approach to correcting the OCR text was also innovative and interesting – he used Amazon Mechanical Turk. Allen also referred to a paper on this topic at http://journal.code4lib.org/articles/6004.
This also raised an interesting question – can a randomly selected corpus be representative, or do we need more manual intervention in selection (at the risk of personal bias)?
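
To make the sampling principle itself concrete, here is a minimal sketch under my own assumptions (a bibliographic index stored as a CSV file, with hypothetical column names); it is not Riddell’s code, just the step of drawing works at random rather than taking whatever happens to be easy to obtain.

```python
# Illustrative sketch: random sampling from a bibliographic index.
import csv
import random

def sample_titles(index_path: str, n: int, seed: int = 42) -> list[dict]:
    """Randomly sample n records from a bibliographic index stored as CSV."""
    with open(index_path, newline="", encoding="utf-8") as f:
        records = list(csv.DictReader(f))
    random.seed(seed)  # fix the seed so the sample can be documented and reproduced
    return random.sample(records, min(n, len(records)))

# Hypothetical usage, assuming columns such as 'author' and 'title':
# for rec in sample_titles("bibliography.csv", 100):
#     print(rec["author"], "-", rec["title"])
```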

Tom van Nuenen from Tilburg University described how he scraped professional travel blogs from a Dutch site and started to analyse the language. Puck Wildschut from the Uni Nijmegen described the early stages of her PhD work on comparing Nabokov novels using a mixture of corpus and cognitive stylistic approaches.

The discussion at the end of the first day focussed on an interesting and important question: how do we make corpus-building more professional? Reusability was seen to be key, and dependent on making sure that data was released in an orderly way, with clear documentation, and under a licence allowing reuse. And since what we are increasingly dealing with is large collections of entire texts (rather than the sampled and truncated smaller corpora of the past), we should ensure that the texts that make up corpora are themselves reusable, so that others can take them to make different ad hoc corpora. This requires metadata at the level of the individual texts, and would be enhanced by the standardization of textual formats.
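
A small sketch may help to show why text-level metadata matters for reuse. Assuming each text in a collection carries its own metadata record (the field names below are hypothetical), an ad hoc corpus is simply a filter over those records:

```python
# Illustrative only: selecting an ad hoc corpus from individually documented texts.
texts = [
    {"id": "t1", "year": 1885, "genre": "novel",  "language": "en", "path": "t1.txt"},
    {"id": "t2", "year": 1910, "genre": "letter", "language": "en", "path": "t2.txt"},
    {"id": "t3", "year": 1893, "genre": "novel",  "language": "de", "path": "t3.txt"},
]

def build_corpus(collection, **criteria):
    """Return the texts whose metadata match all of the given field=value criteria."""
    return [t for t in collection if all(t.get(k) == v for k, v in criteria.items())]

# An ad hoc corpus of English novels, reassembled from reusable individual texts:
english_novels = build_corpus(texts, genre="novel", language="en")
print([t["id"] for t in english_novels])  # ['t1']
```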

Maciej Eder from the Institute of Polish Studies at the Pedagogical University of Kraków introduced and demonstrated Stylo, a tool for stylometric analysis of texts. In this presentation, and in one on the following day, I found some of the assumptions underlying stylometric research difficult to reconcile with what I think of as interesting and valid research questions in the humanities. How many literary scholars are comfortable with the notion that the frequencies of word tokens, and the co-occurrence of those tokens, give an insight into style? And the conclusion of a stylometric study always seems to be about testing and refining the methods. Conclusions like “stylometric methods are too sensitive to be applied to any big dataset” don’t actually engage with anyone outside of stylometry. Until someone comes up with a conclusion more relevant to textual studies, this is likely to remain a marginal activity, but maybe I’ve missed the point.
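
For readers unfamiliar with what such tools actually compute, the following is a minimal sketch of the general approach (my own simplification, not Stylo itself): each text is reduced to the relative frequencies of the most frequent words, and texts are compared with a simplified, Burrows’ Delta-style distance over a tiny invented example.

```python
# Illustrative sketch of a most-frequent-word profile and a simplified Burrows' Delta.
from collections import Counter
from statistics import mean, pstdev

texts = {
    "A": "the cat sat on the mat and the dog sat on the rug".split(),
    "B": "the dog and the cat ran on the grass near the old house".split(),
    "C": "a ship sailed over a calm sea under a bright evening sky".split(),
}

# The most frequent words across the whole collection serve as the feature set.
all_tokens = [tok for toks in texts.values() for tok in toks]
vocab = [w for w, _ in Counter(all_tokens).most_common(5)]

def rel_freqs(tokens):
    """Relative frequency of each feature word in one text."""
    counts = Counter(tokens)
    return [counts[w] / len(tokens) for w in vocab]

profiles = {name: rel_freqs(toks) for name, toks in texts.items()}

# Corpus-wide mean and standard deviation per feature, used for z-scoring.
means = [mean(p[i] for p in profiles.values()) for i in range(len(vocab))]
stds = [pstdev(p[i] for p in profiles.values()) for i in range(len(vocab))]

def z_scores(freqs):
    return [(freqs[i] - means[i]) / stds[i] if stds[i] > 0 else 0.0 for i in range(len(vocab))]

def delta(a, b):
    """Simplified Burrows' Delta: mean absolute difference of z-scored frequencies."""
    return mean(abs(x - y) for x, y in zip(z_scores(a), z_scores(b)))

print(delta(profiles["A"], profiles["B"]))  # relatively small: similar function-word profiles
print(delta(profiles["A"], profiles["C"]))  # larger: a very different profile
```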

The focus on looking for and trying to prove the differences between the writing of men and women also strikes me as a little odd, and certainly contentious. Why prioritize this particular aspect of variation in the writers? Why try to essentialize the differences between men and women, and why not other factors? I’d be more interested in an approach which identified stylistic differences and then tried to find what the relevant variables might be, rather than an initial starting point assuming that men and women write differently, and trying to “prove” that by looking for differences.

On the second day of the workshop, Florentina Armaselu from the Centre Virtuel de la Connaissance de l’Europe (CVCE) described how they are making TEI versions of official documents on EU integration for research use. I suggested that there might be interesting connections with the Talk of Europe project, which will be seeking to connect together datasets of this type for research use with language technologies and tools.

Karina van Dalen-Oskam from the Huygens Institute in the Netherlands, one of the workshop organisers, introduced the project entitled The Riddle of Literary Quality, which is investigating whether literariness can be identified in distributions of surface linguistic features. The current phase is focussing on lexical and syntactic features which can be identified automatically, although a later phase might investigate harder-to-identify stylistic features, such as speech presentation. In the discussion Maciej Eder suggested that the traces of literariness might reside not in absolute or relative frequencies of features, but in variation from norms (either up or down).

Gordan Ravancic (Institute of History in Zagreb) joined us via Skype to introduce his project on crime records in Dubrovnik, “Town in Croatian Middle Ages”, which was fascinating, although not clearly linked to the topic of the workshop, as far as I could tell.

Some interesting notions and terminological distinctions were raised in discussions. Maciej Eder suggested that “big data” in textual studies is data where the files can’t be downloaded, examined or verified in any systematic way. This seems like a useful definition, and it immediately raised questions in the following talk. Emma Clarke from Trinity College Dublin presented work on topic modelling. This approach to distant reading can only be used on a corpus that can be downloaded, normalized and categorized, and would be difficult to use on the type of big data as defined by Eder, although it could potentially be used as a discovery tool to explore indeterminate datasets. Christof Schöch from the Computerphilologie group in Würzburg differentiated “smart data” from “big data”, and suggested that smart data is what we mostly want to be working with. Smart data is cleaned up and normalized to a certain extent, and is of known provenance, quality and extent.
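
As an illustration of why downloadability matters for this method, here is a minimal topic-modelling sketch under my own assumptions (a tiny in-memory document set and a recent version of scikit-learn); it is not Emma Clarke’s work, just the general shape of the technique.

```python
# Illustrative sketch: topic modelling on a small, fully downloadable corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the parliament debated the new trade agreement with member states",
    "the striker scored twice and the match ended in a late victory",
    "ministers discussed the budget and taxation policy in the chamber",
    "the coach praised the defence after a hard away game",
]

# Document-term matrix of raw counts, dropping very common English words.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

# Fit a two-topic model; a real study would tune the number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```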

The workshop concluded with discussions about potential outcomes of this and a previous NeDiMAH workshop. A possible stylometry project to build benchmark text collections and to promote the use of stylometric tools for genre analysis and attribution was outlined, with perhaps the ultimate goal of an ambitious European atlas of the history of the style of fiction. We also discussed the possible publication of a companion to the creation and use of large-scale text collections.

Read more about the workshop on the NeDiMAH webpages at http://www.nedimah.eu/call-for-papers/using-large-scale-text-collections-research-workshop-university-wurzburg-1st-and-2nd

Silos or fishtanks?

Originally posted at blogs.it.ox.ac.uk on April 6, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a partial summary of a presentation given at the Interedition Symposium in the Hague in March 2012 on the topic of Scholarly Digital Editions, Tools and Infrastructure.

People are often talking about digital silos in the context of digital resources in the humanities. The problem is that resources, although valuable in themselves, are located in different locations on the web, where they might be difficult to find, and they all have their own individual interfaces and registration procedures, and are not connected with similar or related resources. So you can’t easily search the Old English Corpus (available either for download from the OTA, without software, or online to local users via numerous university library portals). Some resources, like the ARCHER corpus, you can’t access at all unless you’re friends with someone at the University of Manchester.

Silo image from Doc Searls (dsearls)

This is clearly far from ideal. But what alternative, more connected, architectures are most appropriate to achieving interoperability and sustainability of the arena of digital textual scholarship? The emergence of fast and high capacity networks, a deluge of data, and web service APIs mean that it is increasingly possible to imagine and build distributed architectures for scholarly services, where data, tools, computing resources, and the outputs of annotation and analysis live in different parts of the network but can be brought together virtually in the user’s desktop environment. The current concerns about ‘digital silos’, in which the outputs of digital humanities projects are deployed online unconnected to other resources, and with limited sustainability, are directly addressed by this vision.

I want to put forward the argument for distributed architectures, while reviewing some of the risks and problems, and to survey some current moves towards such an infrastructure. And I also want to suggest another metaphor as an alternative to the ‘silo’.

An open and fully distributed architecture, where the resources are located in different places, has the advantage of allowing the following services to be created:

  • potentially unlimited functionality, since developers can deploy content and tools that they want to use, and which can interoperate with other data, tools and infrastructure services;
  • building ad hoc collections and corpora across different repositories;
  • complex workflows, for example piping together web services from different locations (see the sketch after this list);
  • protected resources (e.g. works in copyright, sensitive data) curated in situ yet still analysed online via web applications which access the data via a secure infrastructure.
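
As an illustration of the ‘piping’ point above, the sketch below chains two entirely hypothetical web services (a tokenizer at one centre, a part-of-speech tagger at another); it is not a real CLARIN workflow, and the endpoints and JSON formats are my own invention.

```python
# Illustrative sketch of piping web services from different locations together.
import requests

TOKENIZER_URL = "https://centre-a.example.org/api/tokenize"  # hypothetical endpoint
TAGGER_URL = "https://centre-b.example.org/api/pos-tag"      # hypothetical endpoint

def tokenize_then_tag(text: str) -> list[dict]:
    """Send text to a remote tokenizer, then pipe its output to a remote tagger."""
    tokens = requests.post(TOKENIZER_URL, json={"text": text}, timeout=30).json()["tokens"]
    tagged = requests.post(TAGGER_URL, json={"tokens": tokens}, timeout=30).json()["tagged"]
    return tagged

# Usage:
# print(tokenize_then_tag("The cat sat on the mat."))
```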

All of this can happen in a situation with a better division of labour than we typically have now: the repositories don’t have to worry about tools; tool and content developers don’t have to worry about creating the entire online environments; tool developers don’t have to worry about data management; users don’t have to install software; etc. The emergence of an ‘ecosystem’ with numerous actors providing content, tools, computing resources, and other infrastructure services provides flexibility, resilience and the potential for sustainability which is not possible for a single-site or other more closed or monolithic system.

So let’s consider the unconnected, problematic online resource as a fishtank rather than a silo.

Goldfish image from Praveen Gupta (praveengupta)

There are lots of fishtanks out there, and they can be very large, elaborate, pretty, sophisticated, long-standing and sustainable. But they’re all in different places and they are not connected with each other. If you want to see a variety of fish, you have to visit a lot of houses, try to negotiate access to their fishtanks, and make use of whatever facilities they have for viewing or otherwise analysing the fish. Some places are better than others to visit – aquariums might have very good facilities and lots of information, but you still can’t view the fish in one aquarium alongside the fish in another, and it’s hard to compare them.

And if I want to keep a fish I have to build and maintain a fishtank, or I could find someone else’s fishtank to put it in, but then it’s difficult for me to get access and control the environment. And who’s going to carry on feeding the fish? We can probably agree that it’s better if we don’t all try to make and look after our own fishtanks, at least not if our main goals are to enable as many people as possible to get into looking after, breeding and sharing fish, and if we want to be able to see a wide variety of fish. Wouldn’t it be better to have an ecosystem where we can all set our fishes free to swim together?

Marine Ecosystem image from www.sciencelearn.org.nz

This way, everyone can access all of the riches of the deep and it’s a lot easier to get into fish research.

Of course, ecosystems can be dangerous places, with predators and diseases, and they can be fragile. You could also argue that what fishkeepers really want is the experience of nurturing their own fish, and the enjoyment of setting up and maintaining their own micro-infrastructure, and that therefore fishtanks are the best solution. But there are limits to the applicability and relevance of any metaphor.

There are potential disadvantages to distributed infrastructures, and many of them relate to the additional complexity that they introduce into the access and identity management arrangements. Arranging access to services in one location can be hard enough, but authorization to use, for example, textual data in more than one repository might require passing of information between institutions. It is also the case that while there are reasonably well-established technologies, procedures and agreements for controlling access to online content, the authorization of web services is not such a well-established area. Furthermore, authorization to access online content cannot easily be passed on to authorize access to the computer processing power that is necessary to carry out an online textual analysis, if this is being provided by another centre in the distributed infrastructure. In summary, the fact that distributed services are reliant on cross-institutional agreements and arrangements adds an extra hurdle to participation, as data provider or user, and additional layers of complexity and risk to the robustness of services.

Other potential disadvantages of distributed infrastructures include:

  • Registering persistent identifiers with a shared service becomes desirable to sustain the interoperability of content and applications, thus adding another level of complexity to the curation of the data;
  • Monitoring of usage is difficult, since operations are being carried out on remote servers not under the control of the repository;
  • Monitoring of the availability of services is difficult – it might be possible to test the status of individual components but not a complex workflow;
  • Although underlying interoperability is essential, there is no impetus towards consistency in user interfaces, and even a tendency towards heterogeneity, and therefore fragmentation of services is likely to be maintained or even made worse;
  • Various further questions also remain (at least partially) unanswered in many cases, relating to where and how the computer processing is carried out, and how usage and services are monitored and logged.

We also need agreement at some level about our categories, formats and concepts. To get to the promised land, we need to agree on some standards. Linking datasets requires interoperability at the levels of the linguistic representations, annotations and metadata. Visualization of large datasets requires a reduction of variables, and deciding what is important and what is not. There is a tendency in the humanities for everyone to think that their way of looking at things and of categorizing things is unique. Annotations do sometimes embody the unique intellectual work of identification, categorization and interpretation of phenomena, and these are vital operations in the humanities, so it is not a surprise that this is problematic.

Another problem is that building infrastructure takes time and involves addressing complex and difficult administrative, legal, financial, political and technical barriers, often by making international agreements. So, usually, it’s easier to make ad hoc work-arounds. And building tools can be more attractive and rewarding. But actually, it’s a false opposition – enhanced infrastructure should help with tool development and deployment, by providing a range of simple solutions for connecting together data and tools, deploying them as reliable services, managing authentication and authorization, licensing, access to computing power, monitoring availability, connection to virtual research environments, and so on.

The mistake would be to try to build the perfect all-purpose tool, or to claim to provide services for end-users which solve all of the infrastructure issues. Or to put it another way, building the biggest and best fishtank in the world doesn’t solve the problem, because you can’t get all the fish in the world in there, or allow everyone access to view every kind of configuration and interaction. But all too often this is what people try to do, rather than contributing a part of a wider, distributed system. Understandably, people are impatient, and our efforts and resources go into building new fishtanks, which can be fun to make, and which look good when people come to visit.

CLARIN infrastructure notes – on the record

Originally posted at blogs.it.ox.ac.uk on September 13, 2011 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

In a recent informal meeting involving various members of the CLARIN and other infrastructure initiatives, we had an open, frank and “off the record” discussion about successes and failures so far, and plans for the future. In preparation for the meeting, and to get the discussions going, we were asked to think of five points in response to each of three questions. I’m happy to go “on the record” with mine here!

What were your original impulses and dreams [when CLARIN planning started around 2006]?

1. To build an Arts and Humanities Data Service for Europe, on the model of the AHDS in the UK, to support digital work in the literary and linguistic subject areas, and to link with similar initiatives then emerging, e.g. at the CNRS in France.

2. To promote and integrate Central and East European researchers, resources and languages, continuing the work of the TELRI project in the previous period.

3. To build new European networks, built on transparency, openness and a real desire to engage with, support and improve research, to replace failed European initiatives which were sometimes built on careerism, cronyism and corruption.

4. To move the focus of language resource & tool creators (especially computational linguists) towards the requirements of Humanities researchers, making it easier for users with little technical support to do simple yet powerful things with key resources.

5. To facilitate the participation of literary and linguistic disciplines in the emerging e-Science agenda.

What are the most important successes and failures so far?

1. Success: the initiative is almost pan-European, although some key countries are not involved or not fully integrated (UK, Italy), and a very few are not involved at all (Ireland, Switzerland); the integration of former TELRI partners from central and eastern Europe was successfully achieved.

2. Success: we have succeeded in getting enough funding from national funders to make CLARIN happen!

3. Partial failure: we’ve only had fairly small-scale engagement with scholars so far to elicit detailed requirements and to develop use cases.

4. Partial failure: we haven’t made the total shift of focus of the CLARIN community away from traditional concerns (their own tools and research) to production infrastructure services for the humanities and social sciences.

5. Partial failure: we have not yet created a standards-oriented ecosystem for resource and tool creators to enable them to contribute to sustainable production services. To put it another way, there is still no answer to “How do I make CLARIN-conformant resources?” I hope that the forthcoming Reference Manual will at least partially solve this problem.

What are the top priorities for future work?

1. We need to work out ways to lobby for and secure funding, in a situation where, in the Humanities, there is a lack of a critical mass of researchers (in any given discipline) who want research computing infrastructure, or who see it as a top priority. This means that there is a lack of an effective lobby group of influential scholars in most forums. This is one of the disadvantages of the cross-disciplinary nature of linguistics and the language resources and tools field.

2. We need to deliver something urgently to show the relevant communities that we can do it, and to give them a clearer idea of what we intend to do. Access and authentication infrastructure (AAI) is the key to delivering any kind of production service which can show an end-to-end use case, so we should make solutions in this area a logical priority.

3. Where is the data processing going to take place, who is going to pay for it, and how will we do the accounting? We urgently need to make progress towards solutions here as well if we are to create production-quality services.

4. Humanities and social sciences research has global connections. How will we accommodate users and service providers outside of our AAI domain? As CLARIN starts to rely on national funding, there is an increased danger of two-speed progress, with some countries and communities who are currently engaged being pushed out.

5. What will the platforms for users be, and who is going to make the user interfaces? Are we going to be able to overcome fragmentation and ‘silo-building’ – can we offer a good user experience while still allowing flexibility and connectedness? If so, how, and when?

Summit meeting of Digital Humanities Centres

Originally posted at blogs.oucs.ox.ac.uk on 20 July 2010 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

centerNet had their first international summit at King’s College London on the 3rd and 4th July 2010. The summit was supported by the NEH and organized by Neil Fraistat and Kay Walter. The summit was a chance for directors of centers and funders to talk to each other, to develop collaborations, and to develop regional groups.

I was invited as the initiator of CHAIN, as well as wearing my hats as a member of the CLARIN Executive Board, a member of the steering committee of the Network of Expert Centres in Britain and Ireland, and a representative of Oxford University, along with David Robey.

For an overview of the proceedings, I recommend Geoffrey Rockwell’s blog:

http://www.philosophi.ca/pmwiki.php/Main/CenterNet2010

I will focus here on the elements of relevance to Oxford.

I am pleased to say that we are involved in many of the most important initiatives: CLARIN, DARIAH, CHAIN, Bamboo, Network of Centres, centerNet; we certainly seem to be involved in more things than anyone else!

The regional breakout group for Britain and Ireland discussed recommendations to funders. We identified a barrier to collaboration in that institutions are in competition with each other for funding. And we discussed how this could be addressed by financial incentives for collaboration. There are funds for regional collaboration in devolved countries (e.g. Wales, Scotland) which have produced useful results. One way to foster cooperation would be to give more incentives to share resources and services.

Funders insisting on sustainability plans involving institutional buy-in and embedding (as JISC do, for example) can help to improve institutional policies and develop capacity. Funders could also help with the promotion of infrastructure and standards: they could give a big boost to (bottom-up) initiatives that promote collaboration and cooperation by using grant conditions and recommendations, at least suggesting them as ways to promote re-use and linking of data, and thus obtain impact and value for money. There would be no cost to funders to do this. But not all institutions can build a DH centre, or a comprehensive institutional repository, or other services. What is the incentive for big centres to collaborate with small ones? What could be a business model for institutions with well-developed infrastructure to support others?

The AHRC have said that they won’t fund or get involved with infrastructure, so there seems little to discuss with them, unless we can suggest small and cheap things to make an impact. Networks and workshops can be useful, but current schemes are directed at new initiatives, and are short term. Short term funding doesn’t help to sustain the outputs of these activities.

It was strongly felt that we, the researchers, should provide evidence of value in terms of improved or transformed research and teaching and other impacts, via “compelling case studies”. And we felt that the current impact agenda, for all of its faults, could be an opportunity, because it may be a route to rewarding reusability and sharing of resources.

Discussions about the mission, structure and business model for centerNet foundered a little on the notion of ‘center‘. I argued that it was not necessarily desirable for an institution to organize itself with a digital humanities centre, but rather that computing in the Humanities could be promoted and supported by other means. Furthermore, the promotion of centres, and the promotion of the ‘discipline’ of Digital Humanities, risk ghettoization and a reduced relevance of digital activities to the mainstream of research in the various disciplines. It seems that the experience and outlook of the University of Queensland, at least, is in line with ours.

Invited speaker Jon Orwant from Google tried to be controversial, and succeeded with the provocative assertion that funders should only promote bottom-up initiatives. I pointed out (the “good question” cited in Geoffrey’s blog!) that we have decades of experience of bottom-up creation of tools and data, which has resulted in fragmentation, with a variety of standards, data formats and licensing arrangements, and that this is currently the biggest barrier to progress. So the provision of some infrastructure, or at least promoting the adoption of some shared policies and standards, is the key challenge today. Although I would agree that this could be done in as light-weight a manner as possible, so as not to thwart innovation and bottom-up initiatives.

In fact, successful infrastructure initiatives, such as CLARIN, are bottom-up in the sense that the researchers and technologists identified the problem of fragmentation and went to the funders asking for money to build research infrastructure.

In summary, I believe that centerNet is a very useful vehicle for us here in Oxford as a way to connect with numerous centres, communities, regions and funders. In particular, our ongoing involvement can play an important role in:

  • linking our services and resources to users;
  • building collaborative projects;
  • dissemination of our research and other activities;
  • advocacy for digital humanities to funders and politicians and other bodies;
  • international expansion of research communities and collaborations.

To get a visual flavour of the proceedings, you can see some photos: John Unsworth’s photos

The centerNet website is at:

http://digitalhumanities.org/centernet/

And the new beta site:

http://digitalhumanities.org/centernet_new/ [visited July 2010]