CLARIN Germany: Happy First Birthday!

Originally posted at blogs.it.ox.ac.uk on July 10, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

A workshop was held in Leipzig last month to mark the end of the first year of CLARIN-D, the national initiative in Germany to build a research infrastructure as part of the Common Language Resources and Technology Infrastructure. The wider CLARIN effort is Europe-wide and aims to link up repositories, services and researchers in the social sciences and humanities who are making use of the wide range of digital datasets and tools for processing human language. More details of the workshop, including all of the presentations, are available here [update 2019 – previously linked to http://clarin.informatik.uni-leipzig.de/, no longer available].

Greg Crane, the newly appointed Professor of Digital Humanities at the University of Leipzig, kicked off the event with a stimulating presentation which situated CLARIN in the wider context of the evolution of the humanities and, more recently, the digital humanities. Greg suggested that we should provide platforms and tools for students and citizen scholars to contribute to research and to the accumulation of knowledge, culminating in the challenge: “How can we foster a new global Republic of Letters?”.

Erhard Hinrichs (Tübingen), the coordinator of CLARIN-D, introduced the overall initiative as a “web and centre-based research infrastructure for the social sciences and humanities”. CLARIN aims to build an integrated, interoperable, scalable and sustainable research infrastructure via a network of centres. Language resources and tools (LRTs) will be deployed as services for researchers in the social sciences and humanities. CLARIN-D has 9 centres: BAS, University of Munich; BBAW, Berlin; IDS, Mannheim; MPI, Nijmegen; University of Hamburg; University of Leipzig; Saarland University; University of Stuttgart; and University of Tübingen.

Erhard reassured us that CLARIN-D has taken to heart the words of John Wood from the Knowledge Exchange Workshop in Berlin in September 2009:

Research infrastructures that do not take user needs into account from the very start run the risk of becoming empty infrastructures.

There are working groups for 9 humanities and social science disciplines. These discipline-specific working groups act as catalysts, linking CLARIN-D to the research communities. They choose key resources and tools from their communities and advise on and supervise their integration into the CLARIN-D infrastructure (in the so-called “curation projects”). CLARIN-D is also working with many of the BMBF-funded eHumanities projects. In addition, CLARIN-D has work packages devoted to liaison with the CLARIN ERIC and with DARIAH, an emerging humanities e-infrastructure, as well as to legal and ethical issues, support and helpdesk, and training and education.

Dieter van Uytvanck (MPI Nijmegen) introduced the distributed technical architecture of CLARIN in the context of an infrastructure to support researchers throughout the life-cycle of their work. He also situated CLARIN in the context of a (European) ecosystem of infrastructures:

Community Services – CLARIN
Cross Community Services – DASISH
Compute Services – DEISA
Data Services – EUdat
Grid Services – EGI
Network Services – GEANT

Dieter outlined the services which are available now:

  • WebLicht for resource processing and workflow management;
  • the Virtual Language Observatory for resource discovery;
  • tools to support resource creation and enhancement;
  • European Persistent Identifier Consortium (EPIC) service (see the PID resolution sketch after this list);
  • repository services in the centres for archiving, preservation and sharing;
  • federated identity management (including a CLARIN Identity Provider, a service provider federation and cross-federation).
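
EPIC identifiers are Handle System handles, so clients can resolve them over plain HTTP. The following is a minimal sketch, assuming resolution via the public Handle System JSON API at hdl.handle.net; the handle shown in the commented-out example is a placeholder, not a real CLARIN PID.

```python
import json
import urllib.request

def resolve_handle(handle: str) -> list:
    """Resolve a persistent identifier via the public Handle System HTTP API.

    Returns the list of typed values (e.g. the URL record) stored for the handle.
    """
    url = f"https://hdl.handle.net/api/handles/{handle}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    return record.get("values", [])

# Placeholder handle for illustration only -- substitute a real EPIC PID before running.
# for value in resolve_handle("11858/00-0000-0000-0000-0000-0"):
#     if value.get("type") == "URL":
#         print("resolves to:", value["data"]["value"])
```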


Services that will be available in the future include:

  • Federated Content Search (in development)
  • Monitoring (currently in alpha)
  • Center Registry (alpha)
  • Virtual Collection Registry (alpha)
  • Workspaces + SimpleStore (alpha)
  • Safe Replication (alpha)

The workshop then moved on to consider the various projects associated with CLARIN-D. Angelika Storrer (TU Dortmund) spoke about her experiences in corpus-based language analysis in research and teaching. The requirements which she identified were of particular interest:

  • One common interface with a German language version and German online tutorials
  • Tools to further work with the results of search queries (clean up and search again; manually annotate and search again; interface to statistical tools)
  • Word sense disambiguation / semantic clustering tools
  • Orthographic variation tools: an important issue when dealing with historical corpora or with computer-mediated communication, e.g. Stress / Streß (see the toy normalization sketch after this list)
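
As a toy illustration of the orthographic-variation problem, the sketch below expands a query term into spelling variants, handling only the ß/ss alternation; a real tool would need much richer variant rules for historical German and computer-mediated communication.

```python
def spelling_variants(term: str) -> set:
    """Return naive spelling variants of a German query term.

    Only the ss/ß alternation is handled here; historical corpora need far
    richer rules (e.g. th/t, ey/ei), so treat this as a toy example.
    """
    variants = {term}
    if "ß" in term:
        variants.add(term.replace("ß", "ss"))
    if "ss" in term:
        variants.add(term.replace("ss", "ß"))
    return variants

print(spelling_variants("Streß"))   # {'Streß', 'Stress'}
print(spelling_variants("Stress"))  # {'Stress', 'Streß'}
```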

Annette Hautli (Konstanz) is part of a team aiming to tackle the problem with an innovative combination of methods from three disciplines: Linguistics, Visual Analytics and Political Science. The proposed process of automatic pragmatic annotation of naturally occurring speech data is clearly ambitious, and it is not yet clear that effective results can be obtained. Furthermore, the data set used, which seemed to be transcripts of interviews carried out by the political scientists in the project, is not really the sort of “naturally occurring” speech event that the linguistic methods were developed to deal with, and the eradication of biases and the formulation of appropriate interpretations of the data will be difficult. In this sense, it will be an interesting collaboration between the social sciences and other disciplines. On a technical note, it was observed that a multi-layered annotation approach would be useful, although the team do not have the tools for this at present.

Eva-Maria Wunder (Augsburg) introduced her PhD work on searching for evidence of second-to-third language interference in language learners (e.g. if a Chinese speaker learns English and then German, how does this affect their German pronunciation?). While she didn’t address the methodological problem that looking for English influence in pronunciation is difficult when “English” is not one accent, this probably wasn’t the place for such discussions, and she introduced the CLARIN tools Wikispeech and WebMAUS, which are supporting her work.

Kirsten Bergmann spoke about the challenges of integrating multimodal resources into the CLARIN infrastructure, such as the SaGa speech and gesture corpus, sign language materials, and “sociable machines” under development in Bielefeld.

Ingmar Schuster (Leipzig) described one of the curation projects, which aims to build a “reproducible research platform” to support “reproducible data-driven linguistics”. The platform is a development of the Potsdam Mind Research Repository (PMR2), and incorporates pre-prints via Open Journal Systems (including OAI-PMH and a CMDI plug-in); an author submission system (reducing the administrative load on the centre supporting the system); data publication; publication of “non-significant” (presumably negative) results; and R integration, with a web application variant, since most researchers in this field use R.

Christian Mair (Freiburg) described the integration of the Virtual Linguistics Campus (VLC, a suite of online distance learning resources) into CLARIN. The aim is to create an accessible digital resource for a mass market, expanded by a large number of users (to build a web-based community of practice). The VLC could thus evolve from an e-learning resource into a multi-functional digital language resource: from teaching, through research-based teaching, to research. There are ongoing issues of quality control, and as yet unexplored potential and obvious synergies with other CLARIN ventures, e.g. the integration of distributed corpora.

Thomas Gloning (Gießen) described another curation project, on the integration of German historical philological resources, ultimately aiming to integrate the textual resources of the 15th to the 19th centuries into a reference corpus of historical German, and including a workflow for the future integration of further resources. Integrating these various textual resources will not provide a corpus in a strict sense but rather a huge repository, from which users can build up subcorpora using metadata, according to relevant criteria, e.g. text type (newspaper reports, plant descriptions), decade (texts from the 1680s) or topic (texts on alchemy, cookery, medicine, etc.). Anticipated outcomes of making such a resource available include a new historical dictionary of New High German covering the 17th to the 21st centuries, based on corpus principles. Innumerable projects on more specific themes would also result, for example investigating the history of foreign words, the emergence of specialized vocabulary or evidence of language change, leading to new models and theories.
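
As a concrete illustration of metadata-driven subcorpus building, here is a small sketch with invented records and field names (they are not the project's actual metadata schema); it simply filters a document list by decade, text type or topic.

```python
# Hypothetical metadata records; the field names are invented for illustration.
documents = [
    {"id": "t1", "year": 1683, "text_type": "plant description", "topic": "botany"},
    {"id": "t2", "year": 1688, "text_type": "newspaper report", "topic": "alchemy"},
    {"id": "t3", "year": 1742, "text_type": "recipe", "topic": "cookery"},
]

def build_subcorpus(docs, decade=None, text_type=None, topic=None):
    """Select documents matching the given metadata criteria."""
    selected = []
    for doc in docs:
        if decade is not None and doc["year"] // 10 * 10 != decade:
            continue
        if text_type is not None and doc["text_type"] != text_type:
            continue
        if topic is not None and doc["topic"] != topic:
            continue
        selected.append(doc)
    return selected

# Texts from the 1680s, regardless of type or topic.
print([d["id"] for d in build_subcorpus(documents, decade=1680)])  # ['t1', 't2']
```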

Alexander Geyken (BBAW) announced plans to write a user manual or handbook (Benutzerhandbuch) for CLARIN-D services. The target audience sectors will be:

  • researchers who have/want to develop Language Resources, Tools and Services (LRTS) and want to make them CLARIN-D compatible;
  • researchers who want to learn more about the solutions adopted in CLARIN-D;
  • technical staff supporting researchers in resource development and migration.

The manual will aid the migration of LRTS to the CLARIN-D infrastructure, with the following benefits:

  • linking to a larger community / visibility of resources;
  • interoperability;
  • long-term preservation by CLARIN-D service centres.

Among the challenges presented by the plan is the relationship of this manual to the emerging standards and procedures of the CLARIN ERIC, which are intended to be Europe-wide in their application. Also, centres and resource creation projects will need to make decisions at particular points in time regarding standards, which might be made difficult by the nature of the handbook as a “living document” subject to constant updates and changes. Nevertheless, this work should provide an excellent foundation for future work in documenting CLARIN procedures.

Frank Wiegand (BBAW) explained the project to build the Deutsches Textarchiv (DTA), which will identify and integrate distributed text resources into a large reference corpus for German (1650-1900). Some of the work to produce editions for the corpus is being done in de.wikisource.org.

Thomas Eckart (Leipzig) reported on infrastructural and CLARIN-related aspects of the eAqua project, which is working on the extraction of structured knowledge from ancient sources. The project aims to develop tools as small independent components available as services via SOAP and REST, to support the reuse of data and algorithms, to promote interaction and interchange with existing projects in the Digital Humanities, and to allow the integration of existing data resources. The team aim to use existing standards, so plain text and TEI have been selected as input formats for the CLARIN workspace, and they have built a TEI text integrator which automatically ingests texts into a repository, allocates a PID and generates CMDI metadata (which is then pushed to the Virtual Language Observatory aggregator); the full text will be offered to the CLARIN Federated Content Search, with output in TCF, plain text, XML and HTML.
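
To make the shape of such an ingestion workflow concrete, here is a purely illustrative Python sketch: it reads a TEI file, pulls the title from the header, mints a placeholder identifier and returns a small metadata dictionary. The PID minting and the metadata structure are stand-ins; this is not the eAqua integrator, the EPIC API or real CMDI.

```python
import uuid
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def ingest_tei(path: str) -> dict:
    """Toy ingestion step: extract a title from a TEI header and attach an identifier.

    The PID is a locally generated placeholder; a real integrator would request
    one from a handle service (e.g. EPIC) and emit CMDI metadata for the VLO.
    """
    root = ET.parse(path).getroot()
    title = root.findtext(".//tei:titleStmt/tei:title",
                          default="(untitled)", namespaces=TEI_NS)
    return {
        "pid": f"hdl:0000/{uuid.uuid4()}",  # placeholder, not a real handle
        "title": title,
        "source_file": path,
        "formats_offered": ["TCF", "txt", "XML", "HTML"],
    }

# Example (assumes a local TEI P5 file called sample.xml):
# print(ingest_tei("sample.xml"))
```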

After the presentation of this impressive array of projects, Erhard Hinrichs returned to the stage to introduce the CLARIN ERIC, the new legal and organisational framework underpinning the Europe-wide CLARIN research infrastructure, and its relationship with CLARIN-D. In short, ERICs are reliant on national funding and national infrastructure initiatives. The challenges will be to integrate numerous national infrastructures of varying size, scope and maturity into a coherent European infrastructure. The CLARIN ERIC started operation in the Spring of 2012, with nine founding members – Austria, Bulgaria, Czech Republic, Denmark, Estonia, Germany, The Netherlands, Nederlandse Taalunie (the Dutch Language Union, an international organization based in Flanders and the Netherlands), and Poland. Six additional members are expected by the end of 2012: Croatia, Finland, Latvia, Lithuania, Norway, Slovenia.

Thomas Zastrow (Tübingen) introduced the EUDAT project, which brings together a consortium of research communities and national data and high-performance computing centres, aiming to contribute to the production of a collaborative data infrastructure (CDI) to support Europe’s scientific and research data requirements, and to deal with the “data tsunami” – note that this is no longer merely a deluge! As well as CLARIN, there are participants from the Earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH).

Erik Ketzan and Pawel Kamocki (IDS, Mannheim) introduced the CLARIN-D legal helpdesk and “Three Important Legal Concepts for Language Scientists in Germany”.
The first two of these concepts represented encouraging news about the relatively liberal provisions of German law for personal scientific use and implied licences. However, we should note that services built on these exceptions will pose problems for the CLARIN infrastructure, the boundaries of which are EU-wide (at least). It remains to be seen how we can deal with the problems of identifying the relevant legal jurisdictions for complex workflows involving cross-national collaborations and distributed architectures. It might prove necessary to base services on the assumption of the lowest common denominator of EU-wide legal principles, rather than on those of the most liberal country. (By the way, the third concept was the potential landmine of database rights!)

In summary, it was extremely encouraging to see the plans of CLARIN, first conceived many years ago, start to come to fruition. The connections now being made in Germany with key communities of academic researchers are of paramount importance, and will need to be replicated in other countries. There were a few niggling doubts in this respect – it would have been good to find out more about connections with literary scholars, and with TextGrid and DARIAH. But overall, CLARIN-D shows a remarkable level of maturity, at both technical and organisational levels. There are numerous key challenges ahead, but this community seems well-equipped to address them. We have seen the future of language resources, tools and services, and it works!

Discovering Babel – final outcomes

Originally posted at blogs.it.ox.ac.uk on October 19, 2011 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

This is a summary of some of the key outcomes of the Discovering Babel project, with links to where you can find out more.

Next steps

For those of you looking for electronic literary and linguistic resources, please visit the Oxford Text Archive (OTA) and the CLARIN Virtual Language Observatory (VLO). The OTA will shortly relaunch with a new look and feel, and many new resources. The VLO is under constant development and improvement.

Those of you creating and sharing language resources, please join the CLARIN-UK mailing list. This list is a forum for creators and users of linguistic resources and tools to discuss how we can go forward to develop better facilities and shared services, and to gather user requirements.

Evidence of Re-use

The metadata that has been made available as part of the Discovering Babel project is being harvested by the CLARIN Virtual Language Observatory, and can be viewed on their portal. At the moment, we still have some performance issues with delivering the files via OAI-PMH, so there may only be a few records listed there, but we have identified the problem and will be fixing it in the next few days!
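
For readers unfamiliar with the mechanics, an OAI-PMH harvest is just an HTTP GET with a verb parameter, which is essentially what the VLO does against our endpoint. The sketch below lists record identifiers from a repository; the base URL in the commented-out example is a placeholder, not the OTA's actual endpoint, and a full harvester would also follow OAI-PMH resumption tokens for large result sets.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def list_identifiers(base_url: str, metadata_prefix: str = "oai_dc"):
    """Issue an OAI-PMH ListIdentifiers request and yield the record identifiers."""
    params = {"verb": "ListIdentifiers", "metadataPrefix": metadata_prefix}
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for header in root.findall(".//oai:header", OAI_NS):
        yield header.findtext("oai:identifier", namespaces=OAI_NS)

# Placeholder base URL -- substitute the repository's real OAI-PMH endpoint.
# for identifier in list_identifiers("https://example.org/ota/oai"):
#     print(identifier)
```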

The work in Discovering Babel has contributed to an enhanced Oxford Text Archive, with more reliable and more easily discovered catalogue records, and with open access texts at persistent locations. This is designed to allow others to build services on top of our data, in a distributed environment. It has already helped to make possible the JISC-funded Great Writers project, which will, among other things, link to source texts in various formats, including epub, in the OTA.

The OTA is now also working together with the creators of Voyant at the University of Alberta, who are developing exactly the sort of tools that we imagined would bring our texts alive. Visit https://voyant-tools.org/ [link updated in 2019 – was http://voyeurtools.org/] and paste in the following URI to get a flavour of what will be possible:

https://ota.bodleian.ox.ac.uk/repository/xmlui/bitstream/handle/20.500.12024/3253/3253.xml [link updated in 2019 – was http://www.ota.ox.ac.uk/text/3253.xml]

You can see more about this text at http://www.ota.ox.ac.uk/desc/3253. At the beginning of 2011, texts from the OTA were only available for download on request. Now, thanks in large part to Discovering Babel, we are seeing the emergence on our desktops of seamless access to distributed texts with remote tools in a service-oriented architecture.
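
To give a rough flavour of this kind of remote, tool-over-data analysis, the sketch below fetches a text from a URL and prints its most frequent word forms. It is not the Voyant API, just a plain HTTP fetch plus a frequency count, and it makes the simplifying assumption that stripping XML tags with a regular expression is good enough.

```python
import re
import urllib.request
from collections import Counter

def top_words(url: str, n: int = 10):
    """Fetch an XML text over HTTP and return its n most frequent word forms.

    Crude by design: tags are stripped with a regex and tokens are lowercased,
    which is enough to illustrate remote access to a text at a persistent URI.
    """
    with urllib.request.urlopen(url) as response:
        xml_text = response.read().decode("utf-8", errors="replace")
    plain = re.sub(r"<[^>]+>", " ", xml_text)          # strip markup, keep text
    tokens = re.findall(r"[^\W\d_]+", plain.lower())   # word-ish tokens only
    return Counter(tokens).most_common(n)

# Example with the OTA text cited above:
# print(top_words("https://ota.bodleian.ox.ac.uk/repository/xmlui/bitstream/handle/20.500.12024/3253/3253.xml"))
```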

Further collaborations – with the National Grid Service in the UK to host language resources in the cloud for UK researchers, with the development of a cross-repository search service for CLARIN, and with shared services in Project Bamboo – will all be underpinned in part by work done in Discovering Babel.

Skills needed for the project

The basic technical skills needed were for processing XML, e.g. XSLT 1.0 and 2.0, plus the installation of modules in an Apache server, including the Shibboleth access and identity management software. Various Perl scripts were also deployed. Exactly how to do these things in the circumstances in which we were working was not something that anyone in the team had done before. For example, we had to read about and learn the specification of the Open Archives Initiative Protocol for Metadata Harvesting, the element set for describing language resources from the Open Language Archives Community, and the Shibboleth software. We were able to call on expertise in the Oxford University Computing Services for the fundamental technical areas and administrative procedures, and on experts in the CLARIN network across Europe for guidance on implementation in the specific scenarios for sharing language resources. Perhaps more than technical skills, knowledge of the work that was going on in our institution, nationally, and around Europe in the relevant areas was key to the success of the project.

Most significant lessons learned

  • Don’t build a digital silo: engage with infrastructure initiatives, such as CLARIN, and find out about recommendations for good practice in connecting resources, such as the Resource Discovery Task Force, and avoid building an online resource which is difficult to find and unconnected to other data and tools;
  • At the technical level, be flexible. This work touched on fast-changing fields, and we needed to be prepared to learn about new things, and to change the technological solutions which we deployed. This also meant planning for future change in order to make services sustainable;
  • Keep it simple: our successes were not the result of great leaps forward, or of building complex and flashy front-ends and tools. Instead, we applied good practice in a systematic way in order to provide reliable services to underpin and fit into a shared services infrastructure. So simply providing crosswalks to Dublin Core from our metadata, and establishing an OAI-PMH service, opened many doors (a minimal crosswalk sketch follows this list). Putting the resource files at accessible URIs on the web allows new types of service to be developed, with much easier access and more powerful functionality.
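
To show how little a Dublin Core crosswalk can involve, here is a minimal sketch that maps a handful of local catalogue fields onto an oai_dc record. The namespaces are the standard OAI-PMH/Dublin Core ones, but the local field names and the mapping are invented for illustration, not the OTA's actual schema.

```python
import xml.etree.ElementTree as ET

OAI_DC_NS = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC_NS = "http://purl.org/dc/elements/1.1/"

# Invented local field names mapped onto Dublin Core elements.
FIELD_MAP = {"title": "title", "author": "creator", "lang": "language", "licence": "rights"}

def to_oai_dc(record: dict) -> str:
    """Crosswalk a flat local metadata record to a serialized oai_dc fragment."""
    ET.register_namespace("oai_dc", OAI_DC_NS)
    ET.register_namespace("dc", DC_NS)
    root = ET.Element(f"{{{OAI_DC_NS}}}dc")
    for local_field, dc_element in FIELD_MAP.items():
        if record.get(local_field):
            child = ET.SubElement(root, f"{{{DC_NS}}}{dc_element}")
            child.text = str(record[local_field])
    return ET.tostring(root, encoding="unicode")

print(to_oai_dc({"title": "Old English Corpus", "lang": "ang", "licence": "restricted"}))
```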

What are the Digital Humanities

Originally posted at blogs.it.ox.ac.uk on March 30, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The Day of Digital Humanities on 27th March this year has provoked numerous conversations about the nature of Digital Humanities (DH). Some believe DH is a discipline or community, with its own methods, resources, communities of practice, journals, standards of evidence, etc.

Others prefer simply to use the term as a way of looking at activity across a number of humanities-related disciplines which has a significant digital component, and while it is useful to trace connections in terms of methods, resources and tools, it is preferable for digital research in the humanities to live within the historic academic disciplines. It could be argued, for example, that the work of ‘digital classicists’ should be primarily related to addressing research questions in the mainstream of classics (or relevant sub-discipline), not primarily focussed on interacting with an interdisciplinary ‘digital humanities’.

But this is simplistic: digital research can be transformative, allowing new research questions to be formulated and posed, thus transforming existing communities. DH can enable new forms of inter-disciplinary research. Geographical Information Systems (GIS), together with large historical datasets in digital form, can allow visualizations of spatial data in ways that allow new questions to be asked in, for example, economic history, literature, history of science, linguistics, toponymy, climate studies, etc. New points of contact between these disciplines are created, and also with scientists, social scientists, engineers and technologists in the geographical sciences.

Where are the Digital Humanities?

Digital research in the humanities takes place in a variety of institutional frameworks, from isolated individuals in otherwise non-digital faculties to large specialist centres. There are 22 member organisations in the ‘Network of Expert Centres in the Digital Humanities in Britain and Ireland’, but there is no common template. To give a few partial examples:

  • The Oxford e-Research Centre has a strong DH team and project portfolio, but is not exclusively humanities-focussed, by any means, and the vast majority of DH activity in the university is outside of this department;
  • CRASSH at Cambridge is focussed on the arts and humanities, but is not exclusively digital;
  • The Department of Digital Humanities at KCL is an academic faculty which comes out of a merger of centres and groups who focussed on infrastructure, teaching, and technical development work on research projects;
  • The Institute for Historical Research offers a wide range of facilities and services which assist the researching, teaching, writing and dissemination of history, not all of them digital;
  • The Archaeology Data Service runs a data repository and associated services to support research, learning and teaching in archaeology.

In fact, while there are strong overlaps in activities and organizational forms between many of the centres, there is no easily discernible common factor which is true for all centres.

This network of ‘centres’ risks failing to connect with the large number and wide range of academics engaged with digital research in the humanities who are not associated with one of these centres. The problem is writ larger at the international scale with the wider centernet network. The answer is not necessarily to create and connect more ‘centres/centers’ to encompass the wide range of activity currently outside of them. There is no consensus on what a center should do and how it should fit into an institution, and the very existence of a centre risks detaching practitioners of digital research from the mainstream of their disciplines.

DH@OX aims to provide a view of the wide range of DH activity across the University, and to support this activity in various ways, including facilitating communication and collaboration between researchers, and building better infrastructure and support services, but without imposing any particular boundaries, organisational models or definitions on the ‘digital humanities’.

It remains to be seen which approaches will prove most fruitful in the long term. The Day of Digital Humanities is likely to be a recurrent catalyst for ongoing reflections and discussions for many years to come.

Silos or fishtanks?

Originally posted at blogs.it.ox.ac.uk on April 6, 2012 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The following is a partial summary of a presentation given at the Interedition Symposium in the Hague in March 2012 on the topic of Scholarly Digital Editions, Tools and Infrastructure.

People are often talking about digital silos in the context of digital resources in the humanities. The problem is that resources, although valuable in themselves, are located in different locations on the web, where they might be difficult to find, and they all have their own individual interfaces and registration procedures, and are not connected with similar or related resources. So you can’t easily search the Old English Corpus (available either for download with no software from the OTA, or online via numerous university library portals to local users). Some resources, like the ARCHER corpus, you can’t access at all unless you’re friends with someone at the University of Manchester.

Silo image from Doc Searls (dsearls)

This is clearly far from ideal. But what alternative, more connected, architectures are most appropriate to achieving interoperability and sustainability of the arena of digital textual scholarship? The emergence of fast and high capacity networks, a deluge of data, and web service APIs mean that it is increasingly possible to imagine and build distributed architectures for scholarly services, where data, tools, computing resources, and the outputs of annotation and analysis live in different parts of the network but can be brought together virtually in the user’s desktop environment. The current concerns about ‘digital silos’, in which the outputs of digital humanities projects are deployed online unconnected to other resources, and with limited sustainability, are directly addressed by this vision.

I want to put forward the argument for distributed architectures, while reviewing some of the risks and problems, and survey some current moves towards such an infrastructure. And I also want to suggest another metaphor as an alternative to the ‘silo’.

An open and fully distributed architecture, in which resources are located in different places, has the advantage of allowing the following kinds of service to be created:

  • potentially unlimited functionality, since developers can deploy content and tools that they want to use, and which can interoperate with other data, tools and infrastructure services;
  • building ad hoc collections and corpora across different repositories;
  • complex workflows, for example piping together web services from different locations;
  • protected resources (e.g. works in copyright, sensitive data) curated in situ yet still analysed online via web applications which access the data via a secure infrastructure.

All of this can happen in a situation with a better division of labour than we typically have now: the repositories don’t have to worry about tools; tool and content developers don’t have to worry about creating entire online environments; tool developers don’t have to worry about data management; users don’t have to install software; etc. The emergence of an ‘ecosystem’ with numerous actors providing content, tools, computing resources, and other infrastructure services provides flexibility, resilience and the potential for sustainability which are not possible for a single-site or other more closed or monolithic system.
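
At bottom, a distributed workflow of this kind is just chained HTTP calls: fetch a text from the repository that curates it, post it to an annotation service hosted elsewhere, and hand the result back to the user's environment. The sketch below shows that shape with invented endpoint URLs; it is not a real CLARIN workflow engine such as WebLicht, only an illustration of the division of labour described above.

```python
import urllib.request

def fetch_text(resource_url: str) -> bytes:
    """Step 1: retrieve the source text from the repository that curates it."""
    with urllib.request.urlopen(resource_url) as response:
        return response.read()

def annotate(service_url: str, text: bytes) -> bytes:
    """Step 2: post the text to a remote annotation service and return its output."""
    request = urllib.request.Request(
        service_url, data=text, headers={"Content-Type": "text/plain; charset=utf-8"}
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# Both URLs are invented placeholders for a repository and a tool provider.
# annotated = annotate("https://tools.example.org/tokenise",
#                      fetch_text("https://repository.example.org/texts/123.xml"))
```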

So let’s consider the unconnected, problematic online resource as a fishtank rather than a silo.

Goldfish image from Praveen Gupta (praveengupta)

There are lots of fishtanks out there, and they can be very large, elaborate, pretty, sophisticated, long-standing and sustainable. But they’re all in different places and they are not connected with each other. If you want to see a variety of fish, you have to visit a lot of houses, try to negotiate access to their fishtanks, and make use of whatever facilities they have for viewing or otherwise analysing the fish. Some places are better than others to visit – aquariums might have very good facilities and lots of information, but you still can’t view the fish in one aquarium alongside the fish in another, and it’s hard to compare them.

And if I want to keep a fish I have to build a fishtank and maintain a fishtank, or I could find someone else’s fishtank to put it in, but then it’s difficult for me to get access and control the environment. And who’s going to carry on feeding the fish? We can probably agree that it’s better if we don’t all try to make and look after our own fishtanks, at least not if our main goals are to enable as many people as possible to get into looking after, breeding and sharing fish, and if we want to be able to see a wide variety of fish. Wouldn’t it be better to have an ecosystem where we can all set our fishes free to swim together?

Marine Ecosystem image from www.sciencelearn.org.nz

This way, everyone can access all of the riches of the deep and it’s a lot easier to get into fish research.

Of course, ecosystems can be dangerous places, with predators and diseases, and they can be fragile. You could also argue that what fishkeepers really want is the experience of nurturing their own fish, and the enjoyment of setting up and maintaining their own micro-infrastructure, and therefore fishtanks are the best solution. But there are limits to the applicability and relevance of any metaphor.

There are potential disadvantages to distributed infrastructures, and many of them relate to the additional complexity that they introduce into access and identity management arrangements. Arranging access to services in one location can be hard enough, but authorization to use, for example, textual data in more than one repository might require passing information between institutions. It is also the case that while there are reasonably well-established technologies, procedures and agreements for controlling access to online content, the authorization of web services is not such a well-established area. Furthermore, authorization to access online content cannot easily be extended to authorize access to the computer processing power that is necessary to carry out an online textual analysis, if this is being provided by another centre in the distributed infrastructure. In summary, the fact that distributed services are reliant on cross-institutional agreements and arrangements adds an extra hurdle to participation, as data provider or user, and additional layers of complexity and risk to the robustness of services.

Other potential disadvantages of distributed infrastructures include:

  • Registering persistent identifiers with a shared service becomes desirable to sustain the interoperability of content and applications, thus adding another level of complexity to the curation of the data;
  • Monitoring of usage is difficult, since operations are being carried out on remote servers not under the control of the repository;
  • Monitoring of the availability of services is difficult – it might be possible to test the status of individual components but not a complex workflow;
  • Although underlying interoperability is essential, there is no impetus towards consistency in user interfaces, and even a tendency towards heterogeneity, and therefore fragmentation of services is likely to be maintained or even made worse;
  • Various further questions also remain (at least partially) unanswered in many cases, relating to where and how the computer processing is carried out, and how usage and services are monitored and logged.

We also need agreement at some level about our categories, formats and concepts. To get to the promised land, we need to agree on some standards. Linking datasets requires interoperability at the levels of the linguistic representations, annotations and metadata. Visualization of large datasets requires a reduction of variables, and deciding what is important and what is not. There is a tendency in the humanities for everyone to think that their way of looking at things and of categorizing things is unique. Annotations do sometimes embody the unique intellectual work of identification, categorization and interpretation of phenomena, and these are vital operations in the humanities, so it is not a surprise that this is problematic.

Another problem is that building infrastructure takes time and involves addressing complex and difficult administrative, legal, financial, political and technical barriers, often by making international agreements. So, usually, it’s easier to make ad hoc work-arounds. And building tools can be more attractive and rewarding. But actually, it’s a false opposition – enhanced infrastructure should help with tool development and deployment, by providing a range of simple solutions for connecting together data and tools, deploying them as reliable services, managing authentication and authorization, licensing, access to computing power, monitoring availability, connection to virtual research environments, and so on.

The mistake would be to try to build the perfect all-purpose tool, or to claim to provide services for end-users which solve all of the infrastructure issues. Or to put it another way, building the biggest and best fishtank in the world doesn’t solve the problem, because you can’t get all the fish in the world in there, or allow everyone access to view every kind of configuration and interaction. But all too often this is what people try to do, rather than contributing a part of a wider, distributed system. Understandably, people are impatient, and our efforts and resources go into building new fishtanks, which can be fun to make, and which look good when people come to visit.

The Oxford Text Archive in 2013

Originally posted at blogs.it.ox.ac.uk on January 3, 2013 by Martin Wynne. IT Services at the University of Oxford has decided to delete a large number of historical blogs, and this is one of a number of posts related to the Oxford Text Archive which are being re-published here, after being laboriously retrieved from the archive provided by the Wayback Machine.

The New Year promises to be an exciting one for the Oxford Text Archive. As well as new accessions to the archive, new services and new collaborations, we plan to integrate the archive further into the new research data management services at the University of Oxford. This will involve working more closely with the Bodleian Libraries, who are embarking on a number of ambitious projects to serve the requirements of researchers for working with digital data.

The last year has seen the biggest ever expansion in the archive, with the accession of more than 2,000 texts from the Eighteenth Century Collections Online Text Creation Partnership. These are made available under Creative Commons licences, another new venture for the OTA, and we plan to release future accessions with the relevant CC licence. These texts, along with all other XML resources, are now made available in a variety of formats, including popular ebook formats, converted automatically by the OxGarage web service. We are planning future releases of Early English Books Online (EEBO) texts as they come into the public domain.

The Oxford Text Archive has taken over the management and distribution of the British National Corpus. We are not able to give support for the Xaira software, which continues as an open source project, but we continue to distribute copies of the corpus. In 2013 we will open a consultation with the corpus linguistics community and other stakeholders on how to open up access to the corpus. We aim to make more widely available a BNCWeb service hosted by the National e-Infrastructure Service, with secure authentication for users in educational establishments. The excellent online services listed at http://www.natcorp.ox.ac.uk/ continue to be available.

The OTA also hopes that in 2013 we will be able to make more links with CLARIN infrastructure services and projects. OTA resources are already visible via the CLARIN Virtual Language Observatory, and we hope to participate in the federated content search demonstrator which is being built now. However, proper participation for service centres like the OTA, and for other institutions and individual researchers, does require that UK funders and policymakers finally acknowledge the importance of the emerging European research infrastructure. Regrettably, attempts to engage research councils, JISC and the UK Access Management Federation in these processes continue to founder. Let’s hope for more progress in 2013, and that policy-makers start to act on their promises about building and promoting digital research infrastructure in the UK.

Oxford Text Archive

The blog will focus on:

  • announcements of new resources in the Oxford Text Archive, and new features in the web interface
  • explanations of how the OTA works and why, to help users make the most of it (e.g. how to log in, how SAML2 authentication works)
  • announcements of and reports from events with OTA involvement
  • news about collaborations and connections, e.g. with CLARIN and DARIAH, and with developments in the UK research infrastructure

The blog will represent a personal view from the staff behind the archive, with occasional invited posts from others.