OR2015

Conference Agenda

Overview and details of the sessions of this conference.


 
Session Overview
Date: Tuesday, 09/Jun/2015
11:00am - 12:30pm  P1B: Cultural Heritage, Museums, and Archives
Session Chair: Carol Minton Morris
Regency E-F 
 

tranScriptorium: computer-aided, crowd-sourced transcription of handwritten text (for repositories?)

Rory McNicholl, Timothy Miles-Board

University of London, United Kingdom

Over the past 10+ years, significant investment has been made by various European cultural heritage organisations in digitising historical collections of handwritten documents. The output of these digitisation projects may end up in a repository, improving access to document images. Can this access be further enhanced?

The tranScriptorium project is a European Commission FP7-funded project (2013-2015) that brings together a suite of tools for computer-aided transcription and enhancement of digitised handwritten material. These software tools include those for document image analysis (DIA), developed by the National Centre for Scientific Research (Greece); handwritten text recognition (HTR), developed by the Universitat Politecnica de Valencia (Spain); and natural language models (NLM), developed by the Institute of Dutch Lexicology, Universiteit Leiden (Netherlands). Because the project required that these tools be available to other systems, they have been developed to operate as software services.

The project also included the development of a desktop application (University of Innsbruck, Austria) and a crowd-sourcing platform (University College London and University of London Computer Centre, UK) that use the DIA, HTR and NLM outputs to provide computer-aided transcription, with the aim of improving the efficiency and reducing the cost of transcribing handwritten documents.
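
As a rough illustration of the service-oriented design described above, the sketch below posts a page image to a handwriting recognition service and polls for the resulting transcription. The endpoint, parameters and response fields are illustrative assumptions, not the tranScriptorium API.

    # Hypothetical sketch of calling an HTR service exposed over HTTP.
    # The URL, form fields and JSON keys are assumptions for illustration only.
    import time
    import requests

    HTR_ENDPOINT = "https://htr.example.org/api/transcriptions"  # hypothetical

    def transcribe_page(image_path, language_model="dutch-18c"):
        with open(image_path, "rb") as fh:
            resp = requests.post(
                HTR_ENDPOINT,
                files={"image": fh},
                data={"language_model": language_model},
                timeout=60,
            )
        resp.raise_for_status()
        job_url = resp.json()["job_url"]          # assumed asynchronous job pattern
        while True:                               # poll until the service finishes
            job = requests.get(job_url, timeout=30).json()
            if job["status"] == "done":
                return job["transcription"]
            time.sleep(5)

    if __name__ == "__main__":
        print(transcribe_page("page_042.tif"))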

McNicholl-tranScriptorium-87_a.doc

The Media Ecology Project – Online Access for Scholars Adds Value to Media Archives

Mark Williams1, John Bell1, Mark Cooper2

1Dartmouth College, United States of America; 2University of South Carolina

The Media Ecology Project (MEP) is a coalition of scholars, archivists, and technologists dedicated to expanding the scope of interaction between the academy and the archive. MEP enables new forms of digital access to and scholarly analysis of moving image collections and visual culture more generally. The scope of MEP’s work toward this goal includes exploring new methods of critical human and computational analysis of media, developing networks between institutions that expose existing archival collections to new audiences, and building tools that facilitate automated sharing of rich cultural data and metadata among software platforms.

MEP is designed to promote efficient cooperation and produce motivated engagement with cultural memory artifacts by academic and scholarly communities. We support close textual studies of the subject matter, production, reception, and representational practices of media. In doing so, MEP also seeks to advance fields of scholarship surrounding these materials and promote a greater understanding of the development and impact of historical media. Raising awareness of these important historical collections is the first step to protecting and sustaining them.

MEP has engaged a wide variety of individuals and institutions to develop a network of stakeholders committed to working to advance its goals.

Williams-The Media Ecology Project – Online Access for Scholars Adds Value-141.pdf

Building the Perfect Repository for Archival Collections: Lessons Learned from the Henry A. Kissinger Papers Project

Kevin L. Glick, Rebecca Hirsch, Steelsen Smith

Yale University Library, United States of America

A vision for the perfect repository necessarily incorporates rights management and system integration, but, more importantly, is based upon the needs of researchers. Archival collections require access to rich descriptive content and easy browsing of the hierarchical arrangements and archival bonds of collections, which cannot be adequately represented by systems designed for monographs, serials, or stand-alone still images. This presentation demonstrates how the Kissinger Papers project at Yale was used as an opportunity to conceive of, and develop, a repository tailored to these needs that would be flexible enough for all types of archival collections. The presentation also addresses a unique way to handle rights management that allows for the digitization of an entire collection while still maintaining granular control over researcher access and requesting workflows.

Glick-Building the Perfect Repository for Archival Collections-192.pdf
 
1:30pm - 3:00pm  P2B: Image Management
Session Chair: Jonathan Markow
Regency E-F 
 

Mirador: A Cross-Repository Image Comparison and Annotation Platform

Robert Sanderson, Stuart Snydman, Drew Winget, Benjamin Albritton, Tom Cramer

Stanford University, United States of America

The Mirador viewer embodies the promise of Open Repositories from which content may be accessed on the web by any system, rather than only by the repository's own user-facing client. In an ecosystem of truly open repositories, which host open access content and are built on common infrastructure and common APIs, Mirador enables new forms of scholarship and publication by bringing disparate content together in a single image viewing platform. As cultural heritage organizations move towards openly sharing content of all types from their repositories, web applications like Mirador can easily enable comparison, analysis and innovation across repository boundaries.
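
Viewers such as Mirador can address any compliant repository because image requests follow the IIIF Image API URI pattern. A minimal sketch of building such request URLs is shown below; the host and image identifier are hypothetical, while the region/size/rotation/quality.format structure comes from the Image API.

    # Build IIIF Image API request URLs. Host and identifier are hypothetical.
    BASE = "https://iiif.example.edu/image/2"   # hypothetical image service

    def iiif_image_url(identifier, region="full", size="full",
                       rotation="0", quality="default", fmt="jpg"):
        return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

    # Full image, and a detail crop (x,y,w,h) scaled to 500 pixels wide:
    print(iiif_image_url("ms0042_f12r"))
    print(iiif_image_url("ms0042_f12r", region="1000,800,600,400", size="500,"))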

Sanderson-Mirador-226.pdf

Crowdsourcing of image metadata: Say what you see

Claire Knowles1, Sukdith Punjasthitkul2, Scott Renton1, Gavin Willshaw1, Geoff Kaufman2, Mary Flanagan2

1University of Edinburgh, United Kingdom; 2Dartmouth College, United States of America

Digitised archival, museum, gallery and library content often lacks metadata about the subjects depicted in an image, as the only metadata available describes the source object: the name of the author of a digitised book is known, for example, but not that an image within it shows a bird. Crowdsourcing games allow this missing data to be captured through mass participation, with users describing what they can see. The integration of user-generated tags aids discovery of cultural heritage collections through the enhanced search terms they provide.

Tiltfactor Laboratory at Dartmouth College has developed Metadata Games http://www.metadatagames.org as an online platform for gathering user-generated tags on photo, audio, and video collections. In tandem, the University of Edinburgh has developed its own crowdsourcing game to improve discoverability of its collections.

This presentation will discuss our experiences with crowdsourcing games, including the challenges encountered and workflows for integrating user-generated (folksonomic) tags with authoritative data to aid discovery. Preliminary data will be presented illustrating the extent to which user-generated tags enhance search at each institution and increase traffic to online collections.

Finally, there will be information on how you can use Metadata Games to enhance the discoverability of your institution's collections.
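
As a simple illustration of one way user-generated tags can be folded into authoritative records (not necessarily the Metadata Games workflow), the sketch below keeps a crowd tag only once a minimum number of independent players have supplied it. All data are invented.

    # Accept a crowd tag only when enough independent players agree on it.
    from collections import Counter

    def accepted_tags(player_tags, threshold=3):
        """player_tags: one list of free-text tags per player, for one image."""
        counts = Counter(tag.strip().lower()
                         for tags in player_tags for tag in set(tags))
        return sorted(tag for tag, n in counts.items() if n >= threshold)

    record = {"title": "Ornithologia, plate XII", "subjects": ["Natural history"]}
    crowd = [["bird", "engraving"], ["Bird", "tree"], ["bird", "beak"], ["engraving"]]
    existing = {s.lower() for s in record["subjects"]}
    record["subjects"] += [t for t in accepted_tags(crowd) if t not in existing]
    print(record["subjects"])   # ['Natural history', 'bird']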

Knowles-Crowdsourcing of image metadata-73_a.pdf
Knowles-Crowdsourcing of image metadata-73_b.pptx

Authenticated Access to Distributed Image Repositories

Robert Sanderson1, Jon Stroop2, Simeon Warner3, Michael Appleby4

1Stanford University; 2Princeton University; 3Cornell University; 4Yale University

An increasing percentage of the world's cultural heritage is online and available in the form of digital images, served from open repositories hosted by memory, research and commercial organizations. However, access to the digital surrogates may be complicated by a number of factors: paywalls that serve to sustain the host institution, copyright concerns, curatorial arrangements with donors, or other constraints that necessitate restrictions on access to high-quality images. Images are also often the carrier for scientific and research information, particularly in the medical and biological domains, and in many of these cases the images cannot be openly available because of personal privacy concerns.

The International Image Interoperability Framework (IIIF) has made great strides in bringing the world's image repositories together around a common technical framework. With its membership now boasting nine national libraries, many top-tier research institutions, national and international cultural heritage aggregators, plus commercial companies and other projects, use cases such as those above have raised authentication and authorization to the top of the “must-have” list of features needed to ensure continued rapid adoption. This presentation will describe the IIIF authentication use cases and challenges, then outline and demonstrate the proposed solution.
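
One plausible token-based pattern for accessing a protected image service is sketched below. This is a generic illustration, not the IIIF authentication proposal itself; all URLs and field names are hypothetical.

    # Probe a protected image info document, authenticate, and retry with a token.
    # URLs and JSON field names are hypothetical.
    import requests

    INFO_URL = "https://images.example.edu/iiif/ms0042_f12r/info.json"

    def fetch_info(session, token=None):
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        return session.get(INFO_URL, headers=headers, timeout=30)

    with requests.Session() as s:
        resp = fetch_info(s)
        if resp.status_code == 401:
            services = resp.json().get("service", {})      # assumed structure
            s.post(services.get("login", ""), data={"user": "alice", "password": "secret"})
            token = s.get(services.get("token", "")).json().get("accessToken")
            resp = fetch_info(s, token)
        print(resp.status_code)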

Sanderson-Authenticated Access to Distributed Image Repositories-47.pdf
 
3:30pm - 5:30pm  P3B: Managing Rights
Session Chair: Amy Buckland
Regency E-F 
 

[24x7] A Request to Vet System for Opening Potentially Culturally Sensitive Material

Scott Ziegler

American Philosophical Society, United States of America

Use restrictions are often imposed by donors or copyright law. In our case, it’s a self-imposed starting point as we re-think our relationship with the many Native American communities whose material we hold.

Last year, the American Philosophical Society Library (APS), an independent research library in Philadelphia, adopted protocols that help standardize the use of material that Native American communities consider culturally sensitive. During the same year, a large collection was scanned and added to the APS digital library. Specific items within the collection are likely to be culturally sensitive. To ensure that we act in accordance with our protocols, we will restrict every item until it has been vetted by a staff member.

We have created a “request to vet” process by which members of the scholarly community can request that our staff review a particular item. If an item is cleared of sensitivity concerns, it is freely available through our digital library. If there are questions about its status, additional Native American partners are asked to review it.

This talk discusses the balance between openness and cultural sensitivity and presents our use case for walking the thin line between these two important principles.

Ziegler-24x7 A Request to Vet System for Opening Potentially Culturally Sensitive Material-36.pptx

[24x7] Growing Hydraheads at Yale University Library

Eric Robert James

Yale Library, United States of America

To offer an interface for the library's digital collections and archives, Yale Library has adopted the Hydra stack for what are currently three access interfaces: findit, an application currently supporting nine special collections and containing approximately 700k objects; the Henry Kissinger Papers, which when complete will contain approximately 1.7m images; and the Yale Indian Papers Project, a small collection of approximately 2k objects. This presentation summarizes key customizations and features, including ingest, contextual navigation, fulltext search, image and transcript viewing, and ongoing work with authentication and authorization.

James-24x7 Growing Hydraheads at Yale University Library-20.ppt

YOU MUST COMPLY!!! Funder mandates and OA compliance checking

Richard Jones

Cottage Labs, United Kingdom

Open Access compliance checking is currently a task carried out by humans, and there is no single place to look for the relevant information. This makes it time-consuming and a prime candidate for total or partial automation. Being able to quickly and easily check the compliance of an article, or a set of articles, will benefit both institutions and funders.

In this presentation we will look at the main aspects of compliance that funders tend to look for, such as licence conditions, embargoes, and self-archiving in repositories. Through three projects that have run in the UK over the past year, we will explore current progress in this space, from the technical underpinnings of solutions (involving connections out to multiple APIs and text analysis of articles and metadata) to the more refined user-facing tools that make engaging with the data viable for non-technical users.
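
As a sketch of one automated signal such tools can draw on, the example below asks the Crossref REST API what licence metadata is registered against a DOI. A real compliance checker would combine this with embargo rules, funder policies and repository deposit records; the DOI shown is a placeholder.

    # Query Crossref for licence metadata attached to a DOI (one compliance signal).
    import requests

    def crossref_licences(doi):
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
        resp.raise_for_status()
        return resp.json()["message"].get("license", [])

    def looks_open(licences):
        # Crude heuristic: treat any Creative Commons licence URL as open.
        return any("creativecommons.org" in lic.get("URL", "") for lic in licences)

    if __name__ == "__main__":
        doi = "10.1234/example.doi"   # placeholder DOI
        lics = crossref_licences(doi)
        print(doi, "open licence found" if looks_open(lics) else "no open licence metadata")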

Jones-YOU MUST COMPLY!!! Funder mandates and OA compliance checking-116_a.pdf

Panel: DMCA takedown notices: managing practices from the perspective of institutional repositories

Simone Sacchi1, Kathryn Pope1, Katie Fortney2, George Porter3, Donna Ferullo4

1Columbia University, Center for Digital Research and Scholarship; 2University of California, California Digital Library; 3California Institute of Technology, Caltech Library; 4Purdue University, University Copyright Office

A Digital Millennium Copyright Act (DMCA) takedown notice can result in a time-consuming and confusing process for a repository manager. The rights and responsibilities of the repository and the copyright claimant are often clouded by historical changes in copyright law, variations in the law of different countries, and commonly held misconceptions about copyright ownership.

This panel presents an opportunity for repository managers to strategize about best approaches to DMCA takedown notices. The panelists--representing repositories varying in size, scope, and staffing--will recount their experiences with takedown notices, outlining steps taken and policies implemented in response, and evaluating the effectiveness and implications of different philosophical and practical approaches.

The organizers aim to empower repository managers to more proactively respond to takedown notices with an increased understanding of their options under the DMCA.

Sacchi-Panel DMCA takedown notices managing practices from the perspective-151.doc
 

Date: Wednesday, 10/Jun/2015
10:30am - 12:30pm  P4B: Supporting Open Scholarship and Open Science
Session Chair: David Wilcox
Regency E-F 
 

Panel: Avalon Media System: Community Implementation and Sustainability

Jon W. Dunn1, Mike Durbin3, Hannah Frost4, Debs Cane2, Julie Rudder2

1Indiana University; 2Northwestern University; 3University of Virginia; 4Stanford University

Indiana University and Northwestern University, in collaboration with nine partner institutions, recently completed the last year of a three-year IMLS-funded effort to build the Avalon Media System, an open source solution for managing and providing access to digital audio and video collections, based on Fedora and the Hydra repository software development framework. As the Avalon platform reaches maturity, several institutions are in the process of implementing Avalon both to replace current time-based media access solutions and to support new use cases. In addition, new funding from the Andrew W. Mellon Foundation will support continued work to develop new features, grow and provide support for the community of adopters, and move Avalon towards organizational and financial sustainability.

This panel will bring together project leaders from Indiana and Northwestern, along with Avalon community members at the University of Virginia and Stanford University, to share experiences of implementing Avalon at their institutions, integrating Avalon with other local systems, and supporting Avalon to enable a variety of use cases in research, teaching, and learning. Panel members will also discuss future development plans and provide a preview of how the project intends to transition from a grant-supported endeavor to a community-sustained solution.

Dunn-Panel Avalon Media System Community Implementation and Sustainability-159_a.pptx

Prototypes of pro-active approaches to support the archiving of web references for scholarly communications

Richard Wincewicz1, Peter Burnhill1, Herbert Van de Sompel2

1University of Edinburgh, United Kingdom; 2Los Alamos National Laboratory, USA

The web is a fluid environment and web pages often change in nature or disappear altogether. Scholarly articles reference web pages to support the author's arguments, but these references are susceptible to 'reference rot', and without them the evidence for the arguments is lost. Reference rot is a combination of link rot and content drift: link rot occurs when the URI of a reference is no longer available, and content drift occurs when the content at the URI differs from what the author originally referenced.

The addition of temporal references and the pro-active archiving of referenced content give future readers the ability to examine the supporting evidence for an article even if the content has ceased to exist in its original location, ensuring continued integrity of, and long-term access to, the scholarly record.

The Hiberlink project (http://hiberlink.org, #Hiberlink) created plugins for a number of systems that allow for pro-active archiving with the minimum of disruption to the user’s usual workflow. Plugins for Zotero and OJS have been built along with infrastructure that allows references to be archived in multiple locations.
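
The archiving step itself can be as simple as the sketch below, which assumes the Internet Archive's public "Save Page Now" endpoint and records the snapshot URL and archiving date so a temporal reference can sit alongside the original link. Response details vary, so both the Content-Location header and the final redirected URL are checked.

    # Pro-actively archive a referenced web page and note when it was captured.
    from datetime import datetime, timezone
    import requests

    def archive_reference(url):
        resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
        resp.raise_for_status()
        snapshot = resp.headers.get("Content-Location") or resp.url
        if snapshot.startswith("/"):
            snapshot = "https://web.archive.org" + snapshot
        return {
            "original-url": url,
            "snapshot-url": snapshot,
            "archived-at": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        # A citing article would store all three values next to the reference.
        print(archive_reference("http://hiberlink.org"))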

Wincewicz-Prototypes of pro-active approaches to support the archiving of web references-153_a.pptx

Repository Power: How Repositories can support Open Access Mandates

Pedro Principe1, Najla Rettberg2, Jochen Schirrwagen3, Eloy Rodrigues1, José Carvalho1, Paolo Manghi4, Natalia Manola5

1University of Minho, Portugal; 2University of Goettingen, Germany; 3University of Bielefeld, Germany; 4Consiglio Nazionale delle Ricerche, Italy; 5National Kapodistrian University of Athens, Greece

Many funding agencies have Open Access mandates in place, but how often are scientific publications, as outputs, linked to funding details? Linking funding information to publications as part of the deposit workflow can assist in adhering to Open Access mandates. This paper examines how OpenAIRE – Open Access Infrastructure for Research in Europe – can ease Open Access monitoring and reporting processes for funders, and presents some results and opportunities. It also outlines how OpenAIRE relies on cleaned and curated repository content, a vital cog in the ever-turning wheel of the global scholarly landscape, and the benefits this brings.
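
In practice the link is a structured value recorded in the deposit metadata. The sketch below builds the info:eu-repo grantAgreement string that the OpenAIRE guidelines expect in a dc:relation field; the project number is hypothetical, and the optional jurisdiction, name and acronym segments are omitted.

    # Build an OpenAIRE-style funding link for a repository record.
    def grant_agreement(funder, programme, project_id):
        return f"info:eu-repo/grantAgreement/{funder}/{programme}/{project_id}"

    record = {
        "dc:title": "Example article",
        "dc:relation": [grant_agreement("EC", "FP7", "123456")],   # hypothetical project
        "dc:rights": "info:eu-repo/semantics/openAccess",
    }
    print(record["dc:relation"][0])   # info:eu-repo/grantAgreement/EC/FP7/123456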

Principe-Repository Power-166_b.pdf
 
1:30pm - 3:30pm  P5B: Metadata / Exploring Metrics and Assessment
Session Chair: Elin Stangeland
Regency E-F 
 

User Search Terms and Controlled Subject Vocabularies in an Institutional Repository

Scott Hanrath, Erik Radio

University of Kansas, United States of America

Controlled vocabularies are an important mechanism for ensuring consistency in a repository and are necessary for maximal collocation when searching by subject. The University of Kansas Libraries is in the process of implementing FAST as a subject vocabulary for its institutional repository, KU ScholarWorks. Of the metadata fields used for retrieval, subjects are particularly valuable, allowing for a type of collocation less easily achieved through titles or abstracts. However, the quality of subject terms can vary based on the policies guiding their selection. If controlled vocabularies present a solution for reducing metadata 'noise', one must also consider the search behavior of the user. How well do user queries align with a controlled vocabulary, and what level of effort is required to reconcile legacy subject terms with a new vocabulary? Our analysis uses search queries which led users to items in our repository. These queries are reconciled against FAST terms, legacy subject terms, and more broadly across repository records to determine the potential effects on search behavior. The effort required to reconcile legacy metadata will be considered as the repository seeks to reconcile its history of uncontrolled language with a more systematic and extensible vision for the future.
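
The kind of reconciliation described can be illustrated with a small sketch: normalise incoming search queries and check whether they match a controlled (FAST) term, only a legacy subject string, or neither. All terms below are invented.

    # Compare user queries against controlled and legacy subject terms.
    import re

    def normalise(term):
        return re.sub(r"[^a-z0-9 ]", "", term.lower()).strip()

    fast_terms   = {normalise(t) for t in ["Climate change", "Water quality", "Prairies"]}
    legacy_terms = {normalise(t) for t in ["climate-change impacts", "water", "prairie ecology"]}
    queries      = ["climate change", "water quality kansas", "prairie ecology", "tallgrass"]

    for q in queries:
        nq = normalise(q)
        hits = [label for label, vocab in (("FAST", fast_terms), ("legacy", legacy_terms))
                if any(nq == t or nq in t or t in nq for t in vocab)]
        print(f"{q!r}: matches {hits or 'nothing'}")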

Hanrath-User Search Terms and Controlled Subject Vocabularies-202_b.pptx

Metadata at a crossroads: shifting ‘from strings to things’ for Hydra North

Sharon Farnel

University of Alberta Libraries, Canada

At the University of Alberta Libraries we are currently developing a Digital Asset Management System (‘Hydra North’, built on Hydra and Fedora 4) to bring all of our digital assets into one platform for discovery, access and preservation. The metadata underlying these repositories has been created according to many standards (DC, MODS, EAD, etc.) and varies in level of fullness and overall quality. We find ourselves at a ‘metadata crossroads’ as we attempt to bring this disparate metadata together. We see a solution in a move to RDF and the application of the principles of linked data. In this presentation we will discuss some of the initial questions we asked ourselves as we tried to fully grasp what the move to RDF and linked data would mean for our existing metadata; outline some of the decisions we made along the way, and why, and what the impact has been; provide concrete examples of the thought processes and workflows involved in moving from existing non-RDF metadata to RDF, based on the principles of linked data; provide an update on progress to date; reflect on lessons learned and outline next steps.
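
A minimal sketch of the "strings to things" move, using rdflib: the legacy free-text subject stays as a literal for display, while the same concept is also asserted as an authority URI. The item and subject URIs below are hypothetical.

    # Turn one legacy string subject into a linked-data statement with rdflib.
    from rdflib import Graph, URIRef, Literal, Namespace
    from rdflib.namespace import DCTERMS

    ERA = Namespace("https://repository.example.ca/item/")     # hypothetical items
    item = ERA["abc123"]

    g = Graph()
    g.bind("dcterms", DCTERMS)

    # Legacy record: subject kept only as a free-text string.
    g.add((item, DCTERMS.subject, Literal("Fur trade -- Canada -- History")))

    # Linked-data version: the same concept identified by an authority URI.
    concept = URIRef("http://id.example.org/authorities/subjects/sh0000000")
    g.add((item, DCTERMS.subject, concept))

    print(g.serialize(format="turtle"))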

Farnel-Metadata at a crossroads-206_a.pdf
Farnel-Metadata at a crossroads-206_b.pptx

"How much?": Aggregating usage data from Repositories in the UK

Ross MacIntyre1, Paul Needham2, Jo Alcock3, Jo Lambert1

1Jisc, United Kingdom; 2Cranfield University, United Kingdom; 3Evidence Base, Birmingham City University, United Kingdom

IRUS-UK is a national standards-based statistics aggregation service for repositories in the UK, provided by Jisc. The service processes raw usage data from repositories, consolidating those data into COUNTER-compliant statistics by following the rules of the COUNTER Code of Practice – the same code adhered to by the majority of scholarly publishers. This will, for the first time, enable UK repositories to provide consistent, comparable and trustworthy usage data, as well as supporting opportunities for benchmarking at a national level. This talk provides some context on the development of the service, the benefits and opportunities it offers, an institutional repository perspective, and future plans.
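
As a simplified illustration of the kind of processing rule involved, the sketch below applies a "double-click" filter to raw usage events, discarding repeat requests for the same item by the same user within a short window (30 seconds for PDFs in the COUNTER Code of Practice, which retains the later of the two clicks; this version keeps the earlier one, giving the same totals). Real aggregation also handles robot exclusion, report formats and more.

    # Filter double-clicks out of raw (user, item, timestamp) usage events.
    from datetime import datetime, timedelta

    events = [   # made-up log lines
        ("u1", "item42", datetime(2015, 6, 9, 11, 0, 0)),
        ("u1", "item42", datetime(2015, 6, 9, 11, 0, 12)),   # double-click: dropped
        ("u1", "item42", datetime(2015, 6, 9, 11, 5, 0)),
        ("u2", "item42", datetime(2015, 6, 9, 11, 0, 5)),
    ]

    def counter_filter(events, window=timedelta(seconds=30)):
        last_seen, kept = {}, []
        for user, item, ts in sorted(events, key=lambda e: e[2]):
            if (user, item) not in last_seen or ts - last_seen[(user, item)] > window:
                kept.append((user, item, ts))
            last_seen[(user, item)] = ts
        return kept

    print(len(counter_filter(events)), "countable downloads")   # 3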

MacIntyre-How much-5_a.pptx

Incorporating COUNTER compliant download statistics into an EPrints repository

Alan Stiles

The Open University, United Kingdom

Researchers are taking repository download statistics more seriously than ever before and are citing them in funding bids as evidence of previous impact. Repository staff are receiving more enquiries relating to download statistics as time goes on, so the importance of having the most accurate and reliable statistics available is becoming clear. Making those statistics available in a way that requires the least intervention from repository staff would free up time for other repository tasks, such as deposit review.

In this paper, I look at the issues faced in incorporating into the repository statistics from IRUS-UK (Institutional Repository Usage Statistics UK) – a Jisc-supported, centralised system which collates download statistics for 75 UK repositories, ensuring better-than-COUNTER compliance. I also consider the choices to be made with regard to comparisons with locally derived statistics (IRStats2), and how to obtain and present them without overburdening the central system.

Finally, I look at what opportunities the near future may hold, with reference to my participation in the IRUS-UK Community Advisory Group and the NISO SUSHI-Lite Working Group, which is establishing a lightweight RESTful standard for SUSHI services, and how these might be incorporated into the repository.
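
A sketch of what pulling item-level statistics into a repository over a lightweight REST call might look like is given below. Because SUSHI-Lite was still a draft, the endpoint, parameters and JSON shape are hypothetical; treat it as a pattern rather than a working client.

    # Fetch item-level download counts over a hypothetical SUSHI-Lite-style API.
    import requests

    STATS_ENDPOINT = "https://stats.example.ac.uk/sushilite/GetReport"   # hypothetical

    def item_downloads(item_id, begin, end):
        resp = requests.get(
            STATS_ENDPOINT,
            params={
                "Report": "IR1",            # assumed item-level report name
                "ItemIdentifier": item_id,
                "BeginDate": begin,
                "EndDate": end,
                "Format": "json",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return sum(p.get("count", 0) for p in resp.json().get("downloads", []))

    # Cache results locally so repository pages do not hit the central service on every view.
    # print(item_downloads("oro:12345", "2015-01-01", "2015-05-31"))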

Stiles-Incorporating COUNTER compliant download statistics into an EPrints repository-25_a.pptx
Stiles-Incorporating COUNTER compliant download statistics into an EPrints repository-25_b.docx
 
4:00pm - 5:00pm  P6B: Ideas Challenge
Session Chair: Adam Field
Session Chair: Claire Knowles
Regency E-F