Overview and details of the sessions of this conference.
|Date: Tuesday, 09/Jun/2015|
|8:00am - 9:00am||Breakfast|
|8:00am - 5:30pm||Registration|
|9:00am - 9:30am||Welcome to OR2015|
|9:30am - 10:30am||PLN1: Opening Plenary: Leveraging the Web for Research, Kaitlin Thaney (Mozilla Science Lab)|
|10:30am - 11:00am||Break|
|11:00am - 12:30pm||P1A: Linked Open Data (LOD)|
Session Chair: Simeon Warner
Fedora4: The Open Linked Data Platform
DuraSpace, United States of America; Stanford University
Linked Open Data has moved from being a buzzword to a fundamental building block of modern repositories and information systems. Its explosion, which in this domain takes the form of scholarly and scientific datasets, publications, annotations, cultural heritage descriptions and other repository-based content, offers an unprecedented opportunity for scientific and societal advancement. It is the interconnections that integrate systems and resources, however, that turn disparate ideas into unanticipated solutions.
The Open Repositories community largely understands the value of linked data. The trouble has been in answering the question of “how?” to do this together, rather than “why?” do it at all. In order to be effective, we need to have solid guidelines, common practices, and well specified toolsets and products. A confluence of developments has opened the door to exactly this.
In October of 2012, the initial W3C working draft of the Linked Data Platform 1.0 (LDP) document was published. In July of 2012, the demand for a next-generation Fedora platform was channeled into the Fedora 4 (F4) project, which at the time was termed Fedora Futures. The alignment of these two stars set in motion events that provided both requirements and solutions for the community.
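As a concrete sketch of what the LDP alignment means in practice, the snippet below builds the HTTP POST that asks an LDP server to create a new resource inside a container. The endpoint URL and the Turtle body are illustrative only; the POST-with-Slug pattern for basic containment comes from the LDP specification.

```python
import urllib.request

def make_ldp_create(container_url, slug, turtle_body):
    """Build an HTTP POST that asks an LDP server to create a new
    resource inside the container at container_url."""
    req = urllib.request.Request(container_url,
                                 data=turtle_body.encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "text/turtle")
    req.add_header("Slug", slug)  # suggested name for the new resource
    return req

req = make_ldp_create("http://localhost:8080/rest/objects", "book-1",
                      '<> <http://purl.org/dc/elements/1.1/title> "Example item" .')
print(req.get_method(), req.get_header("Slug"))  # → POST book-1
```

The server is free to ignore the Slug hint; the Location header of the response reports the URI it actually minted.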
Ozmeka: extending the Omeka repository to make linked-data research data collections for all research disciplines
University of Technology Sydney, Australia; Intersect Limited, Australia
The Ozmeka project is an Australian open source project to extend the Omeka repository system. Our aim is to support Open Scholarship, Open Science, and Cultural Heritage via repository software that can manage a wide range of Research (and Open) Data, both Open and access-restricted, providing rich repository services for the gathering, curation and publishing of diverse data sets. The Ozmeka project places a great deal of importance on *integrating with external systems*, to ensure that research data is linked to its context, and that high-quality identifiers are used for as much metadata as possible. This will include links to the ‘traditional’ staples of the Open Repositories conference series, publications repositories, and to the growing number of institutional and discipline research data repositories. In this presentation we will take a critical look at how the Omeka system, extended with Ozmeka plugins and themes, can be used to manage three diverse projects, and talk about how this work paves the way for eResearch and repository support teams to supply similar services to researchers in a wide variety of fields. This work is intended to reduce the cost and complexity of creating new research data repository systems.
Why FRBRoo and CIDOC CRM are great for expressing (Linked, Open) Ethnographic Research Data
University of Prince Edward Island, Canada
Can we use a Linked Open Data framework in place of classic static metadata in a Fedora Repository? We describe how we catalogued a collection of interrelated ethnographic research data using the FRBRoo and CRM ontologies. These vocabularies allowed us to express detailed relationships between digital objects and entities (people, places, events, concepts) in a more nuanced way than traditional bibliographic metadata schemas such as MODS and MADS. Our implementation uses a network of entities in a Fedora repository, with the CRM and FRBRoo properties in the Mulgara triplestore, to catalogue data from an ethnographic project in a way that will drive the Islandora display, while allowing the network to be queried by researchers.
We describe the process of transforming our data set, consisting of audio and video recordings; photographs; biographical information; music notation files; and textual content, for inclusion into this RDF-centric repository, and the challenges encountered in modelling born-digital content in vocabularies designed to contain surrogates for physical objects.
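To illustrate the kind of entity network the abstract describes, the sketch below serializes a few event-centric statements as N-Triples. The repository namespace and resource names are hypothetical; the CIDOC CRM class and property names (E5_Event, P11_had_participant, P7_took_place_at) follow the published CRM RDF vocabulary, but check them against the version your triplestore loads.

```python
CRM = "http://www.cidoc-crm.org/cidoc-crm/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
EX = "http://example.org/ethno/"   # hypothetical repository namespace

def nt(s, p, o):
    """One N-Triples statement; the object may be a URI or a plain literal."""
    obj = "<%s>" % o if o.startswith("http") else '"%s"' % o
    return "<%s> <%s> %s ." % (s, p, obj)

# An interview modelled as an event linking a person and a place,
# rather than as a flat descriptive record.
triples = [
    nt(EX + "interview-42", RDF_TYPE, CRM + "E5_Event"),
    nt(EX + "interview-42", CRM + "P11_had_participant", EX + "person-7"),
    nt(EX + "interview-42", CRM + "P7_took_place_at", EX + "place-3"),
]
print("\n".join(triples))
```

Because each relationship is a first-class triple, researchers can query the network (e.g. "all events this person participated in") instead of scanning record-level metadata.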
|11:00am - 12:30pm||P1B: Cultural Heritage, Museums, and Archives|
Session Chair: Carol Minton Morris
tranScriptorium: computer-aided, crowd-sourced transcription of handwritten text (for repositories?)
University of London, United Kingdom
Over the past 10+ years, significant investment has been made by various European cultural heritage organisations in digitising historical collections of handwritten documents. The output of these digitisation projects may end up in a repository, improving access to document images. Can this access be further enhanced?
The Transcriptorium project is a European Commission FP7 funded project (2013-2015) that brings together a suite of tools for the purpose of computer-aided transcription and enhancement of digitised handwritten material. These software tools include those for document image analysis (DIA) developed by the National Centre for Scientific Research (Greece), handwritten text recognition (HTR) developed by the Universitat Politècnica de València (Spain) and natural language models (NLM) developed by the Institute of Dutch Lexicology, Universiteit Leiden (Netherlands). As the project required that these tools be available to other systems, they have been developed to operate as software services.
The project included the development of a desktop application (University of Innsbruck, Austria) and a crowd-sourcing platform (University College London and University of London Computer Centre, UK) that use the DIA, HTR and NLM outputs to arrive at computer aided transcription solutions, designed with the aim of improving efficiency and reducing cost of the transcription of handwritten documents.
The Media Ecology Project – Online Access for Scholars Adds Value to Media Archives
Dartmouth College, United States of America; University of South Carolina
The Media Ecology Project (MEP) is a coalition of scholars, archivists, and technologists dedicated to expanding the scope of interaction between the academy and the archive. MEP enables new forms of digital access to and scholarly analysis of moving image collections and visual culture more generally. The scope of MEP’s work toward this goal includes exploring new methods of critical human and computational analysis of media, developing networks between institutions that expose existing archival collections to new audiences, and building tools that facilitate automated sharing of rich cultural data and metadata among software platforms.
MEP is designed to promote efficient cooperation and produce motivated engagement with cultural memory artifacts by academic and scholarly communities. We support close textual studies of the subject matter, production, reception, and representational practices of media. In doing so, MEP also seeks to advance fields of scholarship surrounding these materials and promote a greater understanding of the development and impact of historical media. Raising awareness of these important historical collections is the first step to protecting and sustaining them.
MEP has engaged a wide variety of individuals and institutions to develop a network of stakeholders committed to working to advance its goals.
Building the Perfect Repository for Archival Collections: Lessons Learned from the Henry A. Kissinger Papers Project
Yale University Library, United States of America
A vision for the perfect repository necessarily incorporates rights management and system integration, but, more importantly, is based upon the needs of researchers. Archival collections require access to rich descriptive content and easy browsing of the hierarchical arrangements and archival bonds of collections that cannot be adequately represented with systems designed for monographs, serials, or stand-alone still images. This presentation demonstrates how the Kissinger Papers project at Yale was used as an opportunity to conceive of, and develop, a repository tailored to these needs that would be flexible enough for all types of archival collections. The presentation also addresses a unique way to handle rights management that allows for the digitization of an entire collection while still maintaining granular control over researcher access and requesting workflows.
|11:00am - 12:30pm||P1C: Re-using Repository Content|
Session Chair: Elin Stangeland
[24x7] A basic tool for migrating to Fedora 4
University of Virginia Library, United States of America
This short presentation will cover the design and features of a simple tool developed to migrate content from earlier versions of Fedora to Fedora 4. While focused principally on moving and reformatting metadata, considerations regarding feature compatibility will also be touched upon.
[24x7] Virtual Local Repositories
Boston Public Library, United States of America
A viable repository system is often out of reach for smaller cultural institutions due to the financial costs and technical expertise usually associated with such software. But does every institution actually need its own local repository? Why not instead reap the benefits of contributing digital objects to a larger regional system and use APIs and other tools to create a "virtual" local repository? This talk will look at the case of Digital Commonwealth (http://www.digitalcommonwealth.org), a statewide repository system which hosts and aggregates content from over 100 libraries, archives, and museums across Massachusetts, and the turn-key "virtual repository" application that allows member institutions to re-use that content locally.
The primary examples shown in the talk will be on customized, branded virtual repositories for the Boston Public Library and Norman B. Leventhal Map Center, two institutions that have digital objects hosted within the Digital Commonwealth repository. While their data may exist within a statewide system designed to leverage economies of scale with regard to preservation and access, the content has been integrated into each institution’s web presence as if it were stored locally.
While conceptually not limited to a specific technology, our implementation is based on Fedora/Hydra/Blacklight.
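The core of a "virtual repository" can be reduced to filtering the aggregator's records down to one member institution and presenting them under local branding. The sketch below assumes a JSON record shape with an `institution` field; the real Digital Commonwealth API will differ, so treat the field names as illustrative.

```python
def local_view(records, institution):
    """Select only the records owned by one member institution, so a
    'virtual' local repository can present them under its own branding."""
    return [r for r in records if r.get("institution") == institution]

# Records as they might arrive from the statewide aggregator's API.
sample = [
    {"id": "bpl:001", "title": "Boston Harbor photograph",
     "institution": "Boston Public Library"},
    {"id": "lev:042", "title": "1775 map of New England",
     "institution": "Leventhal Map Center"},
]
mine = local_view(sample, "Boston Public Library")
print([r["id"] for r in mine])  # → ['bpl:001']
```

The preservation and indexing burden stays with the central system; the member site only renders the filtered slice.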
[24x7] The Food Map - Adding Value to Re-Purposed Data
University of Guelph, Canada
Private industry, government, academic researchers and society at large all need the specialized expertise that resides on our campuses. The research accomplishments of our scholars are buried in myriad closed or narrowly-defined data stores. By collating this existing information in an openly-available, comprehensive catalogue of research activity we serve as brokers between 'need' on one hand and 'expertise' on the other, enabling partnerships that can result in the creation of new knowledge. As a major research theme at the University of Guelph, food research is a multidisciplinary field that crosses many boundaries. Despite the prevalence of this theme, there was previously no single authoritative source of information on food research. After identifying this need, the Food Institute and the University of Guelph Library partnered to create the Food Map, a project designed to leverage existing data already present in the institutional repositories and numerous other data stores distributed throughout the institution. Further, the Food Map adds value to project metadata through augmentation and by providing a focused, indexed, and searchable interface. Through these strategies the Food Map facilitates the formation of new connections between researchers, industry, government, the public, and the media.
Leverage DSpace for an enterprise, mission-critical platform
We would like to share with the DSpace community some useful tips, starting from how to embed DSpace into a larger IT ecosystem that can provide additional value to the information managed. We will then show how publication data in DSpace - enriched with proper use of the authority framework - can be combined with information coming from the HR system. Thanks to this, the system can provide rich and detailed reports and analysis through a business intelligence solution based on Pentaho’s open source Mondrian OLAP and data integration tools.
We will also present other use cases related to managing publication information for reporting purposes: the publication record has a longer lifecycle than in a basic IR; system load is much heavier, especially for writes, since researchers need to enrich their data whenever new requirements arrive from the government or the university research office; and data quality demands the ability to make distributed changes to a publication even after a validation workflow has concluded.
Finally we intend to present our direct experience and the challenges we faced to make DSpace easily and rapidly deployable to more than 60 sites.
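The HR enrichment described above is, at its core, a join between publication records and staff records on a shared researcher identifier (the key the authority framework maintains). A minimal sketch, with invented field names standing in for the real DSpace and HR schemas:

```python
def enrich(publications, hr_records):
    """Join repository publication records with HR data on a shared
    researcher identifier, as an authority key lets the two systems line up."""
    hr_by_id = {r["staff_id"]: r for r in hr_records}
    merged = []
    for pub in publications:
        hr = hr_by_id.get(pub["author_id"], {})
        # Carry the HR attribute onto the publication row for reporting.
        merged.append(dict(pub, department=hr.get("department")))
    return merged

pubs = [{"title": "Paper A", "author_id": "rp0001"}]
hr = [{"staff_id": "rp0001", "department": "Physics"}]
print(enrich(pubs, hr))
```

In the production system this join happens inside the OLAP layer rather than in application code, but the keying problem is the same: without a stable identifier from the authority framework, the two systems cannot be combined reliably.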
10 years of “Bielefeld Academic Search Engine” (BASE): Looking at the past and future of the worldwide repository landscape from a service provider's perspective
Bielefeld University, Germany
The conference paper will describe the challenges for repositories from a service provider's perspective. OAI-PMH and OAI-DC were successful in increasing the number of repositories and the dissemination of their documents on the web. BASE, as the biggest OAI-PMH service provider worldwide, has accompanied this development for more than 10 years, but the limits of this approach are now obvious. Based on an analysis of the provided data formats and the content of more than 3,300 repositories, we will illustrate the need for further development and interoperability of repositories and external services. Data and service providers have to think about improved protocols, extended metadata formats and the increasing number of external linkable data sources if they want to play a key role in supporting scholarly communication in the future.
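A minimal sketch of the kind of harvesting a service provider like BASE performs at scale: extracting dc:title values from an OAI-PMH ListRecords response. The sample XML is contrived, and a real harvester must also follow resumption tokens and handle deleted records; only the namespaces are taken from the OAI-PMH specification.

```python
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def titles_from_listrecords(xml_text):
    """Pull the dc:title of every record in an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC + "title")]

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
 <ListRecords>
  <record><metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
              xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>A sample eprint</dc:title>
   </oai_dc:dc>
  </metadata></record>
 </ListRecords>
</OAI-PMH>"""

print(titles_from_listrecords(SAMPLE))  # → ['A sample eprint']
```

The flat oai_dc format is easy to aggregate, which is exactly why it spread, and also why richer, better-linked formats are needed for anything beyond basic discovery.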
|11:00am - 12:30pm||P1D: Developer Track 1|
Session Chair: Adam Field
Munging your data in Java with five times less code
ORCID, United States of America
Approx. Duration: 10-15mins
Java has some useful tools for creating XML/JSON APIs, and for storing data in relational databases such as Postgres. Like many similar apps, ORCID uses JAXB to represent XML/JSON as Java classes, and JPA to do the equivalent job for data from database tables.
To implement the ORCID REST API, however, we need to seamlessly translate from one to the other (in both directions!) - how do we do this without invoking a monstrous gob of code?! Enter Orika, a Java library that we use to reduce our JAXB to JPA mapping code from nearly 1000 lines to under 200.
This presentation will include:
- Basic introduction of Orika and brief demo
- Comparison of Orika to other tools
- Advanced customizations
- Examples showing how Orika is used in ORCID
Microservices with Docker and Go
University of Edinburgh, United Kingdom
Technologies like Docker are becoming more common and allow developers to speed up their development workflows and get robust code into production much faster. Docker is written in Go and creating microservices in Go allows for very lightweight components that are easy to deploy. Connecting these together with a messaging system creates an environment that is flexible, distributed and elastic.
My presentation will involve demonstrating a number of Go-based microservices that work in concert to process content ingested into a Fedora repository. I will also demonstrate how the system can be scaled to deal with increased traffic. I anticipate that this will take around 20 mins to explain and demonstrate.
DSpace UI enhancements, visualizations and Python scripting
After launching our DSpace-based repository for research outputs, we added many small, but convenient UI features like citation counts, SHERPA/RoMEO status checker, RefWorks export, citation generator, Ex Libris bX (related articles), which move us from a barebones repository a few steps towards a perfect one. We'll show what technology is behind them, which APIs were used and how it was integrated into XMLUI.
Last but not least, we used Python to build some of the above and more. Although using Java instead would be possible, Python made the prototyping faster and more fun. I'll show several ways how you can leverage Python to work with the DSpace Java API and Solr.
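As a flavour of the Python-side work, the sketch below composes a query URL for the Solr core behind DSpace discovery and parses titles out of a canned JSON response. The `search.resourcetype:2` filter is an assumption about the default DSpace schema (2 marks items), and the document field names are illustrative; verify both against your own index.

```python
import json
import urllib.parse

def solr_select_url(solr_base, query, rows=10):
    """Compose a Solr select URL against the DSpace search core."""
    params = {"q": query, "fq": "search.resourcetype:2",  # items only (assumed schema)
              "wt": "json", "rows": rows}
    return solr_base + "/select?" + urllib.parse.urlencode(params)

def doc_titles(response_text):
    """Pull titles out of a Solr JSON response body."""
    data = json.loads(response_text)
    return [d.get("title") for d in data["response"]["docs"]]

url = solr_select_url("http://localhost:8080/solr/search", "dark matter")
canned = '{"response": {"numFound": 1, "docs": [{"title": "Dark matter halos"}]}}'
print(url)
print(doc_titles(canned))  # → ['Dark matter halos']
```

Prototyping against Solr this way, with nothing but the standard library, is a big part of why Python makes this kind of exploratory work fast.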
Vagrant-DSpace Live Demo
University of Missouri, United States of America
Vagrant is a tool for building complete development environments. With it, you can set up a development environment quickly, reproducibly, and in a way that is readily sharable with others. Vagrant-DSpace harnesses this power to help developers quickly ramp up to working with DSpace, and to facilitate sharing that work with others. Don't believe it? Let me show you. I will live-demo Vagrant-DSpace in action. If you bring your notebook and have a Vagrant Cloud login, maybe we can even do some impromptu pair programming? Let's do this.
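For readers new to Vagrant, the whole environment is described by a single Vagrantfile. The fragment below is a generic sketch of the pattern, not Vagrant-DSpace's actual configuration: the box name, forwarded port, and provisioning script path are all illustrative.

```ruby
# Minimal Vagrantfile sketch: one VM, one forwarded port, one setup script.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"        # base image for the dev VM
  config.vm.network "forwarded_port", guest: 8080, host: 8080  # reach Tomcat from the host
  config.vm.provision "shell", path: "provision/setup-dspace.sh"
end
```

`vagrant up` then builds the identical environment on every developer's machine, which is what makes the shared-development story work.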
SobekCM : A true standards-based, structured, user-friendly approach to APIs and open data
Sobek Digital Hosting & Consulting, LLC, United States of America
The Open Source SobekCM digital repository is approaching its tenth year of development and third year of being released Open Source. Recently, the SobekCM development community has focused on a major architectural revolution, moving towards configurable micro-services and a clear separation between the engine and standard web interface, to enable greater ease for installation and administration while adding support for research data. SobekCM continues to represent a unified, structured approach while retaining its commitment to standards-compliance, retaining core METS/MODS and embracing other metadata formats. In addition, community involvement around the software has continued to develop, with the official community framework draft released in early 2015.
This presentation will introduce the new REST APIs and show ways to “hack the API” to quickly build a new user interface over the SobekCM engine to replace the standard, included user interface. The presentation will showcase the API for searching and sharing research data which will include building a research data portal. The presentation will cover issues encountered and solutions developed when implementing changes to increase configurability, modularity, and customization while remaining true to the core set of beliefs that founded SobekCM.
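The "hack the API" idea reduces to calling the engine's REST endpoints and rendering the JSON yourself. The sketch below renders search results as a bare HTML list; the field names are illustrative, not SobekCM's actual response schema.

```python
def render_results(docs):
    """Render engine search results as a minimal HTML fragment: the skeleton
    of a replacement UI sitting on top of a repository engine's REST API."""
    items = "".join("<li>%s</li>" % d["title"] for d in docs)
    return "<ul>%s</ul>" % items

# Documents as a search endpoint might return them (hypothetical shape).
docs = [{"title": "Field notebook 1"}, {"title": "Survey data 1932"}]
print(render_results(docs))
```

Because the engine and the UI are fully separated, a research data portal can be little more than this loop plus styling, with all search, metadata, and storage logic staying in the engine.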
|12:30pm - 1:30pm||Lunch|
|1:30pm - 3:00pm||P2A: Integrating with External Systems: the use case of ORCID|
Session Chair: Maureen Walsh
Panel: So we all have ORCID integrations, now what?
University of Notre Dame; University of Missouri System; Dryad Digital Repository; Digital Repository Services Ltd; @mire
In the past year, the major groundwork has been laid for repository systems to support ORCID identifiers. DSpace, Hydra, and EPrints all have support for storing and managing ORCIDs. However, we are still in the early stages of ORCID adoption. Only a small fraction of repository content is annotated with ORCIDs, and most end-users have not yet realized any benefit from the features based on ORCID.
This panel will bring together representatives of major repository systems to relate the current status of ORCID implementations, discuss plans for future work, and identify shared goals and challenges. The panelists will discuss how ORCID support provides practical benefits both to repository staff and end-users, with a focus on features that exist now or will exist in the next year.
Horizontal vs vertical organizations of repositories
ORCID, Inc, United States of America
Over its lifetime, an organization’s structure often transitions between a vertical (hierarchical) one and a horizontal (flat) one. Each structure has its benefits and challenges as it relates to how information is communicated, how work gets done, and how collaborative or self-contained the organization can or must be in interacting with others. This session explores these models as applied to repositories in the interconnected ecosystem in which they exist. Specifically it will consider:
* What are the attributes of a “vertical” or highly-aggregated repository? How does this differ from a “horizontal” or highly-distributed one? What are some examples of each type?
* What are the benefits and challenges that each model provides?
* In what situations does it make sense for the two models to work together?
* A case study: how some traditionally “vertical” repositories are benefiting from the “horizontal” information model being provided by tools like ORCID.
|1:30pm - 3:00pm||P2B: Image Management|
Session Chair: Jonathan Markow
Mirador: A Cross-Repository Image Comparison and Annotation Platform
Stanford University, United States of America
The Mirador viewer embodies the promise of Open Repositories from which content may be accessed on the web by any system, rather than only by the repository's own user-facing client. In an ecosystem of truly open repositories, which host open access content and are built on common infrastructure and common APIs, Mirador enables new forms of scholarship and publication by bringing disparate content together in a single image viewing platform. As cultural heritage organizations move towards openly sharing content of all types from their repositories, web applications like Mirador can easily enable comparison, analysis and innovation across repository boundaries.
Crowdsourcing of image metadata: Say what you see
University of Edinburgh, United Kingdom; Dartmouth College, United States of America
Digitised archival, museum, gallery and library content often lacks metadata on the subjects within the image, as the only metadata describes the source object. For example, the name of the author of a digitised book is known, but not that the image shows a bird. Crowdsourcing games allow this missing data to be captured through mass participation, with users describing what they can see. The integration of user-generated tags aids discovery of cultural heritage collections through the enhanced search terms provided.
Tiltfactor Laboratory at Dartmouth College has developed Metadata Games http://www.metadatagames.org as an online platform for gathering user-generated tags on photo, audio, and video collections. In tandem, the University of Edinburgh has developed its own crowdsourcing game to improve discoverability of its collections.
This presentation will discuss our experiences with crowdsourcing games, including challenges encountered and workflows for integrating user-generated/folksonomic tags with authoritative data to aid discovery. Preliminary data will be presented illustrating the extent to which user-generated tags enhance search at our institutions and increase traffic to online collections.
Finally, there will be information on how you can use metadata games to enhance the discoverability of your institution's collections.
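One common integration workflow is to promote a folksonomic tag into the search index only once enough independent players agree on it. The sketch below is a generic agreement-threshold filter, not the Metadata Games platform's actual algorithm:

```python
from collections import Counter

def promote_tags(user_tags, authoritative, threshold=3):
    """Merge crowdsourced tags into the searchable vocabulary only once
    enough independent players have entered them (simple agreement threshold)."""
    counts = Counter(t.lower() for t in user_tags)          # normalise case
    promoted = {t for t, n in counts.items() if n >= threshold}
    return set(authoritative) | promoted

# Three players typed "bird" in some casing; one typed "tree".
tags = promote_tags(["bird", "Bird", "bird", "tree"], {"engraving"})
print(sorted(tags))  # → ['bird', 'engraving']
```

The threshold trades recall for precision: raising it filters out noise and mischief at the cost of discarding rare but valid observations.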
Authenticated Access to Distributed Image Repositories
Stanford University; Princeton University; Cornell University; Yale University
An increasing percentage of the world's cultural heritage is online and available in the form of digital images, served from open repositories hosted by memory, research and commercial organizations. However, access to the digital surrogates may be complicated by a number of factors: there may be paywalls that serve to sustain the host institution, copyright concerns, curatorial arrangements with donors, or other constraints that necessitate restrictions on access to high quality images. Images are also often the carrier for scientific and research information, particularly in the medical and biological domains. In many of these cases the images cannot be openly available because of personal privacy concerns.
The International Image Interoperability Framework (IIIF) has made great strides in bringing the world's image repositories together around a common technical framework. Now with its membership boasting nine national libraries, many top tier research institutions, national and international cultural heritage aggregators, plus commercial companies and other projects, use cases such as those above have raised authentication and authorization to the top of the “must-have” list of features to ensure continued rapid adoption. This presentation will focus on description of the IIIF authentication use cases and challenges, and then outline and demonstrate the proposed solution.
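At its simplest, the access problem is token-gating an image information request. The sketch below is a deliberately simplified stand-in for the flow under discussion; the actual IIIF proposal involves separate login and token services and browser-based interaction patterns, none of which are modelled here.

```python
def serve_image_info(headers, valid_tokens, open_access=False):
    """Decide whether to serve an image information document: open-access
    images always pass, restricted ones require a recognised bearer token."""
    if open_access:
        return 200
    auth = headers.get("Authorization", "")
    token = auth[len("Bearer "):] if auth.startswith("Bearer ") else None
    return 200 if token in valid_tokens else 401

print(serve_image_info({"Authorization": "Bearer abc123"}, {"abc123"}))  # → 200
print(serve_image_info({}, {"abc123"}))                                  # → 401
```

The hard part the talk addresses is everything around this check: how a viewer like Mirador acquires the token across repository boundaries without the image server and the login system sharing a session.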
|1:30pm - 3:00pm||P2C: Developing and Training Staff|
Session Chair: Sarah Shreeves
Panel: Hacking the Community: A Model for Open Source Engagement
DuraSpace; York University; Stanford University
Open source software isn’t really free. This might seem obvious to some, but there are many members of open source communities that consume rather than contribute; they use the software but are either unwilling or unable to engage with the community to write code, submit use cases, create documentation, or do any of the other things that make an open source project a success. Fortunately, things don't have to be this way.
Over the past two years, the Fedora project has undertaken a great effort to revitalize not only the software but the community itself. By maintaining open, transparent communication, soliciting use cases, development, and testing from community members, and establishing a clear project governance structure, we have laid the groundwork for a successful community source project. At the same time, the Islandora and Hydra communities have pursued similar strategies to build and sustain their own communities and the broader Fedora community. This panel will feature a discussion on the recent successes of the Fedora community and future plans to continue raising the level of community engagement and project ownership.
Preparing for the 21st Century Repository: Needs, Practices and Frameworks for Library-based Repository Staff
Maynooth University, Ireland; University of Limerick, Ireland
Librarians supporting digital repositories may perform a number of technical functions and so require a heterogeneous range of competencies. This paper considers how such competencies are acquired, developed and supported among library staff in institutions in Ireland and in the Hydra and Islandora communities of practice. A survey and needs assessment identify the technical tasks repository staff perform. Finally, a framework and draft curriculum are proposed to provide these skills.
|3:00pm - 3:30pm||Break|
|3:30pm - 5:30pm||P3A: Integrating with External Systems|
Session Chair: Maureen Walsh
[24x7] ORCID Integration: Services to Create and Use ORCID IDs at KAUST
King Abdullah University of Science and Technology, Saudi Arabia
As ORCID IDs become accepted as a global standard, the potential benefits to researchers and institutions of making use of them are multiplying. At King Abdullah University of Science and Technology (KAUST) we adopted the first institutional open access policy in the Arab region in June 2014 and are integrating with ORCID as part of services to researchers that improve the preservation and dissemination of their research. We started by using ORCID IDs to identify authors in our institutional repository. Our next step was to join ORCID as an institutional member and then set up a plan to support our faculty, researchers and students in the creation and use of ORCID identifiers. This presentation will look at the choices made and lessons learned during this process, focusing on the tools we developed to interact with the ORCID member API and the ways in which introducing ORCID has complemented other repository initiatives, such as the implementation of our institutional open access policy.
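Identifying authors in the repository typically starts with a call to ORCID's public works endpoint. The sketch below builds such a request; the v2.0 path and JSON content negotiation are assumptions about the API version in use, so check them against the documentation for your membership tier. The sample identifier is ORCID's well-known example ID.

```python
import urllib.request

def orcid_works_request(orcid_id, token=None):
    """Build a GET request for a researcher's works on the ORCID public API."""
    url = "https://pub.orcid.org/v2.0/%s/works" % orcid_id
    req = urllib.request.Request(url)
    req.add_header("Accept", "application/json")
    if token:  # member API calls need an OAuth bearer token
        req.add_header("Authorization", "Bearer " + token)
    return req

req = orcid_works_request("0000-0002-1825-0097")
print(req.full_url)
```

From there, matching the returned works against repository items lets the institution both pull ORCID iDs into its records and push repository holdings back to researchers' profiles.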
[24x7] From archives to repository: an archival collection management system and repository integration case study
Tufts University, United States of America
At the Digital Collections and Archives (DCA) at Tufts University we have designed, built, and integrated our archival collection management system and repository’s administrative interface to facilitate ingesting archival objects into our Fedora-based repository. This 24x7 session briefly explores the assumptions and functional requirements we have used to guide this development work. The DCA’s unique position as an archives that is one of the key stakeholders and users of the Tufts institutional repository has enabled us to meet this integration challenge. The session describes how the integration of our archival collection management system and our repository relies on the ability to flexibly move metadata from one system to another.
[24x7] 77 Lines of XSLT and a Roll of Duct Tape: ArchivesSpace as Metadata Hub in a Multi-Repository Environment
University of Denver Libraries, United States of America
ArchivesSpace is enjoying increasing adoption among archives and special collections units as a collection management system. Its API supports a variety of metadata exports, making it a tempting place for archives engaged with digitization and digital repository management efforts to create and manage metadata for those projects. While the API is powerful, the differences in exports among the various content modules can make building this shared metadata infrastructure difficult, especially in a hosted environment.
This talk will focus on efforts at the University of Denver Libraries to build metadata pipelines connecting ArchivesSpace to other data repositories, particularly our XTF finding aid database and our Islandora-based institutional repository. Challenges we encountered along the way, successes, and ongoing thorns in our side will be addressed. An objective of this talk is to start, or contribute to, a conversation among other ArchivesSpace institutions about how we can improve its support for repository initiatives in archives and special collections.
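A metadata pipeline of this kind boils down to a crosswalk: fetch a record from the ArchivesSpace API and flatten the fields a downstream repository needs. The input shape below loosely follows ArchivesSpace's JSONModel (`title`, `id_0`, `dates` with `expression`), but treat the exact keys as assumptions; the talk's own pipeline uses XSLT over the API's XML exports rather than Python.

```python
def crosswalk(resource):
    """Flatten a few fields of an ArchivesSpace-style resource record into a
    simple descriptive record for a downstream repository."""
    return {
        "title": resource.get("title"),
        "identifier": resource.get("id_0"),          # first collection-ID segment
        "dates": [d.get("expression") for d in resource.get("dates", [])],
    }

sample = {"title": "Chancellor's Office records", "id_0": "U123",
          "dates": [{"expression": "1950-1975"}]}
print(crosswalk(sample))
```

The fragility the abstract mentions shows up exactly here: each content module (resources, archival objects, digital objects) exports a slightly different shape, so every record type needs its own crosswalk.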
Making Connections: The SHARE Notification Service and the Open Science Framework
Center for Open Science, United States of America
The Center for Open Science (COS) is a non-profit technology startup seeking to enhance the openness, integrity, and reproducibility of scientific research. One way COS approaches this mission is by building infrastructure to facilitate good scientific practice, disseminate scholarly communication and scientific research, and facilitate the accurate accumulation of scientific knowledge. COS looks to connect all parts of the research lifecycle: planning, execution, reporting, archiving, and discovery. Its free, open source, flagship product, the Open Science Framework, leverages application programming interfaces (APIs) from various services to unite resources and increase the efficiency of researchers. In partnership with the Association of Research Libraries, COS is building the SHared Access Research Ecosystem (SHARE) Notification Service. This talk will detail how the Open Science Framework connects various services and repositories and incorporates the SHARE Notification Service. It will include information about future connections, and how the community can influence these connections.
“Headless” Metadata for Library Discovery: NYU’s Ichabod Project
NYU, United States of America
From DSpace to Drupal, NYU has a variety of systems to ingest and display curated digital content. To make this content discoverable centrally, we developed a tool for metadata ingest, transformation, and discovery based on a popular open-source software stack: Fedora, Hydra, Solr, and Blacklight. Called “Ichabod,” this tool has allowed us to ingest, normalize, and enrich metadata from diverse systems of record and make it consumable by our main discovery tool, which is powered by the Ex Libris product Primo. We developed Ichabod using the Agile methodology, involving developers from three distinct NYU Libraries groups. The software will lay the groundwork for future innovation in the areas of metadata management and discovery for repository content. The relationships we established have already made a similar collaborative arrangement possible on two other projects, with more to come in the future.
Integration with external systems and services: challenges and opportunities
Jisc, United Kingdom
Services that support open access need to interoperate effectively with each other and with external systems if they are to succeed in their mission to help institutions to become as effective and efficient as possible in capturing and sharing their research. Jisc, which provides shared services along these lines to UK institutions, has identified this as a particular challenge. It works with a number of partner organizations, including the University of Nottingham (Sherpa Services), EDINA, Mimas and the Open University (Knowledge Media Institute), in developing these services.
Achieving integration between these partner services and institutional systems has provided a number of benefits and opportunities, and has also highlighted a number of challenges. These include how best to integrate with local workflows, technical compatibility, reliance on data from publishers and, in turn, questions of trust in the services built on this data. This presentation looks at these issues from the point of view of a provider of shared services, but also includes examples from the institutional perspective.
|3:30pm - 5:30pm||P3B: Managing Rights|
Session Chair: Amy Buckland
[24x7] A Request to Vet System for Opening Potentially Culturally Sensitive Material
American Philosophical Society, United States of America
Use restrictions are often imposed by donors or copyright law. In our case, it’s a self-imposed starting point as we re-think our relationship with the many Native American communities whose material we hold.
Last year, the American Philosophical Society Library (APS), an independent research library in Philadelphia, adopted protocols that help standardize the use of material that Native American communities consider culturally sensitive. During the same year, a large collection was scanned and added to the APS digital library. Specific items within the collection are likely to be culturally sensitive. To ensure that we act in accordance with our protocols, we will restrict every item until it has been vetted by a staff member.
We have created a “request to vet” process by which members of the scholarly community can request that our staff review a particular item. If an item is cleared of sensitivity concerns, it is freely available through our digital library. If there are questions about its status, additional Native American partners are asked to review it.
This talk discusses the balance between openness and cultural sensitivity and presents our use case for walking the thin line between these two important principles.
[24x7] Growing Hydraheads at Yale University Library
Yale Library, United States of America
To offer an interface for the library’s digital collections and archives, Yale Library has adopted the Hydra stack for what are currently three access interfaces: findit, an application currently supporting nine special collections and containing approximately 700,000 objects; the Henry Kissinger Papers, which when complete will contain approximately 1.7 million images; and the Yale Indian Papers Project, a small collection of approximately 2,000 objects. This presentation summarizes key customizations and features, including ingest, contextual navigation, full-text search, image and transcript viewing, and ongoing work with authentication and authorization.
YOU MUST COMPLY!!! Funder mandates and OA compliance checking
Cottage Labs, United Kingdom
Open Access compliance checking is currently a task carried out by humans, and there is no one single place to look for the relevant information. This means that it is time consuming, and a prime candidate for total or partial automation. Being able to quickly and easily check compliance of an article or a set of articles will be of benefit to both institutions and funders.
In this presentation we will look at the main aspects of compliance that funders tend to be looking for, such as licence conditions, embargoes, and self-archiving in repositories. Through 3 projects that have run in the UK over the past year, we will explore the current progress in this space, from the technical underpinnings of solutions (involving connecting out to multiple APIs, and text analysis of articles and metadata) to the more refined user-facing tools that make engaging with the data viable for non-technical users.
Panel: DMCA takedown notices: managing practices from the perspective of institutional repositories
1Columbia University, Center for Digital Research and Scholarship; 2University of California, California Digital Library; 3California Institute of Technology, Caltech Library; 4Purdue University, University Copyright Office
A Digital Millennium Copyright Act (DMCA) takedown notice can result in a time-consuming and confusing process for a repository manager. The rights and responsibilities of the repository and the copyright claimant are often clouded by historical changes in copyright law, variations in the law of different countries, and commonly held misconceptions about copyright ownership.
This panel presents an opportunity for repository managers to strategize about best approaches to DMCA takedown notices. The panelists--representing repositories varying in size, scope, and staffing--will recount their experiences with takedown notices, outlining steps taken and policies implemented in response, and evaluating the effectiveness and implications of different philosophical and practical approaches.
The organizers aim to empower repository managers to more proactively respond to takedown notices with an increased understanding of their options under the DMCA.
|3:30pm - 5:30pm||P3C: Developing and Training Staff (continued)|
Session Chair: Sarah Shreeves
Panel: Building a culture of distributed access in shared digital repository services
1Colorado Alliance of Research Libraries; 2LYRASIS; 3University of Oregon Libraries
Partnerships for shared repositories offer the promise of repository services at a decreased cost due to shared infrastructure and staff. In practice, reduced costs for shared repositories often require tradeoffs in security or access for the shared system.
Staff working in a shared system may be geographically distributed or may work for different institutions with different priorities and reporting lines. Effective use of shared services requires thoughtful communication and tools that help maintain consistency and prevent conflicts when multiple people work in the same system.
In this panel, shared repository service managers for multisite Islandora installations and a Hydra partnership will discuss methods for distributing system access and communicating with staff who work at our parent organizations, partner institutions, and third-party vendors. Each panelist will discuss the methods used so that distributed staff can have the level of access necessary to use the repository’s unique functions, while also ensuring that widely distributed system access doesn’t result in data loss or system failures.
Panel: Evolve: From Project Manager to Service Manager
1Northwestern University, United States of America; 2Penn State; 3Stanford University; 4University of Connecticut
As institutions develop and implement increasingly complex and mature repository systems and tools for managing and delivering digital content, they have the need for dedicated staff to ensure these systems and tools satisfy the research and teaching demands of their various communities. These staff also must manage the challenge of moving a system or application from a development project to a formal production service, with sufficient operational coverage and capacity to scale, while balancing ongoing development and enhancement needs, as well as user support. We refer to this emerging, dynamic area as digital library service management.
This session seeks to expand and broaden the conversation around service management to the wider repository community. The panelists will define the role of digital library service manager at their respective institutions, discuss the overlap with previously established roles like that of the project manager and address the challenges and solutions in training staff. Panelists will also offer ideas and questions about the future evolution of service management.
|3:30pm - 5:30pm||P3D: Developer Track 2|
Session Chair: Claire Knowles
Metadata Extraction as a Service
Mendeley, United Kingdom
Generating metadata records for IR deposits should not have to be a manual process. I'll demo some technology that Mendeley has developed which uses machine learning to automatically extract article metadata from PDFs and show how this can be used as a service within your own IR. Might not have a working DSpace plugin to show by then, but I'll have something appropriately hacky. The catalog enrichment part might be the most interesting and unique thing for attendees, for more on that, see: https://krisjack.wordpress.com/2015/03/12/how-well-does-mendeleys-metadata-extraction-work/
Doing DevOps for a Perfect Repository Environment
University of Cincinnati Libraries, United States of America
The University of Cincinnati implemented their Hydra-based repository, Scholar@UC, using a DevOps approach. DevOps is a development methodology that promotes communication and collaboration between the software developers and IT operations. The partnership between the Libraries’ developers and UC's IT staff fostered a stable and robust hosting environment for the repository.
Glen will discuss the journey UC took from code to deployment and highlight what worked and what didn't work so well. He will also share the many tools used to develop Scholar@UC's hosting and deployment environment including Vagrant, Puppet, GitHub, and Bamboo. Glen will also explain the importance of communication and share the steps UC took to make sure everyone had a clear understanding of what needed to get done.
Archidora: Leveraging Archivematica preservation services with an Islandora front-end
Artefactual Systems, Inc., Canada
Archidora was co-developed by Discovery Garden (developers of Islandora) and Artefactual Systems (developers of Archivematica), with sponsorship from the University of Saskatchewan Libraries. Stated simply, files uploaded to Islandora pass from Fedora to Archivematica, where they are processed for preservation. Once the archival packages are stored, Islandora is notified. This presentation will describe the current workflow, as well as discuss the opportunities it creates for the development of features like PREMIS and DDI integration, Fedora support, and integrity checks.
Time for presentation: 20 minutes
Data Citation Box
Charles University in Prague, Czech Republic
Citing submissions is important, but there is not yet an established format for citing data submissions.
We have created a citation service based on DSpace's OAI-PMH endpoint implementation, which returns citations for resources specified by PID in a desired format, such as simply styled HTML text. We display the citation box in the DSpace item view, but also in external applications.
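As a rough illustration of the approach described above, the sketch below parses an oai_dc metadata fragment (the kind a DSpace OAI-PMH endpoint returns) and renders a simple citation string. The function name and the citation style are illustrative assumptions, not the service's actual implementation.

```python
from xml.etree import ElementTree as ET

# Dublin Core element namespace used by oai_dc records.
OAI_DC = "{http://purl.org/dc/elements/1.1/}"

def citation_from_oai_dc(xml_text: str) -> str:
    """Build a simple citation string from an oai_dc metadata fragment.
    The citation style here is illustrative, not the one the service uses."""
    root = ET.fromstring(xml_text)
    get = lambda tag: [e.text for e in root.iter(OAI_DC + tag)]
    authors = "; ".join(get("creator"))
    title = (get("title") or [""])[0]
    date = (get("date") or [""])[0]
    return f"{authors} ({date}). {title}."
```

A real service would fetch the fragment via an OAI-PMH `GetRecord` request keyed on the PID, then hand the formatted string to the item-view citation box.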
Publishing Datasets from an Open Access Repository As Linked Data
Oregon State University, United States of America
Exposing research data from traditional repository systems such as DSpace as Linked Data has numerous benefits, such as increasing visibility, promoting open access, and interlinking datasets with elements such as authors. For a successful implementation, it is crucial to preserve object structure (e.g., hierarchical and related objects) in Linked Data in addition to bibliographic metadata.
This demonstration showcases a case study of migrating datasets from DSpace to a newly developed institutional repository built with Hydra technology, including lessons learned from data modeling and approaches to metadata cleanup and controlled vocabulary enrichment. The estimated time for the demonstration is 10 to 15 minutes.
|6:00pm - 8:00pm||SOC1: Poster Reception|
Session Chair: Amy Buckland
A case-study: Using the pdfminer python package and fuzzy matching to triage digitized legacy theses and dissertations for repository ingest.
University of Illinois, United States of America
This poster illustrates a detailed case study of an automated metadata solution using text extraction, developed for ingesting the digitized legacy theses and dissertations collection into IDEALS, the institutional repository of the University of Illinois at Urbana-Champaign. To associate digitized legacy dissertations with department-specific collections, we developed a text extraction process based on fuzzy matching to identify the department that granted each dissertation author's degree. This procedure identified the granting department for eighty-six percent of the more than 19,000 dissertations on which it was used.
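A minimal sketch of the fuzzy-matching idea, using Python's standard-library difflib; the department list, cutoff, and function name below are hypothetical illustrations, not the actual IDEALS procedure.

```python
import difflib

# Hypothetical list of degree-granting departments; the real mapping
# of departments to IDEALS collections is not shown in the abstract.
DEPARTMENTS = ["Chemistry", "Civil Engineering", "Electrical Engineering", "History"]

def match_department(extracted: str, cutoff: float = 0.8):
    """Fuzzy-match a string extracted from a title page against the
    known department list; return the best match, or None if nothing
    clears the similarity cutoff."""
    matches = difflib.get_close_matches(extracted, DEPARTMENTS, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

Fuzzy matching tolerates the OCR noise common in digitized legacy title pages, e.g. mapping a garbled "Civl Engineering" to the correct department.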
A Permanent Home for Ephemeral Policies: Integrating DSpace with an Enterprise Content Management System in an Institutional Archive
Icahn School of Medicine at Mount Sinai, New York, NY
This presentation will discuss the successful integration of a DSpace instance with an enterprise content management system, examining some of the challenges faced in connecting two disparate information systems for the purpose of long-term preservation.
In 2014, IT staff at the Mount Sinai Medical Center implemented a content management system (CMS), based on the open source Liferay portal, to centralize the management of policies and procedures across all Medical Center departments. As the permanent home for expired policies, the Mount Sinai Archives was a stakeholder in this project. Archives staff worked with members of the CMS team to develop tools and workflow for automatically exporting policies from the CMS into the Archives’ existing DSpace digital repository, which had previously been used primarily for the storage and dissemination of manually cataloged digital objects. The integration process required addressing a number of technical and descriptive challenges, including crosswalking descriptive metadata, automating the creation of preservation copies, and dealing with inaccuracies in user-submitted metadata. The end result was a bespoke combination of automated data transfer and manual staff intervention by which changes to the CMS are regularly ingested into the repository.
Accelerating Access - Making Open Access Policies Work From Day One
1Symplectic, United Kingdom; 2California Digital Library
Successful recruitment of published content into institutional repositories relies on three key components: 1) funder mandates or institutional policies requiring deposit of papers; 2) a means of ensuring that deposit occurs as soon as possible after a paper’s acceptance or publication; and 3) an efficient, intuitive mechanism for helping faculty fulfill their deposit requirements with minimal effort.
Research management systems play an increasingly important role in the scholarly publishing ecosystem, helping collate information about scholars’ publications and enabling institutions to effectively implement and monitor open access policies. This presentation examines how a research management system can be set up to provide faculty with the tools they need to easily comply with their institutional OA policy and to help repository managers track policy compliance rates across the institution.
Achieving Ambitious Agendas with Limited Means at the University of Cincinnati
University of Cincinnati, United States of America
Scholar@UC - scholar.uc.edu - is the faculty self-submission repository currently in development at the University of Cincinnati (UC). Built with the Hydra framework, this system is being developed in an environment of dramatic change: new partnerships across campus and with other entities, new engagement with faculty and stakeholders, growing needs for internal staff job development, and the development of new researcher services. The UC Libraries is lean on staffing in comparison with its peers, so we face unique challenges that require flexibility and creativity. We embrace both nimble processes and a strong sense of risk-taking to ensure that Scholar@UC becomes a critical enterprise system. This panel reflects on three aspects of our engagement and development efforts. First, we will discuss outreach efforts to bring together a small set of “early adopter” faculty, and the process of assembling feedback in a personalized, interview-based setting. Then, we will discuss the process of transforming this feedback into functional use cases that prioritize needs and desires. Finally, we will discuss building a small, high-functioning software development team, and collaborating with UC’s central IT department and other local and national development efforts. We think this presentation will offer insight for other institutions with ambitious agendas and limited means.
Adding content reporting to DSpace
1The University of Waikato, Hamilton, New Zealand; 2AgResearch Limited, Hamilton, New Zealand
This poster presents a content reporting add-on to DSpace, developed for AgResearch Ltd by the IRR support team at the University of Waikato's Information Technology Services Division. We outline the motivation for developing this add-on, give a high-level description of its implementation and report initial insights on its reception and uptake.
Aggregating Multiple Sources to Prepopulate New Repositories
MyScienceWork, United States of America
We present the process of prepopulating a new repository in the context of MyScienceWork’s POLARIS platforms. We introduce the aggregation service, which uses an internal database, uploads from users, and several external services to centralize and structure metadata. We also introduce the process of assigning confidence rates both to metadata from different sources and to author-publication matching. In certain cases, when the confidence rate is low, manual validation is required.
An Implementation of Technical Revision in DSpace Allowing Open Educational Resource Browser Access
Federal University of Rio Grande do Sul - UFRGS – Data Processing Center, Brazil
This work shows how the DSpace software was modified to add a technical revision step to the submission workflow of the Open Educational Resources (OER) community in Lume, the Digital Repository of UFRGS. The main goal of the technical revision step is to allow the OER to be accessed directly in the browser, with no installation required.
Better metrics through metadata: a study of the use of persistent identifiers in IRs
1Impactstory, USA/Canada; 2Mississippi State University Libraries, USA
An increasing number of institutional repositories are partnering with services like PlumX to track citations and altmetrics for the content they host, but are all the pieces in place to ensure that such metrics are tracked successfully? Persistent identifiers like DOIs are used by altmetrics services to track accurate metrics for both repository-hosted content and related versions of the content that are hosted elsewhere (on publisher websites, aggregators like PubMed Central, and so on). Yet our case study--based on an analysis of the metadata for items held in an institutional repository at a CIC institution--finds that persistent identifiers are missing from more than 75% of all item records (where such identifiers exist). This poster shares the full results of our analysis (based on persistent identifiers including DOIs, PubMed IDs, and ArXiv IDs) and also benchmarks staff hours needed to find and add these identifiers to item records. While the study was limited in scope, it provides a framework for resource allocation planning for institutional repositories interested in adding altmetrics services.
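As a hedged sketch of how such an analysis might detect persistent identifiers in item records, the snippet below scans metadata values with simple regular expressions; the patterns, field names, and function name are illustrative assumptions, not the study's actual method.

```python
import re

# Illustrative patterns; real identifier syntax has more edge cases.
DOI_RE = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')
ARXIV_RE = re.compile(r'arXiv:\d{4}\.\d{4,5}', re.IGNORECASE)

def has_persistent_id(record: dict) -> bool:
    """Return True if any metadata field of the item record contains
    a DOI or an arXiv identifier."""
    text = " ".join(str(v) for v in record.values())
    return bool(DOI_RE.search(text) or ARXIV_RE.search(text))
```

Running a check like this over an export of item records gives the proportion lacking identifiers, which is the starting point for estimating the staff hours needed to remediate them.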
Building CALiO™ – A Repository and Digital Library for Field Practitioners
National Children's Advocacy Center, United States of America
Creating a digital library designed for field professionals is much different from amassing collections for academic and research purposes. A digital collection that delivers resources which specifically address the time-sensitive information needs of practitioners, quickly, easily and reliably, becomes an invaluable tool for professional practice and improves outcomes. Repositories can play an active, pivotal role in addressing the needs of practitioners locally, regionally, nationally and internationally. This presentation describes the mission, development and implementation of CALiO™, the Child Abuse Library Online, which is the primary digital library for multi-disciplinary teams of professionals throughout the United States and in dozens of other countries.
The front face of CALiO™, and the primary digital collection for the large cadre of professionals nationally and internationally, is the repository, CALiO™ Collections. The repository supplements locally produced resources with open access resources to optimize user ability to obtain evidence-based information for decision making. This presentation discusses the important differences between a digital collection intended for practitioners vs academic/research clientele, the basic principles guiding design and implementation, and strategies employed for collections and services.
Building Research Data Repositories for the Humanities and Area Studies with CKAN (and Some Extensions)
Academia Sinica, Taiwan
We have sought to use CKAN, a free software package originally developed for sharing datasets, as the basis of a repository of research materials for the humanities and area studies. As the research materials are often diverse in data types and domain semantics, we defined a set of controlled vocabularies for metadata usage in this repository. The controlled domain vocabularies help users locate research resources and help ensure the quality of metadata when importing these resources into the repository. For historical maps stored in the repository, we have sought ways to extract place-name information so that place names can be searched by time period and spatial extent, and spatiotemporal mappings of current or historical places can be performed in CKAN.
COAR Roadmap: Future Directions for Repository Interoperability
1COAR (Confederation of Open Access Repositories), Canada; 2University of Bielefeld
Scholarly communication is undergoing fundamental changes, in particular with new requirements for open access to research outputs, new forms of peer-review, and alternative methods for measuring impact. In parallel, technical developments, especially in communication and interface technologies, facilitate bi-directional data exchange across related applications and systems.
The success of repository services in the future will depend on the seamless alignment of the diverse stakeholders at the local, national and international level. The roadmap identifies important trends and their associated action points for the repository community and will assist COAR in identifying priority areas for our interoperability efforts in the future.
This document is the culmination of over a year’s work to identify priority issues for repository interoperability. The preparation of the roadmap was spearheaded by Friedrich Summann from Bielefeld University in Germany, with support from a COAR Editorial Group and input from an international Expert Advisory Panel.
Emerging Practices for Researcher Name Disambiguation in Institutional Repositories
Virginia Tech University Libraries
It has been observed that name disambiguation presents a significant, ongoing challenge for institutional repositories (Salo, 2009). The increase in digital scholarly resources has not always coincided with an increase in institutions’ implementation of authority control mechanisms for researchers' names (Ibid). As institutional repository content has increased, so has the occurrence of non-unique personal names, the presence of which hinders researchers’ ability to accurately and efficiently discover content. Although traditional library cataloging leverages multiple systems to disambiguate personal names, such as LC/NACO and ISNI, name authority control remains a challenging issue for institutional repositories.
As of January 2015, several pilot projects are exploring concrete, actionable solutions for implementing name authority control in institutional repositories. This poster will present a comparative analysis of these projects, illustrate the advantages and challenges institutional repositories face in adopting name disambiguation workflows and systems, and present ideas for the future development of name authority control functionality in institutional repositories.
Reference: Salo, D. (2009). Name Authority Control in Institutional Repositories. Cataloging & Classification Quarterly, 47(3-4), 249-261. doi: 10.1080/01639370902737232
Introducing the FSD’s repository management & discovery tools and software development approach
Finnish Social Science Data Archive, Finland
The Finnish Social Science Data Archive (FSD) has two new repository management tools. The data service portal Aila facilitates access to data and serves as the tool for data dissemination; one of its key features is the ability to control access to datasets according to the conditions set by data producers. Metka manages the data archive’s metadata production process and provides the FSD’s other systems with the metadata they need. In addition to descriptive DDI2 metadata, it facilitates the creation of structural and long-term preservation metadata. Aila and Metka define the software platform used when building new tools and services at the archive: all metadata are repurposed from a single authoritative source, Shibboleth is consistently used for user authentication, and an OAI-PMH interface provides metadata for harvesting.
This poster showcases the functionalities of both Aila and Metka, displays how they connect to the FSD’s services, and shares the experiences gained so far. Finally, it introduces a remote entitlement management concept aimed at managing the workflow needed for granting access to datasets that are only available for download after explicit permission from the depositor, PI, research group, or IRB.
There is Life After Grant Funding: How Islandora Struck Out On Its Own
1Islandora Foundation; 2University of Prince Edward Island
The early years of Islandora were supported by a multi-year Atlantic Innovation Fund grant, which provided funding for developers, project management, interns, travel, and all of the other bits and pieces that get a software development project off the ground. During that time the Islandora community grew and flourished, but long-term sustainability needed clarity. In 2013, that grant was slated to come to an end, and we needed to find a new way to sustain the project. The Islandora Foundation was born from that need.
In the two years since the formation of the Islandora Foundation was announced at Open Repositories 2013, the project has welcomed more than a dozen supporting institutions, hosted Islandora Camps all over the world, and put out two fully community-driven software releases with dozens of new modules built and contributed by the Islandora community.
The Islandora project has made the journey from a grant-funded project incubated in a university library to a non-profit that exists in symbiosis with the community it serves. This journey, and its place in the larger community of digital repositories and the institutions that use and support them, is the subject of this poster, which details the project's nine-year history.
User-Testing of DRUM: What Academic Researchers Want from an Open Access Data Repository
University of Minnesota, United States of America
Funding agencies and institutions are increasingly asking researchers to better manage and share their digital research data. Yet meeting those needs should not be the only consideration in the design and implementation of open repositories for data. What do researchers expect to get out of this process? How can we design our data repositories to best fit research needs and expectations, as well as those of the organization? At the University of Minnesota, we recently implemented a new open repository service, the Data Repository for U of M (DRUM). This institutionally focused repository is designed for researchers to self-deposit their research data. The data then undergo a workflow of curatorial review, metadata enhancement, and digital preservation by a team of data curators in the library. The result is well-documented research data that are broadly disseminated through an openly accessible discovery interface (DSpace 4.2) and are uniquely identifiable for future reuse and citation using DataCite DOIs. Before marketing our service to campus, we performed three usability tests with our target population: academic research faculty with data they must share publicly. The results of our user testing revealed a handful of configuration and interface design changes that would streamline and enhance our service.
Zenodo - One year of research software via GitHub integration!
Zenodo, a CERN-operated research data repository for the long tail of science, launched its GitHub integration a bit over a year ago, enabling researchers to easily preserve their research software and make it citable. Since then, 2000+ research software packages have been shared on Zenodo. This poster will give an overview of the uploaded software packages in terms of programming languages, subjects, number of contributors, countries, etc. We will further explore curation of research software and integration into existing subject-specific repositories.
Digital Preservation the Hard Way: recovering from an accidental deletion, with just a database snapshot and a backup tape
University of Missouri, United States of America
An awesome tool set for digital preservation is available to all institutions that use DSpace. This is not a story of how we used this tool set. This is a story of how we recovered from an accidental deletion of a significant number of items, collections, and communities--an entire campus's ETDs: 315 missing items, 878 missing bitstreams, 1.4GB of data, 7 missing communities, 11 missing collections--using a database snapshot and a tape backup. The SQL we developed to facilitate this restoration may be helpful, but it is our hope that, in comparison, the effort required to implement a proper backup and preservation safeguard, such as DuraCloud and/or the Replication Task Suite, will rightly seem more appealing. In other words: here's how to do it the wrong way, but you'd really be better off doing things the right way. This poster should be sufficient to serve as a guide for actually recovering from an accidental deletion of materials in DSpace, if one only has a database snapshot and a tape backup of a DSpace assetstore. It will also serve as a reminder of the digital preservation tool set available for DSpace, as well as why these tools exist.
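The restoration SQL itself is not reproduced in the abstract; as a loose illustration of the comparison step in such a recovery, here is a Python sketch of identifying handles that appear in the database snapshot but are missing from the live repository (the function name and handle values are hypothetical).

```python
def find_missing(snapshot_handles, live_handles):
    """Return handles present in the database snapshot but absent
    from the live repository, preserving snapshot order."""
    live = set(live_handles)
    return [h for h in snapshot_handles if h not in live]
```

The resulting list would drive the restore: each missing item's metadata comes from the snapshot, while its bitstreams are pulled from the assetstore on the backup tape.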
Integrating institutional repository, researcher directory and library catalog using ORCID and Next-L Enju
National Institute for Materials Science, Japan
Many libraries provide web services such as library catalogs and institutional repositories. Each web service contains author profiles, which are not necessarily accessible from another service; sometimes only the author name is shared, which may be ambiguous. Therefore, I have started a new project aiming to share author profiles across an institutional repository, a researcher directory and a library catalog at NIMS Library using ORCID.
This project involves three components: NIMS researcher directory "SAMURAI", an institutional repository software "PubMan" and "Next-L Enju", an open-source library system developed in Japan. In this project, I developed an add-on module for Next-L Enju that enables synchronization of profile information between these three components through ORCID so that librarians can create a correct link from our library catalog to the institutional repository. In this poster, I would like to introduce a case study of its workflow and implementation.
Leveraging Repository Communities to Highlight Scholarly Content
Northeastern University, United States of America
This poster describes the community and collection structure in Cerberus, the Fedora/Hydra repository system developed by Northeastern University Libraries. Cerberus was designed to store the important digital assets created as part of the mission of the university, including scholarly, administrative, and archival objects, but we needed a way to easily promote the scholarly content (research publications, presentations, datasets, and theses and dissertations). We were able to highlight the scholarly content by introducing the notion of communities, which we used to create relationships between collections, users, and files. The community structure has not just neatly organized repository content according to the existing Northeastern structure, it has made it easier for the system to leverage the relationships between objects to enhance the discoverability of scholarly content in the repository.
Metadata Form Creation System
West Virginia University Libraries, United States of America
WVU Libraries needed an archival and preservation system for digital objects in its collections. Because of the diversity of public-facing distribution and repository systems available today, and in anticipation of the development of "The Next Great Repository," this new system needed to be repository- and distribution-system agnostic. WVU Libraries wanted repository managers to be able to develop new metadata entry forms, and modify existing ones, with no or minimal support from systems staff. The solution should gracefully handle our changing understanding of today's researchers' metadata requirements. Lastly, the system needed to convert digital objects from their archival formats to their web presentation formats (resize images, combine TIFFs into single PDFs, apply watermarks, etc.).
WVU Libraries' solution is the Metadata Form Creation System (MFCS). MFCS provides a drag-and-drop form creation platform, as well as a robust data entry system that provides all the tools needed for digital collection management. Its API provides easy-to-use methods for batch-migrating data into the archival system and exporting data out of it for use in other repository systems (Hydra, DLXS, etc.).
Now we are 13, Open Research Online becomes a teenager
The Open University, United Kingdom
The history of the Open University (UK) institutional repository (Open Research Online) is one of changing requirements as defined by its research community, institutional administrators and external HE policy. How the repository has responded to these changes has ensured its success. However, how we manage the (potentially) competing requirements of compliance monitoring and Open Access dissemination will determine the future of the repository.
Reducing metadata errors in an IR with distributed submission privileges
University of Texas at Austin, United States of America
The distributed submission policies of many repositories make standardizing metadata input very difficult. At The University of Texas at Austin, over 50 people have permission to submit content to the UT Digital Repository (UTDR). Of those 50, two full-time staff members have management responsibilities for the repository. This limited number of managing staff means that frequent metadata clean-up isn’t possible. We are taking a pragmatic approach to addressing the issue of limited clean-up capacity by transforming our training process. Training focuses on clearly communicating repository-wide metadata standards, collaboratively creating collection-specific metadata guidelines as needed, and providing detailed input guidelines for each Dublin Core (DC) metadata field. We are working one-on-one with student workers to familiarize them with the new guidelines and are communicating with repository submitters via listservs and in-person meetings. The new guidelines were rolled out recently, and we expect to see a decrease in the number of records requiring editing. We will present examples from our new guidelines, offer suggestions for successful communication methods with stakeholders, and provide information regarding the incidence of errors since implementing the new training.
Repository Cross-Linking at the National Center for Atmospheric Research
National Center for Atmospheric Research (NCAR), United States of America
The National Center for Atmospheric Research (NCAR) builds and maintains a number of repositories for data and scholarship. Using these as a development test bed, our project demonstrates how multiple repositories of diverse resources can exchange and connect related information via complementary workflows and metadata sharing. Our poster maps out how we are building cross-links between our data and scholarship repositories: on the one hand establishing relationships between resources upon submission by researchers, and on the other establishing technical connections between repositories on which to build out future interoperability.
Reworking the Workflow: Easy On Acceptance Deposits
Symplectic, United Kingdom
Obtaining metadata and content for your repository can be challenging. Wait until after publication, and you can likely harvest the metadata - but then you may not be able to get the content. Authors have the manuscript to hand when they get notification of acceptance for publication - but then the metadata has to be manually entered, and they may not have all of it, requiring that it is updated later.
This poster shows a new capture process and workflow that encourages authors to deposit their manuscript when it is accepted for publication, and automatically combines it with harvested metadata after publication to complete the repository record.
SEAD People, Data, Things: Linked profiles for decision making
Indiana University, United States of America
As open linked data gains traction, vastly more information becomes available and discoverable online. The SEAD project (Sustainable Environments Actionable Data) wants to take advantage of the rich linked data landscape. SEAD needs information about its researchers in the research areas around sustainability science (“people”), about the datasets they use and produce (“data”), and about repositories to which they may deposit their data once the project is over (“things”). But the era when a profile service could be built from scratch, with any expectation of completeness, much less of staying current over time, is over. Profile data exists everywhere: LinkedIn, ORCID, Friend of a Friend (FOAF), and DBpedia, to name a few sources. How can a service that needs this information harvest what it needs from the Internet, and use it in a way that can be trusted by all?
System for cross-organizational big data analysis of Japanese institutional repositories
1Hiroshima University, Japan; 2Keio University, Japan; 3National Institute of Informatics, Japan; 4Suzuka National College of Technology, Japan; 5Doshisha University, Japan
Institutional Repositories DataBase (IRDB) is a system that collects metadata from almost all Japanese institutional repositories (IRs). As one of its applications, the IRDB content analysis system provides statistical information by content type and format, allowing users to compare cross-organizational data. However, it lacks the data and functionality to compare access statistics. This poster introduces a new feature of the IRDB content analysis system, built with JAIRO Cloud (an IR cloud service used by 250 institutions in Japan), which provides cross-organizational access statistics for IRs.
The system divides into two major components. One is the log repository, which collects filtered access logs from Japanese IRs as its data source. Unprocessed logs are filtered in each IR server using COUNTER-like processing, such as robot exclusion; the filtering has to happen there because it includes removal of private data. The other component is the user interface. The system enables users, including repository managers, to analyze both content and access logs across Japanese IRs. We are planning to deploy the system to institutions that do not use JAIRO Cloud, in order to make it the standardized log analysis system in Japan.
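The per-IR filtering step described above can be sketched as follows. The robot user-agent substrings and 30-second double-click window are illustrative assumptions, not the system's actual rules:

```python
# A minimal sketch of COUNTER-like log filtering applied in each IR server
# before logs are contributed: exclude known robots and collapse repeated
# requests ("double-clicks") from the same source within a short window.

ROBOT_SUBSTRINGS = ("bot", "crawler", "spider")  # assumed, not exhaustive

def filter_log(entries, window=30):
    """entries: time-sorted list of (timestamp, ip, user_agent, item_id)."""
    last_seen = {}
    kept = []
    for ts, ip, ua, item in entries:
        if any(s in ua.lower() for s in ROBOT_SUBSTRINGS):
            continue  # robot exclusion
        key = (ip, item)
        if key in last_seen and ts - last_seen[key] < window:
            last_seen[key] = ts
            continue  # double-click filtering
        last_seen[key] = ts
        kept.append((ts, ip, ua, item))
    return kept

log = [
    (0,  "10.0.0.1", "Mozilla/5.0",    "item1"),
    (10, "10.0.0.1", "Mozilla/5.0",    "item1"),  # double-click: dropped
    (20, "10.0.0.2", "Googlebot/2.1",  "item1"),  # robot: dropped
    (60, "10.0.0.1", "Mozilla/5.0",    "item1"),  # outside window: kept
]
print(len(filter_log(log)))  # 2
```

In the real system the surviving entries would also have private data (such as IP addresses) stripped or hashed before leaving the IR server.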
Toward an Improved Understanding of Research Data Management Needs: Designing and Using a Rubric to Analyze Data Management Plans
1Georgia Institute of Technology, United States of America; 2Penn State, United States of America; 3Oregon State University, United States of America; 4University of Michigan, United States of America; 5University of Oregon, United States of America
The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. As research institutions endeavor to meet these demands, repositories are emerging as a potential solution for sharing and publishing research data. To develop new curation services, repository managers and developers need to understand how researchers plan to manage, share, and archive their data.
As a document produced by researchers themselves, data management plans (DMPs) provide a window into researchers’ data management knowledge, practices, and needs; they can be used to identify gaps in institutional capacity for sharing and preserving data. With that in mind, the IMLS-funded “Data management plans as A Research Tool (DART) Project” has developed an analytic rubric to standardize the review of NSF data management plans. The information gleaned from these evaluations can be leveraged for improving research data management services and infrastructure, from data management training to data curation repositories.
This poster will introduce the analytic rubric developed through a collaboration among five U.S. research institutions. The focus will be on examining the intentions of researchers toward data sharing and archiving, as expressed through a preliminary review of DMPs across these institutions.
The Open Shape Learning Object Repository: A Hybrid 3D Object and Open Educational Resource Repository
Oregon State University, United States of America
Oregon State University Libraries & Press have initiated the Open Shape Learning Object Repository Project, which aims to provide 3D objects to the open educational resource community by explicitly tying together 3D objects (renderable and printable models) and curricular elements. The aim of this repository model is to facilitate the use of 3D models and fabrication in the classroom at multiple levels of the curriculum. This project addresses the lack of cross-over between existing learning object repositories and 3D object repositories, and provides a guiding model for how repository systems and projects can facilitate bringing 3D modeling and fabrication into the open education community. Information packages served from our repository contain a renderable or printable 3D model (or set of models) along with a set of curricular elements that help contextualize the model(s) in the learning environment. We discuss the inception of the repository project, the results of a number of pilot projects, and our plans for future development.
Packaging DSpace Ingest Folders with FileAnalyzer
Georgetown University Library, United States of America
The Georgetown University Library has developed an application named the FileAnalyzer to facilitate the ingest of large collections of content into DSpace. The application can inventory a collection of files to be ingested and prepare ingest folders from a metadata spreadsheet. Once Georgetown University adopted these workflows, the backlog of collections to be ingested was eliminated.
This workshop will demonstrate the DSpace ingest workflows that are supported by the FileAnalyzer.
Participants will learn how to install the FileAnalyzer and run several of the tasks that can be useful for DSpace collection management. Using demo.dspace.org, participants will prepare and ingest content into the demonstration site.
Lastly, the session will discuss the framework for modifying the FileAnalyzer to implement institution-specific customizations.
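The ingest folders the workshop prepares follow DSpace's Simple Archive Format (SAF): one folder per item, holding a dublin_core.xml file and a contents manifest. A minimal sketch of generating such a folder from spreadsheet-style rows (this is not FileAnalyzer's code; the row structure is illustrative):

```python
# Sketch of preparing a DSpace Simple Archive Format (SAF) item folder from
# a metadata row, in the spirit of the FileAnalyzer workflow.
import os
import tempfile
from xml.sax.saxutils import escape

def write_saf_item(base, n, row):
    """Write one SAF item folder: dublin_core.xml plus a contents manifest."""
    item_dir = os.path.join(base, f"item_{n:03d}")
    os.makedirs(item_dir, exist_ok=True)
    fields = "".join(
        f'  <dcvalue element="{el}" qualifier="{q}">{escape(v)}</dcvalue>\n'
        for (el, q), v in row["metadata"].items()
    )
    with open(os.path.join(item_dir, "dublin_core.xml"), "w") as f:
        f.write(f"<dublin_core>\n{fields}</dublin_core>\n")
    with open(os.path.join(item_dir, "contents"), "w") as f:
        f.write("\n".join(row["files"]) + "\n")
    return item_dir

row = {"metadata": {("title", "none"): "Sample Thesis",
                    ("date", "issued"): "2015-06-09"},
       "files": ["thesis.pdf"]}
base = tempfile.mkdtemp()
item_dir = write_saf_item(base, 0, row)
print(sorted(os.listdir(item_dir)))  # ['contents', 'dublin_core.xml']
```

A folder tree of such items can then be ingested with DSpace's batch import tooling, as demonstrated against demo.dspace.org in the workshop.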
Not Entirely Unlike - CRIS Integration With Fedora
Symplectic, United Kingdom
Since 2009, Symplectic has been integrating its flagship research management system, Elements, with institutional repositories. In almost all cases, the repository has been either DSpace or EPrints.
Historically, neither of these platforms provided an API suitable for such integration; but despite varying institutional requirements, the architecture of these platforms has made it possible to write extensions to enable integration, with relatively few configurable parts to match each implementation.
In 2014, we took on the challenge of implementing integrations for four clients using repositories built with Fedora. Each of these had a different user and administrative interface, ranging from custom-built systems to well-known platforms such as Fez, Primo, and VITAL.
Whilst Fedora provides a consistent API that can be used to create and modify the repository contents, the flexibility provided by Fedora meant significant differences existed for each integration:
type of metadata documents (DC vs MARC vs MODS)
use of identifiers and object and datastream fields
order and number of API calls
This poster showcases our solutions to these difficulties, which delivered successful integrations to all four institutions.
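One common way to keep a single codebase across such divergent deployments is a configuration-driven dispatch, where each site's metadata format, identifier field, and API call sequence are declared rather than coded. The sketch below is illustrative, not Symplectic's implementation; all site names and fields are assumptions:

```python
# Hypothetical per-site configuration covering the three axes of variation
# listed above: metadata document type, identifier field, and call order.

SITE_CONFIGS = {
    "site_a": {"metadata": "DC",
               "id_field": "pid",
               "calls": ["ingest", "addDatastream"]},
    "site_b": {"metadata": "MODS",
               "id_field": "handle",
               "calls": ["ingest", "modifyObject", "addDatastream"]},
}

def plan_deposit(site, record):
    """Build a deposit plan for one record against one site's configuration."""
    cfg = SITE_CONFIGS[site]
    return {"format": cfg["metadata"],
            "identifier": record[cfg["id_field"]],
            "api_calls": list(cfg["calls"])}

plan = plan_deposit("site_b", {"handle": "1234/56"})
print(plan["api_calls"])  # ['ingest', 'modifyObject', 'addDatastream']
```

New integrations then become a matter of adding a configuration entry rather than forking the integration code.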
Successfully lobbying for and implementing increased repository staffing: the Iowa State University experience
Iowa State University, United States of America
In 2014, the Iowa State University Library increased staffing for its institutional repository from one full-time librarian to include two professional positions and two paraprofessional positions. This poster describes how the library was able to successfully lobby for funding from the Office of the Senior Vice President and Provost for two of the new positions, as well as the roles each staff member plays in the fully-staffed digital repository unit.
Visualizing Electronic Theses and Dissertations
Montana State University, United States of America
We present an example of this year’s conference theme, “looking back and moving forward,” based on our research of Montana State University’s (MSU) large collection of electronic theses and dissertations (ETDs). We review the entire ETD collection, from the most recent record to the oldest dating from 1901, to discover trends such as changes in subject matter and volume by academic department, and we ask whether these trends correlate with world events such as wars or technological advances. We derive statistics from the ETD collection such as degree department, issue date, and subject area, and use these data to visualize trends and relationships between time and emphases within the collection.
DSpaceDirect: Lowering the Barriers to Open Source Repositories
DuraSpace, United States of America
For many, installing and managing open source software is not a feasible solution; however, being a part of a large community of users tackling the same issues is desirable. For Bennington College, a small undergraduate liberal arts college in Vermont, DSpaceDirect (a hosted service from DuraSpace) was the institutional repository service that best matched their requirements. DSpaceDirect is a managed service that provides a full end-to-end solution of discovery, access, and preservation services in support of open access to the scholarly resources of institutions large or small. The goal of the service is to enable institutions to quickly, easily, and cost-effectively store their content in a hosted open source institutional repository running easy-to-use DSpace software and to focus on what matters – making digital collections openly available to their users on-campus and worldwide. DSpaceDirect automatically preserves all repository content in DuraCloud where additional preservation-focused services, such as automated regular content health checking services, are run. The poster will walk through the experiences of Bennington College, an early adopter of DSpaceDirect, who researched and evaluated services available, conducted an analysis of their own needs and requirements, and has been successfully using and growing their repository for the past two years.
Institutional Repository cures Interlibrary Loan / Document Delivery
1Hiroshima University Library; 2University of Tsukuba Library; 3Shizuoka University Library; 4Chiba University Library
The same publications are distributed many times through interlibrary loan and document delivery services (ILL/DD); the most frequently requested item is copied and delivered over 100 times within a single year. By analyzing ILL/DD usage to identify these highly requested publications and making them open access, we could meet the real demand.
The Road Forward: Faculty Reporting System, Institutional Repository, and Research Portal Integration at the University of Arizona
University of Arizona Libraries, United States of America
How has the launch of “UA Vitae”, an online reporting system supporting the faculty annual review process at the University of Arizona, impacted the institutional repository? Explore the relationship between the “UA Vitae” system and the UA Campus Repository; learn about the current reality and the potential that is just around the corner! In addition, learn how the UA Campus Repository data is integrated into the brand-new Research Arizona Portal that shares and showcases faculty research from Arizona universities. Workflow processes have potholes, but we’re driving forward to collaborate and promote the University’s research output. This poster shows what is “under the hood” in terms of desired and actual workflows. In addition, we’ll discuss faculty participation in making content available across systems. We’ll describe the similarities and differences between system objectives, and discuss how we’ve approached common goals. This presentation will focus on the business relationships in place to make collaboration work, in addition to highlighting both bumps-in-the-road and destinations reached!
The Open Science Framework: Connecting and Supporting the Research Workflow
Center for Open Science, United States of America
The non-profit Center for Open Science (COS) seeks to connect and streamline the research workflow through use of its web application, The Open Science Framework (OSF) (https://osf.io). Free and open source, the OSF manages the entire research lifecycle: planning, execution, reporting, archiving, and discovery. The OSF is part version control system, part collaboration tool, and part project management software, and facilitates transparency in scientific research. The OSF integrates private and public workflow by assigning every researcher, every project and every component a unique, persistent identifier, and individual privacy options. The researcher maintains access control over which parts remain private and which become public - but is incentivized to share data and materials openly. The OSF streamlines workflows by connecting tools researchers already use — examples include Dropbox, FigShare, and Dataverse — allowing resources housed in different services to be displayed in one central location. This talk will highlight the ways the OSF enables open science, streamlines research workflows, and facilitates collaboration.
Visual user interfaces of a digital repository for multimedial linguistic corpora. Components and extensions of the HZSK Repository.
Universität Hamburg, Germany
The paper demonstrates the current state of the development of the digital repository of the Hamburg Centre for Language Corpora (Hamburger Zentrum für Sprachkorpora, HZSK) and shows how the underlying modular repository architecture and the connection to the research infrastructure CLARIN open up possibilities for additional web-based user interfaces. The HZSK Repository contains numerous complex linguistic corpora and provides a web interface that allows for browsing the included multimedia content and metadata. While the repository is already connected to the pan-European CLARIN research infrastructure, and in this context to central interfaces like the Virtual Language Observatory and the CLARIN Federated Content Search, in the near future it will be extended by additional internal and external user interfaces that allow for generic as well as specific perspectives on the available linguistic research data.
Media Preservation and Access with HydraDAM2 and Fedora 4
1WGBH Media Library and Archives, United States of America; 2Indiana University, United States of America
The WGBH Media Library and Archives, with support from the National Endowment for the Humanities (NEH), has developed an open source digital media preservation repository and digital asset management system for audio and video. This system, known as HydraDAM, is built on a Hydra and Fedora 3 technology stack and is focused primarily on the needs of public media stations but is intended to be relevant and applicable to all cultural institutions with moving image and audio materials. At the start of 2015, WGBH and Indiana University Libraries were awarded a new two-year NEH grant to extend HydraDAM to use Fedora 4, taking advantage of its new capabilities for RDF, linked data, and integration of multiple underlying storage technologies. This new version will support integration with access systems such as the open source Avalon Media System in order to provide streaming access to preserved materials.
Dash: Exploiting Hydra/Blacklight for Repository-Agnostic Data Curation
California Digital Library
University libraries and IT groups are increasingly being asked to support research data curation as a consequence of funder mandates, pre-publication requirements, institutional policies, and evolving norms of scholarly practice. Any repository-like service targeting academic researchers must provide both high service function and intuitive user experience in order to compete successfully against free commercial alternatives such as figshare or Dropbox. To meet these goals, the UC Curation Center (UC3) is developing a second generation of its Dash curation service. Dash is not a repository itself, but rather an overlay submission and discovery layer sitting on top of a repository and supporting refactored drag-n-drop upload, metadata entry, DOI assignment, faceted search/browse, and a plug-in mechanism for extension by reconfiguration rather than recoding. Dash is now based on a forked version of the Hydra/Blacklight platform that has been genericized to permit integration with any repository supporting SWORD deposit and OAI-PMH harvesting protocols. This presentation will introduce the new Dash architecture and describe UC3’s process of enhancing Hydra for applicability beyond Fedora. UC3’s deployment of enhanced service function through the composition of loosely-coupled, protocol-linked components exemplifies a useful approach for the streamlined creation of new innovative curation and repository services.
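The repository-agnostic design rests on two standard protocols: SWORD for deposit and OAI-PMH for harvesting. A minimal sketch of the harvesting side, parsing an OAI-PMH ListRecords response into identifier/title pairs (the XML here is a canned minimal response, not output from a real endpoint):

```python
# Sketch of harvesting Dublin Core records from an OAI-PMH ListRecords
# response, the mechanism by which an overlay like Dash can index any
# compliant repository.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
 <ListRecords>
  <record>
   <header><identifier>oai:example:1</identifier></header>
   <metadata>
    <dc xmlns="http://purl.org/dc/elements/1.1/"><title>A Dataset</title></dc>
   </metadata>
  </record>
 </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Extract (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.iter(OAI + "record"):
        ident = rec.find(f"{OAI}header/{OAI}identifier").text
        title = rec.find(f".//{DC}title").text
        out.append((ident, title))
    return out

print(harvest_titles(response))  # [('oai:example:1', 'A Dataset')]
```

A production harvester would additionally handle resumption tokens, deleted records, and datestamp-based incremental harvesting.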
How compliance will forever change ‘academic publishing’.
figshare, United Kingdom
Openly available academic data on the web will soon become the norm, and funders and publishers are already preparing to manage this content. The coming open data mandates mean that we are now talking about ‘when’, not ‘if’, the majority of academic outputs will live somewhere on the web. The EPSRC in the UK is mandating dissemination of all the digital products of research it funds this year. The European Commission and the White House’s OSTP are pushing ahead with directives that are causing a chain effect of open data directives among European governments and North American funding bodies. This session will look at the research data management landscape and the different approaches being taken to adjust to the various funder mandates. We will explore how different existing workflows will be disrupted and what potential opportunities there are for adding value to academic research. It will also take the audience through the experience of figshare in attempting to contribute in an area that has many stakeholders: funders, governments, institutions, and researchers themselves.
Sharing and reusing data legally
Charles University in Prague, Czech Republic
The necessity to share and preserve data and software is becoming more and more important: without the data and the software, research cannot be reproduced and tested by the scientific community. Making data and software readily reusable and legally unambiguous requires choosing a license for them, which is not a trivial task. We describe a legal/licensing framework that implements complete support for licensing submissions.
Globus Software as a Service data publication and discovery
University of Chicago, United States of America
Globus is software-as-a-service for research data management, used at dozens of institutions and national facilities for moving, sharing, and publishing big data. Recent additions to Globus include services for data publication and discovery that enable: publication of large research datasets with appropriate policies for all types of institutions and researchers; the ability to publish data directly from locally owned storage or from cloud storage; extensible metadata that can describe the specific attributes of any field of research; flexible publication and curation workflows that can be easily tailored to meet institutional requirements; public and restricted collections that give complete control over who may access published data; and a rich discovery model that allows others to search and use published data. This presentation will give an overview of these services.
Equal partners? Improving the integration between DSpace and Symplectic Elements
1The University of Waikato Library, Hamilton, New Zealand; 2Auckland University of Technology Library, New Zealand; 3The University of Waikato ITS, Hamilton, New Zealand
While self-submission by academics was regarded as the ideal way to add content to Open Repositories in the early days of such systems, the reality today is that many institutional repositories obtain their content automatically from integration with research management systems. The institutional DSpace repositories at Auckland University of Technology (AUT) and at the University of Waikato (UoW) were integrated with Symplectic Elements in 2010 (AUT) and in 2014 (UoW). Initial experiences at AUT suggested a mismatch between the interaction options offered to users of Symplectic Elements on one hand and the actions available to repository managers via the DSpace review workflow functionality on the other hand. Our presentation explores these mismatches and their negative effects on the repository as well as on the user experience. We then present the changes we made to the DSpace review workflow to improve the integration. We hope that our experiences will contribute to an improvement in the integration between repository software and research management systems.
E-quilt prototype: research of experiment in scenario of data sharing
1Federal University of Paraíba; 2Federal University of Paraíba; 3University of São Paulo
Changes are occurring in scholarly communication as it moves from an analog to a digital perspective, with e-Science emerging as a new scientific paradigm. This shift brings different possibilities for the use, reuse, and accessibility of scientific research information through digital information and communication technologies, and new modes of scientific publication across different digital formats and media. Part of this scenario is the rise of a new modality of scholarly publishing known as the enhanced publication. This context requires development across the scientific cycle: open data sharing, metadata standards, interoperability protocols, data aggregation models, and information systems. Data sharing opens up new possibilities for convergence, connectivity, and collective, interactive research. The goal of this poster is to report on an experimental research project, the e-Quilt prototype, under development as part of a PhD thesis in the Graduate Program in Information Science at the Federal University of Paraíba, João Pessoa, Brazil, in partnership with the University of Tennessee, Knoxville (UTK), through its College of Communication & Information Science. The research takes a qualitative and quantitative approach using the quadripolar method, whose dynamics rest on the interaction between the epistemological, theoretical, technical, and morphological poles; the research is exploratory and experimental in character.
Diversity or Perversity? An Assessment of Indonesian Higher Education Institutional Repositories
1Curtin University, Australia; 2Petra Christian University, Indonesia
The presentation will deliver the results of content analysis conducted for approximately 80 Indonesian higher education institutional repositories (IRs). These institutions have been selected to represent the full range of higher education institutions in Indonesia.
The problem facing an emerging nation such as Indonesia, is that the higher education sector is diverse, loosely regulated, and struggling to achieve a sustainable level of equitably distributed funding. It also supports an emerging research sector that struggles for visibility in an international environment that privileges outputs in English originating from ‘developed’ countries and accessible through international journals. In this context open repositories are seen as offering part of the solution when implemented at an institutional level.
The approach taken in this research is to compile quantitative and qualitative data capturing the current state of Indonesian IRs, with a view to summarizing progress and creating the basis from which future national IR policy can be decided and implemented.
Final conclusions cannot be offered at this stage, since this research (to be concluded in March 2015) is still proceeding. However, based on the data gathered, there are interesting practices that do not reflect the original idea of IRs as a (Green) Open Access strategy.
Widget Integration in Open Repositories: Real World Experiences with the PlumX Widget
Plum Analytics, United States of America
Looking for Solid Ground During Consolidation: The DigitalCommons, Kennesaw State University and Southern Polytechnic State University
Kennesaw State University, United States of America
In November 2013, University System of Georgia Chancellor Hank Huckaby announced that Kennesaw State University and Southern Polytechnic State University would consolidate. The institutions are scheduled to be operating as a consolidated university by July 2015. This decision has impacted every facet of campus life and work, including DigitalCommons. This poster will highlight the experiences of the repository manager at Kennesaw State University during the transition, with hopes of encouraging others who may be going through similar scenarios to engage in a more personal dialogue. Challenges, accomplishments, and strategies will be addressed.
Hydra North: Moving forward with 10 years of repository history. Baby steps to a solid centralized Digital Asset Management System
University of Alberta Libraries, Canada
The University of Alberta is currently building a centralized Digital Asset Management System (DAMS) designed to provide central storage, management, and long-term stewardship for the full spectrum of digital objects, from e-theses to digitized materials, from multimedia objects to research data, that currently live in separate repositories. We see the development of the DAMS as a great opportunity to consolidate these silos into a single, more coherent and effective core, which will embrace semantically richer data, better discovery experiences, and greater potential to leverage these assets for user communities. In this presentation we will discuss some of the initial steps we took to start migrating existing content into the new system, beginning with the migration of metadata from XML-based standards to RDF. This presentation is intended to unveil the baby steps (and stumbles) we have taken to plan and execute the migration, summarize the challenges we have been facing, and reflect on the lessons we have learned.
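The XML-to-RDF metadata migration mentioned above can be illustrated in miniature: simple Dublin Core XML elements become subject/predicate/object triples. This is a sketch under stated assumptions, not the University of Alberta's actual mapping; a real migration would use a full RDF library and richer vocabularies:

```python
# Illustrative mapping from a simple Dublin Core XML record to RDF-style
# triples, represented here as plain (subject, predicate, object) tuples.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

xml_record = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Prairie Ecology Field Notes</dc:title>
  <dc:creator>Doe, Jane</dc:creator>
</record>"""

def xml_to_triples(subject_uri, xml_text):
    """Convert each DC element of a record into one triple about the item."""
    root = ET.fromstring(xml_text)
    triples = []
    for el in root:
        if el.tag.startswith("{" + DC + "}"):
            predicate = DC + el.tag.split("}", 1)[1]
            triples.append((subject_uri, predicate, el.text))
    return triples

triples = xml_to_triples("http://example.org/item/1", xml_record)
print(len(triples))  # 2
```

The harder parts of a real migration lie beyond this sketch: choosing target vocabularies, minting stable subject URIs, and handling fields that do not map one-to-one.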
Sharing Scholarly Journal Articles Through University Institutional Repositories Using Publisher Supplied Data and Links
1University of Florida, United States of America; 2Elsevier
The University of Florida (UF) libraries and Elsevier are collaborating to widen access to articles authored or co-authored by UF authors and published by Elsevier. Both organizations are committed to supporting researcher success and believe working together will more effectively widen public access, improve compliance with current and future funder policies, and facilitate efficient and responsible sharing of scholarly journal articles through the university's institutional repository.
Russell and Wise will share a strategic, rather than technical, perspective on how and why they are working together to improve access to scholarship from UF authors through the University IR.
Funding models for open access digital repositories
1Trinity College Dublin, Ireland; 2National University of Maynooth, Ireland; 3Royal Irish Academy, Ireland
This presentation examines funding models for open access digital repositories. Whilst such repositories are free to access, they are not without significant cost to build and maintain. The lack of a direct funding stream through payment for use poses a considerable challenge to open access repositories and places their future, and the digital collections they hold, at risk. We document and critically review 14 different potential funding streams, grouped into six classes, with a particular focus on funding academic research data repositories. There is no straightforward solution to funding open access digital repositories, with a number of general and specific challenges facing each repository and funding stream. We advocate the adoption of a blended approach that seeks to ameliorate cyclical effects across funding streams by generating income from a number of sources rather than relying overly on a single one. Creating open access repositories is a laudable ambition; however, such repositories need to find sustainable and stable ways to fund their activities, or they place the collections they hold at significant risk. Our review assesses and provides concrete advice with respect to potential funding streams in order to help repository owners address the financing conundrum they face.
Harmonizing Research Management and Repository Functionality to Support Open Science
Thomson Reuters, United Kingdom
The research landscape is rapidly changing from a publication-centered output infrastructure into an open, data-driven, result-oriented infrastructure, dependent on advanced technologies and services that enable reuse and thus create additional value from scientific results. This has an effect on traditional research lifecycle workflows: it requires the harmonization of a variety of ongoing integration approaches and has to take into account a multiplicity of stakeholder needs. Here, we focus on the intersection of functionality between a CRIS and repositories in the changing research ecosystem. The two paradigms increasingly align their functionalities and thus collaborate in support of the open agenda. A number of conference themes will be reflected by demonstrating Thomson Reuters' roles in the research ecosystem and contributions to a sustainable open agenda. The audience will gain an understanding of the challenges and the tensions to be managed between flexibility, diversity, and complexity on the one hand and standardization and harmonization on the other.
Data sharing requirements for decision-making in Brazilian biodiversity conservation
1University of São Paulo and Federal University of Paraíba, Brazil; 2Federal University of Paraíba, Brazil; 3Federal University of Paraíba, Brazil
The sharing of primary biodiversity data has gained importance in recent years, due to its use in environmental management and in decision-making regarding conservation and the sustainable use of natural resources. Accordingly, diverse research initiatives have been launched with the objective of developing protocols and tools to support primary biodiversity data sharing. A cyberinfrastructure for sharing Brazilian primary biodiversity data is essential to support environmental management, but a number of challenges have to be faced, such as integrating data scattered across different administrative domains, retrieving data stored in heterogeneous systems, and sharing sensitive data. In this context, a survey was conducted with Brazilian biodiversity research institutes to guide the development of a cyberinfrastructure to overcome these challenges. The evaluation of this cyberinfrastructure at the Instituto Chico Mendes de Conservação da Biodiversidade, part of the Ministry of the Environment of Brazil, has shown its potential to enhance the reuse of Brazilian primary biodiversity data by supporting collaborative and interdisciplinary research.
System Integration between Sakai and Fedora 4
Yale University, United States of America
In this poster session, we plan to demonstrate the integration of a Learning Management System like Sakai with Fedora 4, and share our suggestions on what level of services a digital repository should provide to systems like an LMS. We would also like to share lessons learned in the integration process. Repositories (and LMS tools alike) that aim to integrate with other systems should provide APIs to allow interoperability. Interoperability is a key requirement in any enterprise setup, and repositories must provide APIs so that they do not become silos and so that they provide the most value for the money. The intended audience for this poster is repository managers, teaching staff, and developers in academic technology groups.
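Fedora 4 exposes its objects over the W3C Linked Data Platform (LDP) REST API, so an LMS client can create and read resources with plain HTTP. The sketch below assembles, without sending, an LDP create request of the kind such an integration would issue; the host, container name, slug, and title are illustrative assumptions, not details from the Yale deployment:

```python
# Default Fedora 4 REST endpoint on a local install; a real integration
# would read this from configuration.
FEDORA_BASE = "http://localhost:8080/rest"

def build_create_request(container, slug, title):
    """Assemble (method, url, headers, body) for creating an LDP resource.

    POST-to-container with a Slug hint is standard LDP behavior; the body is
    a minimal Turtle payload setting a dcterms:title on the new resource.
    """
    body = ('@prefix dcterms: <http://purl.org/dc/terms/> .\n'
            '<> dcterms:title "%s" .\n' % title)
    headers = {
        "Slug": slug,                   # requested name for the new resource
        "Content-Type": "text/turtle",  # tell Fedora the payload is RDF
    }
    return ("POST", "%s/%s" % (FEDORA_BASE, container), headers, body)

# Hypothetical LMS-side call: file a syllabus under a course container.
method, url, headers, body = build_create_request(
    "course-materials", "week1-syllabus", "Week 1 Syllabus")
```

Because the repository side is plain LDP over HTTP, the LMS needs no Fedora-specific client library, which is the interoperability argument the abstract makes.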
The Past, Present, & Future of Capturing the Scholarly Record of a Small Comprehensive U.S. Institution: Toward a Sustained Repository Content Recruitment and Workflow Strategy
1Valparaiso University, United States of America; 2Indiana University-Purdue University, Indianapolis, United States of America
The number of institutional repositories (IRs) has grown rapidly since Clifford Lynch declared in 2003 that IRs were "essential infrastructure" for academic institutions. While the number of IRs has grown, their success does not necessarily follow suit, for a variety of reasons. For smaller institutions, limited staffing, expertise, and content recruitment can all be significant factors threatening an IR's success, especially as these factors change over time.
Launched in 2011, ValpoScholar (http://scholar.valpo.edu/) is currently on its second iteration of a content recruitment strategy and workflow design, with a third iteration in development. With a primary focus on capturing metadata before moving to full-text access and preservation, this evolving approach has led to a 25% increase in IR record creation, while also increasing full-text availability and giving the institution its first public scholarly record. This poster will share what Valparaiso University has done in the past, what its current processes are, and what it plans to do in the future to ensure a comprehensive scholarly record of the institution, and how similar institutions with limited staffing, expertise, and content may sustain the growth of their IRs.
Starting from scratch – building the perfect digital repository
Northwestern University, United States of America
By establishing a digital repository at the Feinberg School of Medicine (FSM) on Northwestern University's Chicago campus, we anticipate gaining the ability to create, share, and preserve attractive, functional, and citable digital collections and exhibits. Galter Health Sciences Library did not have a repository as of November 2014. In just a few months we formed a small team charged with selecting the most suitable open source platform for our digital repository software. We followed the National Library of Medicine master evaluation criteria, considering factors that included: functionality, scalability, extensibility, interoperability, ease of deployment, system security, physical environment, platform support, demonstrated successful deployments, system support, strength of the development community, stability of the development organization, and strength of the technology roadmap for the future. These factors were important in our case given the desire to connect the digital repository with another platform that is an essential piece of the big FSM picture: VIVO. VIVO is a linked data platform that serves as a researchers' hub, providing the names of researchers from academic institutions along with their research output, affiliation, research overview, service, background, researcher identities, teaching, and much more.