
Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
SOC1: Poster Reception
Time:
Tuesday, 09/Jun/2015:
6:00pm - 8:00pm

Session Chair: Amy Buckland
Location: Regency Foyer

Presentations

A case-study: Using the pdfminer python package and fuzzy matching to triage digitized legacy theses and dissertations for repository ingest.

Seth Robbins

University of Illinois, United States of America

This poster presents a detailed case study of an automated, text-extraction-based metadata solution developed for ingesting the digitized legacy theses and dissertations collection into IDEALS, the institutional repository of the University of Illinois at Urbana-Champaign. To associate digitized legacy dissertations with department-specific collections, we developed a text extraction process based on fuzzy matching to identify the department that granted each dissertation author's degree. This procedure identified the granting department for eighty-six percent of the more than 19,000 dissertations to which it was applied.
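
The abstract names the tools but shows no code; as a rough illustration of this kind of triage pipeline (not the poster's actual code), a minimal sketch using pdfminer.six and the standard-library difflib might look like the following. The department list, page range, and similarity threshold are assumptions.

```python
# Illustrative sketch only: pull text from the first pages of a digitized
# dissertation with pdfminer.six and fuzzy-match it against a list of known
# department names.
import re
from difflib import SequenceMatcher

from pdfminer.high_level import extract_text

DEPARTMENTS = [
    "Department of Chemistry",
    "Department of History",
    "Department of Electrical Engineering",
]  # assumed lookup list of granting departments

def best_department(pdf_path, threshold=0.8):
    # The degree statement usually sits on the title page, so scan only the front matter.
    text = extract_text(pdf_path, page_numbers=[0, 1, 2])
    lines = [re.sub(r"\s+", " ", line).strip() for line in text.splitlines() if line.strip()]
    best, best_score = None, 0.0
    for line in lines:
        for dept in DEPARTMENTS:
            score = SequenceMatcher(None, line.lower(), dept.lower()).ratio()
            if score > best_score:
                best, best_score = dept, score
    # Below the threshold, fall back to manual triage rather than guessing.
    return best if best_score >= threshold else None
```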

Robbins-A case-study-196.pdf

A Permanent Home for Ephemeral Policies: Integrating DSpace with an Enterprise Content Management System in an Institutional Archive

Nicholas Webb

Icahn School of Medicine at Mount Sinai, New York, NY

This presentation will discuss the successful integration of a DSpace instance with an enterprise content management system, examining some of the challenges faced in connecting two disparate information systems for the purpose of long-term preservation.

In 2014, IT staff at the Mount Sinai Medical Center implemented a content management system (CMS), based on the open source Liferay portal, to centralize the management of policies and procedures across all Medical Center departments. As the permanent home for expired policies, the Mount Sinai Archives was a stakeholder in this project. Archives staff worked with members of the CMS team to develop tools and workflow for automatically exporting policies from the CMS into the Archives’ existing DSpace digital repository, which had previously been used primarily for the storage and dissemination of manually cataloged digital objects. The integration process required addressing a number of technical and descriptive challenges, including crosswalking descriptive metadata, automating the creation of preservation copies, and dealing with inaccuracies in user-submitted metadata. The end result was a bespoke combination of automated data transfer and manual staff intervention by which changes to the CMS are regularly ingested into the repository.

Webb-A Permanent Home for Ephemeral Policies-44.pdf

Accelerating Access - Making Open Access Policies Work From Day One

Graham Triggs1, Justin Gonder2, Catherine Mitchell2

1Symplectic, United Kingdom; 2California Digital Library

Successful recruitment of published content into institutional repositories relies on three key components: 1) funder mandates or institutional policies requiring deposit of papers; 2) a means of ensuring that deposit occurs as soon as possible after a paper’s acceptance or publication; and 3) an efficient, intuitive mechanism for helping faculty fulfill their deposit requirements with minimal effort.

Research management systems play an increasingly important role in the scholarly publishing ecosystem, helping collate information about scholars' publications and enabling institutions to effectively implement and monitor open access policies. This presentation examines how a research management system can be set up to provide faculty with the tools they need to easily comply with their institutional OA policy and to help repository managers track policy compliance rates across the institution.

Triggs-Accelerating Access-84.pdf

Achieving Ambitious Agendas with Limited Means at the University of Cincinnati

Linda D. Newman, Ted Baldwin, Eira Tansey

University of Cincinnati, United States of America

Scholar@UC - scholar.uc.edu - is the faculty self-submission repository currently in development at the University of Cincinnati (UC). Built on the Hydra framework, the system is taking shape in an environment of dramatic change: new partnerships across campus and with other entities, new engagement with faculty and stakeholders, growing needs for internal staff job development, and development of new researcher services. The UC Libraries is lean on staffing in comparison with its peers, so we face unique challenges that require flexibility and creativity. We embrace both nimble processes and a strong sense of risk-taking to ensure that Scholar@UC becomes a critical enterprise system. This panel reflects on three aspects of our engagement and development efforts. First, we will discuss outreach efforts to bring together a small set of "early adopter" faculty, and the process of assembling feedback in a personalized, interview-based setting. Then, we will discuss the process of transforming this feedback into functional use cases that prioritize needs and desires. Finally, we will discuss building a small, high-functioning software development team, and collaboration with UC's central IT department and other local/national development efforts. We think this presentation will offer insight for other institutions with ambitious agendas and limited means.

Newman-Achieving Ambitious Agendas with Limited Means at the University-184.pdf

Adding content reporting to DSpace

Andrea Schweer1, Jenni Barr2, Deirdre Congdon2, Megan Symes2

1The University of Waikato, Hamilton, New Zealand; 2AgResearch Limited, Hamilton, New Zealand

This poster presents a content reporting add-on to DSpace, developed for AgResearch Ltd by the IRR support team at the University of Waikato's Information Technology Services Division. We outline the motivation for developing this add-on, give a high-level description of its implementation and report initial insights on its reception and uptake.

Schweer-Adding content reporting to DSpace-88.pdf

Aggregating Multiple Sources to Prepopulate New Repositories

Laurence Bianchini, Damien Vannson, Virginie Simon

MyScienceWork, United States of America

We present the process of prepopulating a new repository, discussed in the context of MyScienceWork's POLARIS platforms. We introduce the aggregation service, which uses an internal database, uploads from users, and several external services to centralize and structure metadata. We also introduce the process of assigning confidence rates both to metadata from different sources and to author-publication matching; in certain cases, manual validation is required when the confidence rate is low.

Bianchini-AGGREGATING MULTIPLE SOURCES TO PREPOPULATE NEW REPOSITORIES-216.doc

An Implementation of Technical Revision in DSpace Allowing Open Educational Resource Browser Access

Manuela Klanovicz Ferreira, Zaida Horowitz, Janise Silva Borges da Costa, Caterina Groposo Pavão

Federal University of Rio Grande do Sul - UFRGS – Data Processing Center, Brazil

This work shows how the DSpace software was modified to add a technical revision step to the submission workflow of the Open Educational Resources (OER) community in Lume – Digital Repository of UFRGS. The main goal of the technical revision step is to allow the OER to be accessed directly in the browser, with no installation required.

Klanovicz Ferreira-An Implementation of Technical Revision in DSpace Allowing Open Educational Resource.pdf

Better metrics through metadata: a study of the use of persistent identifiers in IRs

Stacy Konkiel1, Nickoal Eichmann2

1Impactstory, USA/Canada; 2Mississippi State University Libraries, USA

An increasing number of institutional repositories are partnering with services like PlumX to track citations and altmetrics for the content they host, but are all the pieces in place to ensure that such metrics are tracked successfully? Persistent identifiers like DOIs are used by altmetrics services to track accurate metrics for both repository-hosted content and related versions of the content that are hosted elsewhere (on publisher websites, aggregators like PubMed Central, and so on). Yet our case study--based on an analysis of the metadata for items held in an institutional repository at a CIC institution--finds that persistent identifiers are missing from more than 75% of all item records (where such identifiers exist). This poster shares the full results of our analysis (based on persistent identifiers including DOIs, PubMed IDs, and ArXiv IDs) and also benchmarks staff hours needed to find and add these identifiers to item records. While the study was limited in scope, it provides a framework for resource allocation planning for institutional repositories interested in adding altmetrics services.
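
As a rough illustration of the kind of metadata audit described here (not the authors' actual method), one might scan exported item records for identifier patterns. The field name "dc.identifier" and the regular expressions below are assumptions; older arXiv identifier styles and other ID families are not covered.

```python
# Illustrative sketch only: count how many exported item records carry a
# recognizable persistent identifier.
import re

PATTERNS = {
    "doi":   re.compile(r"\b10\.\d{4,9}/\S+", re.IGNORECASE),
    "pmid":  re.compile(r"\bpmid\s*:?\s*\d{5,9}", re.IGNORECASE),
    "arxiv": re.compile(r"\barxiv\s*:?\s*\d{4}\.\d{4,5}", re.IGNORECASE),
}

def has_persistent_id(record):
    values = " ".join(record.get("dc.identifier", []))
    return any(p.search(values) for p in PATTERNS.values())

def coverage(records):
    if not records:
        return 0.0
    return sum(1 for r in records if has_persistent_id(r)) / len(records)

# Example: two records, one with a DOI, one without any persistent identifier.
print(coverage([{"dc.identifier": ["doi:10.1000/xyz123"]}, {"dc.identifier": []}]))  # 0.5
```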

Konkiel-Better metrics through metadata-24.pdf

Building CALiO™ – A Repository and Digital Library for Field Practitioners

David N. King

National Children's Advocacy Center, United States of America

Creating a digital library designed for field professionals is much different from amassing collections for academic and research purposes. A digital collection that delivers resources which specifically address the time-sensitive information needs of practitioners, quickly, easily and reliably, becomes an invaluable tool for professional practice and improves outcomes. Repositories can play an active, pivotal role in addressing the needs of practitioners locally, regionally, nationally and internationally. This presentation describes the mission, development and implementation of CALiO™, the Child Abuse Library Online, which is the primary digital library for multi-disciplinary teams of professionals throughout the United States and in dozens of other countries.

The front face of CALiO™, and the primary digital collection for the large cadre of professionals nationally and internationally, is the repository, CALiO™ Collections. The repository supplements locally produced resources with open access resources to optimize user ability to obtain evidence-based information for decision making. This presentation discusses the important differences between a digital collection intended for practitioners vs academic/research clientele, the basic principles guiding design and implementation, and strategies employed for collections and services.

King-Building CALiO™ – A Repository and Digital Library-186.docx

Building Research Data Repositories for the Humanities and Area Studies with CKAN (and Some Extensions)

Cheng-Jen Lee, Huang-Sin Syu, Yao-Hsien Yeh, Tyng-Ruey Chuang

Academia Sinica, Taiwan

We have sought to use CKAN, a free software package originally developed for sharing datasets, as the basis of a repository of research materials for the humanities and area studies. As the research materials are often diverse in data types and in domain semantics, we defined a set of controlled vocabularies for metadata use in this repository. The controlled domain vocabularies help users locate research resources and help ensure the quality of metadata when these resources are imported into the repository. For historical maps stored in the repository, we have sought ways to extract place name information from them so that place names can be searched by time period and spatial extent, and spatiotemporal mappings of current or historical places can be performed in CKAN.
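
The abstract does not show how datasets enter the repository; a hedged sketch of registering a dataset through CKAN's action API, with subject terms checked against a controlled list on the client side, could look like this. The site URL, API key, vocabulary terms and extra field are placeholders, and a production setup would more likely enforce the vocabulary server-side with a CKAN schema plugin.

```python
# Hedged sketch: create a CKAN dataset whose subject tags are restricted to a
# controlled vocabulary before the API call is made.
import requests

CKAN_URL = "https://repository.example.org"     # hypothetical CKAN site
API_KEY = "REPLACE-ME"                          # placeholder API key
CONTROLLED_SUBJECTS = {"historical-maps", "oral-history", "gazetteers"}

def create_dataset(name, title, subjects):
    unknown = set(subjects) - CONTROLLED_SUBJECTS
    if unknown:
        raise ValueError(f"terms not in the controlled vocabulary: {unknown}")
    payload = {
        "name": name,
        "title": title,
        "tags": [{"name": s} for s in subjects],
        "extras": [{"key": "temporal_coverage", "value": "1895-1945"}],  # illustrative
    }
    resp = requests.post(f"{CKAN_URL}/api/3/action/package_create",
                         json=payload, headers={"Authorization": API_KEY})
    resp.raise_for_status()
    return resp.json()["result"]
```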

Lee-Building Research Data Repositories for the Humanities and Area Studies with CKAN-107.pdf

COAR Roadmap: Future Directions for Repository Interoperability

Kathleen Shearer1, Friedrich Summann2, Katharina Mueller1, Maxie Putlitz1

1COAR (Confederation of Open Access Repositories), Canada; 2University of Bielefeld

Scholarly communication is undergoing fundamental changes, in particular with new requirements for open access to research outputs, new forms of peer-review, and alternative methods for measuring impact. In parallel, technical developments, especially in communication and interface technologies, facilitate bi-directional data exchange across related applications and systems.

The success of repository services in the future will depend on the seamless alignment of the diverse stakeholders at the local, national and international level. The roadmap identifies important trends and their associated action points for the repository community and will assist COAR in identifying priority areas for our interoperability efforts in the future.

This document is the culmination of over a year’s work to identify priority issues for repository interoperability. The preparation of the roadmap was spearheaded by Friedrich Summann from Bielefeld University in Germany, with support from a COAR Editorial Group and input from an international Expert Advisory Panel.

Shearer-COAR Roadmap-152.doc

Emerging Practices for Researcher Name Disambiguation in Institutional Repositories

Melissa Christine Lohrey

Virginia Tech University Libraries

It has been observed that name disambiguation presents a significant, ongoing challenge for institutional repositories (Salo, 2009). The increase in digital scholarly resources has not always coincided with an increase in institutions’ implementation of authority control mechanisms for researchers' names (Ibid). As institutional repository content has increased, so has the occurrence of non-unique personal names, the presence of which hinders researchers’ ability to accurately and efficiently discover content. Although traditional library cataloging leverages multiple systems to disambiguate personal names, such as LC/NACO and ISNI, name authority control remains a challenging issue for institutional repositories.

As of January 2015, several pilot projects are exploring concrete, actionable solutions for implementing name authority control in institutional repositories. This poster will present a comparative analysis of these projects, illustrate the advantages and challenges institutional repositories face in adopting name disambiguation workflows and systems, and present ideas for the future development of name authority control functionality in institutional repositories.

Reference: Salo, D. (2009). Name Authority Control in Institutional Repositories. Cataloging & Classification Quarterly, 47(3-4), 249-261. doi: 10.1080/01639370902737232

Lohrey-Emerging Practices for Researcher Name Disambiguation-168.pdf

Introducing the FSD’s repository management & discovery tools and software development approach

Tuomas J. Alaterä

Finnish Social Science Data Archive, Finland

The Finnish Social Science Data Archive (FSD) has two new repository management tools. The data service portal Aila facilitates access to data and serves as the tool for data dissemination; one of its key features is the ability to control access to datasets according to the conditions set by data producers. Metka manages the data archive's metadata production process and provides the FSD's other systems with the metadata they need. In addition to descriptive DDI2 metadata, it facilitates the creation of structural and long-term preservation metadata. Aila and Metka define the software platform used when building new tools and services at the archive: all metadata are repurposed from a single authoritative source, Shibboleth is consistently used for user authentication, and an OAI-PMH interface provides metadata for harvesting.

This poster showcases the functionalities of both Aila and Metka, shows how they connect to the FSD's services, and shares the experiences gained so far. Finally, it introduces a remote entitlement management concept aimed at managing the workflow needed for granting access to datasets that are only available for download after an explicit permit from the depositor, PI, research group or IRB.

Alaterä-Introducing the FSD’s repository management & discovery tools and software development approach-105.pdf

There is Life After Grant Funding: How Islandora Struck Out On Its Own

Melissa Anez1, Mark Leggott1,2

1Islandora Foundation; 2University of Prince Edward Island

The early years of Islandora were supported by a multi-year Atlantic Innovation Fund grant, which provided funding for developers, project management, interns, travel, and all of the other bits and pieces that get a software development project off the ground. During that time the Islandora community grew and flourished, but long-term sustainability needed clarity. In 2013, that grant was slated to come to an end and we needed to find a new way to sustain the project. The Islandora Foundation was born from that need.

In the two years since the formation of the Islandora Foundation was announced at Open Repositories 2013, the project has welcomed more than a dozen supporting institutions, hosted Islandora Camps all over the world, and put out two fully community-driven software releases with dozens of new modules built and contributed by the Islandora community.

The Islandora project has made the journey from a grant-funded project incubated in a university library to a non-profit that exists in symbiosis with the community it serves. This journey, and its place in the larger community of digital repositories and the institutions that use and support them, is the subject of this poster, which details the nine-year history of the IF.

Anez-There is Life After Grant Funding-57.pdf

User-Testing of DRUM: What Academic Researchers Want from an Open Access Data Repository

Lisa R. Johnston, Erik A Moore, Eric Larson

University of Minnesota, United States of America

Funding agencies and institutions are increasingly asking researchers to better manage and share their digital research data. Yet meeting those needs should not be the only consideration in the design and implementation of open repositories for data. What do researchers expect to get out of this process? How can we design our data repositories to best fit research needs and expectations, as well as those of the organization? At the University of Minnesota, we recently implemented a new open repository service, the Data Repository for U of M (DRUM). This institutionally focused repository is designed for researchers to self-deposit their research data. The data then undergo a workflow of curatorial review, metadata enhancement, and digital preservation by a team of data curators in the library. The result is well-documented research data that are broadly disseminated through an openly accessible discovery interface (DSpace 4.2) and are uniquely identifiable for future reuse and citation using DataCite DOIs. Before marketing our service to campus, we performed three usability tests with our target population: academic research faculty with data they must share publicly. The results of our user testing revealed a handful of configuration and interface design changes that would streamline and enhance our service.

Johnston-User-Testing of DRUM-59.doc

Zenodo - One year of research software via GitHub integration!

Lars Holm Nielsen, Tibor Simko

CERN, Switzerland

Zenodo, a CERN-operated research data repository for the long tail of science, launched its GitHub integration a little over a year ago, enabling researchers to easily preserve their research software and make it citable. Since then, 2000+ research software packages have been shared on Zenodo. This poster will give an overview of the uploaded software packages in terms of programming languages, subjects, number of contributors, countries, etc. We will further explore curation of research software and integration into existing subject-specific repositories.

Nielsen-Zenodo - One year of research software via GitHub integration!-194.pdf

Digital Preservation the Hard Way: recovering from an accidental deletion, with just a database snapshot and a backup tape

Hardy Joseph Pottinger

University of Missouri, United States of America

An awesome tool set for digital preservation is available to all institutions who use DSpace. This is not a story of how we used this tool set. This is a story of how we recovered from an accidental deletion of a significant number of items, collections, and communities--an entire campus's ETDs: 315 missing items, 878 missing bitstreams, 1.4GB of data, 7 missing communities, 11 missing collections--using a database snapshot and a tape backup. The SQL we developed to facilitate this restoration may be helpful, but it is our hope that in comparison, the effort required to implement a proper backup and preservation safeguard, such as DuraCloud and/or the Replication Task Suite, will rightly seem more appealing. In other words: here's how to do it the wrong way, but you'd really be better off doing things the right way. This poster should be sufficient to serve as a guide for actually recovering from an accidental deletion of materials in DSpace, if one only has a database snapshot and a tape backup of a DSpace assetstore. It will also serve as a reminder of the digital preservation tool set available for DSpace, as well as why these tools exist.
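
The poster mentions SQL developed for the restoration but does not reproduce it; a heavily hedged sketch of the general idea, written against the classic pre-6.x DSpace relational schema, might look like the following. Table and column names vary by DSpace version, and the connection string, handle prefix and assetstore path are placeholders.

```python
# Heavily hedged sketch, not the poster's SQL: against a restored snapshot of a
# pre-6.x DSpace database, list the assetstore paths of bitstreams belonging to
# the lost items so the files can be pulled from the backup tape.
import psycopg2

ASSETSTORE = "/dspace/assetstore"   # assumed assetstore root on the restored backup

SQL = """
SELECT h.handle, b.internal_id
FROM handle h
JOIN item2bundle i2b      ON i2b.item_id = h.resource_id
JOIN bundle2bitstream b2b ON b2b.bundle_id = i2b.bundle_id
JOIN bitstream b          ON b.bitstream_id = b2b.bitstream_id
WHERE h.resource_type_id = 2          -- 2 = item in the classic schema
  AND h.handle LIKE '10355/%'         -- placeholder handle prefix for the lost ETDs
"""

def assetstore_path(internal_id):
    # Traditional DSpace layout: the first three 2-character chunks of the
    # internal_id become nested subdirectories under the assetstore root.
    return "/".join([ASSETSTORE, internal_id[0:2], internal_id[2:4],
                     internal_id[4:6], internal_id])

with psycopg2.connect("dbname=dspace_snapshot user=dspace") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for handle, internal_id in cur.fetchall():
            print(handle, assetstore_path(internal_id))
```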

Pottinger-Digital Preservation the Hard Way-69.pdf

Integrating institutional repository, researcher directory and library catalog using ORCID and Next-L Enju

Kosuke Tanabe

National Institute for Materials Science, Japan

Many libraries provide web services such as library catalogs and institutional repositories. Each web service contains author profiles, which are not necessarily accessible from another service; sometimes only the author name is shared, which may be ambiguous. Therefore, I have started a new project aiming to share author profiles across an institutional repository, a researcher directory and a library catalog at NIMS Library using ORCID.

This project involves three components: the NIMS researcher directory "SAMURAI", the institutional repository software "PubMan", and "Next-L Enju", an open-source library system developed in Japan. I developed an add-on module for Next-L Enju that synchronizes profile information among these three components through ORCID, so that librarians can create correct links from our library catalog to the institutional repository. In this poster, I introduce a case study of its workflow and implementation.
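
Purely as an illustration of the kind of ORCID lookup such a synchronization relies on (not the add-on's actual code, which targets the Rails-based Next-L Enju), fetching a researcher's works from the public ORCID API can be sketched as follows; the iD used is a placeholder example record.

```python
# Hedged sketch: pull work summaries for one ORCID iD from the public API so
# repository and catalog records can be matched on the same identifier.
import requests

def fetch_works(orcid_id):
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    works = []
    for group in resp.json().get("group", []):
        summary = group["work-summary"][0]           # first summary per work group
        works.append({
            "title": summary["title"]["title"]["value"],
            "put_code": summary["put-code"],
        })
    return works

for work in fetch_works("0000-0002-1825-0097"):      # placeholder example iD
    print(work["put_code"], work["title"])
```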

Tanabe-Integrating institutional repository, researcher directory and library catalog using ORCID and Next-L.pdf

Leveraging Repository Communities to Highlight Scholarly Content

Sarah Jean Sweeney

Northeastern University, United States of America

This poster describes the community and collection structure in Cerberus, the Fedora/Hydra repository system developed by Northeastern University Libraries. Cerberus was designed to store the important digital assets created as part of the mission of the university, including scholarly, administrative, and archival objects, but we needed a way to easily promote the scholarly content (research publications, presentations, datasets, and theses and dissertations). We were able to highlight the scholarly content by introducing the notion of communities, which we used to create relationships between collections, users, and files. The community structure has not only neatly organized repository content according to the existing Northeastern structure, but has also made it easier for the system to leverage the relationships between objects to enhance the discoverability of scholarly content in the repository.

Sweeney-Leveraging Repository Communities to Highlight Scholarly Content-146.pdf

Metadata Form Creation System

Michael Robert Bond

West Virginia University Libraries, United States of America

WVU Libraries needed an archival and preservation system for digital objects in its collections. Because of the diversity of the public-facing distribution and repository systems available today, and in anticipation of the development of "The Next Great Repository," this new system needed to be repository- and distribution-system agnostic. WVU Libraries wanted repository managers to be able to develop new metadata entry forms and modify existing ones with no, or minimal, support from systems. The solution should gracefully handle the evolving metadata requirements of today's researchers. Lastly, the system needed to convert digital objects from their archival format to their web presentation formats (resize images, combine TIFFs into single PDFs, apply watermarks, etc.).

WVU Libraries' solution is the Metadata Form Creation System (MFCS). MFCS provides a drag-and-drop form creation platform, as well as a robust data entry system that provides all the tools needed for digital collection management. The API provides easy-to-use methods for batch migrating data into, and exporting data out of, the archival system for use in other repository systems (Hydra, DLXS, etc.).

Bond-Metadata Form Creation System-10.txt

Now we are 13, Open Research Online becomes a teenager

Alan Stiles, Chris Biggs, Nicola Dowson

The Open University, United Kingdom

The history of the Open University (UK) institutional repository (Open Research Online) is one of changing requirements as defined by its research community, institutional administrators and external HE policy. How the repository has responded to these changes has ensured its success. However, how we manage the (potentially) competing requirements of compliance monitoring and Open Access dissemination will determine the future of the repository.

Stiles-Now we are 13, Open Research Online becomes a teenager-26.pdf

Reducing metadata errors in an IR with distributed submission privileges

Colleen Lyon, Gilbert Borrego, Melanie Cofield

University of Texas at Austin, United States of America

The distributed submission policies of many repositories make standardizing metadata input very difficult. At The University of Texas at Austin, more than 50 people have permission to submit content to the UT Digital Repository (UTDR). Of those 50, two full-time staff members have management responsibilities for the repository. This limited number of managing staff means that frequent metadata clean-up isn't possible. We are taking a pragmatic approach to addressing the issue of limited clean-up capacity by transforming our training process. Training focuses on clearly communicating repository-wide metadata standards, collaboratively creating collection-specific metadata guidelines as needed, and providing detailed input guidelines for each Dublin Core (DC) metadata field. We are working one-on-one with student workers to familiarize them with the new guidelines and are communicating with repository submitters via listservs and in-person meetings. The new guidelines were rolled out recently, and we expect to see a decrease in the number of records requiring editing. We will present examples from our new guidelines, suggest successful methods for communicating with stakeholders, and provide information regarding the incidence of errors since implementing the new training.

Lyon-Reducing metadata errors in an IR with distributed submission privileges-51.pdf

Repository Cross-Linking at the National Center for Atmospheric Research

Jennifer Phillips, Matthew Mayernik

National Center for Atmospheric Research (NCAR), United States of America

The National Center for Atmospheric Research (NCAR) builds and maintains a number of repositories for data and scholarship. Using these as a development test bed, our project demonstrates how multiple repositories of diverse resources can exchange and connect related information via complementary workflows and metadata sharing. Our poster maps out how we are building cross-links between our data and scholarship repositories, on the one hand establishing relationships between resources upon submission by researchers, and on the other establishing technical connections between repositories on which to build out future interoperability.

Phillips-Repository Cross-Linking at the National Center for Atmospheric Research-220.pdf

Reworking the Workflow: Easy On Acceptance Deposits

Graham Triggs

Symplectic, United Kingdom

Obtaining metadata and content for your repository can be challenging. Wait until after publication, and you can likely harvest the metadata - but then you may not be able to get the content. Authors have the manuscript to hand when they get notification of acceptance for publication - but then the metadata has to be manually entered, and they may not have all of it, requiring that it is updated later.

This poster shows a new capture process and workflow that encourages authors to deposit their manuscript when it is accepted for publication, and automatically combines it with harvested metadata after publication to complete the repository record.

Triggs-Reworking the Workflow-86.pdf

SEAD People, Data, Things: Linked profiles for decision making

Scott McCaulay, Inna Kouper, Robert McDonald, Beth Plale

Indiana University, United States of America

As open linked data gains traction, vastly more information becomes available and discoverable online. The SEAD project (Sustainable Environments Actionable Data) wants to take advantage of the rich linked data landscape. SEAD needs information about its researchers in the research areas around sustainable science ("people"), about the data sets that they use and produce ("data"), and about the repositories to which they may deposit their data once the project is over ("things"). But the era when a profile service could be built from scratch, with any expectation of completeness, much less of staying current over time, is over. Profile data exists everywhere: LinkedIn, ORCID, Friend of a Friend (FOAF), and DBpedia, to name a few sources. How can a service that needs this information harvest what it needs from the Internet, and use it in a way that can be trusted by all?

McCaulay-SEAD People, Data, Things-217.doc

System for cross-organizational big data analysis of Japanese institutional repositories

Takurou Kawamura1, Kenichi Igarashi2, Hiroshi Kato3, Akira Maeda3, Toshihiro Aoyama4, Kazutsuna Yamaji3, Sho Sato5

1Hiroshima University, Japan; 2Keio University, Japan; 3National Institute of Informatics, Japan; 4Suzuka National College of Technology, Japan; 5Doshisha University, Japan

The Institutional Repositories DataBase (IRDB) is a system that collects metadata from almost all Japanese institutional repositories (IRs). One of its applications, the IRDB content analysis system, provides statistical information by content type and format and allows users to compare data across organizations. However, it does not have the data or functionality to compare access statistics. This poster introduces a new feature of the IRDB content analysis system, built with JAIRO Cloud (an IR cloud service used by 250 institutions in Japan), which provides cross-organizational access statistics for IRs.

The system can be divided into two major components. One is the log repository, which collects filtered access logs from Japanese IRs as its data source. Raw logs are filtered using COUNTER-like processing, such as robot exclusion; the filtering has to be done on each IR server since it includes the removal of private data. The other component is the user interface. The system enables users, including repository managers, to analyze both content and access logs across Japanese IRs. We are planning to deploy the system to institutions that do not use JAIRO Cloud in order to make it the standardized log analysis system in Japan.
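
As an illustration only (the poster does not publish its filtering rules), the per-server, COUNTER-like filtering step could be sketched as follows; the log fields, robot patterns, double-click window and pseudonymization scheme are all assumptions.

```python
# Illustrative sketch of privacy-preserving, COUNTER-like log filtering run on
# each IR server before the records are shared centrally.
import hashlib
import re
from datetime import timedelta

ROBOT_PATTERNS = re.compile(r"bot|crawl|spider|slurp", re.IGNORECASE)  # small subset of a robot list
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

def filter_log(entries):
    """entries: iterable of dicts with 'ip', 'user_agent', 'url' and 'time' (datetime)."""
    last_seen = {}
    for e in entries:
        if ROBOT_PATTERNS.search(e["user_agent"]):
            continue                                    # drop robot traffic
        key = (e["ip"], e["user_agent"], e["url"])
        if key in last_seen and e["time"] - last_seen[key] < DOUBLE_CLICK_WINDOW:
            continue                                    # collapse double clicks on the same item
        last_seen[key] = e["time"]
        # Pseudonymize the client before the record leaves the IR server.
        client = hashlib.sha256((e["ip"] + e["user_agent"]).encode()).hexdigest()[:16]
        yield {"client": client, "url": e["url"], "time": e["time"].isoformat()}
```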

Kawamura-System for cross-organizational big data analysis-56.pdf

Toward an Improved Understanding of Research Data Management Needs: Designing and Using a Rubric to Analyze Data Management Plans

Susan W. Parham1, Patricia Hswe2, Amanda L. Whitmire3, Jake Carlson4, Lizzy Rolando1, Brian D. Westra5

1Georgia Institute of Technology, United States of America; 2Penn State, United States of America; 3Oregon State University, United States of America; 4University of Michigan, United States of America; 5University of Oregon, United States of America

The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. As research institutions endeavor to meet these demands, repositories are emerging as a potential solution for sharing and publishing research data. To develop new curation services, repository managers and developers need to understand how researchers plan to manage, share, and archive their data.

As a document produced by researchers themselves, data management plans (DMPs) provide a window into researchers’ data management knowledge, practices, and needs; they can be used to identify gaps in institutional capacity for sharing and preserving data. With that in mind, the IMLS-funded “Data management plans as A Research Tool (DART) Project” has developed an analytic rubric to standardize the review of NSF data management plans. The information gleaned from these evaluations can be leveraged for improving research data management services and infrastructure, from data management training to data curation repositories.

This poster will introduce the analytic rubric developed through a collaboration among five U.S. research institutions. The focus will be on examining the intentions of researchers toward data sharing and archiving, as expressed through a preliminary review of DMPs across these institutions.

Parham-Toward an Improved Understanding of Research Data Management Needs-218.pdf

The Open Shape Learning Object Repository: A Hybrid 3D Object and Open Educational Resource Repository

Steven Van Tuyl, Margaret Mellinger

Oregon State University, United States of America

Oregon State University Libraries & Press have initiated the Open Shape Learning Object Repository Project, which aims to provide 3D objects to the open educational resource community by explicitly tying together 3D objects (renderable and printable models) and curricular elements. The aim of this repository model is to facilitate the use of 3D models and fabrication in the classroom at multiple levels of the curriculum. This project addresses the lack of cross-over between existing learning object repositories and 3D object repositories, and provides a guiding model for how repository systems and projects can facilitate bringing 3D modeling and fabrication into the open education community. Information packages served from our repository contain a renderable or printable 3D model (or set of models) along with a set of curricular elements that help contextualize the model(s) in the learning environment. We discuss the inception of the repository project, the results of a number of pilot projects, and our plans for future development.

Van Tuyl-The Open Shape Learning Object Repository-6.pdf

Packaging DSpace Ingest Folders with FileAnalyzer

Terrence W Brady

Georgetown University Library, United States of America

The Georgetown University Library has developed an application named the FileAnalyzer to facilitate the ingest of large collections of content into DSpace. The application can inventory a collection of files to be ingested and prepare ingest folders from a metadata spreadsheet. Once Georgetown University adopted these workflows, the backlog of collections to be ingested was eliminated.
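
FileAnalyzer itself is a separate Java-based desktop tool; purely as a hedged illustration of what "preparing ingest folders from a metadata spreadsheet" produces, the following sketch writes DSpace Simple Archive Format folders (one directory per item containing dublin_core.xml and a contents file) from a CSV whose column names are assumed.

```python
# Hedged sketch: generate DSpace Simple Archive Format item folders from a
# metadata spreadsheet exported as CSV. Column names are assumptions.
import csv
import pathlib
from xml.sax.saxutils import escape

def build_saf(csv_path, out_dir="saf_batch"):
    out = pathlib.Path(out_dir)
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for i, row in enumerate(csv.DictReader(fh)):
            item = out / f"item_{i:04d}"
            item.mkdir(parents=True, exist_ok=True)
            dc = [
                '<dublin_core>',
                f'  <dcvalue element="title">{escape(row["title"])}</dcvalue>',
                f'  <dcvalue element="contributor" qualifier="author">{escape(row["author"])}</dcvalue>',
                f'  <dcvalue element="date" qualifier="issued">{escape(row["date"])}</dcvalue>',
                '</dublin_core>',
            ]
            (item / "dublin_core.xml").write_text("\n".join(dc), encoding="utf-8")
            (item / "contents").write_text(row["filename"] + "\n", encoding="utf-8")
            # The bitstream named in 'contents' must also be copied into the folder.

build_saf("metadata.csv")  # hypothetical spreadsheet export
```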

This workshop will demonstrate the DSpace ingest workflows that are supported by the FileAnalyzer.

Participants will learn how to install the FileAnalyzer and run several of the tasks that can be useful for DSpace collection management. Using demo.dspace.org, participants will prepare and ingest content into the demonstration site.

Lastly, the session will discuss the framework for modifying the FileAnalyzer to implement institution-specific customizations.

https://github.com/Georgetown-University-Libraries/File-Analyzer/wiki/DSpace-Institutional-Repository-Ingest

Brady-Packaging DSpace Ingest Folders with FileAnalyzer-17.pdf

Not Entirely Unlike - CRIS Integration With Fedora

Graham Triggs

Symplectic, United Kingdom

Since 2009, Symplectic has been integrating its flagship research management system, Elements, with institutional repositories. In almost all cases, the repository has been either DSpace or EPrints.

Historically, neither of these platforms provided an API suitable for such integration; but despite varying institutional requirements, the architecture of these platforms has made it possible to write extensions to enable integration, with relatively few configurable parts to match each implementation.

In 2014, we took on the challenge of implementing integrations for four clients using repositories built with Fedora. Each of these had a different user and administrative interface, ranging from custom-built applications to well-known platforms such as Fez, Primo and VITAL.

Whilst Fedora provides a consistent API that can be used to create and modify the repository contents, the flexibility provided by Fedora meant significant differences existed for each integration:

type of metadata documents (DC vs MARC vs MODS)

use of identifiers and object and datastream fields

order and number of API calls

This poster showcases our solutions to these difficulties, which delivered successful integrations to all four institutions.

Triggs-Not Entirely Unlike-85.pdf

Successfully lobbying for and implementing increased repository staffing: the Iowa State University experience

Harrison W. Inefuku

Iowa State University, United States of America

In 2014, the Iowa State University Library increased staffing for its institutional repository from one full-time librarian to include two professional positions and two paraprofessional positions. This poster describes how the library was able to successfully lobby for funding from the Office of the Senior Vice President and Provost for two of the new positions, as well as the roles each staff member plays in the fully-staffed digital repository unit.

Inefuku-Successfully lobbying for and implementing increased repository staffing-90.pdf

Visualizing Electronic Theses and Dissertations

Susan Borda, Leila Sterman

Montana State University, United States of America

We present an example of this year's conference theme, "looking back and moving forward," based on our research into Montana State University's (MSU) large collection of electronic theses and dissertations (ETDs). We review the entire ETD collection, from the most recent record back to the oldest dating from 1901, to discover trends such as changes in subject matter and in volume by academic department, and we ask whether these trends correlate with world events such as wars or technological advances. We derive statistics from the ETD collection such as degree department, issue date, and subject area, and use these data to visualize trends and relationships between time and emphases within the collection.
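
The poster does not specify its tooling; as a rough illustration of the kind of tabulation and plotting described, a pandas/matplotlib sketch over an exported metadata file (file and column names assumed) might look like this.

```python
# Illustrative sketch only: tabulate ETD volume by decade and department from
# an exported metadata file and plot the trend.
import matplotlib.pyplot as plt
import pandas as pd

etds = pd.read_csv("etd_metadata.csv", parse_dates=["date_issued"])  # hypothetical export
etds["decade"] = (etds["date_issued"].dt.year // 10) * 10

counts = etds.pivot_table(index="decade", columns="department",
                          values="handle", aggfunc="count", fill_value=0)
counts.plot(kind="area", stacked=True, figsize=(10, 5),
            title="ETDs per decade by department, 1901 to present")
plt.ylabel("theses and dissertations")
plt.tight_layout()
plt.savefig("etd_trends.png")
```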

Borda-Visualizing Electronic Theses and Dissertations-205.pdf

DSpaceDirect: Lowering the Barriers to Open Source Repositories

Carissa Smith

DuraSpace, United States of America

For many, installing and managing open source software is not a feasible solution; however, being a part of a large community of users tackling the same issues is desirable. For Bennington College, a small undergraduate liberal arts college in Vermont, DSpaceDirect (a hosted service from DuraSpace) was the institutional repository service that best matched their requirements. DSpaceDirect is a managed service that provides a full end-to-end solution of discovery, access, and preservation services in support of open access to the scholarly resources of institutions large or small. The goal of the service is to enable institutions to quickly, easily, and cost-effectively store their content in a hosted open source institutional repository running easy-to-use DSpace software and to focus on what matters – making digital collections openly available to their users on-campus and worldwide. DSpaceDirect automatically preserves all repository content in DuraCloud where additional preservation-focused services, such as automated regular content health checking services, are run. The poster will walk through the experiences of Bennington College, an early adopter of DSpaceDirect, who researched and evaluated services available, conducted an analysis of their own needs and requirements, and has been successfully using and growing their repository for the past two years.

Smith-DSpaceDirect-130.doc

Institutional Repository cures Interlibrary Loan / Document Delivery

Yuko Matsumoto1, Misaki Niioka2, Masako Suzuki3, Shigeki Sugita4

1Hiroshima University Library; 2University of Tsukuba Library; 3Shizuoka University Library; 4Chiba University Library

The very same articles are delivered many times through interlibrary loan and document delivery services (ILL/DD); the most frequently requested items are copied and delivered over 100 times within a year. By analyzing ILL/DD usage to identify these highly requested articles and making them open access, we can meet real, demonstrated demand.

Matsumoto-Institutional Repository cures Interlibrary Loan Document Delivery-46.pdf

The Road Forward: Faculty Reporting System, Institutional Repository, and Research Portal Integration at the University of Arizona

Kimberly Chapman, Maliaca Oxnam

University of Arizona Libraries, United States of America

How has the launch of “UA Vitae”, an online reporting system supporting the faculty annual review process at the University of Arizona, impacted the institutional repository? Explore the relationship between the “UA Vitae” system and the UA Campus Repository; learn about the current reality and the potential that is just around the corner! In addition, learn how the UA Campus Repository data is integrated into the brand-new Research Arizona Portal that shares and showcases faculty research from Arizona universities. Workflow processes have potholes, but we’re driving forward to collaborate and promote the University’s research output. This poster shows what is “under the hood” in terms of desired and actual workflows. In addition, we’ll discuss faculty participation in making content available across systems. We’ll describe the similarities and differences between system objectives, and discuss how we’ve approached common goals. This presentation will focus on the business relationships in place to make collaboration work, in addition to highlighting both bumps-in-the-road and destinations reached!

Chapman-The Road Forward-224.doc

The Open Science Framework: Connecting and Supporting the Research Workflow

Andrew Sallans, Sara Bowman

Center for Open Science, United States of America

The non-profit Center for Open Science (COS) seeks to connect and streamline the research workflow through use of its web application, the Open Science Framework (OSF) (https://osf.io). Free and open source, the OSF manages the entire research lifecycle: planning, execution, reporting, archiving, and discovery. The OSF is part version control system, part collaboration tool, and part project management software, and facilitates transparency in scientific research. The OSF integrates private and public workflows by assigning every researcher, every project, and every component a unique, persistent identifier and individual privacy options. The researcher maintains access control over which parts remain private and which become public, but is incentivized to share data and materials openly. The OSF streamlines workflows by connecting tools researchers already use — examples include Dropbox, FigShare, and Dataverse — allowing resources housed in different services to be displayed in one central location. This talk will highlight the ways the OSF enables open science, streamlines research workflows, and facilitates collaboration.

Sallans-The Open Science Framework-62.pdf

Visual user interfaces of a digital repository for multimedial linguistic corpora. Components and extensions of the HZSK Repository.

Daniel Jettka

Universität Hamburg, Germany

The paper demonstrates the current state of the development of the digital repository of the Hamburg Centre for Language Corpora (Hamburger Zentrum für Sprachkorpora, HZSK) and shows how the underlying modular repository architecture and the connection to the research infrastructure CLARIN open up possibilities for additional web-based user interfaces. The HZSK Repository contains numerous complex linguistic corpora and provides a web interface that allows for browsing the included multimedial content and metadata. While the repository is already connected to the pan-European CLARIN research infrastructure and in this context to central interfaces like the Virtual Language Observatory and the CLARIN Federated Content Search, in the near future it will be extended by additional internal and external user interfaces that allow for generic as well as specific perspectives on the available linguistic research data.

Jettka-Visual user interfaces of a digital repository for multimedial linguistic corpora Components and.doc

Media Preservation and Access with HydraDAM2 and Fedora 4

Karen Cariani1, Jon W. Dunn2

1WGBH Media Library and Archives, United States of America; 2Indiana University, United States of America

The WGBH Media Library and Archives, with support from the National Endowment for the Humanities (NEH), has developed an open source digital media preservation repository and digital asset management system for audio and video. This system, known as HydraDAM, is built on a Hydra and Fedora 3 technology stack and is focused primarily on the needs of public media stations but is intended to be relevant and applicable to all cultural institutions with moving image and audio materials. At the start of 2015, WGBH and Indiana University Libraries were awarded a new two-year NEH grant to extend HydraDAM to use Fedora 4, taking advantage of its new capabilities for RDF, linked data, and integration of multiple underlying storage technologies. This new version will support integration with access systems such as the open source Avalon Media System in order to provide streaming access to preserved materials.

Cariani-Media Preservation and Access with HydraDAM2 and Fedora 4-128.pdf

Dash: Exploiting Hydra/Blacklight for Repository-Agnostic Data Curation

Stephen Abrams, Shirin Faenza, Marisa Strong, Bhavitavya Vedula

California Digital Library

University libraries and IT groups are increasingly being asked to support research data curation as a consequence of funder mandates, pre-publication requirements, institutional policies, and evolving norms of scholarly practice. Any repository-like service targeting academic researchers must provide both high service function and intuitive user experience in order to compete successfully against free commercial alternatives such as figshare or Dropbox. To meet these goals, the UC Curation Center (UC3) is developing a second generation of its Dash curation service. Dash is not a repository itself, but rather an overlay submission and discovery layer sitting on top of a repository and supporting refactored drag-n-drop upload, metadata entry, DOI assignment, faceted search/browse, and a plug-in mechanism for extension by reconfiguration rather than recoding. Dash is now based on a forked version of the Hydra/Blacklight platform that has been genericized to permit integration with any repository supporting SWORD deposit and OAI-PMH harvesting protocols. This presentation will introduce the new Dash architecture and describe UC3’s process of enhancing Hydra for applicability beyond Fedora. UC3’s deployment of enhanced service function through the composition of loosely-coupled, protocol-linked components exemplifies a useful approach for the streamlined creation of new innovative curation and repository services.
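
Dash is built on Hydra/Blacklight (a Ruby on Rails stack); purely to illustrate the harvesting protocol the abstract cites, a minimal OAI-PMH client sketch follows (the SWORD deposit half is omitted, and the endpoint URL is a placeholder).

```python
# Minimal OAI-PMH harvesting sketch: page through ListRecords responses and
# yield Dublin Core titles, following resumption tokens until exhausted.
import xml.etree.ElementTree as ET

import requests

OAI_ENDPOINT = "https://repository.example.org/oai"   # hypothetical OAI-PMH endpoint
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        resp = requests.get(OAI_ENDPOINT, params=params, timeout=60)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        for record in root.iter(OAI_NS + "record"):
            title = record.find(".//" + DC_NS + "title")
            if title is not None:
                yield title.text
        # Follow the resumptionToken until the repository says it is done.
        token = root.find(".//" + OAI_NS + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

for t in harvest_titles():
    print(t)
```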

Abrams-Dash Exploiting HydraBlacklight for Repository-Agnostic Data Curation-22.pdf

How compliance will forever change ‘academic publishing’.

Mark Hahnel

figshare, United Kingdom

Openly available academic data on the web will soon become the norm, and funders and publishers are already making preparations for how this content will be best managed. The coming open data mandates mean that we are now talking about 'when', not 'if', the majority of academic outputs will live somewhere on the web. The EPSRC in the UK is mandating dissemination of all of the digital products of research it funds this year. The European Commission and the White House's OSTP are pushing ahead with directives that are causing a chain reaction of open data directives among European governments and North American funding bodies. This session will look at the research data management landscape and the different approaches being taken to adjust to the various funder mandates. We will explore how different existing workflows will be disrupted and what potential opportunities there are for adding value to academic research. It will also take the audience through the experience of figshare in attempting to contribute in an area that has many stakeholders - funders, governments, institutions and researchers themselves.

Hahnel-How compliance will forever change ‘academic publishing’-9.docx

Sharing and reusing data legally

Jozef Mišutka, Amir Kamran, Ondřej Košarko, Michal Sedlák, Pavel Straňák

Charles University in Prague, Czech Republic

The need to share and preserve data and software is becoming more and more important: without the data and the software, research cannot be reproduced and tested by the scientific community. Making data and software easily reusable and legally unequivocal requires choosing a license for them, which is not a trivial task. We describe a legal/licensing framework which implements complete support for licensing submissions.

Mišutka-Sharing and reusing data legally-114.pdf

Globus Software as a Service data publication and discovery

Kyle Chard, Jim Pruyne, Rachana Ananthakrishnan, Steve Tuecke, Ian Foster

University of Chicago, United States of America

Globus is software-as-a-service for research data management, used at dozens of institutions and national facilities for moving, sharing, and publishing big data. Recent additions to Globus include services for data publication and discovery that enable: publication of large research datasets with appropriate policies for all types of institutions and researchers; the ability to publish data directly from locally owned storage or from cloud storage; extensible metadata that can describe the specific attributes of any field of research; flexible publication and curation workflows that can be easily tailored to meet institutional requirements; public and restricted collections that give complete control over who may access published data; and a rich discovery model that allows others to search and use published data. This presentation will give an overview of these services.

Chard-Globus Software as a Service data publication and discovery-138.pdf

Equal partners? Improving the integration between DSpace and Symplectic Elements

Kate Miller1, Craig Murdoch2, Andrea Schweer3

1The University of Waikato Library, Hamilton, New Zealand; 2Auckland University of Technology Library, New Zealand; 3The University of Waikato ITS, Hamilton, New Zealand

While self-submission by academics was regarded as the ideal way to add content to Open Repositories in the early days of such systems, the reality today is that many institutional repositories obtain their content automatically from integration with research management systems. The institutional DSpace repositories at Auckland University of Technology (AUT) and at the University of Waikato (UoW) were integrated with Symplectic Elements in 2010 (AUT) and in 2014 (UoW). Initial experiences at AUT suggested a mismatch between the interaction options offered to users of Symplectic Elements on one hand and the actions available to repository managers via the DSpace review workflow functionality on the other hand. Our presentation explores these mismatches and their negative effects on the repository as well as on the user experience. We then present the changes we made to the DSpace review workflow to improve the integration. We hope that our experiences will contribute to an improvement in the integration between repository software and research management systems.

Miller-Equal partners Improving the integration between DSpace and Symplectic Elements-198.pdf

E-quilt prototype: research of experiment in scenario of data sharing

Adriana Carla S. Oliveira1, Guilherme Ataíde Dias2, Pedro Luiz Pizzigatti Corrêa3

1Federal University of Paraíba; 2Federal University of Paraíba; 3University of São Paulo

Scientific communication is shifting from an analog to a digital paradigm, with e-Science emerging as a new model of scientific practice. This shift opens up new perspectives on the use, reuse and accessibility of scientific research information through digital information and communication technologies, and brings new possibilities for scientific publication across different digital formats and media. Part of this scenario is the rise of a new mode of scientific publishing known as the enhanced publication. This context requires development of the scientific data lifecycle, open data sharing, metadata standards, interoperability protocols, data aggregation models and information systems. Data sharing opens up new possibilities for convergence, connectivity and collective, interactive research. The goal of this poster is to report on an experimental research project, the e-Quilt prototype, under development as part of a PhD thesis in the Graduate Program in Information Science at the Federal University of Paraíba, João Pessoa, Brazil, in partnership with the University of Tennessee, Knoxville (UTK), through its College of Communication & Information Science. The research takes a qualitative and quantitative approach and uses the quadripolar method, whose research dynamics rest on the interaction between the epistemological, theoretical, technical and morphological poles. The research is exploratory and experimental in character.

OLIVEIRA-E-quilt prototype-52.pdf

Diversity or Perversity? An Assessment of Indonesian Higher Education Institutional Repositories

Toong Tjiek Liauw1,2, Paul Genoni1

1Curtin University, Australia; 2Petra Christian University, Indonesia

The presentation will deliver the results of content analysis conducted for approximately 80 Indonesian higher education institutional repositories (IRs). These institutions have been selected to represent the full range of higher education institutions in Indonesia.

The problem facing an emerging nation such as Indonesia is that the higher education sector is diverse, loosely regulated, and struggling to achieve a sustainable level of equitably distributed funding. It also supports an emerging research sector that struggles for visibility in an international environment that privileges outputs in English originating from 'developed' countries and accessible through international journals. In this context, open repositories are seen as offering part of the solution when implemented at an institutional level.

The approach taken in this research is to compile quantitative and qualitative data capturing the current state of Indonesian IRs, with a view to summarizing progress and creating the basis from which future national IR policy can be decided and implemented.

Final conclusions cannot be offered at this stage, since this research (to be concluded in March 2015) is still in progress. However, based on the data gathered, there are interesting practices that do not reflect the original idea of IRs as a (Green) Open Access strategy.

Liauw-Diversity or Perversity An Assessment of Indonesian Higher Education Institutional Repositories-21.doc

Widget Integration in Open Repositories: Real World Experiences with the PlumX Widget

Marianne Parkhill

Plum Analytics, United States of America

Integrating third-party Javascript widgets into repositories is a great way to add value, but it does have its challenges because effective integration has to be done at the artifact level. This poster will explore the real world experiences Plum Analytics has had integrating PlumX widgets into multiple repository platforms. These experiences can help inform others who are contemplating integration projects, regardless of the platform or widget product they want to use. Included in this poster session will be a view into user engagement to help answer the question: are widget integration projects worth the effort?

Parkhill-Widget Integration in Open Repositories-160.pdf

Looking for Solid Ground During Consolidation: The DigitalCommons, Kennesaw State University and Southern Polytechnic State University

Aajay Murphy

Kennesaw State University, United States of America

It was announced in November 2013, by University System of Georgia Chancellor Hank Huckaby, that Kennesaw State University and Southern Polytechnic State University would consolidate. The institutions are scheduled to be operating as a consolidated university by July 2015. This decision has impacted every facet of campus life and work, including DigitalCommons. This poster will highlight the experiences of the repository manager at Kennesaw State University during the transition, with the hope of encouraging others who may be going through similar scenarios to engage in a more personal dialogue. Challenges, accomplishments, and strategies will be addressed.

Murphy-Looking for Solid Ground During Consolidation-133.pdf

Hydra North: Moving forward with 10 years of repository history – Baby steps to a solid centralized Digital Asset Management System

Weiwei Shi

University of Alberta Libraries, Canada

The University of Alberta is currently building a centralized Digital Asset Management System (DAMS) designed to provide central storage, management, and long-term stewardship for the full spectrum of digital objects, from E-Theses to digitized materials and from multimedia objects to research data, that currently live in separate repositories. We see the development of the DAMS as a great opportunity to consolidate these silos into a single, more coherent and effective core, which will embrace semantically richer data, better discovery experiences, and greater potential to leverage these assets for user communities. In this presentation we will discuss some of the initial steps we took to start migrating existing content into the new system, beginning with the migration of metadata from XML-based standards to RDF. This presentation is intended to unveil the baby steps (and stumbles) we have taken to plan and execute the migration, summarize the challenges we have been facing, and reflect on the lessons we have learned.
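
As a purely illustrative sketch of the kind of crosswalk such a migration involves, a few lines of Python with rdflib can turn simple Dublin Core XML elements into RDF triples; the source fields and target vocabulary below are assumptions, not the University of Alberta's actual mapping.

# Illustrative XML-to-RDF crosswalk; field names and target vocabulary are assumptions.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

DC_NS = {"dc": "http://purl.org/dc/elements/1.1/"}

SAMPLE_XML = """
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example Thesis Title</dc:title>
  <dc:creator>Doe, Jane</dc:creator>
  <dc:date>2014</dc:date>
</record>
"""

def record_to_rdf(xml_text: str, item_uri: str) -> Graph:
    """Map simple Dublin Core XML elements onto DCTERMS properties."""
    root = ET.fromstring(xml_text)
    subject = URIRef(item_uri)
    g = Graph()
    g.add((subject, DCTERMS.title, Literal(root.findtext("dc:title", namespaces=DC_NS))))
    g.add((subject, DCTERMS.creator, Literal(root.findtext("dc:creator", namespaces=DC_NS))))
    g.add((subject, DCTERMS.date, Literal(root.findtext("dc:date", namespaces=DC_NS))))
    return g

print(record_to_rdf(SAMPLE_XML, "http://example.org/items/1").serialize(format="turtle"))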

Shi-Hydra North-163.docx

Sharing Scholarly Journal Articles Through University Institutional Repositories Using Publisher Supplied Data and Links

Judith C. Russell1, Alicia Wise2

1University of Florida, United States of America; 2Elsevier

The University of Florida (UF) libraries and Elsevier are collaborating to widen access to articles authored or co-authored by UF authors and published by Elsevier. Both organizations are committed to supporting researcher success and believe working together will more effectively widen public access, improve compliance with current and future funder policies, and facilitate efficient and responsible sharing of scholarly journal articles through the university's institutional repository.

Russell and Wise will share a strategic, rather than technical, perspective on how and why they are working together to improve access to scholarship from UF authors through the University IR.

Russell-Sharing Scholarly Journal Articles Through University Institutional Repositories Using Publisher.pdf

Funding models for open access digital repositories

Dermot Frost1, Rob Kitchin2, Sandra Collins3

1Trinity College Dublin, Ireland; 2National University of Maynooth, Ireland; 3Royal Irish Academy, Ireland

This presentation examines funding models for open access digital repositories. Whilst such repositories are free to access, they are not without significant cost to build and maintain. The lack of a direct funding stream through payment for use poses a considerable challenge to open access repositories and places their future and the digital collections they hold at risk. We document and critically review 14 different potential funding streams, grouped into six classes, with a particular focus on funding academic research data repositories. There is no straightforward solution to funding open access digital repositories, with a number of general and specific challenges facing each repository and funding stream. We advocate the adoption of a blended approach that seeks to ameliorate cyclical effects across funding streams by generating income from a number of sources rather than relying too heavily on a single one. Creating open access repositories is a laudable ambition; however, such repositories need to find sustainable and stable ways to fund their activities or they place the collections they hold at significant risk. Our review assesses and provides concrete advice with respect to potential funding streams in order to help repository owners address the financing conundrum they face.

Frost-Funding models for open access digital repositories-121.pdf

Harmonizing Research Management and Repository Functionality to Support Open Science

Brigitte Joerg, Nigel Robinson, Thorsten Hoellrigl, Mark Matthews

Thomson Reuters, United Kingdom

The research landscape is rapidly changing from a publication-centered output infrastructure into an open, data-driven, result-oriented infrastructure that depends on advanced technologies and services enabling further reuse and thus the creation of additional value from scientific results. This naturally affects traditional research lifecycle workflows. It requires harmonizing a variety of ongoing integration approaches and must take into account a multiplicity of stakeholder needs. Here, we focus on the intersection of functionality between a CRIS and repositories in the changing research ecosystem. The two paradigms increasingly align their functionalities and thus collaborate in support of the open agenda. A number of conference themes will be reflected by demonstrating Thomson Reuters’ roles in the research ecosystem and its contributions to a sustainable open agenda. The audience will gain an understanding of the challenges and tensions to be managed with respect to flexibility, diversity, and complexity versus standardization and harmonization.

Joerg-Harmonizing Research Management and Repository Functionality-70.docx

Data sharing requirements for decision-making in Brazilian biodiversity conservation

Pedro Luiz Pizzigatti Corrêa1, Adriana Carla S. Oliveira2, Guilherme Ataíde Dias3

1University of São Paulo, Federal University of Paraíba; 2Federal University of Paraíba; 3Federal University of Paraíba

The sharing of primary biodiversity data has gained importance in recent years due to its use for environmental management and for decision-making regarding the conservation and sustainable use of natural resources. In this sense, diverse research initiatives have been launched with the objective of developing protocols and tools to support primary biodiversity data sharing. The use of a cyberinfrastructure for Brazilian primary biodiversity data sharing is essential to support actions regarding environmental management, but a number of challenges have to be faced, such as integrating data scattered across different administrative domains, retrieving data stored in heterogeneous systems, and sharing sensitive data. In this context, a survey was conducted with Brazilian biodiversity research institutes to guide the development of a cyberinfrastructure to overcome these challenges. The evaluation of this cyberinfrastructure at the Instituto Chico Mendes de Conservação da Biodiversidade (Ministry of the Environment of Brazil) has shown its potential to enhance the reuse of Brazilian primary biodiversity data by supporting collaborative and interdisciplinary research.

CORRÊA-Data sharing requirements for decision-making in Brazilian biodiversity conservation-134.doc

System Integration between Sakai and Fedora 4

Osman Din

Yale University, United States of America

In this poster session, we plan to demonstrate the integration of a Learning Management System such as Sakai with Fedora 4 and share our suggestions on what level of services a digital repository should provide to an LMS. We would also like to share the lessons learned in the integration process. Repositories (and LMS tools alike) that have integration goals with other systems should provide APIs to allow interoperability. Interoperability is a key requirement in any enterprise setup, and repositories must provide APIs so that they do not become silos and so that they provide the most value for the money. The intended audience for this poster is repository managers, teaching staff, and developers in academic technology groups.
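
For context on the repository side of such an integration, the following is a minimal, hypothetical sketch of a deposit against Fedora 4's LDP-based REST API from Python; the base URL, container name, and metadata are assumptions for a local test instance, not the configuration used in this project.

# Hypothetical deposit into Fedora 4 over its LDP-based REST API (assumed local instance).
import requests

FEDORA_BASE = "http://localhost:8080/rest"  # assumed Fedora 4 endpoint

def deposit_item(container: str, slug: str, title: str) -> str:
    """POST a child resource with minimal RDF metadata; return its URI."""
    turtle = f'<> <http://purl.org/dc/terms/title> "{title}" .'
    resp = requests.post(
        f"{FEDORA_BASE}/{container}",
        data=turtle.encode("utf-8"),
        headers={"Content-Type": "text/turtle", "Slug": slug},
    )
    resp.raise_for_status()
    return resp.headers["Location"]  # URI of the newly created resource

print("Created:", deposit_item("course-materials", "assignment-1", "Week 1 Assignment"))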

Din-System Integration between Sakai and Fedora 4-48.docx

The Past, Present, & Future of Capturing the Scholarly Record of a Small Comprehensive U.S. Institution: Toward a Sustained Repository Content Recruitment and Workflow Strategy

Jonathan Bull1, Teresa Schultz2

1Valparaiso University, United States of America; 2Indiana University-Purdue University, Indianapolis, United States of America

The number of institutional repositories (IRs) has grown rapidly since Clifford Lynch declared in 2003 that IRs were “essential infrastructure” for academic institutions. While the number of IRs has grown, their success has not necessarily followed suit, for a variety of reasons. For smaller institutions, limited staffing, expertise, and content recruitment can all be significant factors threatening an IR’s success, especially as these factors change over time.

Launched in 2011, ValpoScholar (http://scholar.valpo.edu/) is currently on its second iteration of a content recruitment strategy and workflow design, with a third iteration in development. With a primary focus on capturing metadata before moving to full-text access and preservation, this evolving approach to content recruitment and workflow design has led to a 25% increase in IR record creation while also increasing full-text availability and giving the institution its first public scholarly record. This poster will share what Valparaiso University has done in the past, what its current processes are, and what it plans to do in the future to ensure a comprehensive scholarly record of the institution, and it will consider how similar institutions with limited staffing, expertise, and content may sustain the growth of their IRs.

Bull-The Past, Present, & Future of Capturing the Scholarly Record of a Small Comprehensive US Institution-182.pdf

Starting from scratch – building the perfect digital repository

Violeta Ilik, Piotr Hebal, Kristi Holmes

Northwestern University, United States of America

By establishing a digital repository at the Feinberg School of Medicine (FSM), Northwestern University, Chicago campus, we anticipate gaining the ability to create, share, and preserve attractive, functional, and citable digital collections and exhibits. Galter Health Sciences Library did not have a repository as of November 2014. In just a few months we formed a small team charged with selecting the most suitable open source platform for our digital repository. We followed the National Library of Medicine master evaluation criteria, looking at factors that included functionality, scalability, extensibility, interoperability, ease of deployment, system security, physical environment, platform support, demonstrated successful deployments, system support, strength of the development community, stability of the development organization, and strength of the technology roadmap for the future. These factors are important in our case, given the desire to connect the digital repository with another platform that is an essential piece of the larger FSM picture: VIVO. VIVO is a linked data platform that serves as a researcher hub, providing the names of researchers at academic institutions along with their research output, affiliations, research overviews, service, backgrounds, identities, teaching, and much more.

Ilik-Starting from scratch – building the perfect digital repository-225.pdf