Introduction

In this paper, we describe how the outputs of one Research Data Alliance (RDA) interest group (IG) and five working groups (WGs) have shaped the core concepts of DiSSCo (Distributed System of Scientific Collections) – the European research infrastructure for Natural Science Collections. Designing, building and operating a research infrastructure like DiSSCo, which depends heavily on information and communication technologies (ICT) and data management best practices, brings together expertise from multiple domains (museum curators, taxonomists and other scientists, biodiversity informaticians and data managers, computing and software engineers, administrative management). The complex design decisions involve interrelated technical components spanning five data lifecycle phases, from data acquisition through data curation, data publishing and data processing to data use (; ). The collective expertise from RDA and the published recommendations provide the DiSSCo community with useful guidance for creating and supporting a sustainable, long-lived research infrastructure that can enhance users' overall capacity to find, retrieve and use relevant information. How this community has used RDA recommendations to shape the DiSSCo approach is generic enough to be of interest to readers from other fields.

The paper is organized as follows. We begin with background on Natural Science Collections (NSCs) in the context of recent advances in digitization and data sharing, and how future challenges can be addressed by a research infrastructure such as DiSSCo. The background also introduces the Digital Specimen concept – a particular type of FAIR Digital Object – and the DiSSCo data lifecycle. Foregrounding the DiSSCo data lifecycle, we then describe how selected outputs of RDA are applied in the design of the DiSSCo data infrastructure. We conclude the paper with an overview of the future core DiSSCo services that these design decisions will enable.

The RDA outputs cover the following aspects:

  1. RDA output dealing with the adoption of Digital Object Architecture, based on the work of the Data Fabric IG and the Data Foundation and Terminology WG ();
  2. RDA output dealing with the usage of persistent identifiers and kernel information in the context of machine actionable services and programmatic decisions for digital objects from the PID Kernel WG ();
  3. RDA output dealing with the aggregation of digital objects in the context of meaningful entities and serving the data from the Research Data Collections WG ();
  4. RDA output dealing with curation and maintenance of digital objects from the RDA/TDWG Metadata attribution WG (); and
  5. RDA output covering guidelines and specifications to assess the DiSSCo FAIR implementation plan ().

Background

Natural Science Collections (NSCs) hosted in natural history museums, botanic gardens, universities and other research centres around the world contain data that are critical for many scientific endeavours (). Over the years various large scale digitization projects (), mobilization of biodiversity data () and use of museum specimens to study genetic diversity () provided novel ways of doing science (). Within the context of COVID-19 pathogen discovery research, Cook et al. () highlight the crucial role of the information system related to collections that hold specimens:

“In the past few decades, museums have become hubs of biodiversity informatics, serving as the critical nexus between biological samples and sample-derived data (e.g., genomics, geographic information, isotope chemistry, CT scans). The current pandemic reminds us that natural history specimens are important but underappreciated reservoirs for studying the hosts and distributions of animal and human pathogens (see Harmon et al. 2019) and that the data connected to these specimens increase our understanding not only of the host organism but of the pathogens as well. Enhanced support of both physical and cyberinfrastructure for biodiversity collections would yield an information system to enable prediction and mitigation of future outbreaks and pandemics.”

To support data infrastructures for collections-based research in the future (and this includes their initial design and implementation), we need to understand the challenges and urgency ushered in by new types of data collection, curation and sharing (e.g., ; ) along with maintaining and providing access to historical data (e.g., ; ). The physical materials (samples and specimens stored in natural history museums, seed banks, cryo banks, etc.) are crucial elements for scientific inquiry. However, accessing these physically comes with reuse challenges, as materials can deplete and the distribution of traits and phenotypes in species populations in living collections varies over time (). Therefore, access to digitized data acts as an essential reference point for the relationship between the digital and physical worlds. This anchoring of the different kinds of data derived from physical specimens has been explored and described as the notion of the Extended Specimen (). It represents the integrative and interdisciplinary next generation of NSCs (). We use the term ‘Digital Specimen’ (explained below) in an analogous manner.

Existing systems for exploiting material stored in NSCs are inefficient and not cost-effective (). Despite significant work by global data infrastructures such as the Global Biodiversity Information Facility (GBIF), Biodiversity Heritage Library, and Plazi TreatmentBank, there remain systematic gaps in linking specimen data to other data classes such as DNA sequences, literature, functional traits, habitat and conservation data and ecological models (; ). We are noticing increased use of digitized data from NSCs (). However, at the same time, for many projects, these data are organized and managed in a manner that makes data linking, sharing, and future reuse problematic ().

Over the past several years, the community around NSCs recognized the gaps in our understanding of bio- and geo-diversity due to loosely coordinated data infrastructures (). This has led to increased efforts towards creating shared global roadmaps for biodiversity informatics (), developing standards for improved data quality (), adopting FAIR principles (; ) and creating building blocks for a data landscape in which component systems can exchange and understand the information in a standard form using open protocols, and metadata (). The Distributed System of Scientific Collections (DiSSCo), along with several global partners, is working towards such a data landscape by building a pan-European Research Infrastructure (RI) that aims to mobilize and unify bio- and geo-diversity information connected to the specimens held in natural science collections. As of February 2020, DiSSCo entered the preparation phase where key design decisions and best practices are influenced by five selected outputs from the Research Data Alliance (RDA) (summarised in Table 1) along with the FAIR data principles (Findability, Accessibility, Interoperability, and Reusability) (; ) and the concept of FAIR Digital Objects (FAIR-DO) (; ; ).

Table 1

RDA outputs applied to the management of the DiSSCo data lifecycle.

RDA output | RDA IG/WG | DiSSCo element | Purpose | Workflow/Data phase

1. Adoption of Digital Object Architecture | Data Foundation and Terminology WG; Data Fabric IG | DiSSCo Digital Specimen Architecture | Define the FAIR Digital Object Architecture of DiSSCo, including the Digital Specimen object model | Creation and management of digital objects / all phases of the data lifecycle
2. Persistent identifiers and kernel information | PID Kernel WG | Meta-information about a digital object and DiSSCo (data) type registry | Allow smart programmatic decisions and inspection of the object’s PID record | Data acquisition, curation, publishing, use
3. Aggregation of digital objects | Research Data Collections WG | DiSSCo data repository/portal/API | Provide meaningful entities and serve the data | Data publishing and use (share, download)
4. Metadata attribution and use of PROV entities | RDA/TDWG Metadata Attribution WG | Digital Specimen and collection objects | Correctly attribute sources of data and work carried out | Digitization, curation and maintenance of digital objects (e.g., collection objects or specimens)
5. FAIR data maturity model | RDA FAIR Data Maturity Model WG | DiSSCo Digital Specimen Architecture | Develop guidelines and specifications to assess the FAIR implementation plan | DiSSCo data lifecycle

DiSSCo’s vision is to transform a landscape of disconnected individual natural science collection providers into a coherent research infrastructure, with a variety of e-services to enable this: 1) the European Loans and Visits System (ELViS), a one-stop shop for access to the collections, providing both physical access and virtual access by digitization on demand; 2) the European Curation and Annotation System (ECAS) for community curation of the digitized specimen data; 3) the Specimen Data Refinery (SDR), providing digitization services to extract, enhance and annotate data from specimens’ digital images; 4) Collections Monitoring Dashboards (CMD), showing the digitization status and usage of the collections; and 5) a knowledge base providing protocols, digitization resources, manuals and other documents as FAIR-DOs for direct integration with the other e-services. The RDA outputs mentioned here provide essential building blocks for envisioning these services.

One of the critical elements in DiSSCo is the ‘Digital Specimen’, a FAIR-DO acting as a digital twin on the Internet for a specific physical specimen in a museum collection. The digital information derived from the specimens will enable FAIR data and services where various data classes can be linked to provide seamless unified access to information. These ideas were explored in the EU-funded ICEDIG project (2018–2020) and one of the core architecture outcomes was the decision to adopt FAIR Digital Objects (FAIR-DO) (). In particular, this choice enables the creation of machine-actionable digital twins which by design ensures FAIRness of the data and various other features such as unambiguous identification, data typing enforcement, attribution and provenance tracking.

The following sections explain how five selected RDA outputs, summarised in Table 1, are applied to the management of the data lifecycle in DiSSCo, illustrated in Figures 1 and 2.

Figure 1 

Lifecycle of Digital Specimen research data in the DiSSCo data infrastructure, from acquisition through curation, publishing, processing and use, which can create new data that can be iteratively acquired, curated, etc.

Figure 2 

Contributions of RDA outputs to the design of data management in the DiSSCo Digital Specimen data infrastructure. The FAIR Digital Object Framework and the Recommendation on PID Kernel Information contribute to the architecture as a whole while the Recommendation on Research Data Collections and Attribution Metadata contribute more explicitly into specific phases of the data lifecycle.

This lifecycle begins with the digitization and acquisition of data from physical specimens – the creation of the Digital Specimens (DS) and Digital Collections that are specific object types with persistent identifiers and attributes. This is the data acquisition phase. These objects then are registered and curated within a repository platform (curation phase). Curated data is published to DiSSCo users and parties external to the infrastructure, as well as directly to other services. DiSSCo will provide services for further processing of data (data processing phase) that can produce new data to be stored within the infrastructure. Finally, the broader research community can use DiSSCo data and can design experiments and analyses acting on the published Digital Specimen and Collection data that produce results (derived data), which in turn can be passed back into DiSSCo for curation, publishing and processing; thus, restarting the lifecycle ().

Adoption of Digital Object Architecture

RDA outputs from the Data Fabric IG on virtual layer recommendations () and from the Data Foundation and Terminology WG on the basic vocabulary for applying a standard core data model () provide the structure for DiSSCo’s data organization model.

Even though the history behind digital objects goes back to the early days of the Internet (), the recent rendition has its origin in the RDA’s Data Foundation and Terminology (DF&T) WG. From there the discussion has been taken up by the members of the Data Fabric IG (DFIG) together with the C2CAMP initiative, the RDA-Europe Group of European Data Experts (GEDE) and the GOFAIR initiative. These discussions have shaped the current principles of Digital Object Architecture () and most recently FAIR Digital Objects (FAIR-DO) (Figure 3a) and the FAIR-DO Framework (; ; ). DiSSCo adopts the Digital Object Architecture and the FAIR-DO framework to achieve FAIRness and meet the requirements of the FAIR Guiding Principles for scientific data management (; ; ).

Figure 3a 

Main components of a digital object – The core of the DO is a bit sequence that is encoding content (data, metadata, software, etc.). This is described by metadata to enable access and for correct interpretation. A persistent identifier uniquely identifies the DO and operations permit the content and metadata to be manipulated. Reproduced with permission ().

Figure 3b 

Basic structure of a Digital Specimen (DS). A DS acts as a container for pointers, metadata and embedded content, i.e., information about and derived from the corresponding physical specimen including but not limited to, for example, necessary information about the specimen, image(s), molecular data, genetic sequence data, and morphological measurements.

The main impetus behind adopting FAIR-DOs for Digital Specimens (Figure 3b) is to treat the digital representations of physical specimens as atomic items that need individual identification to avoid ambiguity and to collect and anchor core information about the specimens in one place. The Digital Specimens act as the mutable space for the curation of all data derived from and relating to the corresponding physical specimens. Unambiguous persistent identification allows tracking of Digital Specimens in the face of changing location, as well as organization into collections for specific purposes. The data derived from and linked to physical specimens must be easily findable and accessible. They must adhere to open standards with rich machine-comprehensible semantics and convey context () so that they are interoperable and widely reusable by both humans and machines. Just being machine-readable (i.e., by linking to ontologies and encoding as RDF or JSON-LD) is insufficient to achieve reusability; reproducibility of science additionally requires provenance, data quality, credit and attribution ().
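The container structure sketched in Figure 3b can be illustrated as a simple data structure. The following Python sketch is illustrative only: the class and field names (`DigitalSpecimen`, `physical_specimen_id`, etc.) are assumptions made for this example, not the openDS specification.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalSpecimen:
    """Illustrative sketch of a Digital Specimen as a FAIR Digital Object:
    a PID-identified container holding metadata, embedded content and
    pointers to data derived from one physical specimen."""
    pid: str                      # persistent identifier of the digital object
    physical_specimen_id: str     # identifier of the physical twin
    metadata: dict = field(default_factory=dict)      # descriptive metadata
    images: list = field(default_factory=list)        # pointers to image objects
    sequences: list = field(default_factory=list)     # pointers to genetic sequence data
    measurements: dict = field(default_factory=dict)  # morphological measurements

# Using the example PID and physical specimen identifier from Table 2:
ds = DigitalSpecimen(
    pid="123prefix/uuid-27a9edf63",
    physical_specimen_id="BMNH:1905.5.30.352",
    metadata={"scientificName": "Panthera leo"},
)
ds.images.append("http://example-dissco-repo/images/img-001")
```

The point of the container shape is that all pointers and embedded content derived from one physical specimen are anchored under a single persistent identifier.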

DiSSCo envisions that persistently and unambiguously identifying these Digital Specimens creates a digital doorway that allows researchers to do more than just find specimens, and provides a means for institutions to widen access to the data stored within the NSCs (; ). DiSSCo expects that adopting the Digital Object Architecture recommended by RDA, and treating Digital Specimens as first-class citizens in that architecture, can lead to transformations in the working practices of collections-based science and the value chains founded in natural science collections.

Persistent Identifiers and Kernel Information

RDA output from the PID Kernel WG on PID Kernel Information () provides the capability to elevate a small number of essential attributes of Digital Specimens to the PID record level to enable new machine-actionable services without requiring access to or retrieval of the Digital Specimen objects themselves.

Identifiers are used in NSCs to identify physical specimens (), and organizations such as GBIF are at the forefront of using Digital Object Identifiers (DOIs) for datasets, queries and download records (). At the moment, though, it is not possible to refer unambiguously and persistently to the digital equivalent of a physical specimen. The combination of Digital Specimen and persistent identifier (PID) scheme proposed by DiSSCo fills this gap. DiSSCo, with extensive consultation and support from several international stakeholders such as GBIF, the Corporation for National Research Initiatives (CNRI) and the International DOI Foundation (IDF), is working towards adopting a Handle-based system () tuned to the needs of the natural science collections community. Scalability to tens to hundreds of billions of PIDs (i.e., supporting a huge address space), trust (i.e., accurate maintenance by a dedicated and reliable team), persistence over a very long term (i.e., a 100-year target) and community governance (i.e., a transparent and sustainable business model) are essential requirements to be accommodated. Besides specimens, other things have to be persistently identified: GRID and ROR are used as unique identifiers for institutions, and ORCID is used for people. These allow specimens to be unambiguously linked to the institutions holding the collections and to the researchers and curators.
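For a Handle-based system, a client can resolve a PID record through the public Handle System REST proxy, which returns the handle's typed values as JSON. The sketch below only constructs the resolution URL; the PID is the illustrative example used later in Table 2, not a live handle.

```python
# Sketch: resolving a Handle-based PID via the public Handle REST proxy.
# The example PID "123prefix/uuid-27a9edf63" is illustrative, not registered.

HANDLE_PROXY = "https://hdl.handle.net/api/handles/"

def resolution_url(pid: str) -> str:
    """Build the REST URL that returns the handle record as JSON."""
    return HANDLE_PROXY + pid

url = resolution_url("123prefix/uuid-27a9edf63")
# A client would then fetch the record, e.g. with the 'requests' library:
#   record = requests.get(url).json()  # the handle's typed values
```

Resolution through such a proxy is what makes the identifier actionable: any client that knows only the PID can discover where and how to access the object.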

Assigning identifiers is the first step towards FAIR data services and ensuring machine actionability of FAIR-DOs. The definition and description of the metadata attributes of the specific digital object and persistent link to all these are the next steps. DiSSCo Data Management Plan recognizes this and thus references the RDA Recommendation on PID Kernel Information (): “Specific PID Kernel Information profiles and object type definitions must be registered for the Digital Specimen object type and other object types in the well-known Kernel Information profile and Data Type registries” ().

It is clear that a minimal set of information associated with each Digital Specimen should be available to facilitate machine-actionable services and programmatic decisions and delivery of these attributes must work with low latency and in a scalable fashion (). What is less clear is what these attributes should be or the extent of them, and what makes an optimal kernel information profile. This needs further study.

One use case that can exploit kernel information is submitting a large number (millions) of specimen images held in long-term storage to a workflow for optical character/text recognition (OCR), making the results findable with a full-text search (). These images and OCR’d label texts will reside in an ecosystem with millions of other digital objects (including research artefacts from different domains). Full resolution of each PID might not be feasible in such cases, so appropriate kernel information will be vital for quick machine interpretation. A simple kernel information profile example to support this is in Table 2.

Table 2

Simple example of PID Kernel Information for a Digital Specimen. Example PID: 123prefix/uuid-27a9edf63.

Attribute | Value type | Example value

Location | url | http://example-dissco-repo/uuid-27a9edf63
Created | date and time | 2019-04-24T11:07:11.771Z
Type | type definition | typedef123/DigitalSpecimen
PhysicalSpecimenId | string | BMNH:1905.5.30.352

In this example, “123prefix/uuid-27a9edf63” is the PID of a digital object with several attributes in its particular kernel information profile:

  1. Location: URL redirecting to the location of the Digital Specimen object. This URL can resolve to a digital object repository or another landing page and can also provide data serialisation options like JSON-LD.
  2. Created: The timestamp when this object was created.
  3. Type: A Digital Specimen. Instead of storing the string “Digital Specimen”, we refer to a PID in a Data Type Registry, which is a resolvable entity with other metadata attached. It tells us the structure of the Digital Specimen object, thus enabling us to parse it.
  4. PhysicalSpecimenId: Digital Specimen is a digital twin of a physical specimen, so the identifier of the physical specimen is an important and special attribute for this particular type of digital object. The value here contains the physical specimen identifier as a string.

From a simple machine-actionable point of view, this digital object provides the persistent identifier, points to a type declaration and provides the physical specimen identifier. Other attributes – currently under discussion in DiSSCo Prepare WP 6 (Technical Architecture and Services Provision) – such as scientific name, physical location, version, digitization level/definition and digital object policy can also be included in such a profile. These can help to decide whether a Digital Specimen is suitable for the intended operation. For example, an update operation on a Digital Specimen might add missing records or fix incorrect georeference and locality data. Such an update would be preceded by a search operation that retrieves the relevant incomplete records.
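The programmatic decision described above can be sketched as a pre-filtering step that inspects only kernel records, never the objects themselves. The attribute names follow the simple profile in Table 2; the extra `digitizationLevel` attribute is a hypothetical addition of the kind discussed for richer profiles.

```python
# Sketch: decide which objects to send to an OCR workflow using only
# PID kernel information, without resolving or retrieving the objects.
# 'digitizationLevel' is a hypothetical extra kernel attribute.

kernel_records = [
    {"pid": "123prefix/uuid-27a9edf63", "type": "typedef123/DigitalSpecimen",
     "digitizationLevel": 1},
    {"pid": "123prefix/uuid-99b0aa101", "type": "typedef456/OtherObject",
     "digitizationLevel": 3},
]

def needs_ocr(record, max_level=1):
    """Select Digital Specimens whose low digitization level suggests
    label text has not yet been captured."""
    return (record["type"].endswith("/DigitalSpecimen")
            and record["digitizationLevel"] <= max_level)

candidates = [r["pid"] for r in kernel_records if needs_ocr(r)]
```

Because the decision is made on the small kernel record, the filter scales to millions of objects without the latency of full PID resolution.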

These operations will be part of services envisioned in DiSSCo, such as the digitization workflow. At the moment, digitization activities vary from one specimen category to another and between institutions (). We are addressing these challenges within the context of developing openDS (an open specification of the Digital Specimen and other related object type definitions essential to mass digitization) and MIDS (Minimum Information about a Digital Specimen) to establish data standards and common practices. A common understanding of these processes will help us refine how and when the minimum set of metadata that does not change during the lifetime of the object needs to be created and maintained.
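A minimum-information standard like MIDS lends itself to a simple conformance check: each level requires a cumulative set of mandatory terms. The sketch below is hypothetical; the term names and level definitions are placeholders for illustration, not the published MIDS element list.

```python
# Hypothetical sketch of a MIDS-style conformance check. Each level requires
# a cumulative set of mandatory terms; the term names are illustrative only.

MIDS_TERMS = {
    1: {"physicalSpecimenId", "institution"},
    2: {"physicalSpecimenId", "institution", "scientificName", "collectionDate"},
}

def mids_level(record: dict) -> int:
    """Return the highest level whose mandatory terms are all present."""
    level = 0
    for lvl in sorted(MIDS_TERMS):
        if MIDS_TERMS[lvl] <= record.keys():
            level = lvl
    return level

record = {"physicalSpecimenId": "BMNH:1905.5.30.352", "institution": "NHM London"}
```

A check of this shape can run at digitization time, flagging records that fall short of the level an institution has committed to publish at.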

Digital Specimens can now become part of a FAIR infrastructure implementation because, with kernel information and other metadata, they are findable and accessible. Repository and application services can be built in conjunction with these digital twins as the basis of a Digital Object Architecture. Digital Specimens can be accessed, retrieved and interacted with using standardized communication protocols such as the Digital Object Interface Protocol (DOIP) or the Hypertext Transfer Protocol (HTTP). Digital Specimens are interoperable because services and systems can determine the attributes that are tied semantically to FAIR vocabularies, and perform operations on them. Lastly, the kernel information profile and other attributes enable the accurate and relevant data needed for reproducibility and reusability (for example, publishing the digitized data in a different format or running an experiment using data linked to a specimen).

Aggregation of digital objects

RDA outputs from the Research Data Collections WG on actionable collections and a technical interface specification enabling client-server interaction provide guidelines for creating meaningful services around the DiSSCo-specific digital objects.

Building on the essential components of the Digital Object Architecture and PID scheme, the next step considers how to go beyond single data objects. “Collection” in the RDA sense means grouping objects together without demanding particular semantics or formats; the grouping should have a unique identifier and well-defined actions, such as CRUD (Create, Read, Update, Delete), that act on all objects in the group equally.

The NSC community has focused on creating a standard for Collection Descriptions. Furthermore, in NSC and DiSSCo terms, “Collection” has a specific meaning – “A collection is any set of physical things (material/natural objects) or image, audio and video recordings (either analogue or digital) treated together for curative purposes” (). So we need to investigate further to see how commensurate this is with the RDA “Research Data Collection” recommendation ().

In DiSSCo, a “Digital Collection” is a specific type of digital object acting as a twin for a real-world natural science collection. It is a collection of Digital Specimens, mirroring physical world practice of organizing specimens into specific kinds of collection (zoology, botany, etc.). In the digital world, however, the notion of a collection is far more flexible; insofar as objects can be members of multiple collections simultaneously, even without specific criteria defining membership.

Collections as digital objects will be consumed by services like ELViS (European Loans and Visits System) to facilitate loans and visits transactions and digitization on demand. A Collection Monitoring Dashboard can provide comprehensive overviews of collections across different institutions and disciplines. An extensive set of user stories maps user journeys for activities such as searching for collections and specimens, requesting loans, reviewing loan requests, generating reports on loans and visits and collection usage. For each of these steps and services, the role of collection as a digital object is crucial (Figure 4).

Figure 4 

Building blocks of DiSSCo e-services start with individual objects (represented digitally through Digital Specimens), collections and collections overview.

Following the RDA recommendation (), a digital collection as an entity with a persistent identifier (e.g., the digital object for the mammal collection at a museum) will support different operations (such as retrieving an ordered or filtered list), specific properties (essential information such as which museum holds it and how it is stored) and membership information (e.g., the specimens that are in the mammal collection). Besides ELViS, other e-services are envisaged to be implemented on collections of specific data types, for example involving automated machine learning and computer vision ().
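The operations just listed can be sketched as a minimal in-memory model of an RDA-style collection: a PID-identified entity with properties, membership operations and filtered retrieval. Class and property names are illustrative, not the DiSSCo API, and a real implementation would persist membership behind a REST interface as the RDA specification describes.

```python
# Minimal in-memory sketch of the RDA Research Data Collections model:
# a PID-identified collection with properties and membership operations.

class DigitalCollection:
    def __init__(self, pid, properties=None):
        self.pid = pid
        self.properties = properties or {}  # e.g. holding institution
        self.members = []                   # PIDs of member Digital Specimens

    def add_member(self, specimen_pid):     # Create/Update on membership
        if specimen_pid not in self.members:
            self.members.append(specimen_pid)

    def list_members(self, filter_prefix=None):  # Read, with optional filter
        if filter_prefix is None:
            return list(self.members)
        return [m for m in self.members if m.startswith(filter_prefix)]

# In the digital world a specimen may belong to several collections at once:
mammals = DigitalCollection("pid/mammal-coll", {"institution": "Museum A"})
type_specimens = DigitalCollection("pid/type-coll", {"scope": "type specimens"})
mammals.add_member("123prefix/uuid-27a9edf63")
type_specimens.add_member("123prefix/uuid-27a9edf63")
```

Because membership is just a list of PIDs, the same Digital Specimen can appear in arbitrarily many collections without duplication of the underlying object.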

One of the challenges the DiSSCo architecture design work needs to address is the difficulty of understanding how data as a bundled package moves and is used across different practical situations in science, industry, policy making and public discourse. Data can be decontextualized and then recontextualized in novel situations to become meaningful beyond their original context of production (). Digital Specimens as atomic entities and the flexibility to organize them and other object types into different kinds of machine-actionable digital object container (i.e., ‘collection’ in the RDA sense) will help in facilitating yet to be imagined uses.

Metadata attribution and use of PROV entities

Output from the joint RDA/TDWG Metadata Attribution WG on standardized metadata for attributing work and tracking provenance in the curation and maintenance of research collections guides how attribution details can be preserved in DiSSCo digital objects.

DiSSCo’s Data Management Plan () highlights the importance of the provenance of specimens, their digitization and change history, annotation, curation, and usage. These histories must be maintained consistently through the lifetime of the DiSSCo infrastructure. Recording activities of human and machine agents during data curation and processing phases is essential to FAIR implementation. Groom et al. () highlight the importance of attributing people behind the specimens in a standard fashion: “Many people can be associated with a specimen: the collector, curator, determiner, annotator, mounter, transcriber, digitizer, imager and georeferencer. For many reasons, these people are important to science. Knowing the person gives a degree of credibility to the specimen and its identity. The biographical data of the people can not only help validate data, but also credit the people for the work they have done”.

Three important information elements capture this detail and can be used as the means for attributing work: the agent performing the activity, the activity (or action) they perform, and the digital or physical object (entity) they are curating/processing. The joint RDA/TDWG Metadata Attribution WG recommendation (), which uses W3C’s PROV data model, makes this easy to implement in the FAIR-DO context. In our design, we plan to store provenance as another digital object type linked to the object on which work is performed. This will create a common curation space linked to different types of digital objects, other data classes and services. One current proposal to materialize this is the European Curation and Annotation System (ECAS) e-service, which will serve as a community curation service.
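The agent–activity–entity triple can be sketched as a small record constructor following the shape of W3C PROV. The field names below are illustrative assumptions, not the RDA/TDWG schema, and the ORCID is a placeholder.

```python
from datetime import datetime, timezone

def make_attribution(agent_orcid, role, activity, entity_pid):
    """Sketch of a PROV-style attribution record linking an agent (who),
    an activity (what was done) and an entity (what was worked on).
    Field names are illustrative, not the RDA/TDWG recommendation schema."""
    return {
        "agent": {"id": agent_orcid, "role": role},         # prov:Agent
        "activity": {"type": activity,                      # prov:Activity
                     "endedAt": datetime.now(timezone.utc).isoformat()},
        "entity": {"id": entity_pid},                       # prov:Entity
    }

# The recuration step from the use case quoted below, with a placeholder ORCID:
rec = make_attribution("https://orcid.org/0000-0000-0000-0000",
                       "curator", "recuration", "123prefix/uuid-27a9edf63")
```

Stored as its own digital object linked to the Digital Specimen, a record like this lets each curation action be credited and queried independently of the specimen data it describes.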

One of the example use cases in the RDA recommendation is relevant:

“Sergey (a museum curator) recurates a jar containing multiple specimens. Each specimen is removed from the jar and individually mounted. Sergey then examines the specimen and jar label, and enters a new record into the collections management database. He uses the data in the new record to generate a new label to attach to the physical specimen.

Sergey also, in the process of recurating one of the specimens, discovers a new species.

He describes the new species, and uses the species description to publish a journal paper.

Sergey should receive attribution for:

  • recurating the physical specimens
  • describing the new species
  • authoring the journal article
  • entering the specimen into the collections management database
  • generating a label for the re-curated specimen.”

As is evident, even within a single workflow, data can travel from a collection management database to a journal where different systems, standards, and application programming interfaces (API) are involved. These five attributions need to be captured in a standard way to be part of the Digital Specimen data when different operations are performed in multiple contexts.

Leonelli (), using the example of model organism biology, makes the point that to support scientists, we must understand the processes behind successful empirical research. Policymakers and funders often understand research predominantly as products rather than processes. Metadata attribution and the use of PROV entities provide a technical foundation for bringing these processes to the forefront of supporting and sustaining a research infrastructure.

FAIR Data Maturity Model

Output from the RDA FAIR data maturity model WG () provides guidelines and specifications to assess the DiSSCo FAIR implementation plan.

DiSSCo’s Data Management Plan () provides a summary statement of DiSSCo’s implementation of the FAIR guiding principles. The indicators, priority levels and evaluation methods described by the FAIR Data Maturity Model (DMM) WG () were not available during the preparation of the DiSSCo DMP. However, the output is an essential tool for future periodic evaluation of the DMP and FAIR implementation.

As the DiSSCo data infrastructure is FAIR by design, the essential indicators in the DMM are thus addressed. At the time of writing this article, DiSSCo is at maturity level “2” (“under consideration or in planning phase”) for all the essential indicators. The DMM also decomposes the texts of the FAIR principles to provide further granularity. For instance, the RDA output provides two indicators for FAIR principle F1 (one for persistent identifiers and one for globally unique identifiers). The DiSSCo DMP addresses F1 as follows: “A handle is issued to each object published in or by DiSSCo, allowing the object data to be found regardless of its location”. Due to our design choice of FAIR Digital Objects, DiSSCo addresses both the persistence and uniqueness aspects of F1. For FAIR principle R1, the DMM indicator is: “Plurality of accurate and relevant attributes are provided to allow reuse”, which is based on “R1: (Meta)data are richly described with a plurality of accurate and relevant attributes”. For R1, the DiSSCo DMP states:

  • Each object contains a minimum of mandatory terms consistent with its formal object type definition, with the possibility to include optional additional terms and enrichments as necessary.
  • In the case of Digital Specimen and Digital Collection object types, the minimum of mandatory terms corresponds to the object’s classification as representing a specific level of digitization according to (respectively) the Minimum Information standard for Digital Specimens (MIDS) and the Minimum Information standard for Digital Collections (MICS).

Implementation of MIDS in the digitization process will ensure that enough data are captured, curated and published to make them reusable, thus creating a “plurality of accurate and relevant attributes”. As we progress from the design phase to pilot and then implementation, the DMM indicators and evaluation methods can help DiSSCo to create tailored assessments while focusing on FAIR convergence for cross-disciplinary interoperability (). Similarly, the other indicators mentioned in the DMP are commensurable with the RDA DMM framework.
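A periodic self-assessment against the maturity model can be as simple as a table of indicator scores. The sketch below only mirrors the shape of such an assessment: the indicator identifiers and descriptions are illustrative, and the 0–4 scale with level 2 meaning "under consideration or in planning phase" follows the usage in the text.

```python
# Sketch of a periodic FAIR self-assessment against maturity-model indicators.
# Indicator identifiers and descriptions here are illustrative placeholders.

assessment = {
    "F1 indicator A (persistent identifier)": 2,      # 2 = in planning phase
    "F1 indicator B (globally unique identifier)": 2,
    "R1 indicator (plurality of relevant attributes)": 2,
}

def overall_level(scores: dict) -> int:
    """A conservative summary: an infrastructure is only as mature as its
    least mature essential indicator."""
    return min(scores.values())
```

Re-running such an assessment at each project phase gives a lightweight way to track progress from planning towards implementation across all essential indicators.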

Conclusions

In this paper, we have presented how RDA outputs can be used to create building blocks for research infrastructure architectural design decisions towards FAIR compliance. For DiSSCo, e-services such as ELViS, designed around the concept of Digital Specimens, are planned to improve access to natural science collections across Europe. Aggregation of these Digital Specimens through Digital Collections will enable monitoring tools like the CMD to provide collection overviews and reports that are immensely beneficial for tracking and assessing scientific usage of the collections. The RDA outputs are not just for the access/use part of the data lifecycle. Data enhancement, annotation (using the planned Specimen Data Refinery) and community curation (using the European Curation and Annotation System) are building blocks for the research infrastructure vision of DiSSCo that all depend on these recommendations. Along with the different building blocks, the outputs also highlight the importance of data standards and common practices, which have already been discussed in the ICEDIG project (2018–2020) and are currently being further studied in DiSSCo Prepare (2020–2023).

The ideas expressed here are still at the design and/or conception stage and need to be fleshed out further to support the DiSSCo implementation and construction phase. Some of the RDA outputs are also similar in their conceptual nature, and organizing workshops and technical hackathons through RDA can help DiSSCo to further clarify, test and refine the concepts. DiSSCo experts regularly participate in RDA, and collaboration with other disciplines through RDA can also provide learning opportunities and help us identify potential issues and risks in our concepts. There are other outputs – such as those of the PID Information Types WG () and the Data Type Registries WG () – that we are still exploring.

The RDA recommendations and the broader global expertise represented therein enable us to design and build a robust, FAIR Digital Object based data infrastructure. We envision that this new infrastructure will be essential in supporting the next phase in the digital transformation of collections-based science to widen access and better enable the production of data and knowledge about the 1.5 billion physical specimens in the European natural science collections.