D-Lib Magazine
January/February 2017
Volume 23, Number 1/2
Guest Editorial
RepScience2016
Amir Aryani
Australian National Data Service, Melbourne, Australia
Oscar Corcho
Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, Spain
Paolo Manghi
Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Italy
Jochen Schirrwagen
Bielefeld University Library, Germany
Corresponding Editor: Paolo Manghi, paolo.manghi@isti.cnr.it
https://doi.org/10.1045/january2017-guest-editorial
In the last decade, advances in information and communication technology (ICT) have deeply changed the way research is conducted within research infrastructures (RIs). A research infrastructure comprises the organizational elements (roles, procedures, etc.), the physical structures (buildings, laboratories, etc.), the resources (microscopes, telescopes, sensors, services, data, digital library resources), and the technology (hardware and software, network protocols, the Internet, applications, etc.) that underpin the conduct of scientific research. In this respect, research relies mainly on high-quality, digitally accessible research products (e.g. publications, datasets, experiments, software, web sites, blogs) to generate novel ideas, findings, and concrete results.
Along the same lines, scientific communication has evolved to adapt its underlying mission (and business models) to these new scenarios and to benefit from them. In particular, the traditional paradigm of publishing research solely through articles can neither (i) cope with the increasing demand for immediate access to all kinds of research results, nor (ii) exploit the opportunities for reproducibility (which subsumes repeatability) offered by "digital" science today. Scientists, funders, and research institutions are pushing for innovative scientific communication workflows (i.e. submission, peer review, access, re-use, citation, and scientific reward), embracing a holistic approach in which "publishing" includes, in principle, any digital product resulting from a research activity that is relevant to the interpretation, evaluation, and "reproducibility" of that activity or part of it. Defining, adopting, and supporting such "revolutionary" publishing workflows are urgent challenges, to be addressed by ICT solutions capable of fostering and driving radical changes in the way science is conducted.
The first international workshop on Reproducible Open Science (RepScience2016) aimed to enhance collaboration among (i) representatives of the RDA working groups addressing aspects that may contribute to the revision of current scientific communication practices, (ii) ICT scientists involved in defining new technical solutions to support them, and (iii) library scientists working on identifying new publishing and scientific reward paradigms. The workshop brought together skills and experiences focused on defining and establishing the next-generation scientific communication ecosystem, in which scientists can publish research results (including the scientific article, the data, the methods, and any alternative product that may be relevant to the conducted research) so as to enable reproducibility (effective reuse and a decreased cost of science) and to rely on novel scientific reward practices. This special issue of D-Lib Magazine includes the proceedings of RepScience2016. It comprises nine articles covering novel and different facets of reproducibility, grouped under four themes: "Towards an enabling infrastructure", "Models and languages", "Systems", and "Real-world experiences".
About the Guest Editors
Amir Aryani works as a project manager for the Australian National Data Service (ANDS), and he is the co-chair of the Data Description Registry Interoperability WG in the Research Data Alliance. Dr. Aryani holds a PhD in computer science. His research focuses on interoperability between research information systems, and he leads the Research Graph project to build a large-scale distributed graph that enables connecting heterogeneous data infrastructures.
Oscar Corcho is a Full Professor at the Departamento de Inteligencia Artificial (Facultad de Informática, Universidad Politécnica de Madrid) and belongs to the Ontology Engineering Group. His research activities focus on Semantic e-Science and the Real World Internet, although he also works in the more general areas of the Semantic Web and Ontological Engineering. He has participated in a number of EU projects (DrInventor, Wf4Ever, PlanetData, SemsorGrid4Env, ADMIRE, OntoGrid, Esperonto, Knowledge Web and OntoWeb) and Spanish R&D projects (CENITs mIO!, España Virtual and Buscamedia, myBigData, GeoBuddies), and he has also participated in privately funded projects, including ICPS (International Classification of Patient Safety), funded by the World Health Organisation, and HALO, funded by Vulcan Inc.
Paolo Manghi is a researcher at the Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Pisa, Italy. He received his PhD in Computer Science from the University of Pisa (2001). Today he is a member of the InfraScience research group, part of the Multimedia Networked Information System Laboratory (NeMIS). His current research interests include ICT data infrastructures for science and technologies supporting modern scholarly communication. He is the technical manager of the OpenAIRE infrastructure (www.openaire.eu).
Jochen Schirrwagen is a research fellow at Bielefeld University Library, Germany. He has a scientific background in computer engineering. He worked for the Digital Peer Publishing Initiative for Open Access eJournals at the academic library center "hbz" in Cologne (2004-2008). Since 2008 he has been working on DFG- and EU-funded projects, such as DRIVER-II, OpenAIRE and OpenAIREplus. Jochen is interested in the application of metadata and semantic technologies for the aggregation and contextualization of scientific content.