Software benchmarks in digital preservation: Do we need them? Can we have them? How do we get them?


A significant part of the work in the digital preservation field depends on software tools. Categories such as identification, validation, characterization, migration and emulation have therefore received considerable effort in recent years. While achievements have undoubtedly been made, it is hard to quantify the success of that work. Moreover, according to the NDSA, the lack of evidence demonstrating the quality of software solutions is an important research challenge that needs to be addressed.

Given its collaborative ethos and its need for better evidence, the digital preservation community certainly seems ready for more rigorous software evaluation initiatives. A stronger focus on software evaluation will help channel limited software development resources more effectively. However, improving software evaluation is a complex process, and wider community involvement will be required if noticeable benefits are to be achieved.

Several evaluation initiatives from the information retrieval and software engineering fields, such as TREC, MIREX, and TPC, can be considered successful examples. All of them involve the wider community, with different participants taking on different roles, and all report substantial benefits for their communities, in some cases measured in millions of dollars. Although no comparable initiative exists yet in the digital preservation field, it is reasonable to expect that one would benefit the whole community.

In the BenchmarkDP project we are exploring different approaches to improving software evaluation in the digital preservation field. Although we have made technical contributions, it is clear that software evaluation is not a challenge a single project can solve; it requires wider community engagement. We have identified software benchmarks, i.e. standards that enable the comparison of different software artifacts, as a way to combine software evaluation with wider community involvement. More on this can be found in this paper.
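
To make the idea of a benchmark concrete, here is a minimal illustrative sketch (not part of BenchmarkDP): it scores hypothetical format-identification tools against an invented ground-truth corpus, so that different tools' results become directly comparable. All tool names, file paths, and format identifiers below are assumptions made up for this example.

```python
# Illustrative only: a tiny benchmark harness for format-identification tools.
# The corpus, paths, and PRONOM-style identifiers are invented for this sketch.
from typing import Callable, Dict

# Hypothetical ground truth: file path -> expected format identifier
GROUND_TRUTH: Dict[str, str] = {
    "corpus/report.pdf": "fmt/276",
    "corpus/scan.tiff": "fmt/353",
    "corpus/letter.doc": "fmt/40",
}

def benchmark(tool: Callable[[str], str]) -> float:
    """Return the fraction of corpus files the tool identifies correctly."""
    correct = sum(1 for path, expected in GROUND_TRUTH.items()
                  if tool(path) == expected)
    return correct / len(GROUND_TRUTH)

# A stand-in "tool" that always answers fmt/276; real tools would be wrapped
# behind the same interface, making their scores directly comparable.
def naive_tool(path: str) -> str:
    return "fmt/276"

if __name__ == "__main__":
    print(f"naive_tool accuracy: {benchmark(naive_tool):.2f}")
```

The point of the sketch is the shared contract, a common corpus, a common interface and a common metric, which is what turns individual evaluations into a benchmark.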

To start assessing current evaluation practices, and to better understand the challenges and possible collaborations, we are launching a series of initiatives. Two of them are already online and open to you.

The first is a workshop called the Benchmarking Forum, which will be held at this year's iPRES conference in Chapel Hill on Friday 6th November. There we will start discussing possible scenarios that are in need of proper benchmarks. More information on the workshop can be found here.

The second is a short consultation that should help us gather more information about current practices in software evaluation, what effects those practices have, and what the major challenges are. The consultation will be open until 1st November 2015. The link to the consultation is here.

We hope that these two initiatives will be a good starting point for wider community involvement and a better understanding of software evaluation needs in the digital preservation field.

So I invite you to help us get the ball rolling by joining the workshop and the consultation.

Join the workshop

Join the consultation
