The SIGMOD 2016 papers that were awarded the reproducibility label are now available!
New for this year is the Reproducibility Award, which will be given to the best papers of the reproducibility process (up to 3 papers). The awards will be presented during SIGMOD 2017, and each award comes with a financial prize of US$750.
SIGMOD Reproducibility has three goals: to highlight the impact of database research papers, to enable easy dissemination of research results, and to enable easy sharing of code and experimentation set-ups.
In short, the goal is to assist in building a culture where sharing results, code, and scripts of database research is the norm rather than the exception. The challenge is to do this efficiently, which means building technical expertise on how to do better research by making it repeatable and shareable. The SIGMOD Reproducibility committee is here to help you with this.
You will be making it easy for other researchers to compare with your work and to adopt and extend your research. This instantly means more recognition for your work and higher impact. Taking part in the SIGMOD Reproducibility process enables your paper to receive the Reproducible label.
The Reproducible label will be visible in the ACM Digital Library.
Successful papers will also be advertised at DBworld. In addition, the official SIGMOD Reproducibility website maintains and advertises your papers, serving as a centralized location where researchers will be able to find all the experimentation material of sharable SIGMOD papers. We will continue to enhance the functionality and material on this website to make it attractive and useful for the community, so stop by often!
At first, making research sharable seems like an extra overhead for authors. You just had your paper accepted in a major conference; why should you spend more time on it? The answer is to have more impact!
If you ask any experienced researcher in academia or in industry, they will tell you that they in fact already follow the reproducibility principles on a daily basis! Not as an afterthought, but as a way of doing good research.
Maintaining easily reproducible experiments simply makes working on hard problems much easier: you can repeat your analysis for different data sets, different hardware, different parameters, etc. Follow the lead of leading system designers and you will save a significant amount of time, as you will minimize the set-up and tuning effort for your experiments. In addition, such practices will help you do more complete research, as you will be able to exhaustively analyze the experimental and research space in a more systematic way with less effort.
Ideally reproducibility should be close to zero effort.
The authors of all accepted papers will be able to submit their experiments for review on a voluntary basis. Submissions are due on September 1st. Results will be communicated in mid-October. The submission process is through email, where authors are expected to fill in specific information and include a readme based on the provided template.
The Reproducible label certifies that the experimental results of the paper were reproduced by the committee and were found to support the central results reported in the paper, and that the experiments (data, code, scripts) are made available to the community.
We look for three criteria: sharability, coverage, and flexibility. These are described in detail below.
Each submitted experiment should contain: (1) a prototype system, provided either as a white box (source, configuration files, build environment) or as a fully specified black box; (2) the input data: either the process to generate the input data should be made available, or, when the data is not generated, the actual data itself or a link to it should be provided; (3) the set of experiments (system configuration and initialization, scripts, workload, measurement protocol) used to produce the raw experimental data; and (4) the scripts needed to transform the raw data into the graphs included in the paper.
The central results and claims of the paper should be supported by the submitted experiments, meaning that we can recreate result data and graphs that demonstrate behavior similar to that shown in the paper. Typically, when the results are about response times, the exact numbers will depend on the underlying hardware. We do not expect to get results identical to those in the paper unless we happen to have access to identical hardware. Instead, what we expect to see is that the overall behavior matches the conclusions drawn in the paper, e.g., that a given algorithm is significantly faster than another one, or that a given parameter affects the behavior of a system negatively or positively.
One important characteristic of strong research results is how flexible and robust they are in terms of the parameters and the tested environment. For example, testing a new algorithm for several input data distributions, workload characteristics, and even hardware with diverse properties provides a complete picture of the properties of the algorithm. Of course, a single paper cannot always cover the whole space of possible scenarios; typically the opposite is true. For this reason, we expect authors to provide a flexibility report in which they describe the design space that they covered. Such a report should be brief and to the point, describing 1) the existing experiments in the paper that ensure flexibility, e.g., “we tested the following N input data distributions...”, 2) additional experimental analysis that could be performed using the existing reproducibility submission and that shows the results hold for even more parameters and scenarios, e.g., “for the reproducibility submission we tested the following N additional input data distributions...”, and 3) further design parameters that would be interesting to test for this work, e.g., “it would be interesting to test our algorithm for input data distribution X...”. We do not expect the authors to perform any additional experiments on top of the ones in the paper. Any additional experiments submitted will be considered and tested, but they are not required. As long as the flexibility report shows that there is a reasonable set of existing experiments, a paper meets the flexibility criterion. What counts as reasonable will be judged on a case-by-case basis depending on the topic of each paper; in practice, all accepted papers in top database conferences meet this criterion. You should see the flexibility report mainly as a way to describe the design space covered by the paper and the design space that would be interesting to cover in the future, which may inspire others to work on open problems triggered by your work.
The goal of the committee is to properly assess and promote your work! While we expect you to do your best to prepare a submission that works out of the box, we know that sometimes unexpected problems appear and that in certain cases experiments are very hard to fully automate. We will not dismiss your submission if something does not work out of the box; instead, we will contact you to get your input on how to properly evaluate your work.
Here are some guidelines for authors based on accumulated experience. The end goal is to successfully reproduce the raw data and relevant plots that the authors used to draw their conclusions.
Every case is slightly different. Sometimes the reproducibility committee can simply rerun software (e.g., rerun an existing benchmark). At other times, obtaining the raw data may require special hardware (e.g., sensors in the Arctic). In the latter case, the committee will not be able to reproduce the acquisition of the raw data, but you can provide the committee with a protocol, including detailed procedures for system set-up, experiment set-up, and measurements.
Whenever raw data acquisition can be reproduced, the following information should be provided.
Authors should explicitly specify the OS and tools that should be installed as the environment. Such a specification should include dependencies on specific hardware features (e.g., 25 GB of RAM are needed) or dependencies within the environment (e.g., the compiler that should be used must be run with a specific version of the OS).
System setup is one of the most challenging aspects of repeating experiments. System setup will be easier to conduct if it is automatic rather than manual, and authors should test that the system they distribute can actually be installed in a new environment. The documentation should detail every step of the system setup, and these steps should be achievable by executing a set of scripts provided by the authors that download the needed components (systems, libraries), initialize the environment, check that the software and hardware are compatible, and deploy the system.
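As an illustration, a minimal environment-check script along these lines might look as follows (a sketch in Python; the required tools, the 25 GB RAM threshold taken from the example above, and the Makefile location are placeholder assumptions):

# check_env.py -- minimal sketch of an automated environment/setup check
# (the required tools, the 25 GB RAM threshold, and the Makefile path are example values)
import shutil
import subprocess
import sys

MIN_RAM_GB = 25
REQUIRED_TOOLS = ["gcc", "make", "python3"]

def total_ram_gb():
    # Linux-only sketch: read the total memory from /proc/meminfo
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)
    return 0.0

missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
if missing:
    sys.exit("Missing required tools: " + ", ".join(missing))
if total_ram_gb() < MIN_RAM_GB:
    sys.exit("At least %d GB of RAM are required." % MIN_RAM_GB)

# Once the checks pass, build and deploy the prototype system
subprocess.run(["make", "-C", "system"], check=True)
print("Environment OK, system built.")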
The committee strongly suggests using ReproZip to streamline this process. ReproZip can be used to capture the environment, the input files, the expected output files, and the required libraries. A detailed how-to guide (installing, packing experiments, unpacking experiments) can be found in the ReproZip Documentation. ReproZip will help both the authors and the evaluators to seamlessly rerun experiments. If using ReproZip to capture the experiments proves to be difficult for a particular paper, the committee will work with the authors to find the proper solution based on the specifics of the paper and the environment needed.
Given a system, the authors should provide the complete set of experiments needed to reproduce the paper's results. Typically, each experiment consists of a set-up phase, a running phase, and a clean-up phase.
The authors should document (i) how to perform the setup, running and clean-up phases, and (ii) how to check that these phases complete as they should. The authors should document the expected effect of the setup phase (e.g., a cold file cache is enforced) and the different steps of the running phase, e.g., by documenting the combination of command line options used to run a given experiment script.
Experiments should be automatic, e.g., via a script that takes a range of values for each experiment parameter as arguments, rather than manual, e.g., via a script that must be edited so that a constant takes the value of a given experiment parameter.
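For example, a driver of this kind could take the parameter ranges as command-line arguments and write one row of raw measurements per configuration (a sketch in Python; the benchmark binary ./bin/scan_bench, the parameter names, and the output path are hypothetical placeholders):

# run_experiment.py -- sketch of an automatic, parameterized experiment driver
# (./bin/scan_bench and the parameter names are hypothetical placeholders)
import argparse
import csv
import os
import subprocess

parser = argparse.ArgumentParser(description="Run one experiment over a range of parameters.")
parser.add_argument("--selectivity", type=float, nargs="+", default=[0.01, 0.1, 0.5])
parser.add_argument("--threads", type=int, nargs="+", default=[1, 2, 4, 8])
parser.add_argument("--out", default="results/raw.csv")
args = parser.parse_args()

os.makedirs(os.path.dirname(args.out) or ".", exist_ok=True)   # set-up phase
with open(args.out, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["selectivity", "threads", "seconds"])
    for sel in args.selectivity:
        for threads in args.threads:
            # running phase: the benchmark binary prints the elapsed seconds on stdout
            result = subprocess.run(["./bin/scan_bench", str(sel), str(threads)],
                                    capture_output=True, text=True, check=True)
            writer.writerow([sel, threads, result.stdout.strip()])
# clean-up phase would go here (e.g., removing temporary files created by the run)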
For each graph in the paper, the authors should describe how the graph is obtained from the experimental measurements. The submission should contain the scripts (or spreadsheets) that are used to generate the graphs. We strongly encourage authors to provide scripts for all their graphs using a tool such as Gnuplot or Matplotlib. Here are two useful tutorials for Gnuplot: a brief manual and tutorial, and a tutorial with details about creating eps figures and embedding them in LaTeX; and another two for Matplotlib: examples from SciPy, and a step-by-step tutorial discussing many features.
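As an illustration, a short Matplotlib script can regenerate one figure directly from the raw measurements (a sketch; the input file, its column names, and the output figure name are hypothetical placeholders):

# plot_results.py -- sketch: regenerate one of the paper's figures from the raw measurements
# (results/raw.csv, its column names, and figures/figure3.pdf are example names)
import csv
import os
from collections import defaultdict
import matplotlib
matplotlib.use("Agg")            # render to file; no display needed on a test machine
import matplotlib.pyplot as plt

series = defaultdict(list)       # thread count -> list of (selectivity, seconds)
with open("results/raw.csv") as f:
    for row in csv.DictReader(f):
        series[int(row["threads"])].append((float(row["selectivity"]), float(row["seconds"])))

for threads, points in sorted(series.items()):
    points.sort()
    plt.plot([x for x, _ in points], [y for _, y in points], marker="o", label="%d threads" % threads)

plt.xlabel("selectivity")
plt.ylabel("response time (s)")
plt.legend()
os.makedirs("figures", exist_ok=True)
plt.savefig("figures/figure3.pdf")   # the same file name the paper's LaTeX sources include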
At a minimum, the authors should provide a complete set of scripts to install the system, produce the data, run the experiments, and produce the resulting graphs, along with a detailed README file that describes the process step by step so that it can easily be reproduced by a reviewer.
The ideal reproducibility submission consists of a master script that installs all needed systems, generates or fetches the input data, runs all experiments, produces all graphs, and recompiles the paper sources to produce a new PDF for the paper that contains the new graphs. It is possible!
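A minimal sketch of such a master script (in Python; the individual script names and the paper's Makefile location are hypothetical placeholders) simply chains the steps described above and stops at the first failure:

# reproduce_all.py -- sketch of a master script chaining all reproducibility steps
import subprocess

STEPS = [
    ["python3", "check_env.py"],                              # check and set up the environment
    ["python3", "generate_data.py"],                          # generate or fetch the input data
    ["python3", "run_experiment.py", "--out", "results/raw.csv"],
    ["python3", "plot_results.py"],                           # regenerate the graphs
    ["make", "-C", "paper"],                                  # recompile the paper with the new graphs
]

for step in STEPS:
    print("Running:", " ".join(step))
    subprocess.run(step, check=True)                          # raise and stop at the first failing step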
A good source of dos and don’ts can be found in the ICDE 2008 tutorial by Ioana Manolescu and Stefan Manegold (and a subsequent EDBT 2009 tutorial).
They include a road-map of tips and tricks on how to organize and present code that performs experiments, so that an outsider can repeat them. In addition, the ICDE 2008 tutorial discusses good practices in experiment design more generally, addressing, for example, how to choose which parameters to vary and in what domain.
A discussion about reproducibility in research including guidelines and a review of existing tools can be found in the SIGMOD 2012 tutorial by Juliana Freire, Philippe Bonnet, and Dennis Shasha.
Chair: Stratos Idreos, Harvard University
Dennis Shasha, New York University, USA
Juliana Freire, New York University, USA
Philippe Bonnet, IT University of Copenhagen, Denmark
University at Buffalo, USA: Oliver Kennedy, Ying Yang, Gokhan Kul
Columbia University, USA: Ken Ross, Orestis Polychroniou
TU Dortmund, Germany: Jens Teubner, Henning Funke
University of Glasgow, UK: Peter Triantafillou, George Sfakianakis
Harvard University, USA: Stratos Idreos, Manos Athanassoulis, Michael S. Kester
HP Labs, USA: Hideaki Kimura
HP Labs, USA: Alkis Simitsis
Imperial College, UK: Thomas Heinis, Pooyan Jamshidi
LogicBlox, USA: Ryan Johnson, Tianzheng Wang (University of Toronto)
UMass Dartmouth, USA: David Koop
National University of Singapore, Singapore: Roland Yap
New York University, USA: Juliana Freire, Fernando Seabra Chirigati, Tuan-Anh Hoang-Vu
NYU Abu Dhabi, UAE: Azza Abouzied
Ohio State University, USA: Spyros Blanas, Feilong Liu
Oxford, UK: Dan Olteanu, Milos Nikolic
University of Trento, Italy: Kostas Zoumpatianos