ECOOP 2020
Sun 15 - Tue 17 November 2020 Online Conference
co-located with SPLASH 2020

Traditionally, technical research papers are published without including any artifacts (such as tools, data, models, videos, etc.), even though the artifacts may serve as crucial and detailed evidence for the quality of the results that the associated paper offers. Artifacts support the repeatability of experiments and precise comparison with alternative approaches, thus enabling higher quality in the research area as a whole. They may also make it easier for other researchers to perform their own experiments, thus helping the original authors disseminate their ideas in detail. Hence, artifacts should be taken seriously and recognized separately.

The AE process at ECOOP 2020 is a continuation of the AE process at previous ECOOP editions, and several other conferences, including ESEC/FSE, OOPSLA, PLDI, ISSTA, HSCC, and SAS: see the authoritative Artifact Evaluation for Software Conferences web site.

Accepted Artifacts

A Framework for Resource Dependent EDSLs in a Dependently-Typed Language (Artifact)
A Trusted Infrastructure for Symbolic Analysis of Event-Driven Web Applications
A Type-Directed Operational Semantics for a Calculus with a Merge Operator
Blame for Null
Don't Panic! Better, Fewer, Syntax Errors for LR Parsers
Flow-Sensitive Type Based Heap Cloning (Artifact)
Implementation of SHAPES case studies
Model-View-Update-Communicate: Session Types meet the Elm Architecture
Multiparty Session Programming with Global Protocol Combinators
Owicki-Gries Reasoning for C11 RAR
Perfect is the Enemy of Good: Best-Effort Program Synthesis
Putting Randomized Compiler Testing into Production
Reconciling Event Structures with Modern Multiprocessors
Scala with Explicit Nulls
Static Analysis of Shape in TensorFlow Programs
Static Race Detection and Mutex Safety and Liveness for Go Programs (Artifact)
Static Type Analysis by Abstract Interpretation of Python Programs
Tackling the Awkward Squad for Reactive Programming: The Actor-Reactor Model
The Duality of Subtyping (artifact)
Value Partitioning: A Lightweight Approach to Relational Static Analysis for JavaScript

Call for Artifacts

Authors of accepted papers at ECOOP 2020 can have their artifacts evaluated by an Artifact Evaluation Committee. Artifacts that live up to the expectations created by the paper will be marked with a badge in the proceedings.

Artifacts that are deemed especially meritorious will be singled out for special recognition in the proceedings and at the conference.

The Artifact Evaluation process is run by a separate committee whose task is to assess how well the artifacts support the work described in the papers. Submitting an artifact is voluntary and does not influence the final decision on the paper; this is guaranteed by construction, since artifacts are submitted only after the notification of acceptance has been sent out. The outcome of the Artifact Evaluation, along with reviews containing suggestions for improving the artifacts, will be distributed before the deadline for the final version of the paper.

A submitted artifact should be consistent with the associated paper. It should be documented well enough to be accessible to any computer scientist with an interest in the research area who has read the associated paper.

A submitted artifact is treated as confidential, just like a submitted paper. However, we strongly recommend making artifacts available to the research community afterwards, thus enabling the benefits mentioned above, such as improved reproducibility.

Artifact Submission, Guidelines, and Process

Artifact Submission

Submission link:

Every submission must include:

  1. An abstract that briefly describes the artifact.

  2. A PDF file that describes the artifact in detail and provides instructions for using it.

  3. A URL for downloading the artifact.

  4. A PDF file of the most recent version of the accepted paper.

Artifact Packaging Guidelines

When packaging your artifact for submission, please keep the following in mind. Your artifact should be as accessible as possible to the AE committee members, and it should be easy for them to make quick progress in investigating it. Provide some simple scenarios that describe concretely how the artifact is intended to be used; for a tool, this includes the specific inputs to provide or actions to take, and the expected output or behavior in response. In addition to these tightly controlled scenarios that you prepare for the AE committee members to try out, it is very useful to suggest some variations along the way, so that the committee members can see that the artifact is robust enough to tolerate a few experiments. For artifacts that are tools, a convenient way for reviewers to learn about your artifact is a video showing you using it in a simple scenario, with verbal commentary explaining what is going on.

To avoid problems with software dependencies and installation, it may be very useful if you provide the artifact installed and ready to use on a virtual machine (for example, VirtualBox, VMware, or a similar widely available platform). The artifact must be made available as a single, self-contained archive file, using a widely supported archive format such as zip or a compressed tar format (e.g., tgz). Please use widely supported open formats for documents, and preferably the CSV or JSON format for data.
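The single-archive requirement above can be illustrated with a minimal shell sketch. All file and directory names here (`my-artifact/`, `getting-started.md`, etc.) are hypothetical placeholders, not names prescribed by the call; the point is only that everything ends up in one self-contained archive in a widely supported format.

```shell
set -e

# Hypothetical artifact layout; adapt names to your own artifact.
mkdir -p my-artifact
printf 'See getting-started.md for instructions.\n' > my-artifact/README.txt
printf 'Step 1: run the provided scenario script.\n' > my-artifact/getting-started.md

# Package everything as a single, self-contained tgz archive,
# as the call requires (a zip archive would work equally well).
tar -czf my-artifact.tgz my-artifact

# Sanity-check: list the archive contents to confirm it unpacks cleanly.
tar -tzf my-artifact.tgz
```

Before submitting, it is worth extracting the archive on a clean machine (or fresh VM) to confirm that no files outside the archive are needed.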

As an alternative to packaging artifacts independently, we introduce in 2020 the option to use the hosting platform Nextjournal. To promote artifact reproducibility, Nextjournal provides an infrastructure for building and hosting artifacts in notebooks, which can run in different environments. Authors who create their artifacts using Nextjournal should submit a link to the notebook in which their artifact runs. Authors benefit from the simplified artifact creation process and from wider visibility, because artifacts can be inspected and modified online. More information and guidelines on how to use Nextjournal can be found here: Nextjournal will provide active support to artifact authors during the submission period (refer to the “Get Help” section on the Nextjournal page).

Reviewing Process

Submitted artifacts will go through a two-phase evaluation:

  1. Kick-the-tires: Reviewers check the artifact's integrity and look for any setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that won't start, immediate crashes on the simplest example). Authors are informed of the outcome and, in case of technical problems, can help resolve them during a brief author response period.

  2. Artifact assessment: Reviewers evaluate the artifacts, checking if they live up to the expectations created by the papers.

Kick-the-Tires Response Period

Authors will be given a 4-day period to read and respond to the kick-the-tires reports on their artifacts. Authors may be asked for clarifications if the committee encountered problems that would prevent reviewers from properly evaluating the artifact.

Guidelines for Authors and Reviewers

Guidelines for Authors: When submitting artifacts, please consult the Guidelines for Packaging AEC Submissions. We encourage you to also read the HOWTO for AEC Submitters. Below we offer general information on what makes artifacts good or bad, along with suggestions for good practices. In a nutshell:

Committee members want artifacts that:

  1. Contain all dependencies (Linux container / VM)

  2. Have few setup steps

  3. Have a getting-started guide in which all instructions have been tested

  4. Include some documentation on the code and layout of the artifact

  5. Have a short run reviewers can try first (several minutes max)

  6. Show progress messages (percentage complete) during longer runs


Committee members do not want artifacts that:

  1. Download content over the Internet during experiments or tests

  2. Depend on closed-source software libraries, frameworks, operating systems, or container formats

  3. Run experiments or tests for multiple days

Authors and Reviewers of Proof Artifacts: We encourage authors and reviewers of mechanized proofs to consult the recent guidelines for submitting and reviewing proof artifacts.

Call for reviewers

This year, the Artifact Evaluation Committee (AEC) will consist primarily of experienced early-career researchers invited by the Artifact Evaluation co-chairs. In addition, to foster diversity and to train the next generation of researchers, we will also recruit AEC members through a self-nomination process. We expect to recruit PhD students (and postdocs) who have published at least one peer-reviewed publication.

If you are interested, you can nominate yourself by submitting the following form: The application deadline is Jan 10, 2020.