Runtime Verification 2022 - Call for Papers

by Volker Stolz, Jan. 12, 2022

We are pleased to invite you to submit papers for the

22nd International Conference on Runtime Verification (RV'22),
https://rv22.gitlab.io

which will take place as part of the Computational Logic Autumn Summit CLAS 2022
in Tbilisi, Georgia, on 28-30 September 2022 (http://viam.science.tsu.ge/clas2022/).

Dates

Paper submission: Thursday, 5 May 2022
Notification: Wednesday, 22 June 2022
Camera-ready: Sunday, 24 July 2022
Conference: 28-30 September 2022

Deadlines expire at 23:59 anywhere on Earth (AoE) on the dates listed above.

Conference Objectives and Scope

Runtime verification is concerned with the monitoring and analysis of the
runtime behaviour of software and hardware systems. Runtime verification
techniques are crucial for system correctness, reliability, and robustness;
they provide an additional level of rigor and effectiveness compared to
conventional testing and are generally more practical than exhaustive formal
verification. Runtime verification can be used prior to deployment, for testing,
verification, and debugging purposes, and after deployment for ensuring
reliability, safety, and security, and for providing fault containment,
recovery, and online system repair.
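
To make the idea concrete for newcomers, here is a minimal, hypothetical sketch
of an online monitor in Python; the property and all names are illustrative and
not tied to any particular RV tool. It observes a stream of events and reports
a violation as soon as the safety property "no 'write' after 'close'" fails:

    from enum import Enum

    class Verdict(Enum):
        OK = "ok"                   # no violation observed so far
        VIOLATION = "violation"     # property irrevocably violated

    class NoWriteAfterCloseMonitor:
        """Finite-state monitor: once 'close' is seen, 'write' is forbidden."""

        def __init__(self):
            self.closed = False
            self.verdict = Verdict.OK

        def step(self, event):
            # Violations of safety properties are irrevocable.
            if self.verdict is Verdict.VIOLATION:
                return self.verdict
            if event == "close":
                self.closed = True
            elif event == "write" and self.closed:
                self.verdict = Verdict.VIOLATION
            return self.verdict

    if __name__ == "__main__":
        monitor = NoWriteAfterCloseMonitor()
        for ev in ["open", "write", "close", "write"]:
            print(ev, monitor.step(ev).value)   # final step prints 'violation'

Instrumenting a system to emit such events, and deriving monitors like this one
automatically from declarative specifications, are among the topics listed below.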

The topics of the conference include, but are not limited to:
- specification languages for monitoring
- monitor construction techniques
- program instrumentation
- logging, recording, and replay
- combination of static and dynamic analysis
- specification mining and machine learning over runtime traces
- monitoring techniques for concurrent and distributed systems
- runtime checking of privacy and security policies
- metrics and statistical information gathering
- program/system execution visualization
- fault localization, containment, resilience, recovery and repair
- systems with learning-enabled components
- dynamic type checking and assurance cases
- runtime verification for autonomy and runtime assurance

Application areas of runtime verification include cyber-physical systems,
autonomous systems, safety/mission critical systems, enterprise and systems
software, cloud systems, reactive control systems, health management and
diagnosis systems, and system security and privacy.

Papers

There are four categories of papers that can be submitted: regular, short,
tool demo, and benchmark papers. Papers in each category will be reviewed by at
least three members of the Program Committee.

- Regular Papers (up to 16 pages, not including references) should present
 original unpublished results. We welcome theoretical papers, system papers,
 papers describing domain-specific variants of RV, and case studies on runtime
 verification.

- Short Papers (up to 8 pages, not including references) may present novel but
 not necessarily thoroughly worked-out ideas, for example, emerging runtime
 verification techniques and applications, or techniques and applications that
 establish relationships between runtime verification and other domains.

- Tool Demonstration Papers (up to 8 pages, not including references) should
 present a new tool, a new tool component, or novel extensions to existing
 tools supporting runtime verification. The paper must include information on
 tool availability, maturity, and selected experimental results, and it should
 provide a link to a website containing the theoretical background and a user
 guide. Furthermore, we strongly encourage authors to make their tools and
 benchmarks available with their submission.

- Benchmark Papers (up to 8 pages, not including references) should describe a
 benchmark, suite of benchmarks, or benchmark generator useful for evaluating
 RV tools. Papers should describe what the benchmark consists of and its
 purpose (i.e., its domain), how to obtain and use the benchmark, and an
 argument for the usefulness of the benchmark to the broader RV community,
 and they may include any existing results produced using the benchmark. We are
 interested in both benchmarks pertaining to real-world scenarios and those
 containing synthetic data designed to achieve interesting properties. Broader
 definitions of benchmark, e.g., for generating specifications from data or
 diagnosing faults, are within scope. We encourage benchmarks that are
 tool-agnostic, especially if they have been used to evaluate multiple tools.
 We also welcome benchmarks that contain verdict labels, together with rigorous
 arguments for the correctness of these verdicts, and benchmarks that are
 demonstrably challenging for state-of-the-art tools. Benchmark papers must be
 accompanied by an easily accessible and usable benchmark submission. Papers
 will be evaluated by a separate benchmark evaluation panel that will assess
 the benchmark's relevance, clarity, and utility as communicated by the
 submitted paper.

The Program Committee of RV 2022 will give a Springer-sponsored Best Paper Award to one eligible regular paper.

Submissions

All papers and tutorials will appear in the conference proceedings in an LNCS
volume in the Formal Methods sub-line. Submitted papers and tutorials must use
the LNCS/Springer style detailed here: http://www.springer.de/comp/lncs/authors.html.
Springer encourages authors to include their ORCIDs in their papers
(https://www.springer.com/gp/authors-editors/orcid).

Papers must present original work and must not be submitted for publication elsewhere.
Papers must be written in English and submitted electronically (in PDF format)
using the EasyChair submission page here:
https://easychair.org/conferences/?conf=rv2022.

The page limits above include all text and figures, but exclude references.
Additional details omitted due to space limitations may be included in a
clearly marked appendix, which will be read at the reviewers' discretion but
will not be included in the proceedings.

At least one author of each accepted paper and tutorial must register and attend
RV'22 to present.

Program Committee Chairs

Thao Dang (Verimag/Université Grenoble Alpes, FR)
Volker Stolz (Western Norway University of Applied Sciences, NO)

Steering Committee

Howard Barringer, University of Manchester
Ezio Bartocci, Technical University of Vienna
Saddek Bensalem, Verimag and University Joseph Fourier (co-chair)
Ylies Falcone, University of Grenoble Alpes/INRIA Grenoble
Klaus Havelund, NASA’s Jet Propulsion Laboratory
Insup Lee, University of Pennsylvania
Martin Leucker, University of Lübeck
Giles Reger, University of Manchester
Grigore Rosu, University of Illinois, Urbana-Champaign
Oleg Sokolsky, University of Pennsylvania (co-chair)