Call for Workshop Papers

by Clara Ayora, May 27, 2024


5th International Workshop on Quality and Measurement of Model-Driven Software Development - QUAMES 2024
https://quames.testar.org

Pittsburgh, Pennsylvania
28-31 October 2024

To be held in conjunction with the 43rd International Conference on Conceptual Modeling, ER 2024
https://sei.cmu.edu/go/er2024

Workshop Description:
The success of software development projects depends on the productivity of human resources and the efficiency of development processes to deliver high-quality products. Model-driven development (MDD) is a widely adopted paradigm that automates software generation by means of model transformations and reuse of development knowledge. The advantages of MDD have motivated the emergence of several modeling proposals and MDD tools related to different application domains and stages of the development lifecycle.

In MDD, the quality of conceptual models is critical because it directly impacts the final software systems’ quality. Therefore, it is essential to evaluate conceptual models and predict the software products’ relevant characteristics.
Additionally, MDD project management must be adapted to take into account that programming effort is replaced by a modeling effort at an earlier stage. Hence, measuring models is crucial to support cost estimation and project management. Moreover, test models are paramount for representing the exploration of the system under test and for calculating quality metrics (such as coverage and failure rates) of the developed software products.

To address these challenges, QUAMES aims to attract research on methods, procedures, techniques, and tools for measuring and evaluating the quality of conceptual models that can be used in any phase of the software development cycle. Its primary goal is to enable the development of high-quality software systems by promoting quality assurance from a modeling-based perspective. Furthermore, considering the growing use of AI to streamline software development or as an essential element within software products, we advocate that it is important to conceptualize the models used for the training and operation of these systems and to evaluate their quality. Thus, this year QUAMES also aims to discuss the challenges, benefits, and lessons learned from combining conceptual modeling and AI approaches, exploring how their synergy can impact the quality of software systems.

Topics of interest include, but are not limited to:
- Quality models for conceptual models
- Empirical evaluation of quality models
- Measures for conceptual models
- Measurement of conceptual models
- Defect detection in conceptual models
- Testing of conceptual models
- Model-based testing
- Case studies, experiments, and surveys of MDD projects
- Tools for measuring conceptual models
- Tools for quality evaluation of conceptual models
- Tools for model-based testing
- Quality of conceptual models for ethics and trustworthiness
- AI for conceptual models and conceptual models for AI
- Conceptual models for critical systems
- Conceptual models for system assurance

 

Important Dates:
Abstract Submission (Optional): July 5th, 2024
Paper Submission Deadline: July 25th, 2024
Paper Notification: August 11th, 2024
Camera-Ready Version: September 9th, 2024
QUAMES takes place on October 28th-31st, 2024

 

Workshop Organizers:
- Beatriz Marín, Universitat Politècnica de València, Spain ([email protected])
- Giovanni Giachetti, Universidad Andrés Bello, Chile and Universitat Politècnica de València, Spain
([email protected])
- Clara Ayora, Universidad Castilla – La Mancha, Spain ([email protected])
 

Program Committee (tentative list):
- Oscar Pastor. Universidad Politècnica de Valencia (Spain)
- Monique Snoeck. KU Leuven (Belgium)
- Maya Danev. University of Twente (The Netherlands)
- Tanja Vos. Universidad Politècnica de Valencia (Spain)
- Cecilia Bastarrica. Universidad de Chile (Chile)
- Ignacio Panach. Universidad de Valencia (Spain)
- Juan Cuadrado-Gallego. University of Alcala (Spain)
- Raian Ali. Bournemouth University (UK)
- Yves Wautelet. KU Leuven (Belgium)
- Jose Luis de la Vara. University of Castilla – La Mancha (Spain)
- Mehrdad Saadatmand. RISE Research Institutes of Sweden (Sweden)
- Isabel Brito. Instituto Politécnico de Beja (Portugal)
- Juan Carlos Trujillo. University of Alicante (Spain)
- Dietmar Winkler. Vienna University of Technology (Austria)
- Jolita Ralyte. University of Geneva (Switzerland)
- Shaukat Ali. Simula Research Lab (Norway)
- Estefania Serral. KU Leuven (Belgium)
- Marcela Genero. University of Castilla – La Mancha (Spain)
- René Noel. Universitat Politècnica de Valencia (Spain)
- Ignacio García. University of Castilla – La Mancha (Spain)
- Ana Paiva. University of Porto (Portugal)
- Porfirio Tramontana. University of Napoli Federico II (Italy)
- Stefan Biffl. Vienna University of Technology (Austria)

 
Submission Guidelines
Participants are invited to submit papers concerning the quality, measurement, or testing of models that can be used in MDD environments. Reports on ongoing research are also welcome.

Submit papers via EasyChair for ER 2024 to the “QUAMES Workshop Papers” Track.

Accepted papers are planned to be published in the joint workshop proceedings of the ER conference by Springer in the LNCS series. The review process is double-blind, so submissions must be anonymized. Submissions to QUAMES 2024 must be formatted according to Springer's LNCS formatting guidelines, using the LNCS style files or the Overleaf template.

All submissions will be screened by the scientific committee for their appropriateness to the workshop themes and format. Each submission will be reviewed by at least three program committee members. Authors will receive guidance on adapting their presentations to the workshop format. In case of inconclusive or conflicting reviews, internal discussions will be held to decide on the final acceptance or rejection of a paper. Papers can be accepted as full papers (no more than 16 pages) or short papers (no more than 10 pages).

At least one author of each accepted paper must register for and present the paper at the workshop.