The poster track of SEMLA 2024 aims to provide an engaging venue for presenting and discussing research results and practices. As part of the SEMLA event, poster presenters will connect directly with researchers and practitioners in the AI and SE fields.
We call for posters related to the topics of SEMLA 2024, as found on the main page. In particular, we highlight the main theme of “Verification, Validation, and Operations of AI Systems” as a topic of interest. Posters may present, but are not limited to, new ideas or roadmaps, tools, practical applications, initiatives, or summaries of existing work.
🏆🏆 A select number of presenters with accepted posters will be nominated for travel grants or best poster awards. 🍾🍾 Sponsored by Videns Analytics, the National Bank of Canada, and Ericsson, SEMLA will support travel expenses of up to 500 CAD for individuals from outside Montreal, based on the merit of their posters and their needs (further details are provided below). 🍾🍾
Submission requirements
Please use the following form to submit your poster. You will be asked to provide the following information:
- Poster title
- Poster abstract (up to 200 words)
- Author name(s) (with the presenter(s) highlighted) and affiliation
Deadlines
Submission deadline: May 14th, 2024
Acceptance notifications: May 28th, 2024
Evaluation criteria
Poster proposals are evaluated based on the quality of the abstract and the relevance to the SEMLA audience.
Format and requirements
The poster should be A0 size (841 mm x 1189 mm). There is no required template. The article “Research posters 101” (https://dl.acm.org/doi/10.1145/332132.332138) offers relevant advice on poster preparation.
Student travel grants
A select group of accepted poster presenters will be eligible for travel grants or best poster awards. SEMLA will cover travel expenses up to 500 CAD for participants traveling from outside Montreal, with awards based on both the quality of their posters and demonstrated need.
The purpose of these grants is to promote the dissemination of scientific knowledge and encourage research interests among students in software engineering and machine learning.
How to apply for the travel grant:
Please indicate your interest in the travel grant on the poster submission form.
Please contact Prof. Bentley Oakes (bentley.oakes@polymtl.ca) for any questions.
Abstract: Anomaly detection plays an important role in the management of modern large-scale distributed systems. Logs, which record system runtime information, are widely used for anomaly detection. However, unsupervised anomaly detection algorithms face challenges in complex systems, which generate vast amounts of multivariate time series data, and timely detection is crucial for managing these systems effectively, minimizing downtime, and supporting incident management. To address these challenges, a method called Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED) has been developed for detecting anomalies in CN PTC system logs. MSCRED leverages multivariate time series data to perform anomaly detection and diagnosis. It creates multi-scale signature matrices that capture different levels of system status across time steps, then uses a convolutional encoder to capture inter-sensor correlations and a Convolutional Long Short-Term Memory (ConvLSTM) network with attention mechanisms to capture temporal patterns.
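The signature matrices at the heart of MSCRED can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes signature matrices are pairwise inner products of sensor readings over sliding windows of several lengths (the "scales"); the window lengths and sensor count below are arbitrary example values.

```python
import numpy as np

def signature_matrix(x, t, w):
    """Pairwise inner-product matrix over the window of length w ending at t.

    x: array of shape (n_sensors, T), a multivariate time series.
    Returns an (n_sensors, n_sensors) matrix summarizing inter-sensor
    correlations in that window.
    """
    window = x[:, t - w + 1 : t + 1]      # (n_sensors, w)
    return window @ window.T / w          # (n_sensors, n_sensors)

def multi_scale_matrices(x, t, windows=(10, 30, 60)):
    """Stack signature matrices at several window lengths (scales)."""
    return np.stack([signature_matrix(x, t, w) for w in windows])

# Toy data: 5 sensors, 200 time steps (hypothetical example values).
rng = np.random.default_rng(0)
series = rng.normal(size=(5, 200))
mats = multi_scale_matrices(series, t=199)
print(mats.shape)  # one 5x5 matrix per scale: (3, 5, 5)
```

In MSCRED, stacks like `mats` (computed at successive time steps) are fed to the convolutional encoder, and the ConvLSTM models how they evolve over time; anomalies surface as large reconstruction errors.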
Abstract: Language models such as RoBERTa, CodeBERT, and GraphCodeBERT have received much attention in the past three years for various Software Engineering tasks. Although these models achieve state-of-the-art performance on many SE tasks, such as code summarization, they often need to be fully fine-tuned for each downstream task. Is there a better way to fine-tune these models that requires training fewer parameters? Can we impose new information on current models without pre-training them again? How do these models perform for different programming languages, especially low-resource ones with less training data available? How can we use knowledge learned from other programming languages to improve performance on low-resource languages? This talk will review a series of experiments and our contributions toward answering these questions.
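One family of answers to the "training fewer parameters" question is low-rank adaptation, where a frozen pretrained weight matrix is augmented with a small trainable update. The sketch below is illustrative only and is not claimed to be the talk's method; the hidden size matches RoBERTa-base (768), while the rank is an arbitrary example value.

```python
import numpy as np

d, r = 768, 8                      # hidden size (RoBERTa-base) and adapter rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))        # frozen pretrained weight matrix
A = np.zeros((d, r))               # trainable low-rank factor (zero init,
B = rng.normal(size=(r, d))        # so the update starts as a no-op)

def effective_weight(W, A, B):
    """Frozen weight plus a trainable low-rank update: W + A @ B."""
    return W + A @ B

full_params = W.size               # parameters touched by full fine-tuning
adapter_params = A.size + B.size   # parameters trained with the adapter
print(adapter_params / full_params)  # ~0.02: about 2% of the parameters
```

Only `A` and `B` would receive gradient updates during fine-tuning, which is one way to train far fewer parameters than full fine-tuning while leaving the pretrained weights untouched.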