Due to the pandemic, the conference will be held online: all speakers will present their work remotely over Zoom.
Sat 27 Mar 2021
The First International Conference on Code Quality (ICCQ) is a one-day computer science event organized in cooperation with the IEEE Russia Section C Chapter and focused on static analysis, program verification, bug detection, and software maintenance.
Zhang Yuxin (Chair)
CTO of Huawei Cloud
CEO of SberCloud
Sergey Zykov (Chair)
And in alphabetical order:
University of Athens
Middle East Technical University
Laura M. Castro
Universidade da Coruña
University of Potsdam
University of Minnesota
University of Potsdam
University of Edinburgh
Carnegie Mellon University
Alexander K. Petrenko
Victoria University of Wellington
University of Paderborn
University of Technology Sydney
University of Utah
University of Leeds
Object Guild BV
Paper/abstract submission: 18 Dec 2020 (anywhere on Earth)
Author notification: 10 Feb 2021
Camera-ready submissions: 28 Feb 2021
Conference: 27 Mar 2021
Subscribe to our YouTube channel to watch us live.
10:00 (Moscow time) Yegor Bugayenko Opening
10:15 Zhang Yuxin Steering Welcome Speech
10:30 Sergey Zykov PC Welcome Speech
11:30 Veselin Raychev Invited Paper: Learning to find bugs and code quality problems - what worked and what not?
12:00 Coffee Break
13:00 Christina Peterson An Efficient Dynamic Analysis Tool for Checking Durable Linearizability
13:30 Sriteja Kummita Qualitative and Quantitative Analysis of Callgraph Algorithms for Python
14:30 Ekaterina Garmash Exploring the Effect of NULL Usage in Source Code
15:00 Muntazir Fadhel Striffs: Architectural Component Diagrams for Code Reviews
15:30 Tiago Espinha Gasiba Raising Security Awareness using Cybersecurity Challenges in Embedded Programming Courses
Keynotes and Invited Talks
Learning to find bugs and code quality problems - what worked and what not? Veselin Raychev
The recent growth of open source repositories and deep learning models has brought big promises for the next generation of programming tools that can automate or significantly improve the software development process. Yet such tools are still rare, and the machine learning components in them are not always apparent to their users. The currently most useful machine-learning-for-code techniques also do not come from organizations such as Microsoft, Google, DeepMind, Facebook, OpenAI, or Nvidia, which have invested the most in deep neural techniques such as huge neural networks. This probably means either that many of these coding problems are significantly different from other hot topics in deep learning, such as image processing, or that it is much more difficult to collect datasets that would lead to similarly successful tools. In this work, we study the results in the literature on the topic and discuss ways to address these shortcomings.
We received 23 submissions, of which 6 were desk-rejected. Each remaining paper received at least three reviews from PC members. 6 papers were accepted (25% acceptance ratio):
An Efficient Dynamic Analysis Tool for Checking Durable Linearizability Christina Peterson and Damian Dechev
Designing efficient and correct durable data structures is indispensable because Non-Volatile Memory (NVM) is positioned as a successor to DRAM due to its energy efficiency and reliability. The challenge in ensuring correctness for efficient durable data structures is that caches and registers are expected to remain volatile, and the explicit cache line flush and barrier instructions are expensive. As a result, cache line flushes and barriers are used sparingly, leading to potential inconsistencies in the recoverable state of the data structure. Crash consistency tools are available to ensure recoverability to a consistent state, but these tools are not able to check the correctness of data structure semantics. Furthermore, the formal logic proposed to verify correctness conditions such as durable linearizability involves labor-intensive mechanical proofs using a theorem prover. In this paper, we present the first dynamic analysis tool that checks durable linearizability at runtime. Our proposed tool, VSV-D, uses a vector space analysis to achieve a worst-case O(n²) time complexity. We extend the analysis to transactional correctness to enable VSV-D to check durable transactional data structures. Our experimental evaluation applies VSV-D to check the correctness of a large variety of durable data structures, including log-free data structures, link-free data structures, Romulus, OneFile, PMDK, and PETRA.
Qualitative and Quantitative Analysis of Callgraph Algorithms for Python Sriteja Kummita, Goran Piskachev, Johannes Spaeth and Eric Bodden
As one of the most popular programming languages, Python has become a relevant target language for static analysis tools. The primary data structure for performing an inter-procedural static analysis is the callgraph (CG), which links call sites to potential call targets in a program. There exist multiple algorithms for constructing callgraphs, tailored to specific languages. However, comparatively few implementations target Python. Moreover, there is still a lack of empirical evidence as to how these few algorithms perform in terms of precision and recall. This paper thus presents eval_CG, an extensible framework for comparative analysis of Python callgraphs. We conducted two experiments which run the CG algorithms on different Python programming constructs and real-world applications. In both experiments, we evaluate three CG generation frameworks, namely Code2flow, Pyan, and Wala. We record precision, recall, and running time, and identify sources of unsoundness of each framework. Our evaluation shows that none of the current CG construction frameworks produces a sound CG. Moreover, the static CGs contain many spurious edges. Code2flow is also comparatively slow. Hence, further research is needed to support CG generation for Python programs.
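To give a flavor of what callgraph construction involves, here is a minimal sketch built on Python's standard-library `ast` module. It is an illustration only, not any of the frameworks evaluated in the paper: it resolves direct calls to named functions and ignores methods, imports, and higher-order calls, which are exactly the kinds of constructs on which real CG algorithms differ in precision and soundness.

```python
# Minimal static callgraph sketch: map each function to the names it calls.
import ast
from collections import defaultdict

def build_callgraph(source: str) -> dict:
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Collect every direct call by simple name inside this function.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    graph[node.name].add(inner.func.id)
    return dict(graph)

code = """
def helper():
    pass

def main():
    helper()
    print("done")
"""
print(build_callgraph(code))  # {'main': {'helper', 'print'}}
```

Even this toy example shows a source of imprecision: a name-based resolver cannot tell whether `helper` has been rebound at runtime, which is one reason sound CG construction for Python is hard.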
Exploring the Effect of NULL Usage in Source Code Ekaterina Garmash and Anton Cheshkov
In this paper we propose a methodology to reason about code smells, or anti-patterns, using causal inference (CI). We specifically focus on one notorious case of code smells, the usage of NULL. CI provides methods to estimate the magnitude of the effect of certain actions on some target value from observational data only. Applying these methods to the domain of software engineering, we would like to estimate the average effect of using a certain pattern on some characterization of code quality. If code quality systematically deteriorates when a certain pattern is used, this can serve as empirical confirmation that it is in fact an anti-pattern. We narrow down the problem and study the effect of using a selection of NULL-based patterns on code complexity metrics. The experiments on open source Java code show preliminary confirmation that NULL is in fact an anti-pattern, but the results are not always consistent. We think that the main reason for such inconclusive results is that our underlying causal model is too simplistic. As the next step of our follow-up research, we will work on improving the causal model. We believe that this can be a promising line of research towards building an explanatory theory of software engineering, since it does not rely on expensive controlled experiments with human experts and can make use of abundant open source observational data resources, such as GitHub.
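The core quantity the abstract describes, the average effect of a pattern on a quality metric, can be sketched in a few lines. The sketch below is deliberately naive and uses made-up numbers: it compares mean complexity between functions that use a pattern and those that do not, with no adjustment for confounders such as function size. That missing adjustment is precisely the kind of oversimplification the authors identify in their own causal model.

```python
# Naive average-treatment-effect estimate: difference in mean complexity
# between "treated" functions (using the pattern) and "control" functions.
def average_treatment_effect(samples):
    # samples: list of (uses_pattern: bool, complexity: float)
    treated = [c for used, c in samples if used]
    control = [c for used, c in samples if not used]
    return sum(treated) / len(treated) - sum(control) / len(control)

data = [
    (True, 12.0), (True, 9.0), (True, 15.0),   # hypothetical NULL-using functions
    (False, 7.0), (False, 8.0), (False, 6.0),  # hypothetical NULL-free functions
]
print(average_treatment_effect(data))  # 5.0
```

A positive value suggests the pattern co-occurs with higher complexity, but without confounder adjustment it is correlation, not a causal estimate.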
Striffs: Architectural Component Diagrams for Code Reviews Muntazir Fadhel and Emil Sekerinski
Despite recent advancements in automated code quality and defect finding tools, developers spend a significant amount of time completing code reviews. Code understandability is a key contributor to this phenomenon, since engineers need to understand both microscopic and macroscopic level details of the code under review. Existing tools for code reviews including diffing, inline commenting and syntax highlighting provide limited support for the macroscopic understanding needs of reviewers. When reviewing code for architectural and design quality, such tools do not enable reviewers to understand the code from a top-down lens which the original architects of the code would have likely used to design the system. To overcome these limitations and to complement existing approaches, we introduce structure diff (striff) diagrams. Striffs provide reviewers with an architectural understanding of the incoming code in relation to the existing system, allowing reviewers to gain a more complete view of the scope and impact of the proposed code changes in a code review.
Raising Security Awareness using Cybersecurity Challenges in Embedded Programming Courses Tiago Espinha Gasiba, Samra Hodzic, Ulrike Lechner and Maria Pinto-Albuquerque
Security bugs are errors in code that, when exploited, can lead to serious software vulnerabilities. These bugs could allow an attacker to take over an application and steal information. One of the ways to address this issue is by means of awareness training. The Sifu platform was developed in the industry, for the industry, with the aim to raise software developers’ awareness of secure coding. This paper extends the Sifu platform with three challenges that specifically address embedded programming courses, and describes how to implement these challenges, while also evaluating the usefulness of these challenges to raise security awareness in an academic setting. Our work presents technical details on the detection mechanisms for software vulnerabilities and gives practical advice on how to implement them. The evaluation of the challenges is performed through two trial runs with a total of 16 participants. Our preliminary results show that the challenges are suitable for academia, and can even potentially be included in official teaching curricula. One major finding is an indicator of the lack of awareness of secure coding by undergraduates. Finally, we compare our results with previous work done in the industry and extract advice for practitioners.
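The paper's detection mechanisms are far more sophisticated, but the general idea of automatically flagging insecure constructs in submitted code can be sketched as follows. This toy checker, an assumption of ours rather than the Sifu platform's actual mechanism, flags calls to C functions that secure embedded coding standards commonly ban.

```python
# Toy insecure-call detector: report banned C function calls with line numbers.
import re

BANNED = {"strcpy", "strcat", "sprintf", "gets"}

def find_unsafe_calls(c_source: str):
    pattern = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")
    return [(m.group(1), c_source[:m.start()].count("\n") + 1)
            for m in pattern.finditer(c_source)]

snippet = '''
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* unbounded copy: classic CWE-120 */
}
'''
print(find_unsafe_calls(snippet))  # [('strcpy', 4)]
```

In a challenge platform, such a check would be one of many: pattern-based detectors catch known-dangerous APIs cheaply, while deeper issues require compiler-based or dynamic analysis.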
Instructions for Authors
Submissions must be in PDF, printable in black and white on US Letter sized paper. All submissions must adhere to the acmart sigplan template (two columns, 11pt font size).
Submitted papers must be at least 4 and at most 16 pages long, including bibliographical references and appendices.
Submissions that do not meet the above requirements will be rejected without review.
Higher School of Economics
Ivannikov Institute for System Programming of the RAS
Moscow State University
Moscow Institute of Physics and Technology
RUSSOFT, a non-profit union of software companies
SECR, a well-known Russian software engineering conference
Huawei, a global provider of ICT infrastructure and smart devices
SberCloud, a cloud platform of Sberbank Group
Yandex, a Russian intelligent technology company
Kaspersky, a multinational cybersecurity and anti-virus provider
These people are making ICCQ:
If you are interested in helping us and joining the team of organizers, please email firstname.lastname@example.org.
The conference will be streamed live on YouTube, and you will be able to watch it without registration. However, registration is mandatory if you want to attend the event and enjoy a tasty lunch with our speakers (thanks to our partners).
Got questions or suggestions?
For additional information or answers to your questions, please write to email@example.com or, better, join our Telegram chat and ask there.