Sat 27 Mar 2021
Moscow, Russia (and online, everywhere!)
The First International Conference on Code Quality (ICCQ) was a one-day computer science event organized in cooperation with the IEEE Russia Section C Chapter and focused on static analysis, program verification, bug detection, and software maintenance.
Watch all presentations on YouTube and subscribe to our channel so that you don’t miss the next event!
The proceedings of ICCQ were published in IEEE Xplore.
Zhang Yuxin (Chair)
CTO of Huawei Cloud
CEO of SberCloud
Sergey Zykov (Chair)
And in alphabetical order:
University of Athens
Middle East Technical University
Laura M. Castro
Universidade da Coruña
University of Potsdam
University of Minnesota
University of Potsdam
University of Edinburgh
Carnegie Mellon University
Alexander K. Petrenko
Victoria University of Wellington
University of Paderborn
University of Technology Sydney
University of Utah
University of Leeds
Object Guild BV
10:00 (Moscow time) Yegor Bugayenko Opening
10:15 Zhang Yuxin Steering Welcome Speech
10:30 Sergey Zykov PC Welcome Speech
11:30 Veselin Raychev Invited Paper: Learning to find bugs and code quality problems - what worked and what not?
12:00 Coffee Break
13:00 Christina Peterson An Efficient Dynamic Analysis Tool for Checking Durable Linearizability
13:30 Sriteja Kummita Qualitative and Quantitative Analysis of Callgraph Algorithms for PYTHON
14:30 Ekaterina Garmash Exploring the Effect of NULL Usage in Source Code
15:00 Muntazir Fadhel Striffs: Architectural Component Diagrams for Code Reviews
Tiago Espinha Gasiba Raising Security Awareness using Cybersecurity Challenges in Embedded Programming Courses
All 11 videos are in this playlist.
Keynotes and Invited Talks
Learning to find bugs and code quality problems - what worked and what not? Veselin Raychev
The recent growth of open-source repositories and deep learning models has brought big promises for the next generation of programming tools that can automate or significantly improve the software development process. Yet such tools are still rare, and the machine learning components in them are not always apparent to their users. The currently most useful techniques in machine learning for code also do not come from the organizations, such as Microsoft, Google, DeepMind, Facebook, OpenAI, or NVIDIA, that have invested the most in deep neural techniques such as huge neural networks. This probably means either that many of these coding problems are significantly different from other hot topics in deep learning, such as image processing, or that it is much more difficult to collect datasets that would result in similarly successful tools. In this work, we study the results in the literature on the topic and discuss ways to address these shortcomings.
We received 23 submissions, of which six were desk-rejected. Each remaining paper received at least three reviews from PC members. Six papers were accepted (a 25% acceptance ratio):
An Efficient Dynamic Analysis Tool for Checking Durable Linearizability Christina Peterson and Damian Dechev
Designing efficient and correct durable data structures is indispensable because Non-Volatile Memory (NVM) is positioned as a successor to DRAM due to its energy efficiency and reliability. The challenge in ensuring correctness for efficient durable data structures is that caches and registers are expected to remain volatile, and the explicit cache-line flush and barrier instructions are expensive. As a result, cache-line flushes and barriers are used sparingly, leading to potential inconsistencies in the recoverable state of the data structure. Crash consistency tools are available to ensure recoverability to a consistent state, but these tools are not able to check the correctness of data structure semantics. Furthermore, the formal logic proposed to verify correctness conditions such as durable linearizability involves labor-intensive mechanical proofs using a theorem prover. In this paper, we present the first dynamic analysis tool that checks durable linearizability at runtime. Our proposed tool, VSV-D, uses a vector space analysis to achieve a worst-case O(n²) time complexity. We extend the analysis to transactional correctness to enable VSV-D to check durable transactional data structures. Our experimental evaluation applies VSV-D to check the correctness of a large variety of durable data structures, including log-free data structures, link-free data structures, Romulus, OneFile, PMDK, and PETRA.
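The core idea of checking linearizability at runtime can be illustrated with a small brute-force checker. This is a sketch of the general technique, not the paper's VSV-D tool (which achieves O(n²) via its vector space analysis, whereas this sketch is exponential); the `Op` class and the read/write register spec are our own illustrative names.

```python
from itertools import permutations

class Op:
    """One completed operation: invocation time, response time, name, value."""
    def __init__(self, inv, res, name, value):
        self.inv, self.res, self.name, self.value = inv, res, name, value

def is_linearizable(history):
    """Brute-force check: does some total order of the operations respect
    real-time precedence and replay correctly against the sequential
    specification of a single read/write register?"""
    for order in permutations(history):
        # Real-time precedence: if b finished before a even started,
        # a may not be ordered before b.
        if any(b.res < a.inv
               for i, a in enumerate(order)
               for b in order[i + 1:]):
            continue
        # Replay the candidate order against the register spec.
        state, legal = None, True
        for op in order:
            if op.name == "write":
                state = op.value
            elif op.name == "read" and op.value != state:
                legal = False
                break
        if legal:
            return True
    return False

# A read overlapping a write may return the new value: linearizable.
print(is_linearizable([Op(0, 2, "write", 1), Op(1, 3, "read", 1)]))    # True
# A read starting after the write completed must not see the initial
# value: not linearizable.
print(is_linearizable([Op(0, 1, "write", 1), Op(2, 3, "read", None)])) # False
```

A dynamic tool like the one described records such timed histories while the data structure runs; the paper's contribution is making the check efficient and extending it to the durable (crash-recovery) setting.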
Qualitative and Quantitative Analysis of Callgraph Algorithms for PYTHON Sriteja Kummita, Goran Piskachev, Johannes Spaeth and Eric Bodden
As one of the most popular programming languages, Python has become a relevant target language for static analysis tools. The primary data structure for performing an inter-procedural static analysis is the call graph (CG), which links call sites to potential call targets in a program. Multiple algorithms exist for constructing call graphs, tailored to specific languages; however, comparatively few implementations target Python. Moreover, there is still a lack of empirical evidence as to how these few algorithms perform in terms of precision and recall. This paper thus presents eval_CG, an extensible framework for the comparative analysis of Python call graphs. We conducted two experiments which run the CG algorithms on different Python programming constructs and real-world applications. In both experiments, we evaluate three CG generation frameworks, namely Code2flow, Pyan, and WALA. We record precision, recall, and running time, and identify sources of unsoundness in each framework. Our evaluation shows that none of the current CG construction frameworks produces a sound CG. Moreover, the static CGs contain many spurious edges, and Code2flow is also comparatively slow. Hence, further research is needed to support CG generation for Python programs.
Exploring the Effect of NULL Usage in Source Code Ekaterina Garmash and Anton Cheshkov
In this paper, we propose to use causal inference (CI) to reason about code smells, or anti-patterns. CI provides methods to estimate the magnitude of the effect of certain interventions on the studied system of variables based on observational data only. We would like to estimate the average effect of using a certain pattern on some code quality characteristic. If a code quality characteristic systematically deteriorates when a certain pattern is used, this can serve as confirmation that the pattern is in fact an anti-pattern. In the present study, we narrow down the scope and focus on one notorious case of code smells, the usage of NULL. We investigate the effect of using a selection of NULL-based patterns on code complexity metrics. The experiments on open-source Java code show preliminary confirmation that NULL is in fact an anti-pattern under certain conditions. We come to the conclusion that in order to fully answer the research question, a better underlying model of the code development process is needed. Moreover, in order to confirm the detrimental effect of patterns, one should investigate their effect on a more diverse set of code characteristics, in addition to code complexity metrics.
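The paper studies Java, but the kind of NULL-based pattern it targets is easy to sketch in any language. Below is a toy Python detector (our own illustration, not the authors' tooling or metric) that counts comparisons against `None`; each such check typically adds a conditional branch, which is the sort of complexity effect that null usage induces.

```python
import ast

def count_none_checks(source):
    """Count comparisons against None (e.g. `x is None`, `x == None`).
    A rough, hypothetical proxy for the extra branching that
    null-based patterns add to code."""
    count = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            if any(isinstance(c, ast.Constant) and c.value is None
                   for c in node.comparators):
                count += 1
    return count

with_null = """
def find(users, name):
    user = users.get(name)
    if user is None:
        return None
    return user.email
"""
no_null = """
def find(users, name):
    return users[name].email
"""
print(count_none_checks(with_null), count_none_checks(no_null))  # 1 0
```

A causal-inference study like the paper's would go further: rather than merely counting such patterns, it estimates the average effect of their presence on complexity metrics while controlling for confounders in the development process.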
Striffs: Architectural Component Diagrams for Code Reviews Muntazir Fadhel and Emil Sekerinski
Despite recent advancements in automated code quality and defect finding tools, developers spend a significant amount of time completing code reviews. Code understandability is a key contributor to this phenomenon, since engineers need to understand both microscopic and macroscopic level details of the code under review. Existing tools for code reviews, including diffing, inline commenting, and syntax highlighting, provide limited support for the macroscopic understanding needs of reviewers. When reviewing code for architectural and design quality, such tools do not enable reviewers to understand the code through the top-down lens that the original architects of the code would likely have used to design the system. To overcome these limitations and to complement existing approaches, we introduce structure diff (striff) diagrams. Striffs provide reviewers with an architectural understanding of the incoming code in relation to the existing system, allowing reviewers to gain a more complete view of the scope and impact of the proposed code changes in a code review.
Raising Security Awareness using Cybersecurity Challenges in Embedded Programming Courses Tiago Espinha Gasiba, Samra Hodzic, Ulrike Lechner and Maria Pinto-Albuquerque
Security bugs are errors in code that, when exploited, can lead to serious software vulnerabilities. These bugs could allow an attacker to take over an application and steal information. One of the ways to address this issue is by means of awareness training. The Sifu platform was developed in the industry, for the industry, with the aim of raising software developers' awareness of secure coding. This paper extends the Sifu platform with three challenges that specifically address embedded programming courses, describes how to implement these challenges, and evaluates their usefulness for raising security awareness in an academic setting. Our work presents technical details on the detection mechanisms for software vulnerabilities and gives practical advice on how to implement them. The evaluation of the challenges is performed through two trial runs with a total of 16 participants. Our preliminary results show that the challenges are suitable for academia and can even potentially be included in official teaching curricula. One major finding is an indication that undergraduates lack awareness of secure coding. Finally, we compare our results with previous work done in the industry and extract advice for practitioners.
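The detection mechanisms behind such challenges can start from something as simple as scanning a submitted C program for APIs that secure embedded coding guidelines ban. The toy checker below is our own illustration, not the Sifu platform's actual engine; the banned list and advice strings are examples in the spirit of guidelines like CERT C.

```python
import re

# Functions commonly flagged by secure C coding guidelines.
# Illustrative subset only -- not Sifu's actual rule set.
BANNED = {
    "gets":    "unbounded read, prefer fgets",
    "strcpy":  "no bounds check, prefer snprintf or strncpy",
    "sprintf": "no bounds check, prefer snprintf",
}

def scan_c_source(source):
    """Return (line_number, function, advice) for each banned call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func, advice in BANNED.items():
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func, advice))
    return findings

submission = """\
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);
}
"""
for lineno, func, advice in scan_c_source(submission):
    print(f"line {lineno}: {func}: {advice}")
```

A real challenge platform layers much more on top (compilation, sanitizers, exploit-based unit tests), but pattern-based findings like these are what drive the immediate feedback that makes such exercises effective for awareness training.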
Higher School of Economics
Ivannikov Institute for System Programming of the RAS
Moscow State University
Moscow Institute of Physics and Technology
RUSSOFT, a non-profit union of software companies
SECR, a famous Russian software conference
Huawei, a global provider of ICT infrastructure and smart devices
SberCloud, a cloud platform of Sberbank Group
Yandex, a Russian intelligent technology company
Kaspersky, a multinational cybersecurity and anti-virus provider
These people were making ICCQ’21:
If you are interested in helping us and joining the team of organizers, please email firstname.lastname@example.org.