In this talk, we report on an ongoing project, VOCaL, which aims at building a formally verified, general-purpose OCaml library of data structures and algorithms. We present the various ingredients of this project. First, we introduce GOSPEL, a specification language for OCaml. It resembles existing behavioral specification languages (e.g., JML, ACSL, SPARK), yet departs from them on several points. Second, we describe techniques and tools to perform deductive verification of GOSPEL-specified OCaml code. Currently, this is built on top of three existing tools, namely Why3, CFML, and Coq. Last, we report on the successful verification of the first OCaml modules of the VOCaL library, including general-purpose data structures such as resizable arrays, hash tables, priority queues, and union-find.
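As a flavour of what a GOSPEL contract looks like, here is a minimal sketch in the style of the public GOSPEL documentation; the function and its body are our own illustrative choices, not code from the VOCaL library:

```ocaml
(* Integer division with remainder.  The GOSPEL contract is the
   special (*@ ... *) comment attached to the declaration: it names
   the results, states preconditions on the arguments, and gives
   postconditions that tools such as Why3 can then try to prove. *)
let euclidean_division x y = (x / y, x mod y)
(*@ q, r = euclidean_division x y
    requires y > 0
    requires x >= 0
    ensures  x = q * y + r
    ensures  0 <= r < y *)
```

Note that the specification lives in a comment, so annotated code remains ordinary OCaml that compiles unchanged; only the verification tools interpret the `(*@ ... *)` payload.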
Jean-Christophe Filliâtre is a senior researcher at CNRS. He works at Université Paris-Sud in Orsay, France. He is chair of IFIP Working Group 1.9/2.15 “Verified Software”. His Ph.D. (1999) was devoted to the verification of imperative programs using interpretations in type theory, within the framework of the Coq proof assistant. This work soon evolved into a tool of its own, known as Why3, in which deductive program verification can be performed with the help of many off-the-shelf theorem provers. Jean-Christophe Filliâtre has been working on Why3 since 2001.
Industry 4.0, the new wave of Smart Manufacturing in Europe and globally, relies on a Digital Thread to connect the data and processes behind smarter products, smarter production, and smarter integrated ecosystems. But what is the Digital Thread? The talk will discuss a few key questions about modelling, the nature of models, and the use of models that arose from experience during the first two years of Confirm, the Irish Centre for Smart Manufacturing. It will also give a few examples of how this new model-powered and integrated thinking can disrupt the status quo, enable a better understanding, and deliver more automated management of the many cross-dimensional issues on which future connected software and systems will depend.
Tiziana Margaria is Chair of Software Systems and Head of the Dept. of Computer Science and Information Systems at the University of Limerick (Ireland). She is also a Principal Investigator of Lero, The Irish Software Research Center, where she heads the Committee on International Relations Development, and a Principal Investigator of Confirm, the new SFI Research Centre on Smart Manufacturing. In Lero, she leads research projects on scientific workflows and model-driven HW/SW cybersecurity. In Confirm, she co-leads the Hub on Cyberphysical Systems and applies her expertise in advanced model-driven, generative, and service-oriented design to the industrial automation context. She has broad experience in the use of formal methods for high-assurance systems, in particular concerning functional verification, reliability, and compliance of complex heterogeneous systems. She is (co-)author of over 200 refereed papers in international journals and conferences and has served on more than 100 program committees, over 20 times as chair. She is the creator and General Chair of ISoLA (since 2004), co-founder of STTT (Springer, 1997), and, most recently, Founding Editor of the Transactions on Foundations for Mastering Change (Springer, 2016). She is a Fellow of the Irish Computer Society and a Fellow of SDPS. She is currently Vice-President of EASST and Chair of FMICS. In EuSEM she is a member of the Research Committee SIG-TPCEC. She holds 2 USPTO patents, one of them jointly with NASA.
Deep Neural Networks (DNNs) are increasingly used in a variety of applications that require high-assurance guarantees. Verification and understanding of DNNs are hindered by their high complexity, their opaqueness and sensitivity to small adversarial perturbations, and by the lack of high-level formal specifications. In this talk I will describe recent research work that explores techniques and tools to ensure that DNNs, and systems that use DNNs, are safe, robust, and interpretable. Research directions we are pursuing include: symbolic execution for DNN analysis, compositional approaches to improve formal SMT-based verification, property inference for DNNs, adversarial training and detection, and probabilistic reasoning for DNNs. The techniques aim to provide strong guarantees with respect to safety and robustness for DNNs, making them amenable for use in safety-critical domains, in particular autonomy. We have applied our techniques to the analysis of deep neural networks designed to operate as controllers in the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). We have also studied image classification networks (MNIST, CIFAR) and sentiment networks (for text classification).
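To make the kind of robustness property at stake concrete, the following is a small sketch of interval bound propagation, a standard, sound (if loose) technique for bounding a ReLU network's output over a whole box of inputs, such as an L-infinity ball of adversarial perturbations. The tiny network and its weights are purely illustrative; this is not the speaker's tooling, nor one of the ACAS Xu controllers.

```ocaml
(* A toy 2-layer ReLU network: two inputs, two hidden units, one output. *)
let w1 = [| [| 1.0; -1.0 |]; [| 0.5; 2.0 |] |]
let b1 = [| 0.0; -0.5 |]
let w2 = [| [| 1.0; 1.0 |] |]
let b2 = [| 0.0 |]

(* Sound bounds for w*x + b when each x.(j) ranges over [lo.(j), hi.(j)]:
   center = w * midpoint + b, radius = |w| * half-width. *)
let affine_bounds w b (lo, hi) =
  let n = Array.length w in
  let out_lo = Array.make n 0.0 and out_hi = Array.make n 0.0 in
  for i = 0 to n - 1 do
    let c = ref b.(i) and r = ref 0.0 in
    Array.iteri (fun j wij ->
      let mid = (lo.(j) +. hi.(j)) /. 2.0
      and rad = (hi.(j) -. lo.(j)) /. 2.0 in
      c := !c +. wij *. mid;
      r := !r +. abs_float wij *. rad) w.(i);
    out_lo.(i) <- !c -. !r;
    out_hi.(i) <- !c +. !r
  done;
  (out_lo, out_hi)

(* ReLU is monotone, so it maps interval endpoints to endpoints. *)
let relu_bounds (lo, hi) =
  (Array.map (max 0.0) lo, Array.map (max 0.0) hi)

(* Bounds on the network output over the box [x - eps, x + eps]. *)
let network_bounds x eps =
  let lo = Array.map (fun v -> v -. eps) x
  and hi = Array.map (fun v -> v +. eps) x in
  (lo, hi) |> affine_bounds w1 b1 |> relu_bounds |> affine_bounds w2 b2

let () =
  let lo, hi = network_bounds [| 1.0; 0.0 |] 0.1 in
  (* If the whole output interval stays on one side of a decision
     threshold, the network's decision is provably robust to every
     perturbation of size at most eps. *)
  Printf.printf "output in [%g, %g]\n" lo.(0) hi.(0)
```

Abstraction-based bounds like these are cheap but can be loose; the SMT-based approaches mentioned in the abstract trade that looseness for exact (and costlier) reasoning.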
Corina Pasareanu is a distinguished researcher at NASA Ames and Carnegie Mellon University. She is affiliated with CMU’s CyLab and holds a courtesy appointment in Electrical and Computer Engineering. At Ames, she is developing and extending Symbolic PathFinder, a symbolic execution tool for Java bytecode. Her research interests include model checking and automated testing, compositional verification, model-based development, probabilistic software analysis, and autonomy and security. She is the recipient of several awards, including ASE Most Influential Paper Award (2018), ESEC/FSE Test of Time Award (2018), ISSTA Retrospective Impact Paper Award (2018), ACM Distinguished Scientist (2016), ACM Impact Paper Award (2010), and ICSE 2010 Most Influential Paper Award (2010). She has been serving as Program/General Chair for several conferences including: ICST 2020, ESEC/FSE 2018, CAV 2015, ISSTA 2014, ASE 2011, and NFM 2009. She is currently an associate editor for the IEEE TSE journal.