Previous talks

JUNE 2024

“The intricacies of formally proving e-voting mixnets” by Guillaume Scerri (LMF/ENS Paris-Saclay)

Date : 07 June 2024
Room Aurigny

Mixnets are an essential building block for e-voting. Informally, they break the link between the ballot cast by a voter and the ballot decrypted in order to compute the tally. While the functionality of these mixnets is relatively simple, the cryptographic tools used to achieve the desired properties are fairly complex, both in terms of proof techniques (notably rewinding or reprogramming random oracles) and in terms of primitives (at least zero-knowledge proofs and commitments). Furthermore, finding the right, composable security definitions is rarely done in the literature and bears careful consideration. In this talk we will explore the intricacies of formally proving these constructions. We will discuss both the complexity of a potential proof in the computational model, and how our recent extension of the computationally complete symbolic attacker model allows for a (reasonably) simple proof.

MAY 2024

“Optimal Zero-Shot Detector for Multi-Armed Adversarial Attacks” by Federica Granese (IRD)

Date : 31 May 2024
Room Markov

We address the problem of multi-armed adversarial attack detection. The detection methods are generally validated by assuming a single implicitly known attack strategy, which does not necessarily account for real-life threats. Indeed, this can lead to an overoptimistic assessment of the detectors’ performance and may induce some bias in comparing competing detection schemes. To overcome this limitation, we propose a novel multi-armed framework for evaluating detectors based on several attack strategies. Among them, we make use of three new objectives to generate attacks.
The proposed performance metric is based on the worst-case scenario: detection is successful if and only if all different attacks are correctly recognized. Moreover, following this setting, we formally derive a simple yet effective method to aggregate the decisions of multiple trained detectors, possibly provided by a third party. While every single detector tends to underperform or fail at detecting types of attack that it has never seen at training time, our framework successfully aggregates the knowledge of the available detectors to guarantee a robust detection algorithm. The proposed method has many advantages: it is simple as it does not require further training of the given detectors; it is modular, allowing existing (and future) methods to be merged into a single one; it is general since it can simultaneously recognize adversarial examples created according to different algorithms and training (loss) objectives.

[1] Federica Granese, Marco Romanelli, Pablo Piantanida: Optimal Zero-Shot Detector for Multi-Armed Attacks. AISTATS 2024. [To appear]
[2] Federica Granese, Marine Picot, Marco Romanelli, Francisco Messina, Pablo Piantanida: MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors. ECML/PKDD (3) 2022: 286-303
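
As a rough, self-contained illustration of the aggregation idea (not the aggregator derived in [1], and with made-up detector scores and thresholds), pre-trained detectors can be combined with a simple worst-case rule that flags an input as soon as any detector fires:

    import numpy as np

    # Hypothetical interface: each pre-trained detector outputs a score in
    # [0, 1], where higher means "more likely adversarial". A simple
    # zero-shot rule in the worst-case spirit of the framework: flag the
    # input as soon as ANY detector fires. The paper derives a more
    # principled aggregator; this only shows the shape of the problem.
    def aggregate_any(scores, thresholds):
        return bool(np.any(np.asarray(scores) >= np.asarray(thresholds)))

    # Three detectors, each trained against a different attack objective.
    scores = [0.12, 0.85, 0.30]      # only detector 2 recognizes this input
    thresholds = [0.5, 0.5, 0.5]
    print(aggregate_any(scores, thresholds))   # True: the ensemble flags it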

APRIL 2024

“Combining differential privacy and homomorphic encryption for privacy-preserving collaborative learning” by Arnaud Grivet Sébert (CEA LIST)

Date : 05 April 2024
Room Pétri/Turing

We present several collaborative learning frameworks that protect the privacy of the training data through the combination of differential privacy (DP) and homomorphic encryption (HE). The first of these frameworks is SPEED (Secure, Private, and Efficient Deep learning), which builds on the PATE framework (Private Aggregation of Teacher Ensembles, from “Semi-supervised knowledge transfer for deep learning from private training data”, Papernot et al., 2016) and advances the state of the art of private deep learning against a wider range of threats: the training data privacy is protected against the honest-but-curious aggregation server, the end-users and potentially colluding data owners. To further reduce the computation time of the homomorphic aggregation in SPEED, we introduce SHIELD (Secure and Homomorphic Imperfect Election via Lightweight Design), a probabilistic approximate algorithm for the argmax operator which is faster when homomorphically executed and whose inaccuracy is used as a feature to provably enable DP guarantees. To the best of our knowledge, it is the first work in which the design of an approximate algorithm is constructively used as a degree of freedom to achieve better HE performance. Finally, we present a private federated learning protocol that makes use of both HE and DP. The quantisation of the noise due to encryption considerably complicates the DP analysis. We solve this issue by designing a novel quantisation operator based on the Poisson distribution which commutes with the aggregation, thus acting like a post-processing step that does not impact the DP guarantee.
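
The commutation argument rests on a standard property of the Poisson distribution: a sum of independent Poisson variables is itself Poisson. A minimal numerical illustration of this property (ignoring HE entirely, with made-up update values and noise level):

    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, lam = 10, 40.0  # total noise level lam, split across clients

    # Each client perturbs its integer-quantised update with Poisson noise.
    updates = rng.integers(0, 100, size=n_clients)
    noisy_sum = (updates + rng.poisson(lam / n_clients, size=n_clients)).sum()

    # Since independent Poisson(lam/n) variables sum to Poisson(lam), the
    # aggregate carries exactly the noise a trusted curator would have added
    # centrally: the noising commutes with the aggregation (post-processing).
    central_sum = updates.sum() + rng.poisson(lam)
    print(noisy_sum, central_sum)   # two draws from the same distribution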

MARCH 2024

“LDP-Auditor: Empirical Local Privacy Loss Estimation” by Héber Hwang Arcolezi (Inria)

Date : 22 March 2024
Room Pétri/Turing

While the existing literature on Differential Privacy (DP) auditing predominantly focuses on the centralized model (e.g., in auditing the DP-SGD algorithm), we advocate for extending this approach to audit Local DP (LDP). In this talk, we delve into the auditing perspective of LDP, seeking to reveal the true cost of local privacy and empirically estimate the privacy loss incurred in practical scenarios. In-depth case studies will be presented to explore specific aspects of LDP auditing, including longitudinal studies and multidimensional data. Finally, we present a notable achievement of our LDP-Auditor framework, namely the discovery of a bug in a state-of-the-art LDP Python package. Overall, we aim to shed light on the sources of randomness and information loss in LDP and explore insights that can be gained by examining LDP through an auditing lens. This work has been presented at the TPDP workshop [1] and is available on arXiv [2].

[1] https://tpdp.journalprivacyconfidentiality.org/2023/
[2] https://arxiv.org/abs/2309.01597
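
To give a feel for the auditing approach, here is a toy empirical estimation of the privacy loss of k-ary randomized response, a classic LDP protocol (an illustration of the principle behind LDP-Auditor [2], not its implementation; parameters are invented):

    import math, random

    random.seed(0)
    k, eps = 4, 1.0                  # k-ary randomized response, target eps
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)

    def grr(v):
        """Generalized randomized response over the domain {0, ..., k-1}."""
        if random.random() < p_true:
            return v
        return random.choice([x for x in range(k) if x != v])

    # Audit: run the mechanism on two distinct inputs and compare how often
    # a distinguishing output event (here, output == 0) occurs for each.
    N = 200_000
    p1 = sum(grr(0) == 0 for _ in range(N)) / N
    p2 = sum(grr(1) == 0 for _ in range(N)) / N
    print("empirical eps:", math.log(p1 / p2), "| theoretical eps:", eps)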

MARCH 2024

“Android SmartTVs Vulnerability Discovery via Log-Guided Fuzzing” by Yousra Aafer (University of Waterloo)

Date : 15 March 2024
Videoconference

The recent rise of smart IoT devices has opened new doors for cyber criminals to inflict damage unique to the ecosystem. SmartTVs, the most widely adopted home-based IoT devices, are no exception. Despite their popularity, little has been done to evaluate their security and associated risks. To proactively address the problem, we propose a systematic evaluation of Android SmartTV security. We overcome a number of prominent challenges, such as the fact that most of the added TV-related functionalities are (partially) implemented in the native layer and that many security problems only manifest themselves physically, without causing any misbehavior inside the OS. We develop a novel dynamic fuzzing approach, which features on-the-fly log-based input specification derivation and feedback collection. Our solution further introduces a novel external observer that monitors TV-related physical symptoms (i.e., visual and auditory) to detect potential physical anomalies. We leverage our technique to analyze 11 Android TV boxes. Our analysis reveals 37 unique vulnerabilities, leading to high-impact cyber threats (e.g., corrupting critical boot environment settings and accessing highly sensitive data), memory corruptions, and even visual and auditory disturbances (e.g., persistent display content corruption and audio muting).

JANUARY 2024

“The VEREFOO Network Security Automation Approach” by Prof. Riccardo Sisto (Politecnico di Torino)

Date : 19 January 2024
Room Aurigny

Network softwarization and virtualization are making networks more and more dynamic, opening the possibility of extremely fast reconfigurations and of enhanced automation in their management.
This enhanced dynamism is a great opportunity, for example for prompt reaction to security attacks or to changing demands from users, but at the same time it introduces new challenges, such as how to guarantee that security policies are always correctly implemented and that resources are used efficiently in such rapidly changing systems.
VEREFOO (VErified REfinement and Optimized Orchestration) is an approach for policy-based network security automation in virtualized networks, developed at Politecnico di Torino by NetGroup (the Computer Networks Group). VEREFOO enables automatic refinement of security policies into network configurations, providing at the same time formally verified and optimized solutions.
The talk presents VEREFOO by explaining how it works, illustrating the state of the art, i.e., the major results already achieved, and discussing ongoing research activities.

 

JANUARY 2024

“TreeSync: Authenticated Group Management for Messaging Layer Security” by Théophile Wallez (Inria)

Date : 12 January 2024
Room Amphitheatre

Messaging Layer Security (MLS), currently undergoing standardization at the IETF, is an asynchronous group messaging protocol that aims to be efficient for large dynamic groups, while providing strong guarantees like forward secrecy (FS) and post-compromise security (PCS). While prior work on MLS has extensively studied its group key establishment component (called TreeKEM), many flaws in early designs of MLS have stemmed from its group integrity and authentication mechanisms that are not as well-understood. In this work, we identify and formalize TreeSync: a sub-protocol of MLS that specifies the shared group state, defines group management operations, and ensures consistency, integrity, and authentication for the group state across all members. We present a precise, executable, machine-checked formal specification of TreeSync, and show how it can be composed with other components to implement the full MLS protocol. Our specification is written in F* and serves as a reference implementation of MLS; it passes the RFC test vectors and is interoperable with other MLS implementations. Using the DY* symbolic protocol analysis framework, we formalize and prove the integrity and authentication guarantees of TreeSync, under minimal security assumptions on the rest of MLS. Our analysis identifies a new attack and we propose several changes that have been incorporated in the latest MLS draft. Ours is the first testable, machine-checked, formal specification for MLS, and should be of interest to both developers and researchers interested in this upcoming standard.

DECEMBER 2023

“Software Compartmentalization and the Challenge of Interfaces” by Pierre Olivier (University of Manchester)

Date : 08 December 2023
Room Aurigny

Software compartmentalization consists in breaking down an application or a system into several isolated components, with the goal of limiting exploits by confining attackers to vulnerable compartments. It is a prime example of application of the principle of least privilege to software. Although compartmentalization is still not the norm for most software running in production today (apart from specialized use cases such as operating systems), the past few years have seen the emergence of new and efficient isolation mechanisms, and compartmentalization is gaining traction in the systems software research community and industry.

In this talk I will first give an overview of software compartmentalization and present its general principles, as well as the challenges that researchers in this field still face today. I will focus in particular on the issue of securing inter-compartment boundaries, which has been overlooked by many recent compartmentalization studies. In that context, I will present ConfFuzz [1, 2], a tool built by my team to measure the effect of neglecting compartment interface security. Our results show that failure to put proper interface security in place leads to the loss of most security guarantees in many compartmentalized scenarios.

[1] Lefeuvre, Hugo, Vlad-Andrei Bădoiu, Yi Chien, Felipe Huici, Nathan Dautenhahn, and Pierre Olivier. “Assessing the Impact of Interface Vulnerabilities in Compartmentalized Software.” In Proceedings of the 30th Network and Distributed System Security Symposium (NDSS’23). Internet Society, 2023.

[2] https://conffuzz.github.io/

 

DECEMBER 2023

“Privacy Challenges in the Era of Deep Learning: Risks and Solutions” by Mohamed Maouche (Inria Lyon)

Date : 01 December 2023
Room Pétri/Turing

The surge of deep-learning systems has created an imperative to construct large datasets for their training. However, this growth in data collection also brings forth significant privacy concerns that put the entire data pipeline at risk. These concerns manifest from the initial data acquisition stage, where the risks of data owner identification and sensitive information extraction emerge, to the final model deployment phase, where the model itself can be leveraged for inference attacks, including membership, reconstruction, and attribute inference.
In this talk, we examine the diverse array of privacy risks inherent in this data pipeline and present a range of proposed solutions. These solutions are presented through practical use cases, such as speech anonymization within Automatic Speech Recognition (ASR) systems and decentralized collaborative filtering for recommender systems. Additionally, we delve into the distinctions between one-to-one anonymization techniques and synthetic data generation methods.

 

NOVEMBER 2023

“Fortifying your smartphone: Advancements in Android Security Research” by Eleonora Losiouk (Università degli Studi di Padova)

Date : 21 November 2023
Couvent des Jacobins – Grand auditorium – Rennes

As Android devices continue to shape our modern digital landscape, the imperative to bolster their security becomes increasingly urgent. This talk sheds light on the latest advancements in Android security research, covering innovative strategies, emerging threats, and cutting-edge technologies designed to safeguard our digital lives, and aims to equip both security practitioners and enthusiasts with a comprehensive understanding of current research trends and innovations.

 

NOVEMBER 2023

“Compiling Programs to their FHE Equivalent and Applications to Machine Learning” by Quentin Bourgerie and Andrei Stoian (Zama)

Date : 17 November 2023
Room Pétri/Turing

Fully Homomorphic Encryption (FHE) is a powerful tool that preserves the privacy of users of online services that handle sensitive data, such as health data, biometrics or credit scores. TFHE is an FHE scheme that supports arithmetic computations and so-called programmable bootstrapping, which applies arbitrary non-linear functions to ciphertexts while reducing noise. Backed by these two powerful features, TFHE is particularly well adapted to the type of programs that are common in machine learning (ML), for various types of ML models: deep neural networks, large language models (LLMs) and decision trees. FHE has historically been reserved for cryptographers, with too high an entry barrier for data scientists. This is why we have developed a compiler that takes care of all the cryptographic complexities for the user, including the search for optimal secure cryptographic parameters, which has been a major pain point for years. In this talk, we will present the technology and ideas behind our compiler, based on the MLIR infrastructure. We will also describe how our automatic optimizer finds parameters that are at once secure, correct and optimal. We will show how its frontend allows users to build programs directly in Python, without the burden of cryptography. Then, in a second part, we will present our ML application package, which adds privacy on top of classical frameworks such as scikit-learn or torch. We will explain the main technical challenges that we faced: first, quantization (both post-training and during training itself), including to previously unseen low bit widths; second, turning tree-based models into their FHE equivalent, which is not obvious since control-flow operations are not possible in FHE. We will finish with several demos, on both machine learning and deep learning, and will discuss our experiments on LLMs in FHE.
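
To illustrate the tree-model point: a decision tree can be rewritten without control flow by turning each comparison into a 0/1 indicator and selecting each leaf with a product of indicators, so that only arithmetic remains (in TFHE, the comparisons would be evaluated with programmable bootstrapping). A plaintext sketch of the idea, with a made-up depth-2 tree, not the compiler’s actual output:

    # Branch-free evaluation of a depth-2 decision tree: each comparison
    # yields a 0/1 indicator, and each leaf value is weighted by the product
    # of the indicators along its path; exactly one product equals 1.
    def leq(a, b):               # stands in for an encrypted comparison
        return int(a <= b)

    def tree_predict(x1, x2):
        c0 = leq(x1, 5)          # root test
        c1 = leq(x2, 3)          # left child test
        c2 = leq(x2, 7)          # right child test
        return (c0 * c1 * 10 + c0 * (1 - c1) * 20
                + (1 - c0) * c2 * 30 + (1 - c0) * (1 - c2) * 40)

    assert tree_predict(4, 2) == 10 and tree_predict(9, 8) == 40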

 

JUNE 2023

“Managing our online account security” by Sasa Radomirovic (University of Surrey)

Date : 23 June 2023
Room Pétri/Turing

The number of online services, accounts, apps, and devices that we use is constantly increasing and so is the complexity of the interconnections between them. These interconnections have been exploited in attacks that range from account takeovers to cryptocurrency theft. Protecting users from such attacks is difficult because each user has a unique account ecosystem whose characteristics and variability we do not fully understand yet.
In this talk I will introduce account access graphs, a formal model that represents a user’s account ecosystem, i.e., the collection of accounts, apps, and devices, as well as their interconnections. I will show examples of real account access graphs and the first insights we have gained from them. I will then discuss some of the challenges we must overcome in order to build an account management tool that will empower users to better protect their account ecosystem.
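
As a toy rendering of the idea (with a made-up ecosystem; the published model is richer), an account access graph records the alternative sets of credentials, devices and accounts that grant access to each account, and attacker reachability is a simple fixpoint computation:

    # Each account lists alternative "access methods"; a method is a set of
    # items (devices, passwords, accounts) that together grant access.
    access = {
        "email": [{"laptop", "email_pw"}, {"phone"}],  # phone stays logged in
        "bank":  [{"bank_pw"}, {"email"}],             # password reset via email
        "crypto": [{"seed_phrase"}],
    }

    def reachable(initial):
        """Everything an attacker holding `initial` eventually controls."""
        owned, changed = set(initial), True
        while changed:
            changed = False
            for account, methods in access.items():
                if account not in owned and any(m <= owned for m in methods):
                    owned.add(account)
                    changed = True
        return owned

    print(reachable({"phone"}))   # phone, email and bank fall in cascade;
                                  # 'crypto' stays safe
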
This talk is based on joint work published at CCS 2019, CHI 2022 and carried out at ETH Zurich, the University of Dundee and Heriot-Watt University.

 

JUNE 2023

“Training with malicious teachers: Poisoning attacks against machine learning” by Antonio Cinà (CISPA Helmholtz Center for Information Security)

Date : 16 June 2023
Web-conference

Training cutting-edge machine learning (ML) models is often prohibitive to most because it requires expensive hardware and a huge amount of labeled data. To address this shortcoming, pre-trained models or publicly available data are employed to reduce development costs. However, these practices are becoming the Achilles’ heel of the ML development chain because they expose the models to poisoning. These attacks assume that the attacker can tamper with the model training or data collection phases to drive the model toward unexpected misclassifications at test time. Because of the harm they can cause and the difficulties in detecting or mitigating them, poisoning attacks are nowadays considered among the threats most feared by companies. Therefore, mindful monitoring of the data gathering and model training procedures is becoming imperative.
In this talk we will categorize poisoning attacks according to their assumptions and attack strategies, showing that although poisoning has been acknowledged as a relevant threat in industry applications, a complete systematization and critical review of the field was missing. We then inspect how poisoning can influence other ML aspects, going beyond misclassification violations and extending the attack surface in ML development. Finally, we shed light on the current limitations and open research questions in this field and propose possible future research directions.
 

 

MAY 2023
 

“Time and Availability in Trusted Execution Environments” by Jan Tobias Muehlberg (Université libre de Bruxelles, Belgium)

Date : 26 May 2023
Place : Room Pétri/Turing

Trusted Execution Environments (TEEs) can provide strong security guarantees in distributed systems, and even protect embedded software in the IoT or in critical control systems. Measuring the passage of time and taking actions based on such measurements is a common security-critical operation in many of these systems. Yet, few TEEs combine security with real-time processing and availability, or provide hard guarantees on the timeliness of code execution. A key difficulty here is that TEEs execute within an effectively untrusted environment, which can influence expectations on time and progress.
In this talk, I will present our research on categorising approaches to tracking the passage of time in TEEs, highlighting the respective dependability guarantees. Focusing first on the popular Intel SGX architecture, we analyse to what extent time can be securely measured and utilised. We then broaden the scope to other popular trusted computing solutions and list common applications for each notion of time and progress, concluding that not every use case requires accurate access to real-world time.
Following this, I will present a configurable embedded security architecture that provides a notion of guaranteed real-time execution for dynamically loaded enclaves. We implement preemptive multitasking and restricted atomicity on top of strong enclave software isolation and attestation. Our approach allows the hardware to enforce confidentiality and integrity protections, while a decoupled small enclaved scheduler software component can enforce availability and guarantee strict deadlines of a bounded number of protected applications, without necessarily introducing a notion of priorities amongst these applications.
 

MAY 2023
 

“A new symbolic analysis of WireGuard VPN” by Sylvain Ruhault (ANSSI)

Date : 12 May 2023
Place : Room Pétri/Turing

WireGuard is a new Virtual Private Network (VPN), recently integrated into the Linux kernel and in OpenBSD. It takes a different approach from classical VPNs such as IPsec or OpenVPN: it does not let users configure cryptographic algorithms but instead relies on a closed set of preconfigured algorithms. The protocol inside WireGuard is a dedicated extension of the IKpsk2 protocol from the Noise Framework. The WireGuard and IKpsk2 protocols have attracted a lot of attention from the academic community, and different analyses have been proposed, in both the symbolic and the computational model, with or without computer-aided proof assistants. These analyses however consider different adversarial models or refer to incomplete versions of the protocols. Furthermore, when proof assistants are used, protocol models also differ. In this work, we propose a new, unified formal model of the WireGuard protocol in the symbolic model. Our model is fully based on processes in the applied pi-calculus, the language used by the automatic cryptographic protocol verifier Sapic+. It allows us to automatically generate translations into the ProVerif and Tamarin proof assistants. We consider a complete protocol execution, including the cookie messages used for resistance against denial-of-service attacks. Furthermore, we model an adversary that can read or set static, ephemeral or pre-shared keys, read or set ECDH precomputations, and control key distribution. Finally, we present our results in a unified and interpretable way, allowing comparisons with previous analyses. This is joint work with Dhekra Mahmoud and Pascal Lafourcade.
 

MARCH 2023
 

“Synergies and tensions between privacy and other ethical issues in responsible machine learning” by Sébastien Gambs (Université du Québec à Montréal)

Date : 31 March 2023
Place : Room Pétri/Turing

The success of machine learning models is such that they are now ubiquitous in our society. Their widespread use also raises serious privacy and ethical issues, however, especially if their predictions are put into action in domains in which they can significantly affect individuals. As a result, we have witnessed in recent years several initiatives proposing design principles and guidelines for the responsible development of artificial intelligence. To understand how we may best address privacy and ethics responsibly when developing machine learning models, we therefore need to first have a clear view on how these concepts interact with each other in a positive as well as negative manner. In this talk, I will review the main tensions but also convergences that can emerge when addressing jointly the privacy and ethical challenges that go into designing and deploying machine learning models.

MARCH 2023
 

“Private Communication Using Untrusted Service Providers” by Amr El Abbadi (University of California Santa Barbara)

Date : 24 March 2023
Place : Room Pétri/Turing

Our digital lives have become increasingly interdependent and interconnected. Such interconnections rely on a vast network of actors whose trustworthiness is not always guaranteed. Over the past three decades, rapid advances in computing and communication technologies have given billions of users access to information and connectivity at their fingertips. Unfortunately, this rapid digitization of our personal lives is also now vulnerable to invasions of privacy. In particular, we need to worry about the malicious intent of individual actors in the network as well as of large and powerful organizations such as service providers and nation states. Given such an untrusted world, we illustrate some of the privacy challenges by focusing on the following research question: can we design a scalable infrastructure for voice communication that will hide meta-information regarding who is communicating with whom? The solution to this challenge has significant ramifications for diverse data management problems, such as designing scalable systems for oblivious search over documents from public repositories as well as private query processing over public databases. These are some of the iconic problems that must be solved before we can embark on building trusted platforms and services over untrusted infrastructures.

MARCH 2023
 

“Fuzzing with Data Dependency Information” by Alessandro Mantovani (Qualcomm R&D)

Date : 17 March 2023
Place : Room Pétri/Turing

Recent advances in fuzz testing have introduced several forms of feedback mechanisms, motivated by the fact that for a large range of programs and libraries, edge coverage alone is insufficient to reveal complicated bugs. Inspired by this line of research, we examined existing program representations looking for a match between the expressiveness of the structure and its adaptability to the context of fuzz testing. In particular, we believe that data dependency graphs (DDGs) are a good candidate for this task, as the information embedded in this data structure is potentially useful to find vulnerable constructs by stressing combinations of def-use pairs that would be difficult for a traditional fuzzer to trigger. Since some portions of the dependency graph overlap with the control flow of the program, it is possible to reduce the additional instrumentation to cover only “interesting” data-flow dependencies, those that help the fuzzer visit the code in a distinct way compared to standard methodologies. To test these observations, we proposed DDFuzz, a new approach that rewards the fuzzer not only with code coverage information, but also when new edges in the data dependency graph are hit. Our results show that the adoption of data dependency instrumentation in coverage-guided fuzzing is a promising solution that can help discover bugs that would otherwise remain unexplored by standard coverage approaches. This is demonstrated by the 72 different vulnerabilities that our data-dependency-driven approach can identify when executed on 38 target programs from three different datasets.
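
A miniature rendition of the feedback idea (a toy harness with a single hand-instrumented def-use pair and invented mutation logic, not DDFuzz’s compiler-based instrumentation):

    import random

    random.seed(1)

    def target(data, ddg_edges):
        """Toy target, hand-instrumented to report a def-use pair."""
        x = 0
        if data and data[0] == 0x42:
            x = 7                              # def site of x
            ddg_edges.add(("def_x", "use_x"))  # data-dependency edge hit
        return x * 2                           # use site of x

    def mutate(seed):
        b = bytearray(seed)
        b[random.randrange(len(b))] = random.randrange(256)
        return bytes(b)

    corpus, seen = [b"\x00\x00"], set()
    for _ in range(10_000):
        inp = mutate(random.choice(corpus))
        edges = set()
        target(inp, edges)
        if edges - seen:          # reward: a new DDG edge keeps the input
            seen |= edges
            corpus.append(inp)
    print(len(corpus), seen)      # the def-use pair is found and retained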
 

MARCH 2023
 

“Vulnerability disclosure and third parties’ involvement in software security” by Arrah-Marie Jo (IMT Atlantique)

Date : 3 March 2023
Place : Room Aurigny

Around the debate on software vulnerability disclosure, existing work has mostly explored how disclosure gives software vendors an incentive to better secure their software. The role of third parties such as business users, security firms, downstream software vendors or service providers is rarely taken into account, while in fact these actors are increasingly involved in improving software security. In this talk, I will present the results of an empirical study, based on vulnerabilities disclosed on SecurityFocus BugTraq between 2009 and 2018, of how the public disclosure of a critical vulnerability affects the contribution of software vendors and third parties to the discovery of new vulnerabilities.
 

FEBRUARY 2023
 

“Differentially Private Linear Sketches: Efficient Implementations and Applications” by Fuheng Zhao (University of California Santa Barbara)

Date : 3 February 2023
Place : Web-conference

Linear sketches have been widely adopted to process fast data streams, and they can be used to accurately answer frequency estimation queries, approximate the top-K items, and summarize data distributions. When data are sensitive, it is desirable to provide privacy guarantees for linear sketches to preserve private information while delivering useful results with theoretical bounds. To address these challenges, we propose differentially private linear sketches with high privacy-utility trade-offs for frequency, quantile, and top-K approximations.
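
As a flavor of the construction, here is a count-min sketch made differentially private by perturbing its counters with calibrated Laplace noise (a naive calibration for illustration, with invented parameters; the paper’s mechanisms and analysis are more refined):

    import numpy as np

    rng = np.random.default_rng(0)
    d, w, eps = 4, 256, 1.0     # d hash rows of w counters, privacy budget

    seeds = rng.integers(1, 2**31, size=d)
    table = np.zeros((d, w))

    def bucket(item, i):
        return hash((int(seeds[i]), item)) % w

    def update(item):           # one stream element touches one cell per row
        for i in range(d):
            table[i, bucket(item, i)] += 1

    def private_estimate(item):
        # One element changes d cells by 1, so the table has L1 sensitivity
        # d; Laplace(d/eps) noise per cell gives eps-DP for the whole table.
        noisy = table + rng.laplace(scale=d / eps, size=table.shape)
        return min(noisy[i, bucket(item, i)] for i in range(d))

    for _ in range(1000):
        update("alice")
    print(private_estimate("alice"))    # ~1000, up to noise and collisions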

FEBRUARY 2023
 

“Where ML Security Is Broken and How to Fix It” by Maura Pintor (PRA Lab, University of Cagliari)

Date : 3 February 2023
Place : Web-conference

To understand models’ sensitivity under attack and to develop defense mechanisms, machine-learning model designers craft worst-case adversarial perturbations with gradient-descent optimization algorithms against the model under evaluation. However, many of the proposed defenses have been shown to provide a false sense of robustness due to failures of the attacks, rather than actual improvements in the machine-learning models’ robustness, as highlighted by more rigorous evaluations. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic and automated manner. In this context, analyzing the failures in the optimization of adversarial attacks is the only valid strategy to avoid repeating the mistakes of the past.
 

JANUARY 2023
 

“DroidGuard: A Deep Dive into SafetyNet” by Romain Thomas (Quarkslab)

Date : 6 January 2023
Place : Room Pétri/Turing

SafetyNet is the Android component developed by Google to verify device integrity. These checks are used by developers to prevent applications from running on devices that do not meet security requirements, and they are also used by Google to prevent bots, fraud and abuse.

In 2017, Collin Mulliner & John Kozyrakis gave one of the first public presentations about SafetyNet, offering a glimpse into its internal mechanisms. Since then, the Google anti-abuse team has strengthened the solution, moving most of the original Java layer of SafetyNet into a native module called DroidGuard. This module implements a custom virtual machine that runs a proprietary bytecode provided by Google to perform the device integrity checks.

The purpose of this talk is to survey the current implementation of SafetyNet. In particular, we aim at presenting the internal mechanisms behind SafetyNet and the DroidGuard module. This includes an overview of the VM design and its internal mechanisms, and we will introduce the security checks performed by SafetyNet to detect Magisk, emulators, rooted devices, and even Pegasus.
 

DECEMBER 2022
 

“Not so AdHoc testing: formal methods in the standardization of the EDHOC protocol” by Charlie Jacomme (Inria Paris)

Date : 16 December 2022
Place : Room Métivier

We believe that formal methods in security should be leveraged in the standardisation of all security protocols in order to strengthen their guarantees. To be effective, such analyses should be:
* maintainable: the security analysis should be performed every step of the way, i.e., on each iteration of the draft;
* pessimistic: all possible threat models, notably all sorts of compromise, should be considered;
* precise: the analysis should include as many of the real-life weaknesses of the concrete cryptographic primitives specified as possible.

In this talk, we illustrate how such a goal may be approached by detailing our analysis of the current IETF draft standard of the EDHOC protocol, as well as our subsequent interactions with its LAKE working group. We will proceed in three steps, first introducing the Sapic+ platform, which, from a single model of a protocol, allows one to benefit from all the capabilities of multiple automated verification tools (ProVerif, Tamarin, DeepSec). We will then introduce multiple recent advances on how to better model cryptographic primitives and their real-life weaknesses. We will finally show how we leveraged Sapic+ along with the advanced primitive models to analyze the EDHOC protocol and provide feedback to the LAKE working group, feedback that has been integrated into later drafts.

DECEMBER 2022
 

“Thwarting covert adversaries in FHE pipelines” by Sylvain Chatel (EPFL)

Date : 09 December 2022
Place : Room Aurigny

Fully Homomorphic Encryption (FHE) enables computations to be executed directly on encrypted data without decryption, making it a promising solution to protect the confidentiality of sensitive data without impeding its usability for analytics. While many practical systems rely on FHE to achieve strong privacy guarantees, their constructions only consider an honest-but-curious threat model by default: FHE provides no guarantee that the cryptographic material has been honestly generated, nor that the computation was executed correctly on the encrypted data. In multiparty settings with numerous clients providing data and one or several servers performing the computation, this threat model becomes even less realistic. Only recently has the cryptographic community started analysing the guarantees of FHE under stronger adversaries and proposed solutions tailored to the malicious threat model; however, these efforts have remained theoretical and are not applicable to real-life scenarios.

In our work, we aim at reducing this gap by considering covert adversaries that are malicious but rational (i.e., they do not want to be detected when cheating) and we build different systems that protect the FHE pipeline against such entities. We first propose an efficient solution to prove the correctness of homomorphic operations performed by a server without compromising the expressiveness of the FHE scheme. Then, we propose different protocols to secure input verification and correct encryption in settings where the client encrypting its data is not trusted. Our constructions provide baselines to evaluate the impact of the threat model change in FHE pipelines with real-life implementation constraints.

DECEMBER 2022
 

“Programmable Network Security – Challenges and Opportunities” by Sandra Scott-Hayward (Queen’s University Belfast)

Date : 02 December 2022
Place : Web-conference

Society’s dependence on communication networks for all aspects of day-to-day living (business, education, entertainment, etc.) makes “the network” a very attractive target for malicious attack, which cyber criminals are only too eager to exploit, attempting to limit or block access to essential services (e.g., healthcare, banking, critical infrastructure), disrupting our daily lives and causing high costs to businesses and individuals. Advances in networking such as Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) have enabled the network availability and scalability required to support massive global connectivity. With SDN and programmable networks, we introduce flexibility into the control and processing of network traffic, which may be leveraged for in-network security functions. However, as we explore the use of network programmability frameworks and languages such as P4 to accelerate packet processing functions for flexible and scalable connectivity in next-generation networks, we cannot lose sight of the security of the programmable network itself. Indeed, this flexibility and programmability potentially exposes the network devices to compromise.

This talk will explore the opportunities and challenges of programmable network security, developments in the field, and our related research at CSIT/QUB.

NOVEMBER 2022
 

“Private Set Intersection and Other Private Information Sharing Protocols” by Xavier Carpent (University of Nottingham)

Date : 07 November 2022
Place : Room Métivier

In this seminar, we will give an overview of Private Set Intersection (PSI), some of its constructions, use cases, and open research questions. The canonical PSI protocol allows Alice and Bob (each holding their own set of elements) to interact in a way that Alice learns the intersection of the sets and nothing else. Numerous variants exist: cardinality of the intersection only (how many elements do we have in common?), threshold cardinality (do we have at least X elements in common?), size-hiding, asymmetric scale, malicious versus semi-honest security, etc., and some of these will be explored. Along the way, other related “private” types of protocols and their applications will be discussed.
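
For concreteness, here is a toy version of the classic Diffie-Hellman-based PSI protocol for the semi-honest setting (insecure demo parameters, no shuffling, invented item sets):

    import hashlib, secrets

    p = 2**127 - 1                    # toy prime; NOT a secure parameter

    def H(item):                      # hash an item into the group
        h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
        return pow(2, h, p)

    a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent
    alice = {"alice@x.com", "bob@y.org", "carol@z.net"}
    bob = {"bob@y.org", "dave@w.io"}

    # Alice sends H(x)^a; Bob raises these to b and also sends H(y)^b;
    # Alice raises the latter to a. Equal items yield equal H(.)^(ab).
    to_bob = {x: pow(H(x), a, p) for x in alice}
    from_bob = {x: pow(v, b, p) for x, v in to_bob.items()}
    bob_blinded = {pow(H(y), b, p) for y in bob}
    bob_ab = {pow(v, a, p) for v in bob_blinded}
    print({x for x, v in from_bob.items() if v in bob_ab})  # {'bob@y.org'}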

 

OCTOBER 2022
 

“Reasoning over leaks of information for Access Control of Databases” by Pierre Bourhis (CNRS, CRIStAL)

Date : 13 October 2022
Place : Room Métivier

Controlling access to data in database management systems is a classical problem, and it has been addressed through different mechanisms. One of the most common mechanisms, implemented in most database management systems, is views, i.e., defining the data accessible to a user as the result of a query. This mechanism is also used in principle in other systems, such as social networks. Unfortunately, this approach has some drawbacks: even when a view does not directly leak any secret information, a user seeing the data can infer some of the secret data by using additional knowledge, such as the logical definition of the query used to define the accessible data and various properties of the database. In this talk, I will present a formalism allowing one to check when a set of views does not leak any information, even through this kind of attack.

OCTOBER 2022
 

“Learning-Based Network Intrusion Detection: an Imbalanced, Constantly Evolving and Timely Problem” by Maxime Pelcat (INSA Rennes)

Date : 07 October 2022
Place : Room Pétri/Turing

Network intrusion detection systems (NIDS) observe network traffic and aim to pinpoint intrusions, i.e., actual threats to the integrity, availability or confidentiality of the services and data provided by the network. There are two types of NIDS:
1) signature-based intrusion detection systems that identify known intrusions by referring to an existing knowledge base, and
2) anomaly-based intrusion detection systems (AIDS) that detect intrusions based on deviations from a model of normal network traffic, usually learnt through machine learning techniques.

While AIDS have the advantage of not requiring the manual creation of signatures, deploying AIDS in networks is challenging in practice.
First, collecting representative network data and properly labelling it is complex and costly. This data is also highly imbalanced, as attacks are rare events. Finally, a trained AIDS is likely to show a drop in detection rates due to differences between the training context and the inference context.

This presentation will discuss the results of Nicolas Sourbier’s PhD thesis, which has studied how genetic programming and Tangled Program Graph (TPG) machine learning can help overcome the challenges of network AIDS.

SEPTEMBER 2022
 

“Is data poisoning a realistic risk?” by Adrien Chan-Hon-Tong (ONERA)

Date : 23 September 2022
Place : Web-conference

Adversarial attacks have had a strong echo in the computer vision community. However, with this type of attack, a hacker can only modify the behaviour of the targeted algorithm locally. Conversely, data poisoning can modify the behaviour of the targeted algorithm globally, and it is not necessarily detectable by an operator, in particular if one considers an invisible poisoning such as the one introduced in “poison frogs” (arxiv.org/abs/1804.00792). However, it appears that these invisible poisonings are sometimes ineffective against robust training. This puts the real risk of such an attack into perspective.


SEPTEMBER 2022
 

“A Fundamental Approach to Cyber Risk Analysis” by Rainer Böhme (Universität Innsbruck)

Date : 16 September 2022
Place : Web-conference

This paper provides a framework actuaries can use to think about cyber risk. We propose a differentiated view of cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences. As a result, we identify possible ways forward to improve rigorous modeling of cyber risk, including its driving factors. This is a prerequisite for establishing a deep and stable market for cyber risk insurance.

https://informationsecurity.uibk.ac.at/pdfs/BLR2019_FundamentalApproachCyberRiskInsurance_Variance.pdf
 

MAY 2022
 

“Port Contention Goes Portable: Port Contention Side Channels in Web Browsers” by Thomas Rokicki (Univ Rennes, CNRS, IRISA)

Date : 13 May 2022
Place : Room Pétri/Turing

Microarchitectural side-channel attacks can derive secrets from the execution of vulnerable programs. Their implementation in web browsers represents a considerable extension of their attack surface, as a user simply browsing a malicious website, or even a malicious third-party advertisement in a benign cross-origin isolated website, can be a victim.
In this talk, we present the first CPU port contention side channel running entirely in a web browser, despite a highly challenging environment. Our attack can be used to build a cross-browser covert channel with a bit rate of 200 bps, one order of magnitude above the state of the art, and has a spatial resolution of 1024 native instructions in a side-channel attack, a performance on par with Prime+Probe attacks. We provide a framework to evaluate the port contention caused by WebAssembly instructions on Intel processors, increasing the portability of port contention side channels. We conclude from our work that port contention attacks are not only fast, they are also less susceptible to noise than cache attacks, and are immune to the countermeasures implemented in browsers as well as to most side-channel countermeasures, the vast majority of which target the cache.

APRIL 2022
 

“On MILP modelling” by Christina Boura (UVSQ, CNRS, LMV)

Date : 29 April 2022
Place : Room Oléron

Modelling a problem using linear constraints and solving it with a Mixed Integer Linear Programming (MILP) solver is a popular approach in many domains of computer science. In this talk we present and compare different new techniques to model any subset of {0,1}^n for MILP. We then discuss the efficiency of our models by applying them to the search for differential paths, a classical problem in symmetric cryptography.
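
As a baseline for what “modelling a subset of {0,1}^n” means, each excluded point u can be cut off by one linear inequality that only x = u violates; the talk’s techniques aim to do much better than one constraint per excluded point. A small self-checking sketch:

    from itertools import product

    def exclude(u):
        """Constraint sum(coeffs * x) + const >= 1, violated only at x == u:
        sum_{i: u_i = 0} x_i + sum_{i: u_i = 1} (1 - x_i) >= 1."""
        coeffs = [1 if ui == 0 else -1 for ui in u]
        return coeffs, sum(u)

    n = 3
    S = {(0, 0, 0), (1, 1, 1)}          # the subset we want to model
    cuts = [exclude(u) for u in product((0, 1), repeat=n) if u not in S]

    def feasible(x):
        return all(sum(c * xi for c, xi in zip(coeffs, x)) + const >= 1
                   for coeffs, const in cuts)

    assert {x for x in product((0, 1), repeat=n) if feasible(x)} == S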


MARCH 2022
 

“Built on sand: on the security of Collaborative Machine Learning” by Dario Pasquini (EPFL)

Date : 25 March 2022
Place : Web-conference

This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover different security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. Then, we talk about the soundness of current Secure Aggregation protocols in Federated Learning, showing that they do not provide any additional level of privacy to users. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we should all avoid in devising and implementing “privacy-preserving collaborative machine learning”.

FEBRUARY 2022
 

“Search-Based Local Black-Box Deobfuscation: Understand, Improve and Mitigate” by Grégoire Menguy (CEA LIST)

Date : 25 February 2022
Place : Web-conference

Code obfuscation aims at protecting intellectual property and other secrets embedded into software from being retrieved. Recent works leverage advances in artificial intelligence (AI) with the hope of building blackbox deobfuscators completely immune to standard (whitebox) protection mechanisms. While promising, this new field of AI-based, and more specifically search-based, blackbox deobfuscation is still in its infancy. We advance the state of search-based blackbox deobfuscation in three key directions: understanding the current state of the art, improving over it, and designing dedicated protection mechanisms. In particular, we define a novel generic framework for search-based blackbox deobfuscation encompassing prior work and highlighting key components; we are the first to point out that the search space underlying code deobfuscation is too unstable for simulation-based methods (e.g., the Monte Carlo Tree Search used in prior work) and advocate the use of robust methods such as S-metaheuristics; we propose Xyntia, a new optimized search-based blackbox deobfuscator which significantly outperforms prior work in terms of success rate (especially with a small time budget) while being completely immune to the most recent anti-analysis code obfuscation methods; and finally we propose two novel protections against search-based blackbox deobfuscation, making it possible to counter Xyntia’s powerful attacks. This work has been published at ACM CCS 2021.
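
In miniature, search-based blackbox deobfuscation treats the obfuscated code as an I/O oracle and searches a space of candidate expressions for one with identical behavior; real tools such as Xyntia drive the search with S-metaheuristics, while the sketch below (with a made-up mixed boolean-arithmetic target) just enumerates a tiny grammar:

    import random

    random.seed(0)

    def obfuscated(x, y):           # MBA-obfuscated form of x + y
        return (x ^ y) + 2 * (x & y)

    CANDIDATES = [
        ("x + y", lambda x, y: x + y), ("x - y", lambda x, y: x - y),
        ("x ^ y", lambda x, y: x ^ y), ("x & y", lambda x, y: x & y),
        ("x | y", lambda x, y: x | y), ("x * y", lambda x, y: x * y),
    ]

    samples = [(random.randrange(256), random.randrange(256))
               for _ in range(16)]
    oracle = [obfuscated(x, y) for x, y in samples]

    for name, f in CANDIDATES:      # blackbox: only I/O behaviour is used
        if [f(x, y) for x, y in samples] == oracle:
            print("synthesized:", name)   # -> synthesized: x + y
            break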

FEBRUARY 2022
 

“Model Stealing Attacks Against Inductive Graph Neural Networks” by Yufei Han (Inria)

Date : 18 February 2022
Place : Web-conference

Many real-world data come in the form of graphs. Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications. In particular, the inductive GNNs, which can generalize to unseen data, become mainstream in this direction. Machine learning models have shown great potential in various tasks and have been deployed in many real-world scenarios. To train a good model, a large amount of data as well as computational resources are needed, leading to valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models. However, most of them focus on the models trained with images and texts. On the other hand, little attention has been paid to models trained with graph data, i.e., GNNs. In this paper, we fill the gap by proposing the first model stealing attacks against inductive GNNs. We systematically define the threat model and propose six attacks based on the adversary’s background knowledge and the responses of the target models. Our evaluation on six benchmark datasets shows that the proposed model stealing attacks against GNNs achieve promising performance.

JANUARY 2022
 

“Security and privacy in personal data management systems” by Nicolas Anciaux (Inria)

Date : 28 January 2022
Place : Web-conference

Personal Data Management Systems (PDMS) provide individuals with a hardware and/or software solution to manage their data under their own control. From a data management and security/privacy perspective, the issues involved are complex and differ significantly from the traditional database setting. The emergence of trusted execution environments (such as Intel SGX) could be a game changer. This presentation will aim to (1) review different approaches for PDMS and the potential contribution of trusted execution environments, and (2) discuss some issues related to the collective processing of personal data (citizen portability), compliance with user consent, and protection against information leakage at runtime.

JANUARY 2022
 

“Browser fingerprinting: past, present and possible future” by Pierre Laperdrix (CRIStAL)

Date : 21 January 2022
Place : Web-conference

Browser fingerprinting has grown a lot since its debut in 2010. By collecting specific information in the browser, one can learn a lot about a device and its configuration. It has been shown in previous studies that it can even be used to track users online, bypassing current tracking methods like cookies. In this presentation, we will look at how this technique works and present an overview of the research performed in the domain over the past decade. We will see how this technique is currently used online before looking at its possible future.

NOVEMBER 2021
 

“Ransomware Detection Using Markov Chain Models Over File Headers” by David Lubicz (DGA-MI)

Date : 26 November 2021
Place : Room Pétri-Turing

In this paper, a new approach for the detection of ransomware based on the runtime analysis of its behaviour is presented. The main idea is to obtain samples by using a mini-filter to intercept write requests, and then to decide whether a sample corresponds to a benign or a malicious write request. To do so, in a learning phase, statistical models of structured file headers are built using Markov chains. Then, in a detection phase, a maximum likelihood test is used to decide whether a sample provided by a write request is normal or malicious. We introduce new statistical distances between Markov chains, variants of the Kullback-Leibler divergence, which measure the efficiency of a maximum likelihood test at distinguishing between two distributions given by Markov chains. These distances and extensive experiments are used to demonstrate the relevance of our method.
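
A toy version of the learning-phase/detection-phase split (invented header data and scoring; the paper uses likelihood-ratio tests between two chains and KL-like distances):

    import math
    from collections import defaultdict

    def train(headers):
        """First-order Markov chain over header bytes, add-one smoothed."""
        counts = defaultdict(lambda: defaultdict(int))
        for h in headers:
            for a, b in zip(h, h[1:]):
                counts[a][b] += 1
        return {a: {b: (c + 1) / (sum(row.values()) + 256)
                    for b, c in row.items()}
                for a, row in counts.items()}

    def avg_loglik(model, header):
        """Average per-transition log-likelihood under the benign model."""
        probs = [model.get(a, {}).get(b, 1 / 256)
                 for a, b in zip(header, header[1:])]
        return sum(math.log(p) for p in probs) / len(probs)

    benign = [b"\x89PNG\r\n\x1a\n" + bytes(8)] * 100
    model = train(benign)
    print(avg_loglik(model, benign[0]))          # higher: looks normal
    print(avg_loglik(model, bytes(range(16))))   # much lower: flag as
                                                 # anomalous (e.g. encrypted)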

NOVEMBER 2021
 

“Formal security proofs in a post-quantum world” by Charlie Jacomme (CISPA)

Date : 19 November 2021
Place : Room Pétri-Turing

In recent years, formal methods for security and their associated tools have been used successfully both to find novel and complex attacks on many protocols [A] and to help in their standardization process. They however face a new challenge with the increasing probability of quantum computers coming into the real world: we need to be able to provide guarantees against quantum attackers.
In this talk, we will first present a broad overview of formal methods, outlining the general goal of the field and how we have strived toward it. We will then focus on the post-quantum issue by presenting the corresponding concrete challenges, and thus the multiple ways current computational proofs of security (proofs against any polynomial-time Turing machine attacker) can fail against a quantum attacker. We will then present the first-order logic over which Squirrel [B] is built, the BC logic, and show, building on the first part, where it fails to be post-quantum sound. In a third part, we will finally present our contribution: how we made the logic, and thus the Squirrel prover, post-quantum sound. We will conclude by discussing some more general challenges the field may face in the coming years.

[A] see, e.g., https://www.bbc.co.uk/news/technology-58719891 for a recent example
[B] https://squirrel-prover.github.io/

OCTOBER 2021
 

“Towards Security-Oriented Program Analysis” by Sébastien Bardin (CEA LIST)

Date : 29 October 2021
Place : Room Pétri-Turing

While digital security concerns increase, we face both an urgent demand for more and more code-level security analyses and a shortage of security experts; hence the need for techniques and tools able to automate part of these code-level security analyses. As source-level program analysis and formal methods for safety-critical applications have made tremendous progress in the past decades, it is extremely tempting to adapt them from safety to security. Yet security is not safety and, while still useful, a direct adaptation of safety-oriented program analysis to security scenarios remains limited in scope. In this talk, we will argue for the need for security-oriented program analysis. First, we will present some of the challenges faced by formal methods and program analysis in the context of code-level security scenarios: for example, security-oriented code analysis is better performed at the binary level, the attacker must be taken into account, and practical security properties deviate from standard reachability and invariance properties. Second, we will discuss some early results and achievements carried out within the BINSEC group at CEA LIST. In particular, we will show how techniques such as symbolic execution and SMT constraint solving can be tailored to a number of practical code-level security scenarios.


OCTOBER 2021
 

“Cyber operations, between strategic opportunities and operational constraints” by Stéphane Taillat (Académie Militaire de Saint Cyr-Coëtquidan)

Date : 01 October 2021
Place : Room Métivier

The use of digital operations and of cyberspace has become widespread in the management of international crises and conflicts. Yet none of the cyber operations conducted by, or attributed to, states has led to armed conflict or to a significant escalation of tensions. This talk therefore questions the strategic utility of digital operations and of the use of cyberspace as a theatre, a stake and a modality of geopolitical confrontation, focusing on the political, operational and organisational constraints that frame them.

APRIL 2021
 

“Squirrel: a new approach to computer-assisted proofs of protocols in the computational model” by David Baelde (ENS Cachan)

Date : 16 April 2021
Place : Web-conference

Formal methods have produced several approaches for proving that security protocols ensure the expected security and privacy properties. Most of the resulting tools analyze protocols in symbolic models, a.k.a. Dolev-Yao-style models. Security in the symbolic model does not imply security in the cryptographer’s standard model, the computational model, where attackers are arbitrary (PPTIME) Turing machines. Computer-assisted verification techniques for the computational model have appeared only recently, and are generally less flexible or less automated than in the symbolic model. In recent work, several colleagues and I have proposed a new approach, elaborating on the CCSA logic of Gergei Bana and Hubert Comon. We have implemented it in a new proof assistant, Squirrel, and validated it on a variety of case studies. In this talk, I will present this approach, its benefits, and some of the remaining challenges.

This is based on work with Stéphanie Delaune, Charlie Jacomme, Adrien Koutsos and Solène Moreau, which has been accepted at S&P’21.

APRIL 2021
 

“Cyber operations: between technology and international law. Attribution, evidence and responsibility” by Anne-Thida Norodom (Professor of Public Law, Université de Paris)

Date : 02 April 2021
Place : Web-conference

The purpose of this talk is to show how dependent the law is on technology when it comes to regulating cyber operations. The approach adopted is that of public international law, that is, the law applicable between states. While there is an international consensus on the applicability of international law to the digital domain, ongoing negotiations, notably at the UN, are stumbling over a number of questions relating to how international law should be applied. The talk will show that international law already has sufficient rules to govern cyber operations, but that their implementation must be refined, in particular in light of certain technical difficulties.


MARCH 2021
 

“Does Facebook use sensitive data for advertising?” by José González Cabañas (Universidad Carlos III de Madrid, Spain)

Date : 12 March 2021
Place : Web-conference

Large online platforms use personal data, for example your interests, to allow advertisers to reach you based on the things you like. But did you know that some of these interests are associated with sensitive information directly linked to your social profile? In this talk, I will discuss the definition of sensitive data under the European General Data Protection Regulation (GDPR): what does the GDPR say about its use, and how do large social networks like Facebook use it to deliver ads? We will see how many Facebook users around the world are labeled with sensitive information. Our findings show that a significant portion of Facebook users are tagged with potentially sensitive ad preferences on their profiles, which matters in terms of privacy. We will check whether there are differences between countries, and finally I will discuss the risks associated with this undesirable use of our data.


FEBRUARY 2021
 

“A formal study of injection-based attacks and some tools it will enable” by Pierre-François Gimenez (Inria Rennes, CentraleSupélec)

Date : 19 February 2021
Place : Web-conference

Many systems work by receiving instructions and processing them: e.g., a browser receives an HTML page, displays it and executes JavaScript scripts; a database receives a query and applies it to its data; an embedded system controlled through a protocol receives a message and processes it. When such instructions depend on user input, they are generally constructed by concatenation or insertion. This can lead to injection-based attacks, where the user input modifies the query’s intended semantics and causes a security breach. Protections do exist but are not sufficient, as they never tackle the origin of the problem: the language itself. We propose a new formal approach based on formal languages to assess risk, enhance static analysis, and enable new tools. This approach is general and can be applied to query, programming, and domain-specific languages as well as network protocols. We are setting up an ANR project to explore this subject in more depth. A minimal illustration of the concatenation problem is sketched below.
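As a concrete, deliberately toy illustration of the concatenation problem (not taken from the talk; the table and values are hypothetical), here is a minimal Python sketch using an in-memory SQLite database:

import sqlite3

# Hypothetical table and values, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is spliced into the query text, so its quote
# characters escape the data position and change the query's semantics.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())               # leaks [('s3cr3t',)]

# Safer: the input is bound as data and never parsed as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # prints []

The parameterized form keeps the input in the data plane of the language, which is arguably the kind of semantic boundary a formal-language treatment of injection aims to characterize.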


DECEMBER 2020
 

Canadian and Québec approaches to contact tracing
by Sébastien Gambs (Université du Québec à Montréal, Canada)

Date : 11 december 2020
Place : Web-conference

Contact tracing applications have been deployed in many countries as a complementary measure to fight Covid-19, by automatically notifying individuals who have been in contact with infected persons. However, the choice of design for a particular application is not innocent, as it has a direct impact on its security as well as on the privacy of its users. In this talk, I will review the contact tracing applications proposed in recent months in Canada and Québec, comparing in particular their security and privacy properties. Finally, I will conclude by discussing some ethical issues raised by the deployment of these applications.

 

NOVEMBER 2020
 

The PINED-RQ Family: Differentially Private Indexes for Range Query Processing in Clouds
by Tristan Allard (IRISA, Université de Rennes 1)

Date : 13 november 2020
Place : Web-conference

Performing non-aggregate range queries on cloud-stored data while achieving both privacy and efficiency is a challenging problem. With the PINED-RQ family of techniques, we propose constructing a differentially private index to an outsourced encrypted dataset. Efficiency is enabled by using a cleartext index structure to perform range queries. Security relies on both differential privacy (of the index) and semantic security (of the encrypted dataset). Our initial solution, PINED-RQ, develops algorithms for building and updating the differentially private index. Our recent proposals extend PINED-RQ with a parallel architecture for coping with high-rate incoming data. Compared to state-of-the-art secure-index-based range query processing approaches, PINED-RQ executes queries at least one order of magnitude faster, and its parallel extensions increase its throughput by at least one order of magnitude. The security of the PINED-RQ solutions is proved and their efficiency is assessed by extensive experimental validation. In this talk, I will introduce the PINED-RQ family of techniques by presenting the initial PINED-RQ proposal and then overviewing its parallel extensions. A toy sketch of the differentially private index idea is given below.
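As a rough illustration of what a differentially private index can look like — a noisy histogram over the cleartext attribute domain, published alongside the encrypted records — here is a toy Python sketch. The bin layout, the epsilon value and the clipping are illustrative assumptions, not PINED-RQ's actual construction:

import random

def laplace(scale):
    """Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_index(values, bins, epsilon):
    """Toy epsilon-DP histogram index over a numeric attribute.
    Each record falls in exactly one bin, so the sensitivity is 1 and
    Laplace(1/epsilon) noise per bin suffices; rounding and clipping
    are post-processing and do not weaken the guarantee."""
    counts = [0] * len(bins)
    for v in values:
        for i, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                counts[i] += 1
                break
    return [max(0, round(c + laplace(1.0 / epsilon))) for c in counts]

# The client reads only the noisy index to decide which encrypted
# buckets overlap a range query such as [20, 40).
bins = [(0, 20), (20, 40), (40, 60), (60, 80)]
ages = [random.randint(0, 79) for _ in range(1000)]
print(dp_index(ages, bins, epsilon=0.5))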

SEPTEMBER 2020
 

An evaluation of Symbolic Execution Systems and the benefits of compilation with SymCC
by Aurélien Francillon (Eurecom)

Date : 25 september 2020
Place : Room Métivier

In this talk I will discuss our recent work, together with Sebastian Poeplau, on symbolic execution. Symbolic execution has become a popular technique for software testing and vulnerability detection, in particular because it allows generating test cases for difficult-to-reach program paths. However, a major impediment to practical symbolic execution is speed, especially when compared to near-native-speed solutions like fuzz testing.
We first discuss an extensive evaluation (published at ACSAC 2019) of the current symbolic execution tools (Angr, Klee, Qsym). Most implementations transform the program under analysis to some intermediate representation (IR), which is then used as a basis for symbolic execution. There is a multitude of available IRs, and even more approaches to transform target programs into a respective IR. Therefore, we developed a methodology for systematic comparison of different approaches to symbolic execution; we then use it to evaluate the impact of the choice of IR and IR generation.
We will then present SymCC, our compilation-based approach to symbolic execution. SymCC is an LLVM-based C and C++ compiler that builds concolic execution right into the binary and performs better than state-of-the-art implementations by orders of magnitude. It can be used by software developers as a drop-in replacement for clang and clang++. Using SymCC on real-world software, we found that it consistently achieves higher coverage, and we discovered two vulnerabilities in the heavily tested OpenJPEG project, which have been confirmed by the project maintainers and assigned CVE identifiers.

SymCC received a distinguished paper award at Usenix Security 2020.
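To see the mechanism all these tools share, recall that branch guards along an execution become constraints handed to an SMT solver. The following toy Python sketch (using the z3-solver package; an illustration of the principle, not of SymCC's internals) solves for an input that random fuzzing would almost never stumble upon:

# Toy illustration of how a symbolic executor reaches a deep branch:
# the guard becomes a solver query. Requires: pip install z3-solver
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)                 # symbolic stand-in for a 32-bit input
s = Solver()
s.add(x * 7 + 3 == 0xDEADBEEF)      # path constraint guarding the branch

if s.check() == sat:
    print("input reaching the branch:", hex(s.model()[x].as_long()))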


JUNE 2020
 

Intriguing Properties of Adversarial ML Attacks in the Problem Space
by Fabio Pierazzi (King’s College London)

Date : 19 june 2020
Place : Web-conference

Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored. In this talk, I will present two major contributions from our recent IEEE Symp. S&P 2020 paper [1]. First, I will present our novel reformulation of adversarial ML evasion attacks for the problem space, with more constraints to consider than in the feature space, and with more light shed on the relationship between feature-space and problem-space attacks. Second, building on our reformulation, I will present our novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale that also evades feature-space defenses.

[1] Fabio Pierazzi*, Feargus Pendlebury*, Jacopo Cortellazzi, Lorenzo Cavallaro. “Intriguing Properties of Adversarial ML Attacks in the Problem Space”. IEEE Symp. Security & Privacy (Oakland), 2020.
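For contrast, the feature-space baseline that such work starts from fits in a few lines: perturb the feature vector against the gradient of the classifier's score (an FGSM-style step). The Python sketch below uses a linear model with hypothetical weights and dimensions; the hard part addressed in the paper is performing an equivalent move on a real Android app without breaking it:

import numpy as np

# Feature-space evasion (FGSM-style step) against a linear classifier
# score(x) = w.x + b, where score > 0 means "malicious". Weights,
# dimensions and epsilon are illustrative stand-ins.
rng = np.random.default_rng(0)
w = rng.normal(size=100)             # trained weights (stand-in)
b = 0.0
x = rng.normal(size=100) + 0.1 * w   # a sample scored as malicious

eps = 0.3
x_adv = x - eps * np.sign(w)         # step against the score gradient

print("score before:", float(w @ x + b))
print("score after :", float(w @ x_adv + b))   # strictly lower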



MAY 2020
 

QUIC: what should we expect from this new secure communication protocol?
by Olivier Levillain (Telecom Sudparis)

Date : 29 may 2020
Place : Web-conference

For several years, the major web players have been working to improve communications between their users and their services. These improvements may concern connection speed or the security of exchanges. QUIC is one of these ongoing efforts. It is a protocol being standardized at the IETF, which can be summarized as a protocol over UDP providing the services of TCP/SCTP, TLS 1.3 and HTTP/2.
Since QUIC is intended to replace the TLS/HTTP pair for a number of use cases, we found it worthwhile to look into it and to study the existing implementations.
Our work consisted in reimplementing portions of the QUIC standards with scapy, which allowed us to form an opinion on the complexity of QUIC, and then in testing the existing implementations against security constraints (implicit or explicit) drawn from the IETF drafts. Although we did not uncover any major problem, our conclusions are, on the one hand, that QUIC is too complex and, on the other hand, that most implementations are fragile and react incorrectly to some of our stimuli.

FEBRUARY 2020
 

Cybersecurity of industrial systems. Open problems and some ideas.
by Stéphane Mocanu (Inria Rhône-Alpes)

Date : 7 february 2020
Room : Pétri/Turing

Research in cybersecurity of SCADA systems is a relatively recent field, developed mainly in the last decade. Despite manufacturers’ progress in hardening device security, SCADA systems are still prone to severe vulnerabilities, and specialized countermeasures are still incipient. This talk presents the open problems seen from the boundary between computer science and control systems, i.e., the architectural view of the system (IT/OT/process). We will focus mainly on system modelling, vulnerability search and intrusion detection.

JANUARY 2020
 

Contemporary Issues in Digital Forensics
by Ben Martini (University of South Australia)

Date : 31 january 2020
Room : Pétri/Turing

The discipline of digital forensics, or as it was then known ‘forensic computing’, began with a focus on retrieving admissible evidence from computer systems (typically personal computers). However, with the increased pervasiveness of connected digital technologies in the last 20 years, a wide variety of new and complex sources of digital evidence have emerged. This has presented a range of opportunities and challenges for forensic practitioners.
In this presentation, I will discuss a selection of digital forensics research that I have conducted, with my colleagues and collaborators, in areas such as cloud forensics, mobile forensics and Internet of Things (IoT) forensics. We will look at the challenges of identifying, preserving, collecting and analysing evidence from these platforms, along with proposed solutions, and discuss the applicability of these techniques to the challenges of the next decade.

 

DECEMBER 2019
 

Intrusion Detection Systems over an Encrypted Traffic: Problem and Solutions
by Sébastien Canard (Orange)

Date : 6 december 2019
Room : Pétri/Turing

Privacy and data confidentiality are today at the heart of many discussions. But such data protection should not come at the detriment of other security aspects. In the context of network traffic, intrusion detection systems in particular become totally blind when the traffic is encrypted, leaving clients vulnerable again to known threats and attacks. Reconciling security and privacy is thus one of the major topics for which we should find relevant and scalable solutions that can be deployed as soon as possible. In this context, several recent papers propose performing Deep Packet Inspection over encrypted traffic, based on different cryptographic techniques. In this talk, we introduce the main difficulties in designing such solutions and give some details about two of them.

NOVEMBER 2019
 

Stream protection in digital television
by Eric Desmicht (DGA)

Date : 22 november 2019
Room : Pétri/Turing

In digital television, digital streams containing video are made available to users via different media (cable, satellite, DTT, IP…) and different techniques (broadcast, multicast, unicast, digital storage…). To guarantee the revenues of pay-TV operators offering high-value content, it is essential that only consumers who have paid for these services can actually access them. For France alone, some estimate the revenue loss due to piracy at 1.3 billion euros per year.
First, the profile of the attackers will be introduced, as well as the standards used to protect this digital content while guaranteeing interoperability. Then the evolution of security mechanisms over the last decades will be detailed. Starting from a decoder supplied to the subscriber by the broadcaster as a black box, we will see how the field has moved towards more open approaches, for example AndroidTV decoders. By detailing the specifications issued by the MovieLabs consortium, created by the major film studios (Warner, Fox, Disney, Universal…) to strengthen the confidentiality of decoded streams, the hardware and software techniques required of the most recent decoders will be introduced. Finally, the extension of security to the whole system, with watermarking, end-to-end protection and defense in depth, will conclude the presentation.

An introduction to the fundamentals of risk management and of EBIOS Risk Manager
by Vincent Loriot (ANSSI)

Date : 29 november 2019
Room : Aurigny

Outline:
a. Presentation of ANSSI
b. Cyber threat trends and the approach to protect against them
c. Fundamentals of the EBIOS Risk Manager method

The theoretical presentation will be complemented by a hands-on workshop covering the following topics:
a. Security needs and stakes
b. Risk Source / Target Objective pairs
c. Mapping the threats in the ecosystem

 

OCTOBER 2019
 

Subject Access Request and Proof of Ownership
by Cédric Lauradoux (INRIA Rhône-Alpes)

Date : 25 october 2019
Room : Pétri/Turing

The GDPR (General Data Protection Regulation) provides rights over our data: access, rectification, objection, etc. However, the regulation does not specify how these rights can be exercised. Data controllers have therefore deployed various methods to authenticate subject requests. We have analyzed how this authentication process can fail and examined its consequences. Our study shows that a key concept is missing from the GDPR: proof of ownership of our data.


JUNE 2019
 

A Formal Analysis of 5G Authentication
by Sasa Radomirovic (University of Dundee)

Date : 3 june 2019
Room : Métivier

Mobile communication networks connect much of the world’s population. The security of users’ calls, text messages, and mobile data depends on the guarantees provided by the Authenticated Key Exchange protocols used. For the next-generation network (5G), the 3GPP group has standardized the 5G AKA protocol for this purpose.
In this talk, I will report on our formalization of the 5G standard’s AKA protocol and show how the application of symbolic protocol verification has helped us discover and repair authentication flaws in the standard.
Joint work with David Basin, Jannik Dreier, Lucca Hirschi, Ralf Sasse, and Vincent Stettler.

MARCH 2019
 

Voting : You Can’t Have Privacy without Individual Verifiability
by Joseph Lallemand (Loria)

Date : 1 march 2019
Room : Pétri/Turing

Electronic voting typically aims at two main security goals: vote privacy and verifiability. These two goals are often seen as antagonistic and some national agencies even impose a hierarchy between them: first privacy, and then verifiability as an additional feature. Verifiability typically includes individual verifiability (a voter can check that her ballot is counted); universal verifiability (anyone can check that the result corresponds to the published ballots); and eligibility verifiability (only legitimate voters may vote). We show that actually, privacy implies individual verifiability. In other words, systems without individual verifiability cannot achieve privacy (under the same trust assumptions). To demonstrate the generality of our result, we show this implication in two different settings, namely cryptographic and symbolic models, for standard notions of privacy and individual verifiability. Our findings also highlight limitations in existing privacy definitions in cryptographic settings.

FEBRUARY 2019
 

A Compositional and Complete approach to Verifying Privacy Properties using the Applied Pi-calculus
by Ross Horne (University of Luxembourg)

Date : 8 february 2019
Room : Pétri/Turing

The applied pi-calculus was introduced for verifying cryptographic protocols by Abadi and Fournet in 2001. They proposed an equivalence technique, called bisimilarity, useful for verifying privacy properties. It is widely acknowledged (cf. Paige and Tarjan 1987) that bisimilarity is more efficient to check than trace equivalence; surprisingly, however, tools based on the applied pi-calculus typically still implement trace equivalence. I suggest this may be attributed to two problems:
1. Abadi and Fournet did not publish proofs following their 2001 conference paper until a journal version in 2018 with Blanchet. This perhaps reduced the community’s confidence in bisimilarity. Besides providing proofs, the journal version adjusts definitions to avoid some well-known limitations of the original presentation.
2. To efficiently implement bisimulation for extensions of the pi-calculus, we typically require a bisimilarity congruence, and no bisimilarity congruence had been proposed for the applied pi-calculus.
To address the second point, I propose a bisimilarity congruence for the applied pi-calculus. I argue that the definition I provide is optimal, and show that it is sufficiently strong to verify privacy properties. The definition makes use of recent advances in concurrency theory that were not available prior to LICS 2018. Furthermore, these results lead us to the first sound and complete modal logic for the applied pi-calculus, which can specify an attack if and only if an attack exists.

JANUARY 2019
 

Privacy in The 5G-AKA Authentication Protocol
by Adrien Koutsos (LSV)

Date : 18 january 2019
Room : Métivier

The 5G mobile communications standards are being finalized, and drafts are now available. This standard describes the 5G-AKA authentication and key exchange protocol. The previous versions of AKA (3G and 4G) are well known for their lack of privacy against an active adversary (e.g., a user can be massively tracked using IMSI-catchers). This new version of AKA tries to offer more privacy, notably through the use of asymmetric randomized encryption of the users’ permanent identities. Our first contribution is to show that, while this prevents the mass surveillance enabled by IMSI-catchers, it is not sufficient for privacy. In particular, all the other known privacy attacks against 3G and 4G AKA carry over to 5G-AKA. We also found a new type of vulnerability affecting stateful authentication protocols.
Our main contribution is the following: we modify the 5G-AKA protocol to prevent all known privacy attacks. We do this while keeping the cost and efficiency constraints under which the 5G-AKA protocol was designed. In particular, we make limited use of randomness, relying on a sequence number whenever 5G-AKA does. This sequence number has to be maintained by the user and the network, making the protocol stateful. Because of this statefulness, our modified 5G-AKA protocol is not unlinkable. Still, we show that our protocol satisfies a weaker notion of unlinkability called $\sigma$-unlinkability. This is a new security notion, which allows for a finer-grained quantification of the privacy provided by a protocol. The security proof is carried out in the Bana-Comon indistinguishability logic, which is well suited for stateful complex protocols. We also prove mutual authentication between the user and the network as a secondary result.

DECEMBER 2018
 

Splitting the Linux Kernel for Fun and Profit
by Chris Dalton (HP, Bristol, UK)

Date : 7 december 2018
Room : Aurigny

This talk looks at a pragmatic attempt at strengthening the security properties of Linux by introducing a degree of intra-kernel protection into the Linux kernel (supported by CPU virtualization silicon features). Consider it, if you like, an attempt at retrofitting a ‘micro-kernel’ interface into the traditionally monolithic Linux kernel, whilst maintaining a single Linux code base (yes, it is still Linux), requiring no hypervisor, and with reasonable performance characteristics. The motivation for the work was the frightening increase in reliance on the security properties of the Linux kernel, driven by trends away from full-virtualization solutions such as VMware and KVM and towards lighter-weight containment approaches, led by Docker et al., for application hosting, deployment and consolidation.

 

NOVEMBER 2018
 

Machine Learning for Computer Security Detection Systems: Practical Feedback and Solutions
by Anaël Beaugnon (ANSSI)

Date : 30 november 2018
Room : Métivier

Machine learning based detection models can strengthen detection, but there remain some significant barriers to the widespread deployment of such techniques in operational detection systems. In this presentation, we identify the main challenges to overcome and provide both methodological guidance and practical solutions to address them. The solutions we present are completely generic, so as to benefit any detection problem on any data type, and are freely available in SecuML.
The content of the presentation is mostly based on my PhD thesis “Expert-in-the-Loop Supervised Learning for Computer Security Detection Systems”.

SEPTEMBER 2018
 

Binary Edwards Curves for intrinsically secure ECC implementations for the IoT
by Antoine Loiseau (CEA)

Date: 7 september 2018
Room: Petri/Turing

Even if recent advances in public key cryptography tend to focus on algorithms able to survive the post-quantum era, there is at present an urgent need for fast, low-power and securely implemented cryptography to address the immediate security challenges of the IoT. In this talk, we present a new set of Binary Edwards Curves which have been defined to achieve the highest security levels (up to 284-bit security) and whose parameters have been chosen to fit IoT devices embedding 32-bit general purpose processors. We optimize the choice of the point generator with the w-coordinate to save a multiplication in the addition and doubling formulae, and manage to compute one step of the Montgomery ladder in 4 multiplications and 4 squarings. On top of the performance benefits, cryptography over such curves has some intrinsic security properties against physical attacks. The general shape of the ladder is sketched below.
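For readers unfamiliar with it, the Montgomery ladder is the scalar-multiplication loop such formulas plug into: one addition and one doubling per scalar bit, in the same order whatever the bit's value, which keeps the control flow regular against simple side-channel attacks. Below is a generic Python sketch over an abstract group; the talk's w-coordinate formulas would supply the add/double callbacks, and the toy instantiation is purely illustrative:

def montgomery_ladder(k, P, add, double, identity):
    """Compute k*P with one add and one double per scalar bit; the
    invariant R1 == R0 + P holds throughout the loop."""
    R0, R1 = identity, P
    for bit in bin(k)[2:]:                 # scan bits MSB-first
        if bit == "1":
            R0, R1 = add(R0, R1), double(R1)
        else:
            R0, R1 = double(R0), add(R0, R1)
    return R0

# Toy instantiation: the additive group of integers mod 101, where the
# expected result is simply k*P mod 101.
n = 101
print(montgomery_ladder(77, 5,
                        add=lambda a, b: (a + b) % n,
                        double=lambda a: 2 * a % n,
                        identity=0))       # prints 82 == 77*5 % 101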

 

JULY 2018
 

How to decrypt without keys with GlobalPlatform SCP02 protocol
by Loic Ferreira (Orange Labs, IRISA)

Date: 6 july 2018
Room: Petri/Turing

The GlobalPlatform SCP02 protocol is a security protocol implemented in smart cards, and used by transport companies, in the banking world and by mobile network operators (UICC/SIM cards). We describe how to perform a padding oracle attack against SCP02. The attack allows an adversary to efficiently retrieve plaintext bytes from an encrypted data field. We provide results of our experiments done with 10 smart cards from six different card manufacturers, and show that, in our experimental setting, the attack is fully practical. Given that billions of SIM cards are produced every year, the number of affected cards, although difficult to estimate, is potentially high. To the best of our knowledge, this is the first practical attack against SCP02. The principle behind such attacks is sketched below.
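To give an idea of the principle (the SCP02 specifics, such as its exact ISO 9797-1 style padding and DES-based CBC mode, differ from this toy), here is a minimal Python sketch of a CBC padding oracle recovering one plaintext byte. The block "cipher" is a stand-in XOR permutation, since the attack only ever uses the oracle as a black box:

import os

BLOCK = 8
KEY = os.urandom(BLOCK)

def E(block):
    """Stand-in block 'cipher' (byte-wise XOR with the key). The attack
    below never looks inside E, so any block cipher would do."""
    return bytes(b ^ k for b, k in zip(block, KEY))

D = E  # XOR is its own inverse

def oracle(prev_block, ct_block):
    """Decrypt one CBC block and reveal only whether the last byte looks
    like the start of an ISO 9797-1 style pad (0x80). A real oracle is,
    e.g., a card returning distinct status words for bad padding."""
    pt = bytes(d ^ p for d, p in zip(D(ct_block), prev_block))
    return pt[-1] == 0x80

# Victim ciphertext: one CBC-encrypted block chained after `prev`.
prev = os.urandom(BLOCK)
secret = b"SECRETS!"
ct = E(bytes(s ^ p for s, p in zip(secret, prev)))

# Attack: tweak the last byte of a forged preceding block until the pad
# check passes; the last plaintext byte then falls out by XOR.
for guess in range(256):
    forged = bytes(BLOCK - 1) + bytes([guess])
    if oracle(forged, ct):
        recovered = guess ^ 0x80 ^ prev[-1]
        print("recovered last plaintext byte:", bytes([recovered]))
        break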

JUNE 2018
 

A theory of assertions for Dolev-Yao models
by Vaishnavi Sundararajan (Chennai Mathematical Institute, India)

Date: 8 june 2018
Room: Amphitheater – Inria Convention center

We undertake an abstract study of certification in security protocols, concentrating on the logical properties and derivability of certificates. Specifically, we extend the Dolev-Yao model with a new class of objects called ‘assertions’, along with an associated algebra for deriving new assertions from old ones. We obtain complexity results for the derivability problem and active intruder problem for this model, and provide a case study via the FOO e-voting protocol.

 

MARCH 2018
 

Anonymous Server-Aided Verification
by Elena Pagnin (Chalmers University of Technology, Sweden)

Date: 30 march 2018
Room: Métivier

Server-Aided Verification (SAV) is a method that can be employed to speed up the process of verifying signatures by letting the verifier outsource part of its computation load to a third party. Achieving fast and reliable verification under the presence of an untrusted server is an attractive goal in cloud computing and internet of things scenarios.
In this talk I will describe a simple and powerful framework for SAV and present a security model that refines existing ones while covering the new notions of SAV-anonymity and extended unforgeability. In addition, I will present the first generic transformation from any signature scheme to a single-round SAV scheme that incorporates verifiable computation.


“Automated verification of privacy-type properties for security protocols”
by Ivan Gazeau (LORIA, Inria Nancy)

Date: 16 march 2018
Room: Petri/Turing

The applied pi-calculus is a powerful framework to model protocols and to define security properties. In this symbolic model, it is possible to verify automatically complex security properties such as strong secrecy, anonymity and unlinkability properties which are based on equivalence of processes.
In this talk, we will see an overview of the verification method used by the tool Akiss. The tool is able to handle:
- a wide range of cryptographic primitives (in particular, Akiss is the only tool able to verify equivalence properties for protocols that use xor);
- protocols with else branches (the treatment of disequalities is often complicated).
We will also provide some insights on how interleaving due to concurrency can be effectively handled.


The Internet of Backdoors
by Sam L. Thomas (University of Birmingham, UK)

Date: 7 march 2018
Room: Petri/Turing

Complex embedded devices are becoming ever more prevalent in our everyday lives, yet only a tiny number of people consider the potential security and privacy implications of attaching such devices to our home, business and government networks. As demonstrated through recent publications from academia and blog posts from numerous industry figures, these devices are plagued by poor design choices concerning end-user security. What is even more worrying are reports of manufacturers inserting backdoor-like functionality into the production firmware of those devices.
This talk will provide a precise definition of the term backdoor and outline a framework we have devised for reasoning about such constructs. We will discuss the main challenges in backdoor detection, and present two tools we have developed to perform backdoor detection in a semi-automated manner. We will demonstrate the effectiveness of our methods through a number of case studies of real-world backdoors.

FEBRUARY 2018
 

“When Good Components Go Bad: Formally Secure Compilation Despite Dynamic Compromise” by Catalin Hritcu (Inria Paris)

Date: 5 february 2018
Room: Petri/Turing

We propose a new formal criterion for secure compilation, providing strong end-to-end security guarantees for components written in unsafe, low-level languages with C-style undefined behavior. Our criterion is the first to model dynamic compromise in a system of mutually distrustful components running with least privilege. Each component is protected from all the others until it becomes compromised by exhibiting undefined behavior, opening the door for an attacker to take control over the component and to use the component’s privileges to attack the remaining uncompromised components. More precisely, we ensure that dynamically compromised components cannot break the safety properties of the system at the target level any more than equally privileged components without undefined behavior already could in the source language. To illustrate this model, we build a secure compilation chain for an unsafe language with buffers, procedures, and components. We compile this to a simple RISC abstract machine with built-in compartmentalization and provide machine-checked proofs in Coq showing that this compiler satisfies our secure compilation criterion. Finally, we show that the protection guarantees offered by the compartmentalized abstract machine can be achieved at the machine-code level using either software fault isolation or tag-based reference monitoring.


Breaking and fixing HB+DB: A Short Tale of Provable vs Experimental Security and Lightweight Designs
by Ioana Boureanu (University of Surrey)

Date: 2 february 2018
Room: Petri/Turing

HB+ is a well-known authentication scheme purposely designed to be lightweight. However, HB+ is vulnerable to a key-recovery man-in-the-middle (MiM) attack dubbed GRS. To counter this, at WiSec 2015, the HB+DB protocol added a distance-bounding dimension to HB+, which was experimentally shown to counteract the GRS attack.
In this talk, we will exhibit a number of security flaws in the HB+DB protocol. Some attacks are authentication-driven, others relate to distance bounding. What is more, we will show that a small refinement of the GRS strategy still leads to key recovery in HB+DB, undeterred by its distance-bounding dimension. We will also propose a new distance-bounding protocol called BLOG, which is based on HB+DB but is provably secure, enjoys better (asymptotic) security, and is more lightweight. A sketch of the HB+ round structure underlying these protocols is given below.
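For background, an HB+ authentication consists of many rounds of the following shape: the tag proves knowledge of two shared binary vectors through inner products deliberately flipped with small probability, and the verifier accepts if the error rate stays close to the noise rate. Here is a toy Python sketch with illustrative parameters (the GRS attack exploits this structure by perturbing the reader's challenge across rounds and observing how the acceptance rate shifts):

import random

N, ETA, ROUNDS = 32, 0.125, 100    # vector length, noise rate, rounds (illustrative)

def dot(a, b):
    """Inner product over GF(2)."""
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def rand_vec():
    return [random.randint(0, 1) for _ in range(N)]

x, y = rand_vec(), rand_vec()      # secrets shared by tag and reader

errors = 0
for _ in range(ROUNDS):
    b = rand_vec()                             # tag's blinding vector
    a = rand_vec()                             # reader's challenge
    noise = 1 if random.random() < ETA else 0  # Bernoulli(ETA) bit flip
    z = dot(a, x) ^ dot(b, y) ^ noise          # tag's noisy response
    if z != dot(a, x) ^ dot(b, y):             # reader recomputes and compares
        errors += 1

# Accept if the fraction of wrong rounds is consistent with ETA.
print("accept" if errors <= 1.5 * ETA * ROUNDS else "reject", f"({errors} noisy rounds)")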
