Accepted Papers

All accepted papers at ARES 2024: Full Papers, Short Papers, and Systematization of Knowledge (SoK) Papers.
ARES papers are listed in no particular order, followed by workshops organized alphabetically, with their accepted papers.



ARES

Digital Forensic Artifacts of FIDO2 Passkeys in Windows 11
Patricio Domingues (Polytechnic Institute of Leiria, Portugal), Miguel Frade (Polytechnic Institute of Leiria, Portugal), Miguel Negrao (Polytechnic Institute of Leiria, Portugal)
Full Paper
FIDO2's passkey aims to provide a passwordless authentication solution. It relies on two main protocols -- WebAuthn and CTAP2 -- for authentication in computer systems, relieving users from the burden of using and managing passwords.

FIDO2's passkey leverages asymmetric cryptography to create a unique public/private key pair for website authentication. While the public key is kept at the website/application, the private key is created and stored on the authentication device, designated the authenticator. The authenticator can be the computer itself (same-device signing) or another device (cross-device signing), such as an Android smartphone that connects to the computer through a short-range communication method (NFC, Bluetooth).

Authentication is performed by the user unlocking the authenticator device.

In this paper, we report on the digital forensic artifacts left on Windows 11 systems by registering and using passkeys to authenticate on websites. We show that digital artifacts are created in Windows Registry and Windows Event Log.

These artifacts enable the precise dating and timing of passkey registration, as well as the usage and identification of the websites on which they have been activated and utilized. We also identify digital artifacts created when Android smartphones are registered and used as authenticators in a Windows system. This can prove useful in detecting the existence of smartphones linked to a given individual.
Combinatorial Testing Methods for Reverse Engineering Undocumented CAN Bus Functionality
Christoph Wech (SBA Research, Austria), Reinhard Kugler (SBA Research, Austria), Manuel Leithner (SBA Research, Austria), Dimitris E. Simos (SBA Research, Austria)
Short Paper
Modern vehicles such as cars, ships, and planes are increasingly managed using Electronic Control Units (ECUs) that communicate over a Controller Area Network (CAN) bus. While this approach offers enhanced functionality, efficiency, and robustness against electric disturbances and electronic interference, it may also be used for unforeseen or malicious purposes ranging from aftermarket modifications to full-fledged attacks threatening the passengers' safety. The ability to conduct in-depth tests is thus vital to protect against these issues. However, much of the functionality of ECUs is proprietary or undocumented. To alleviate this obstacle, this work presents a reverse engineering approach using high-coverage test sets produced using combinatorial testing (CT) methods. Our results indicate that this technique is promising for exciting unknown functionality, although challenges regarding the presence of hidden state and high-accuracy oracles are yet to be overcome.
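As a rough illustration of the combinatorial testing idea (not the authors' tooling), the following Python sketch greedily builds a pairwise (2-way) covering test set over hypothetical CAN message parameters: every pair of parameter values appears in at least one test, using far fewer tests than the full cartesian product would require.

```python
from itertools import combinations, product

def pairwise_cover(parameters):
    """Greedily generate a small test set covering every pair of parameter
    values (2-way combinatorial coverage). Illustrative only; real CT tools
    use optimized covering-array generators and higher strengths."""
    keys = list(parameters)
    # All (param_a, value_a, param_b, value_b) pairs that must appear somewhere.
    uncovered = set()
    for a, b in combinations(keys, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((a, va, b, vb))

    tests = []
    while uncovered:
        # Pick the candidate test covering the most still-uncovered pairs.
        best, best_gain = None, -1
        for candidate in product(*(parameters[k] for k in keys)):
            test = dict(zip(keys, candidate))
            gain = sum(1 for (a, va, b, vb) in uncovered
                       if test[a] == va and test[b] == vb)
            if gain > best_gain:
                best, best_gain = test, gain
        tests.append(best)
        uncovered = {(a, va, b, vb) for (a, va, b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return tests
```

The parameter names below (`id`, `dlc`, `payload`) are hypothetical placeholders for CAN message fields, chosen only to make the example concrete.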
Prov2vec: Learning Provenance Graph Representation for Anomaly Detection in Computer Systems
Bibek Bhattarai (Intel, United States), H. Howie Huang (George Washington University, Washington DC, USA, United States)
Full Paper
Modern cyber attackers use advanced zero-day exploits, highly targeted spear phishing, and other social engineering techniques to gain access, and employ evasion techniques to maintain a prolonged presence within the victim network while working gradually towards their objective. To minimize the damage, it is necessary to detect these Advanced Persistent Threats (APTs) as early in the campaign as possible. This paper proposes Prov2vec, a system for the continuous monitoring of enterprise hosts' behavior to detect attackers' activities.

It leverages the data provenance graph built using system event logs to get complete visibility into the execution state of an enterprise host and the causal relationship between system entities. It proposes a novel provenance graph kernel to obtain the canonical representation of the system behavior, which is compared against its historical behaviors and that of other hosts to detect the deviation from the norm. These representations are used in several machine learning models to evaluate their ability to capture the underlying behavior of an endpoint host. We have empirically demonstrated that the provenance graph kernel produces a much more compact representation compared to existing methods while improving prediction ability.
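A graph kernel of the general kind the abstract refers to can be sketched with Weisfeiler-Lehman relabeling; this is a generic illustration, not the Prov2vec kernel itself. Each node's label is iteratively hashed together with its neighbors' labels, and the resulting label histogram serves as a compact, comparable signature of the graph's structure.

```python
import hashlib
from collections import Counter

def wl_histogram(nodes, edges, labels, iterations=2):
    """Weisfeiler-Lehman relabeling sketch: each round, a node's label is
    replaced by a hash of its own label plus the sorted labels of its
    neighbors; the histogram of all labels seen acts as the graph signature.
    Edges are treated as undirected for simplicity."""
    neigh = {n: [] for n in nodes}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    cur = dict(labels)
    hist = Counter(cur.values())
    for _ in range(iterations):
        nxt = {}
        for n in nodes:
            sig = cur[n] + "|" + ",".join(sorted(cur[m] for m in neigh[n]))
            nxt[n] = hashlib.sha256(sig.encode()).hexdigest()[:8]
        cur = nxt
        hist.update(cur.values())
    return hist
```

Two structurally identical provenance graphs (e.g. process-reads-file patterns) yield identical histograms regardless of node identifiers, which is what allows comparison against historical behavior.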
Investigating HTTP Covert Channels Through Fuzz Testing
Kai Hölk (FernUniversität in Hagen, Germany), Wojciech Mazurczyk (Warsaw University of Technology, Poland), Marco Zuppelli (Institute for Applied Mathematics and Information Technologies, Italy), Luca Caviglione (CNR - IMATI, Italy)
Full Paper
Modern malware increasingly deploys network covert channels to prevent detection or bypass firewalls. Unfortunately, discovering in advance which protocol fields and functional behaviors can be abused for cloaking information is difficult. From this perspective, fuzz testing could represent a valuable approach to address the tight relationship between the hiding scheme used and the targeted protocol trait.

Therefore, this paper explores whether basic fuzzing techniques can be effective in quantifying the "susceptibility" of ubiquitous HTTP conversations to information hiding attempts. To this aim, we conducted a thorough test campaign considering three different covert channels hidden in traffic exchanged with 1,000 real Web destinations. Results indicate that fuzzing should be considered a valid technique to investigate how HTTP can be altered to cloak data and to define the theoretical limits of covert channels when deployed on the Internet.
HeMate: Enhancing Heap Security through Isolating Primitive Types with Arm Memory Tagging Extension
Yu-Chang Chen (National Taiwan University, Taiwan), Shih-Wei Li (National Taiwan University, Taiwan)
Full Paper
Memory safety vulnerabilities are a significant challenge for programming languages like C and C++. Among these vulnerabilities, heap-based issues have become more prevalent in recent years. Exploiting them allows adversaries to perform arbitrary memory reads, writes, and even code execution. The Memory Tagging Extension (MTE), introduced in the Arm v8.5-A processor architecture, is a hardware security feature designed to mitigate such issues. MTE has been utilized in modern software to implement probabilistic protection for heap-based memory safety vulnerabilities, including use-after-free and heap-based buffer overflow. However, because the protection existing MTE-based approaches offer is probabilistic, they are vulnerable to brute-force attacks. Moreover, these approaches offer inter-object isolation but remain vulnerable to intra-object overflow. These shortcomings leave opportunities for adversaries to exploit vulnerabilities.

Adversaries tend to leverage memory confusion to manipulate or leak pointers, leading to arbitrary memory read/write and code execution. In response to this observation, we propose a novel usage of MTE, called HeMate, that isolates memory storing different types of data on the heap to prevent such exploitation. This approach provides a non-probabilistic constraint on vulnerability exploitation. We have implemented a HeMate prototype compiler for C language programs based on the LLVM framework. Our approach effectively leverages MTE to protect against intra-object overflow vulnerabilities and brute-force attacks, against which previous approaches offer no protection.
Adversary Tactic Driven Scenario and Terrain Generation with Partial Infrastructure Specification
Ádám Ruman (Masaryk University, Czechia), Martin Drašar (Masaryk University, Czechia), Lukáš Sadlek (Masaryk University, Czechia), Shanchieh Jay Yang (Rochester Institute of Technology, United States), Pavel Čeleda (Masaryk University, Czechia)
Full Paper
Diverse, correct, and up-to-date training environments are instrumental for training cybersecurity experts and autonomous systems alike. However, their preparation is time-consuming and requires experts to supply detailed specifications. In this paper we explore the challenges of automated generation of key elements of such environments: complex attack plans – scenarios – that lead to a user-defined adversary goal, and infrastructure specifications – terrains – that enable the attack plan to be executed.

We propose new models to represent the cybersecurity domain and associated action spaces. These models are used to create sound and complex scenarios and terrains, based on partial specifications provided by users. We compare the results with a real-world complex malware campaign scenario to assess the realism of the produced scenarios. To further evaluate the correctness and variability of the results, we utilize the killchain attack graph generation to distill attack graphs for generated terrains and compare them with the respective scenarios to assess correct correspondence in generated scenario-terrain pairs.

Our results demonstrate that our approach is able to create terrains and non-linear scenarios of complexity similar to advanced malware campaigns. Further quantitative evaluation shows that the generated terrains represent their respective scenarios, as evaluated with attack graph analysis, regardless of terrain and scenario size.

To the best of our knowledge, our proposed approach and its implementation represent a significant leap in the state of the art and enable novel approaches to cybersecurity training and autonomous system development.
Secure Noise Sampling for DP in MPC with Finite Precision
Hannah Keller (Aarhus University, Denmark), Helen Möllering (McKinsey & Company, Germany), Thomas Schneider (Technical University of Darmstadt, Germany), Oleksandr Tkachenko (DFINITY Foundation, Germany), Liang Zhao (Technical University of Darmstadt, Germany)
Full Paper
While secure multi-party computation (MPC) protects the privacy of the inputs and intermediate values, it has to be combined with differential privacy (DP) to protect the privacy of the outputs. For this reason, MPC is used to generate noise and add this noise to the output. However, securely generating and adding this noise is a challenge considering real-world implementations on finite-precision computers, since many DP mechanisms guarantee privacy only when noise is sampled from continuous distributions requiring infinite precision.

We introduce efficient MPC protocols that securely realize noise sampling for several plaintext DP mechanisms that are secure against existing precision-based attacks: the discrete Laplace and Gaussian mechanisms, the snapping mechanism, and the integer-scaling Laplace and Gaussian mechanisms. Due to their inherent trade-offs, the favorable mechanism for a specific application depends on the available computation resources, type of function evaluated, and desired (epsilon, delta)-DP guarantee.

The benchmarks of our protocols implemented in the state-of-the-art MPC framework MOTION (Braun et al., TOPS'22) demonstrate highly efficient online runtimes of less than 32 ms/query and down to about 1 ms/query with batching in the two-party setting. The respective offline phases are also practical, requiring only 51 ms to 5.6 seconds/query depending on the batch size.
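For reference, the plaintext discrete Laplace mechanism that some of these protocols realize securely in MPC can be sketched as follows: sampling the difference of two i.i.d. geometric variables yields P(k) ∝ exp(-|k|/scale) over the integers, avoiding the infinite-precision issues of continuous Laplace noise. The function and parameter names are illustrative, not the paper's API.

```python
import math
import random

def discrete_laplace(scale, rng=random):
    """Plaintext discrete Laplace sampler over the integers,
    P(k) proportional to exp(-|k|/scale), obtained as the difference of
    two i.i.d. geometric random variables on {0, 1, 2, ...}."""
    alpha = math.exp(-1.0 / scale)  # success probability is 1 - alpha

    def geom():
        # Inverse-transform sample: number of failures before first success.
        u = rng.random()
        return int(math.floor(math.log(1 - u) / math.log(alpha)))

    return geom() - geom()
```

Adding a sample with `scale = sensitivity / epsilon` to an integer-valued query result is the standard plaintext use of this mechanism; the paper's contribution is realizing such sampling inside MPC so no party ever sees the raw noise.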
Compromising anonymity in identity-reserved k-anonymous datasets through aggregate knowledge
Kevin De Boeck (KU Leuven - DistriNet, Belgium), Jenno Verdonck (KU Leuven - DistriNet, Belgium), Michiel Willocx (KU Leuven - DistriNet, Belgium), Jorn Lapon (KU Leuven - DistriNet, Belgium), Vincent Naessens (KU Leuven - DistriNet, Belgium)
Full Paper
Data processors increasingly rely on external data sources to improve strategic or operational decision-making. Data owners can facilitate this by releasing datasets directly to data processors or indirectly via data spaces. As data processors often have different needs, and due to the sensitivity of the data, multiple anonymized versions of an original dataset are often released. However, doing so can introduce severe privacy risks.

This paper demonstrates the emerging privacy risks when curious -- potentially colluding -- service providers obtain an identity-reserved and aggregated k-anonymous version of the same dataset. We build a mathematical model of the attack and demonstrate its applicability in the presence of attackers with different goals and computing power. The model is applied to a real world scenario and countermeasures are presented to mitigate the attack.
Let Them Drop: Scalable and Efficient Federated Learning Solutions Agnostic to Stragglers
Riccardo Taiello (Inria Sophia Antipolis - EURECOM - University Cote d’Azur, France), Melek Önen (EURECOM, France), Clémentine Gritti (EURECOM, France), Marco Lorenzi (Inria Sophia Antipolis - University Cote d’Azur, France)
Full Paper
Secure Aggregation (SA) stands as a crucial component in modern Federated Learning (FL) systems, facilitating collaborative training of a global machine learning model while protecting the privacy of individual clients' local datasets. Many existing SA protocols described in the FL literature operate synchronously, leading to notable runtime slowdowns due to the presence of stragglers (i.e. late-arriving clients).

To address this challenge, one common approach is to treat stragglers as client failures and use SA solutions that are robust against dropouts. While this approach indeed works, it unfortunately affects the performance of the protocol, as its cost strongly depends on the dropout ratio, and this ratio increases significantly when stragglers are counted as dropouts.

Another approach explored in the literature to address stragglers is to introduce asynchronicity into the FL system. Very few SA solutions exist in this setting and currently suffer from high overhead.

In this paper, similar to related work, we propose to handle stragglers as client failures but design SA solutions that do not depend on the dropout ratio, so that an unavoidable increase in this metric does not affect the performance of the solution. We first introduce Eagle, a synchronous SA scheme designed to depend not on client failures but only on the online users' inputs. This approach offers better computation and communication costs than existing solutions under realistic settings where the number of stragglers is high.

We then propose Owl, the first SA solution that is suitable for the asynchronous setting and once again considers online clients' contributions only.

We implement both solutions and show that: (i) in a synchronous FL setting with realistic dropout rates (taking potential stragglers into account), Eagle outperforms the best SA solution, namely Flamingo, by a factor of 4; (ii) in the asynchronous setting, Owl exhibits the best performance compared to the state-of-the-art solution LightSecAgg.
Security Analysis of a Decentralized, Revocable and Verifiable Attribute‑Based Encryption Scheme
Thomas Prantl (Julius-Maximilians-Universität Würzburg, Germany), Marco Lauer (Julius-Maximilians-Universität Würzburg, Germany), Lukas Horn (Julius-Maximilians-Universität Würzburg, Germany), Simon Engel (Julius-Maximilians-Universität Würzburg, Germany), David Dingel (Julius-Maximilians-Universität Würzburg, Germany), André Bauer (University of Chicago, United States), Christian Krupitzer (University of Hohenheim, Germany), Samuel Kounev (Julius-Maximilians-Universität Würzburg, Germany)
Full Paper
In recent years, digital services have experienced significant growth, exemplified by platforms like Netflix achieving unprecedented revenue levels. Some of these services employ subscription models, with certain content requiring additional payments or offering third-party products. To ensure the widespread availability of diverse digital services anytime and anywhere, providers must have control over content accessibility. To address the multifaceted challenges in this domain, one promising solution is the adoption of attribute-based encryption (ABE). Over the years, various approaches have been proposed in the literature, offering a wide range of features. In a prior study [Anonymised source], we assessed the security of one of these proposed approaches and identified one that did not meet its promised security standards. This research focuses on conducting a security analysis of another ABE scheme to pinpoint its shortcomings and emphasize the critical importance of evaluating the safety and effectiveness of newly proposed schemes. Specifically, we uncover an attack vector within this ABE scheme which enables malicious users to decrypt content without the required permissions or attributes. Furthermore, we propose a solution to rectify this identified vulnerability.
A Large-Scale Study on the Prevalence and Usage of TEE-based Features on Android
Davide Bove (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
Full Paper
In the realm of mobile security, where OS-based protections have proven insufficient against robust attackers, Trusted Execution Environments (TEEs) have emerged as a hardware-based security technology. Despite the industry's persistence in advancing TEE technology, the impact on end users and developers remains largely unexplored. This study addresses this gap by conducting a large-scale analysis of TEE utilization in Android applications, focusing on the key areas of cryptography, digital rights management, biometric authentication, and secure dialogs.

To facilitate our extensive analysis, we introduce Mobsec Analytika, a framework tailored for large-scale app examinations, which we make available to the research community. Analyzing 333,475 popular Android apps, we illuminate the implementation of TEE-related features and their contextual usage.

Our findings reveal that TEE features are predominantly utilized indirectly through third-party libraries, with only 6.2% of apps directly invoking the APIs. Moreover, the study reveals the underutilization of the recent TEE-based UI feature Protected Confirmation.
SoK: A Comparison of Autonomous Penetration Testing Agents
Raphael Simon (Royal Military Academy, Belgium), Wim Mees (Royal Military Academy, Belgium)
SoK Paper
In the still-growing field of cyber security, machine learning methods have largely been employed for detection tasks; only a small portion of the work revolves around offensive capabilities. Through the rise of Deep Reinforcement Learning, agents have emerged with the goal of actively assessing the security of systems by means of penetration testing, thus learning to use different tools in order to emulate human testers. In this paper we present an overview and comparison of the different autonomous penetration testing agents found in the literature. Various agents have been proposed, making use of distinct methods, but several factors, such as the modelling of environments and scenarios, different algorithms, and the difference in chosen methods themselves, make it difficult to draw conclusions on the current state and performance of those agents. This comparison also lets us identify research challenges that present major limiting factors, such as handling large action spaces, partial observability, defining the right reward structure, and learning in real-world scenarios.
Towards Reducing Business-Risk of Data Theft Implementing Automated Simulation Procedures of Evil Data Exfiltration
Michael Mundt (Esri Deutschland GmbH, Germany), Harald Baier (Universität der Bundeswehr München, Research Institute CODE, Germany), Antje Raab-Düsterhöft (Hochschule Wismar, Germany)
Full Paper
As of today, exposure and remediation technologies are mainly validated by taking the attacker's perspective. This paradigm, often referred to as "Know Your Enemy", enables a realistic assessment of the actual attack surface of an IT infrastructure. Furthermore, the operational environment is becoming increasingly dynamic and complex; hence a flexible and adaptable reaction to the tactics, techniques, and procedures of cyber attackers must be implemented. In this work, we present a concept and a prototypical proof of concept which take both aspects into account. More precisely, we present a simulation-based approach in the scope of data exfiltration, which improves anticipation of the attacker's perspective and thus puts effective and adapted strategies into place. As sample use cases of data exfiltration techniques, we shed light on recent techniques like the abuse of scheduled tasks, which presumably will become increasingly important in the future. Our prototype makes use of common open-source software and can thus be implemented easily. During our evaluation, we simulate relevant sections of our sample attack vectors using test data and derive options for detection of and protection against the respective simulated attack. Finally, we expound on the integration of our proposed technical and organisational measures into an existing Information Security Management System (ISMS) as part of a process for continuous improvement.
Sybil Attack Strikes Again: Denying Content Access in IPFS with a Single Computer
Thibault Cholez (University of Lorraine, CNRS, Inria, LORIA, France), Claudia Ignat (University of Lorraine, CNRS, Inria, LORIA, France)
Short Paper
The Distributed Hash Table (DHT) architecture is known to be a very efficient way to implement peer-to-peer (P2P) computer networks, but the scientific literature has also shown that DHTs can be easily disrupted by a single entity controlling many peers, an attack known as the Sybil Attack. Various defensive mechanisms are known to prevent such attacks, or at least hinder them. The current study evaluates the resiliency of the IPFS P2P network to a legacy Sybil Attack. We show that, surprisingly, IPFS does not implement any defense mechanism, allowing the simplest attack from a single computer to easily take control of any DHT entry. A practical use of this attack is to almost entirely deny access to a given content on the network. We therefore provide some recommendations to quickly remediate this vulnerability.
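The low cost of such an attack on a Kademlia-style DHT (the kind IPFS uses) can be illustrated with a toy sketch. The peer names and content key below are placeholders, and real IPFS peer IDs are multihashes rather than raw SHA-1 digests: the attacker simply brute-forces node IDs that are XOR-closer to a target key than any honest peer, monopolizing the k closest slots that lookups resolve to.

```python
import hashlib

def node_id(name):
    """160-bit identifier derived by hashing, as in Kademlia-style DHTs."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def k_closest(ids, target, k=3):
    # DHT lookups resolve to the k peers whose IDs are XOR-closest to the key.
    return sorted(ids, key=lambda n: n ^ target)[:k]

def craft_sybils(target, honest_ids, count):
    """Brute-force Sybil IDs that are XOR-closer to the target key than any
    honest peer. With a small honest population this needs only tens of hash
    evaluations per Sybil on average -- trivially cheap on one computer."""
    threshold = min(h ^ target for h in honest_ids)
    sybils, trial = [], 0
    while len(sybils) < count:
        cand = node_id(f"sybil-{trial}")
        trial += 1
        if cand ^ target < threshold:
            sybils.append(cand)
    return sybils
```

Once the Sybils occupy all k closest slots for a content key, they can answer every lookup for that key with nothing, denying access to the content.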
Continuous Authentication Leveraging Matrix Profile
Luis Ibanez-Lissen (Universidad Carlos III de Madrid, Spain), Jose Maria de Fuentes (Universidad Carlos III de Madrid, Spain), Lorena González Manzano (Universidad Carlos III de Madrid, Spain), Nicolas Anciaux (Inria, France)
Full Paper
Continuous Authentication (CA) mechanisms involve managing sensitive data from users, which may change over time. Both requirements (privacy and adapting to new users) lead to a tension in the amount and granularity of the data at stake. However, no previous work has addressed them together. This paper proposes a CA approach that leverages incremental Matrix Profile (MP) and Deep Learning using accelerometer data. Results show that MP is effective for CA purposes, leading to 99% accuracy when a single user is authorized. Besides, the model can increase the set of authorized users on the fly, up to 10, while offering similar accuracy rates. The amount of input data is also characterized -- the last 15 s of data on the user device require 0.4 MB of storage and lead to a CA accuracy of 97% even with 10 authorized users.
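The Matrix Profile underlying this approach can be computed naively as follows. This brute-force sketch (real implementations such as STOMP/SCRIMP are far faster) records, for each length-m window of a series, the z-normalized distance to its nearest non-overlapping match: low values indicate repeated, habitual motion patterns, while high values flag behavior not seen before.

```python
import math

def matrix_profile(series, m):
    """Brute-force matrix profile: for each length-m subsequence, the
    z-normalized Euclidean distance to its nearest non-overlapping match.
    An exclusion zone of width m suppresses trivial self-matches."""
    def znorm(w):
        mu = sum(w) / m
        sd = math.sqrt(sum((x - mu) ** 2 for x in w) / m) or 1.0  # guard flat windows
        return [(x - mu) / sd for x in w]

    subs = [znorm(series[i:i + m]) for i in range(len(series) - m + 1)]
    profile = []
    for i, a in enumerate(subs):
        best = math.inf
        for j, b in enumerate(subs):
            if abs(i - j) < m:  # exclusion zone: skip trivial matches
                continue
            d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            best = min(best, d)
        profile.append(best)
    return profile
```

On a periodic accelerometer-like trace the profile is near zero everywhere; an injected anomalous window stands out as a peak, which is the signal a CA model can threshold on.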
Extracting Randomness from Nucleotide Sequencers for use in a Decentralised Randomness Beacon
Darren Hurley-Smith (Royal Holloway University of London, United Kingdom), Alastair Droop (University of York, United Kingdom), Remy Lyon (Veiovia Ltd., United Kingdom), Roxana Teodor (Veiovia Ltd., United Kingdom)
Full Paper
This paper presents an investigation of nucleotide sequencing based random number generators, refutation of naive approaches to this problem, and a novel random number generator design based on the characteristics of nucleotide sequencers such as the Oxford Nanopore Technologies (ONT) MinION.

Common issues include misunderstanding the statistical properties of nucleotide sequences and the provenance of entropy observed in post-processed sequences extracted from such data. We show that the sequences themselves, expressed as base-pair (ACGT) sequences, cannot be used for random number generation. However, the process by which such sequences are observed and reported by scientific instrumentation provides a means by which entropy associated with nucleotide sequences (or, more correctly, with the act of observing and recording them) can be observed.

We report a novel method of extracting entropy from the process of reading nucleotide sequences, as opposed to the nucleotide sequences themselves. We overcome the limitations and inherent bias of nucleotide sequences, to provide a source of randomness decoupled from biological data and records. A novel random number generator drawing on entropy extracted from nucleotide sequencing is presented with validation of its performance and characteristics.
Hardware Trust Anchor Authentication for Updatable IoT Devices
Dominik Lorych (Fraunhofer SIT | ATHENE, Germany), Christian Plappert (Fraunhofer SIT | ATHENE, Germany)
Full Paper
Secure firmware update mechanisms and Hardware Trust Anchors (HTAs) are crucial in securing future IoT networks. Among others, HTAs can be used to shield security-sensitive data like cryptographic keys from unauthorized access, using hardware isolation. Authentication mechanisms for key usage, however, are difficult to implement since corresponding credentials need to be stored outside the HTA. This makes them vulnerable against host hijacking attacks, which in the end also undermines the security gains of the HTA deployment.

This paper introduces an update-resilient and secure HTA authentication mechanism that secures the HTA authentication credentials on the host. Our concept is based on an integration of the Device Identifier Composition Engine (DICE), a Trusted Computing standard for resource-constrained off-the-shelf devices, with signed update manifest documents. This secures HTA authentication credentials, but also provides value for DICE-based devices without an HTA. We evaluate the feasibility of our solution based on a proof-of-concept implementation.
A Privacy Measure Turned Upside Down? Investigating the Use of HTTP Client Hints on the Web
Stephan Wiefling (swiefling.de, Germany), Marian Hönscheid (H-BRS University of Applied Sciences, Germany), Luigi Lo Iacono (H-BRS University of Applied Sciences, Germany)
Full Paper
HTTP client hints are a set of standardized HTTP request headers designed to modernize and potentially replace the traditional user agent string. While the user agent string exposes a wide range of information about the client's browser and device, client hints provide a controlled and structured approach for clients to selectively disclose their capabilities and preferences to servers. Essentially, client hints aim at more effective and privacy-friendly disclosure of browser or client properties than the user agent string.

We present a first long-term study of the use of HTTP client hints in the wild. We found that despite being implemented in almost all web browsers, server-side usage of client hints remains generally low. However, in the context of third-party websites, which are often linked to trackers, the adoption rate is significantly higher. This is concerning because client hints allow the retrieval of more data from the client than the user agent string provides, and there are currently no mechanisms for users to detect or control this potential data leakage. Our work provides valuable insights for web users, browser vendors, and researchers by exposing potential privacy violations via client hints and providing help in developing remediation strategies as well as further research.
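To make the mechanism concrete: a browser sends low-entropy hints such as `Sec-CH-UA` by default, while high-entropy hints (e.g. `Sec-CH-UA-Model`) are only sent after a server opts in via an `Accept-CH` response header. A simplified parser for the `Sec-CH-UA` brand list is sketched below; production code should follow the RFC 8941 structured-field grammar rather than this regex approximation.

```python
import re

def parse_sec_ch_ua(value):
    """Simplified parser for the Sec-CH-UA client hint header, a Structured
    Field list of ("brand", "version") pairs. Handles the common serialized
    form only; RFC 8941 defines the full grammar (parameters, escaping)."""
    return re.findall(r'"([^"]+)";v="([^"]+)"', value)
```

A tracker that has opted in via `Accept-CH` can collect several such headers per request, which is why the adoption pattern the study observes on third-party sites is privacy-relevant.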
Comparative Analysis and Implementation of Jump Address Masking for Preventing TEE Bypassing Fault Attacks
Shoei Nashimoto (Mitsubishi Electric Corporation, Japan), Rei Ueno (Tohoku University, Japan), Naofumi Homma (Tohoku University, Japan)
Full Paper
Attacks on embedded devices continue to evolve with the increasing number of applications in actual products. A trusted execution environment (TEE) enhances the security of embedded devices by isolating and protecting sensitive applications such as cryptography from malicious or vulnerable applications. However, the emergence of TEE bypass attacks using faults exposes TEEs to threats. In CHES’22, jump address masking (JAM) was proposed as a countermeasure against TEE bypass attacks, specifically targeting RISC-V. JAM prevents modifications of protected data by calculating jump addresses using the protected data, and is expected to provide promising resistance to TEE bypass attacks, for which traditional countermeasures are ineffective. However, JAM was originally proposed for bare metal applications. Therefore, its application to TEEs that operate with an OS presents technical and security challenges. This study proposes a method for applying JAM to Keystone, a major TEE framework for RISC-V, and validates its practical effectiveness and performance through a comparative evaluation with existing countermeasures such as memory encryption, random delays, and instruction duplication. Our evaluation reveals that the proposed JAM implementation is the first countermeasure that achieves complete resistance to TEE bypass attacks with an execution time overhead of approximately 340% for context switches and 1.0% across the entire program, which is acceptable compared with other countermeasures.
Monitor-based Testing of Network Protocol Implementations Using Symbolic Execution
Hooman Asadian (Uppsala University, Sweden), Paul Fiterau-Brostean (Uppsala University, Sweden), Bengt Jonsson (Uppsala University, Sweden), Konstantinos Sagonas (Uppsala University & NTUA, Sweden)
Full Paper
Implementations of network protocols must conform to their specifications in order to avoid security vulnerabilities and interoperability issues. To detect errors, testing must investigate an implementation's response to a wide range of inputs, including those that could be supplied by an attacker. This can be achieved by symbolic execution, but its application in testing network protocol implementations has so far been limited. One difficulty when testing such implementations is that the inputs and requirements for processing a packet depend on the sequence of previous packets. We present a novel technique to encode protocol requirements by monitors, and then employ symbolic execution to detect violations of these requirements in protocol implementations. A monitor is a component external to the SUT that observes a sequence of packets exchanged between protocol parties, maintains information about the state of the interaction, and can thereby detect requirement violations. Using monitors, requirements for stateful network protocols can be tested with a wide variety of inputs, without intrusive modifications to the source code of the SUT. We have applied our technique to the most recent versions of several widely-used DTLS and QUIC protocol implementations, and have been able to detect twenty-two previously unknown bugs in them; twenty-one have already been fixed and the remaining one has been confirmed.
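The monitor concept can be sketched as a small state machine. The requirement encoded here (no application data accepted before the handshake completes) is a generic illustration, not one of the paper's actual DTLS/QUIC monitors.

```python
class HandshakeMonitor:
    """External monitor sketch: observes the packet sequence between protocol
    parties, tracks interaction state, and records requirement violations
    without modifying the system under test."""

    def __init__(self):
        self.handshake_done = False
        self.violations = []

    def observe(self, packet):
        kind = packet["type"]
        if kind == "finished":
            # Handshake completion: application data is legal from here on.
            self.handshake_done = True
        elif kind == "app_data" and not self.handshake_done:
            # Requirement violated: application data before handshake end.
            self.violations.append(packet)
```

In the paper's setting, symbolic execution drives the SUT through many packet sequences while such a monitor checks each one, so stateful requirements are tested without instrumenting the implementation itself.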
From Code to EM Signals: A Generative Approach to Side Channel Analysis-based Anomaly Detection
Kurt A. Vedros (University of Idaho, United States), Constantinos Kolias (University of Idaho, United States), Daniel Barbara (George Mason University, United States), Robert Ivans (Idaho National Laboratory, United States)
Full Paper
Today, it is possible to perform external anomaly detection by analyzing the involuntary EM emanations of digital device components. However, one of the most important challenges of these methods is the manual collection of EM signals for fingerprinting. Indeed, this procedure must be conducted by a human expert and requires high precision. In this work, we introduce a framework that alleviates this requirement by relying on synthetic EM signals generated from the assembly code. The signals are produced through a Generative Adversarial Network (GAN) model. Experimentally, we find that the synthetic EM signals are extremely similar to the real ones and can thus be used to effectively train anomaly detection models. Through experimental assessments, we prove that the anomaly detection models are capable of recognizing even minute alterations to the code with high accuracy.
Privacy Preserving Release of Mobile Sensor Data
Rahat Masood (UNSW Sydney, Australia), Wing Yan Cheng (UNSW Sydney, Australia), Dinusha Vatsalan (Macquarie University, Australia), Deepak Mishra (UNSW Sydney, Australia), Hassan Jameel Asghar (Macquarie University, Australia), Dali Kaafar (Macquarie University, Australia)
Full Paper
Sensors embedded in mobile smart devices can monitor users' activity with high accuracy to provide a variety of services to end-users, including precise geolocation, health monitoring, and handwritten word recognition. However, this involves the risk of accessing and potentially disclosing sensitive information of individuals to apps, which may lead to privacy breaches. In this paper, we aim to minimize privacy leakages that may lead to user identification on mobile devices through user tracking and distinguishability, while preserving the functionality of apps and services. We propose a privacy-preserving mechanism that effectively handles sensor data fluctuations (e.g., inconsistent sensor readings while walking, sitting, and running at different times) by formulating the data as a time-series modeling and forecasting problem. The proposed mechanism uses the notion of a correlated noise series to resist noise-filtering attacks, in which an adversary aims to filter out the noise from the perturbed data to re-identify the original data. Unlike existing solutions, our mechanism runs in isolation, without interaction with a user or a service provider. We perform rigorous experiments on three benchmark datasets and show that our proposed mechanism limits user tracking and distinguishability threats to a significant extent compared to the original data while maintaining a reasonable level of utility. In general, our obfuscation mechanism reduces the user trackability threat by 60% across all the datasets while keeping the utility loss below 0.3 Mean Absolute Error (MAE). More specifically, we observe that 80% of users achieve a 100% untrackability rate in the Swipes dataset across all noise scales, and in the handwriting dataset, distinguishability is 17% for 60% of the users. Overall, our mechanism provides a utility error (MAE) of only 0.12 for 60% of users, which increases to 0.2 for 100% of users when correction thresholds are altered.
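As a rough, hypothetical sketch of the general idea (not the paper's actual mechanism), temporally correlated noise can be generated with an AR(1) recurrence over Laplace draws, which is harder for an adversary to filter out than independent per-sample noise; all signal shapes and parameters below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_laplace_noise(n, scale=0.1, rho=0.9):
    """AR(1)-correlated Laplace noise: each sample mixes the previous
    noise value with a fresh Laplace draw, yielding a correlated series."""
    noise = np.empty(n)
    noise[0] = rng.laplace(0.0, scale)
    for t in range(1, n):
        noise[t] = rho * noise[t - 1] + (1 - rho) * rng.laplace(0.0, scale)
    return noise

# Toy accelerometer-like reading and its perturbed release
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
released = signal + correlated_laplace_noise(len(signal), scale=0.1)

# Utility loss measured as Mean Absolute Error, the metric quoted above
mae = np.mean(np.abs(released - signal))
```

With these illustrative parameters the correlation keeps the perturbation smooth over time while the MAE stays small.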
Dealing with Bad Apples: Organizational Awareness and Protection for Bit-flip and Typo-Squatting Attacks
Huancheng Hu (Hasso Plattner Institute & University of Potsdam, Germany), Afshin Zivi (Hasso Plattner Institute & University of Potsdam, Germany), Christian Doerr (Hasso Plattner Institute & University of Potsdam, Germany)
Full Paper
The domain name system (DNS) maps human-readable service names to IP addresses used by the network. As it exerts control over where users are directed, domain names have been targets of abuse ever since the Internet became a success. Over the past twenty years, adversaries have repeatedly invented new strategies to trick users, and our findings reveal a continuous increase in the exploitation of domain names.

Aside from educating users, it is foremost the responsibility of organizations to monitor for, or proactively register, domain names with abuse potential. This, however, requires organizations to be aware of the threat and to translate this awareness into concrete action. While the typo-related attacks of the early 2000s are self-explanatory, other types of domain attacks are not. In this paper, we investigate the level of organizational awareness and preparedness towards two types of DNS abuse, and analyze the reaction and protection response of 300 large organizations over the course of 7 years. We find that large companies take little action against this threat, with the exception of a few well-prepared organizations. We validate these findings in an interview study with security experts of 12 large organizations and discover that this lack of preparation is the result of insufficient resources and a clear preference for reacting to incidents instead of preventing them.
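For illustration of what proactive monitoring must cover, bit-squatting candidates of a domain label can be enumerated by flipping single bits in each character; this generic sketch is not the authors' tooling, and the example domain is arbitrary:

```python
def bitflip_variants(domain):
    """Enumerate single-bit-flip variants of a (lowercase) domain label
    that remain valid hostname characters (letters, digits, hyphen)."""
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789-")
    variants = set()
    for i, ch in enumerate(domain):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit)).lower()
            if flipped in allowed and flipped != ch:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants

# e.g. a single bit flip turns the leading 'e' (0x65) into 'u' (0x75)
candidates = bitflip_variants("example")
```

An organization could register or watchlist such variants of its own labels before an adversary does.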
Subjective Logic-based Decentralized Federated Learning for Non-IID Data
Agnideven Palanisamy Sundar (Indiana University - Purdue University - Indianapolis, United States), Feng Li (IUPUI, United States), Xukai Zou (Indiana University Purdue University Indianapolis, United States), Tianchong Gao (Southeast University, China)
Full Paper
Existing Federated Learning (FL) methods are highly influenced by the training data distribution. In single-global-model FL systems, users with highly non-IID data do not improve the global model, and neither does the global model work well on their local data distribution. Even with clustering-based FL approaches, not all participants are clustered adequately for the models to fulfill their local demands. In this work, we design a modified subjective logic-based FL system utilizing the distribution-based similarity among users. Each participant has complete control over their own aggregated model, with handpicked contributions from other participants. An existing clustered model only satisfies a subset of clients, while our individual aggregated models satisfy all clients. We design a decentralized FL approach that functions without a trusted central server; the communication and computation overhead is distributed among the clients. We also develop a layer-wise secret-sharing scheme to strengthen privacy. We experimentally show that our approach improves the performance of each participant's aggregated model on their local distribution over the existing single global model and clustering-based approaches.
Let the Users Choose: Low Latency or Strong Anonymity? Investigating Mix Nodes with Paired Mixing Techniques
Sarah Abdelwahab Gaballah (Ruhr University Bochum, Germany), Lamya Abdullah (Technical University of Darmstadt, Germany), Max Mühlhäuser (Technical University of Darmstadt, Germany), Karola Marky (Ruhr University Bochum, Germany)
Full Paper
Current anonymous communication systems force users to choose between strong anonymity with significant delay or low latency with unreliable anonymity. That divides users across systems based on their requirements, leading to smaller user bases and reduced anonymity. To address this, we propose an approach based on mix networks that employs two mixing techniques on mix nodes: threshold mixing for users who value strong anonymity, and timed or continuous-time mixing for those with specific latency constraints. We conducted an in-depth empirical study to evaluate the effectiveness of our proposal. The evaluation results demonstrate that our proposal offers enhanced anonymity for all users while meeting the latency requirements of those who prioritize it. It further outperforms single mixing techniques on the mix node, even when considering the same user base size. Moreover, our findings indicate that our proposal eliminates the need for generating cover traffic to enhance anonymity, achieving this improvement without introducing the bandwidth overhead associated with cover traffic.
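A threshold mix, one of the two techniques paired above, can be sketched generically: messages are delayed until a batch of the threshold size is queued, then flushed in shuffled order so an observer cannot link inputs to outputs. This is a textbook illustration, not the paper's implementation, and the class and message names are hypothetical:

```python
import random

class ThresholdMix:
    """Queue messages and flush a shuffled batch only once `threshold`
    messages have arrived (the batch size is the anonymity set size)."""
    def __init__(self, threshold, seed=0):
        self.threshold = threshold
        self.pool = []
        self.rng = random.Random(seed)

    def submit(self, message):
        self.pool.append(message)
        if len(self.pool) >= self.threshold:
            batch, self.pool = self.pool, []
            self.rng.shuffle(batch)  # break input/output ordering
            return batch
        return None  # message is delayed until the pool fills

mix = ThresholdMix(threshold=3)
out1 = mix.submit("m1")
out2 = mix.submit("m2")
out3 = mix.submit("m3")  # pool full: a shuffled batch of all three
```

The trade-off is visible directly: a larger threshold enlarges the anonymity set but delays the first two messages longer.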
GNN-IDS: Graph Neural Network based Intrusion Detection System
Zhenlu Sun (Department of Information Technology, Uppsala University, Sweden), André Teixeira (Department of Information Technology, Uppsala University, Sweden), Salman Toor (Department of Information Technology, Uppsala University, Sweden)
Full Paper
Intrusion detection systems (IDSs) are widely used to identify anomalies in computer networks and raise alarms on intrusive behaviors. ML-based IDSs generally take network traces or host logs as input to extract patterns from individual samples, whereas the inter-dependencies of the network are often neither captured nor learned, which may result in a large number of uncertain predictions, false positives, and false negatives. To tackle these challenges in intrusion detection, we propose a graph neural network-based intrusion detection system (GNN-IDS), which is data-driven and machine learning-empowered. In GNN-IDS, the attack graph and real-time measurements, representing the static and dynamic attributes of computer networks, respectively, are incorporated and associated to represent complex computer networks. Graph neural networks are employed as the inference engine for intrusion detection. By learning network connectivity, graph neural networks can quantify the importance of neighboring nodes and node features to make more reliable predictions. Furthermore, by incorporating an attack graph, GNN-IDS can not only detect anomalies but also identify the malicious actions causing them. The experimental results on a use case network with two synthetic datasets (one generated from public IDS data) show that the proposed GNN-IDS achieves good performance. The results are analyzed from the aspects of uncertainty, explainability, and robustness.
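The neighbor-aggregation step that lets a GNN weigh neighboring nodes can be illustrated with a standard graph-convolution layer in NumPy; this is a textbook GCN sketch, not the paper's architecture, and the adjacency, features, and weights below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One graph-convolution step: aggregate neighbor features through the
    symmetrically normalized adjacency, then apply a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Tiny network: 4 hosts, static attack-graph edges + per-host measurements
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(4, 3))  # dynamic node features (e.g., traffic stats)
W = rng.normal(size=(3, 2))  # layer weights (random here, learned in practice)
embeddings = gcn_layer(A, H, W)
```

Each node's embedding now blends its own measurements with its neighbors', which is the structural information per-sample classifiers miss.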
SoK: Visualization-based Malware Detection Techniques
Matteo Brosolo (University of Padua, Italy), Vinod Puthuvath (University of Padua, Italy), Asmitha Ka (Cochin University of Science and Technology, Kochi, Kerala, India), Rafidha Rehiman (Cochin University of Science and Technology, Kochi, Kerala, India), Mauro Conti (University of Padua, Italy)
SoK Paper
Cyber attackers leverage malware to infiltrate systems, steal sensitive data, and extort victims, posing a significant cybersecurity threat. Security experts address this challenge by employing machine learning and deep learning approaches to detect malware precisely, using static, dynamic, or hybrid methodologies. They visualize malware to identify patterns, behaviors, and common features across different malware families. Various methods and tools are used for malware visualization to represent different aspects of malware behavior, characteristics, and relationships. This article evaluates the effectiveness of visualization techniques in detecting and classifying malware. We methodically categorize studies based on their approach to information retrieval, visualization, feature extraction, classification, and evaluation, allowing for an in-depth review of cutting-edge methods. This analysis identifies key challenges in visualization-based techniques and sheds light on the field's progress and future possibilities. Our thorough analysis can provide valuable insights to researchers, helping them establish optimal practices for selecting suitable visualizations based on the specific characteristics of the analyzed malware.
SECURA: Unified Reference Architecture for Advanced Security and Trust in Safety Critical Infrastructures
Michael Eckel (Fraunhofer SIT | ATHENE, Germany), Sigrid Gürgens (Fraunhofer SIT | ATHENE, Germany)
Full Paper
In the evolving landscape of safety-critical infrastructures, ensuring the integrity and security of systems has become paramount. Building upon a previously established security architecture tailored for the railway sector, this work introduces significant enhancements that extend its applicability beyond the confines of any singular industry. Key advancements include the integration of a security heartbeat to augment safety monitoring, the implementation of a sophisticated secure update mechanism leveraging Trusted Platform Module (TPM) Enhanced Authorization (EA) policies, automated vulnerability scanning leveraging Linux Integrity Measurement Architecture (IMA) logs to check against a vulnerability database, and a formal evaluation of system integrity reporting capabilities through remote attestation. Moreover, aiming for a universally adaptable framework, this paper proposes a reference architecture to accommodate various operational contexts. We use compartments as a universal abstraction for system components, designed to be compatible with a variety of real-time operating systems (RTOSes), including PikeOS, the ACRN hypervisor, and beyond.
Attack Analysis and Detection for the Combined Electric Vehicle Charging and Power Grid Domains
Dustin Kern (Darmstadt University of Applied Sciences, Germany), Christoph Krauß (Darmstadt University of Applied Sciences, Germany), Matthias Hollick (TU Darmstadt, Germany)
Full Paper
With steadily rising Electric Vehicle (EV) adoption worldwide, considering the EV charging-related load on power grids is becoming critically important. While strategies to manage this load (e.g., to avoid peaks) exist, they assume that EVs and charging infrastructure are trustworthy. If this assumption is violated, however (e.g., by an adversary with control over EV charging systems), the threat of charging load-based attacks on grid stability arises. An adversary may, for example, try to cause overload situations by means of a simultaneous increase in charging load coordinated across a large number of EVs. In this paper, we propose an Intrusion Detection System (IDS) that combines regression-based charging load prediction with novelty detection-based anomaly identification. The proposed system considers features from both the EV charging and power grid domains, which is enabled in this paper by a novel co-simulation concept. We evaluate our IDS concept with simulated attacks in real EV charging data. The results show that the combination of gradient boosting regression trees with elliptic envelope-based novelty detection generally provides the best results. Additionally, the evaluation shows that our IDS concept, combining grid and charging features, is capable of detecting novel/stealthy attack strategies not covered by related work.
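An elliptic envelope essentially flags points whose (robust) Mahalanobis distance from the benign data is large; a minimal non-robust sketch of that idea follows, with entirely synthetic charging/grid feature values (the paper's actual features and thresholds are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign training window: joint (charging load in kW, grid frequency
# deviation in Hz) samples — synthetic values for illustration only
benign = rng.normal([30.0, 0.0], [5.0, 0.02], size=(500, 2))

mu = benign.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(benign, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a sample from the benign center."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold calibrated on benign data (99th percentile of distances)
threshold = np.percentile([mahalanobis2(x) for x in benign], 99)

normal_sample = np.array([31.0, 0.01])
attack_sample = np.array([80.0, 0.3])  # coordinated charging-load spike
is_attack = mahalanobis2(attack_sample) > threshold
```

Because the distance couples both feature dimensions, a load spike that also perturbs grid frequency stands out even if each feature alone looks borderline.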
Reverse Engineered MiniFS File System
Dmitrii Belimov (Technology Innovation Institute, United Arab Emirates), Evgenii Vinogradov (Technology Innovation Institute, United Arab Emirates)
Short Paper
In an era where digital connectivity is increasingly foundational to daily life, the security of Wi-Fi Access Points (APs) is a critical concern. This paper addresses the vulnerabilities inherent in Wi-Fi APs, with a particular focus on those using proprietary file systems like MiniFS found in TP-Link’s AC1900 Wi-Fi router. Through reverse engineering, we unravel the structure and operation of MiniFS, marking a significant advancement in our understanding of this previously opaque file system. Our investigation reveals not only the architecture of MiniFS but also identifies several private keys and underscores a concerning lack of cryptographic protection. These findings point to broader security vulnerabilities, emphasizing the risks of security-by-obscurity practices in an interconnected environment. Our contributions are twofold: firstly, based on the file system structure, we develop a methodology for the extraction and analysis of MiniFS, facilitating the identification and mitigation of potential vulnerabilities. Secondly, our work lays the groundwork for further research into the security of Wi-Fi APs, particularly those running similar proprietary systems. By highlighting the critical need for transparency and community engagement in firmware analysis, this study contributes to the development of more secure network devices, thus enhancing the overall security posture of digital infrastructures.
SoK: How Artificial-Intelligence Incidents Can Jeopardize Safety and Security
Richard May (Harz University of Applied Sciences, Germany), Jacob Krüger (Eindhoven University of Technology, Netherlands), Thomas Leich (Harz University of Applied Sciences, Germany)
SoK Paper
In the past years, a growing number of highly automated systems have built on artificial-intelligence (AI) capabilities, for example, automatic-driving vehicles or predictive health-state diagnosis. As for any software system, there is a risk that misbehavior occurs (e.g., system failure due to bugs) or that malicious actors aim to misuse the system (e.g., to generate attack scripts), which can lead to safety and security incidents. While software safety and security incidents have been studied in the past, we are not aware of research focusing on the specifics of AI incidents. With this paper, we aim to shed light on this gap through a case survey of 240 incidents that we elicited from four datasets comprising safety and security incidents involving AI from 2014 to 2023. Using manual data analyses and automated topic modeling, we derived the relevant topics as well as the major issues and contexts in which the incidents occurred. We find that the topic of AI incidents is, not surprisingly, becoming more and more relevant, particularly in the contexts of automatic driving and process-automation robotics. Regarding security and its intersection with safety, most incidents connect to generative AI (i.e., large-language models, deep fakes) and computer-vision systems (i.e., facial recognition). This emphasizes the importance of security for also ensuring safety in the context of AI systems, with our results further revealing a high number of serious consequences (system compromise, human injuries) and major violations of confidentiality, integrity, availability, as well as authorization. We hope to support practitioners and researchers in understanding major safety and security issues to support the development of more secure, safe, and trustworthy AI systems.
Unveiling Vulnerabilities in Bitcoin’s Misbehavior-Score Mechanism: Attack and Defense
Yuwen Zou (Xi'an Jiaotong-Liverpool University, China), Wenjun Fan (Xi'an Jiaotong-Liverpool University, China), Zhen Ma (Xi'an Jiaotong-Liverpool University, China)
Full Paper
The Bitcoin network is susceptible to various attacks due to its openness, decentralization, and plaintext connections. Bitcoin created a misbehavior-score mechanism for monitoring and tracking peer misconduct. In this paper, we uncover several vulnerabilities of this mechanism, leading to potential Bitcoin-Message-based DoS (BitMsg-DoS) attacks on Bitcoin nodes and Slander attacks that malign innocent nodes. We prototype these attacks and experiment with them against real test nodes connected to the Bitcoin main network (while ensuring our attacks do not spread to the real-world main network). The experimental results show that the attacks exert varying degrees of impact on mining and non-mining nodes, notably reducing mining rates by up to half for affected mining nodes and decreasing the block synchronization speed of non-mining nodes. To address these drawbacks, this study proposes three corresponding countermeasures targeting the identified vulnerabilities in the misbehavior-score mechanism. Furthermore, we explore the P2P encrypted transport protocol with experimental support in the latest Bitcoin Core 26.0, but find it insufficient to mitigate the Slander attacks.
A Metalanguage for Dynamic Attack Graphs and Lazy Generation
Viktor Engström (KTH Royal Institute of Technology, Sweden), Giuseppe Nebbione (KTH Royal Institute of Technology, Sweden), Mathias Ekstedt (KTH Royal Institute of Technology, Sweden)
Full Paper
Two types of dynamics are important when modeling cyberattacks: how adversaries chain together techniques across systems and how they change the target systems. Attack graphs are prominent within research communities for automatically mapping and chaining together actions. Modeling adversary-driven system changes is comparatively unexplored, however. One reason could be that modeling adversarial change dynamics poses a blend of problems where the typical attack graph approaches could produce state-space explosions and infinite graphs. Therefore, this work presents the core modeling aspects of the Dynamic Meta Attack Language (DynaMAL), a project to lazily generate attack graphs by combining attack graph construction and simulation methods. DynaMAL lets users declare domain-specific modeling and attack graph generation languages. Then, the attack graphs are generated one step at a time based on the actions of an adversary agent. By only generating what is explicitly requested, DynaMAL can demonstrably change the system model as the attack graph grows while sidestepping typical state-space explosions and graph re-calculation problems. Shifting to a lazy generation process poses new challenges, however. Nevertheless, there is likely a point where lazy approaches will prevail when analyzing large and complex systems.
SoK: A Unified Data Model for Smart Contract Vulnerability Taxonomies
Claudia Ruggiero (Sapienza Università di Roma, Italy), Pietro Mazzini (Sapienza Università di Roma, Italy), Emilio Coppa (LUISS University, Italy), Simone Lenti (Sapienza Università di Roma, Italy), Silvia Bonomi (Sapienza Università di Roma, Italy)
SoK Paper
Modern blockchains support the execution of application-level code in the form of smart contracts, allowing developers to devise complex Distributed Applications (DApps). Smart contracts are typically written in high-level languages, such as Solidity, and after deployment on the blockchain, their code is executed in a distributed way in response to transactions or calls from other smart contracts. As a common piece of software, smart contracts are susceptible to vulnerabilities, posing security threats to DApps and their users.

The community has already made many different proposals involving taxonomies related to smart contract vulnerabilities. In this paper, we try to systematize such proposals, evaluating their common traits and main discrepancies. A major limitation emerging from our analysis is the lack of a proper formalization of such taxonomies, which makes their adoption hard (e.g., within tools) and hinders their improvement over time as a community-driven effort. We thus introduce a novel data model that clearly defines the key entities and relationships relevant to smart contract vulnerabilities. We then show how our data model and its preliminary instantiation can effectively support several valuable use cases, such as interactive exploration of the taxonomy, integration with security frameworks for effective tool orchestration, and statistical analysis for performing longitudinal studies.
Mealy Verifier: An Automated, Exhaustive, and Explainable Methodology for Analyzing State Machines in Protocol Implementations
Arthur Tran Van (Télécom SudParis, France), Olivier Levillain (Télécom SudParis, France), Herve Debar (Télécom SudParis, France)
Full Paper
Many network protocol specifications are long and lack clarity, which paves the way for implementation errors. Such errors have led to vulnerabilities in secure protocols such as SSH and TLS. Active automata learning, a black-box method, is an efficient way to discover discrepancies between a specification and its implementation. It consists of extracting state machines by interacting with a network stack. It can be (and has been) combined with model checking to analyze the obtained state machines. Model checking is designed to exhibit a single model violation rather than all model violations and thus leads to a limited understanding of implementation errors. As far as we are aware, there is only one specialized exhaustive method available for analyzing the outcomes of active automata learning applied to network protocols: Fiterau-Brostean’s method. We propose an alternative method to improve the discovery of new bugs and vulnerabilities and enhance the exhaustiveness of model verification processes. In this article, we apply our method to two use cases: SSH, where we focus on the analysis of existing state machines, and OPC UA, for which we present a full workflow from state machine inference to state machine analysis.
SECL: A Zero-Day Attack Detector and Classifier based on Contrastive Learning and Strong Regularization
Robin Duraz (Chaire of Naval Cyberdefense, Lab-STICC, France), David Espes (University of Brest, Lab-STICC, France), Julien Francq (Naval Group (Naval Cyber Laboratory, NCL), France), Sandrine Vaton (IMT Atlantique, Lab-STICC, France)
Full Paper
Intrusion Detection Systems (IDSs) have always had difficulties detecting Zero-Day attacks (ZDAs). One of the advantages of Machine Learning (ML)-based IDSs, their superiority in detecting ZDAs, remains largely unexplored, especially when considering multiple ZDAs. This is mainly because ML-based IDSs mostly use supervised ML methods. Although such methods exhibit better performance in detecting known attacks, they are by design unable to detect unknown attacks because they are limited to detecting the labels present in the dataset they were trained on. This paper introduces SECL, a method that combines Contrastive Learning with a new regularization method composed of dropout, Von Neumann Entropy (VNE), and Sepmix (a regularization inspired by mixup). SECL is close to, or even better than, supervised ML methods in detecting known attacks, while gaining the ability to detect and differentiate multiple ZDAs. Experiments were performed on three datasets, UNSW-NB15, CIC-IDS2017, and WADI, effectively showing that this method is able to detect multiple ZDAs while achieving performance similar to supervised methods on known attacks. Notably, the proposed method even has overall better performance than a supervised method knowing all attacks on the WADI dataset. These results pave the way for better detection of ZDAs without a reduction of performance on known attacks.
BenchIMP: A Benchmark for Quantitative Evaluation of the Incident Management Process Assessment
Alessandro Palma (Sapienza University of Rome, Italy), Nicola Bartoloni (Sapienza University of Rome, Italy), Marco Angelini (Link Campus University of Rome, Italy)
Full Paper
In the current scenario, where cyber-incidents occur daily, an effective Incident Management Process (IMP) and its assessment have assumed paramount significance. While assessment models that evaluate the risks of incidents exist to aid security experts during such a process, most of them provide only qualitative evaluations and are typically validated in individual case studies, predominantly using non-public data. This hinders their comparative quantitative analysis, preventing the evaluation of newly proposed solutions and of the applicability of existing ones due to the lack of baselines. To address this challenge, we contribute a benchmarking approach and system, BenchIMP, to support the quantitative evaluation of IMP assessment models based on performance and robustness in the same settings, thus enabling meaningful comparisons. The resulting benchmark is the first one tailored to evaluating process-based security assessment models, and we demonstrate its capabilities through two case studies using real IMP data and state-of-the-art assessment models. We publicly release the benchmark to help the cybersecurity community perform quantitative and more accurate evaluations of IMP assessment models.
Towards Secure Virtual Elections: Multiparty Computation of Order Based Voting Rules
Tamir Tassa (The Open University of Israel, Israel), Lihi Dery (Ariel University, Israel)
Full Paper
Electronic voting systems have significant advantages in comparison with physical voting systems. One of the main challenges in e-voting systems is to secure the voting process: namely, to certify that the computed results are consistent with the cast ballots and that the voters' privacy is preserved. We propose herein a secure voting protocol for elections that are governed by order-based voting rules. Our protocol offers perfect ballot secrecy in the sense that it issues only the required output while no other information on the cast ballots is revealed. Such perfect secrecy, achieved by employing secure multiparty computation tools, may increase the voters' confidence and, consequently, encourage them to vote according to their true preferences. Evaluation of the protocol's computational costs establishes that it is lightweight and can be readily implemented in real-life electronic elections.
Confidence-Aware Fault Trees
Alexander Günther (Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, Germany), Peter Liggesmeyer (Technical University of Kaiserslautern, Germany), Sebastian Vollmer (Technical University of Kaiserslautern, Germany)
Short Paper
Fault trees are one of the most well-known techniques for safety analysis, allowing both quantitative and qualitative statements about systems. In the classical approach, deterministic failure probabilities for the basic events are necessary in order to obtain quantified results. Classical hardware-related failures and events can be quantified through testing. Software-dependent failures are harder to measure and identify but can still be quantified when the implementation is given. In contrast, Machine Learning models lack this information, as their behaviour is not explicitly specified. To date, there are very few methods available to judge the worst-case performance of these models and to predict their general performance.

To counter this problem, we introduce confidence levels into the fault tree analysis. This allows the use of failure rate bounds at basic events that hold only with a given probability. We present how this information can be used in the computations towards the top event. Our approach can be seen as a parallel or double application of a fault tree to include the confidence levels. Consequently, basic events that depend on machine learning models can also be included in a fault tree analysis. The proposed technique is compared to probabilistic fault tree analysis in an example.
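The classical point-estimate computation for independent basic events, and a naive interval variant that propagates failure-probability bounds of the kind such confidence levels attach to basic events, can be sketched as follows (the gate structure and all numbers are illustrative, not the paper's method):

```python
from math import prod

def and_gate(probs):
    """Independent basic events: the output fails only if all inputs fail."""
    return prod(probs)

def or_gate(probs):
    """Independent basic events: the output fails if at least one input fails."""
    return 1.0 - prod(1.0 - p for p in probs)

# Classical point estimate: TOP = OR(AND(e1, e2), e3)
p_top = or_gate([and_gate([1e-3, 1e-2]), 5e-5])

# Interval variant: propagate [low, high] bounds, e.g. a bound on an
# ML-based component's failure rate that holds only with some confidence
def and_gate_interval(bounds):
    return (prod(lo for lo, _ in bounds), prod(hi for _, hi in bounds))

def or_gate_interval(bounds):
    return (1.0 - prod(1.0 - lo for lo, _ in bounds),
            1.0 - prod(1.0 - hi for _, hi in bounds))

top_lo, top_hi = or_gate_interval([
    and_gate_interval([(1e-3, 2e-3), (1e-2, 2e-2)]),
    (4e-5, 6e-5),
])
```

Both gate formulas are monotone in the input probabilities, so evaluating them at the lower and upper bounds yields a valid interval for the top event.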
Increasing the Confidence in Security Assurance Cases using Game Theory
Antonia Welzel (Chalmers | University of Gothenburg, Sweden), Rebekka Wohlrab (Chalmers | University of Gothenburg, Sweden), Mazen Mohamad (Chalmers | University of Gothenburg, Sweden)
Full Paper
Security assurance cases (SACs) consist of arguments, supported by evidence, that justify that a system is acceptably secure. However, they are a relatively static representation of a system’s security and are therefore currently not effective at runtime, which makes them difficult to maintain and unable to support users during threats. The aim of this paper is to investigate how SACs can be adapted to become more effective at runtime and to increase confidence in the system’s security. We extend an example SAC with game theory, which models the interaction between the system and an attacker and identifies their optimal strategies based on their payoffs and likelihoods. The extension was added as a security control in the assurance case, where a security claim indicates which strategy should be taken at runtime. This claim changes dynamically with the recommended strategy output by the game-theoretic model at runtime. Based on the results of the evaluation, the extension was considered potentially effective, although this further depends on how it is implemented in practice.
SoK: Federated Learning based Network Intrusion Detection in 5G: Context, state of the art and challenges
Sara Chennoufi (SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, Palaiseau, France), Gregory Blanc (SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, Palaiseau, France), Houda Jmila (Institute LIST, CEA, Paris-Saclay University, Palaiseau, France), Christophe Kiennert (SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, Palaiseau, France)
SoK Paper
The advent of 5G marks a remarkable advancement, offering faster data rates, lower latency, and improved connectivity. Yet, its complexity, stemming from factors such as the integration of advanced technologies like Software Defined Networking (SDN) and slicing, introduces challenges in implementing strong security measures against emerging threats. Although Intrusion Detection Systems (IDSs) can be successfully employed to detect attacks, the novelty of 5G results in an expanded and new attack surface. Collaborative efforts are essential for detecting novel and distributed attacks and for ensuring comprehensive observability in multiparty networks. However, such collaboration raises privacy concerns due to the sensitivity of shared data. Federated Learning (FL), a collaborative Machine Learning (ML) approach, is a promising solution to preserve privacy, as the model is trained across devices without exchanging raw data.

In this paper, we examine ongoing efforts that propose FL-based IDS solutions in a 5G context. We set out to systematically review them in light of the challenges raised by their practical deployment in 5G networks. Out of the numerous FL papers we analyzed, only 17 specifically concentrate on 5G scenarios, and they are the focus of this study. In this SoK, we first identify IDS challenges in 5G. Second, we classify FL-based IDSs according to (i) their 5G application domain, (ii) the 5G challenges they address, and (iii) their FL approach in terms of architecture, parameters, detection method, evaluation, etc. Through this examination, we find that some issues receive less attention or are overlooked, prompting us to explore potential solutions. Additionally, we have identified other challenges, such as the limited applicability of evaluation results due to the difficulty of obtaining high-quality 5G datasets for FL-based IDS evaluation.
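At the core of most FL-based IDS designs is an aggregation step in which clients train locally and share only model parameters, never raw traffic. A minimal sketch of FedAvg-style weighted averaging (names and values are illustrative, not drawn from the surveyed papers):

```python
# Hypothetical sketch of the FedAvg aggregation step: each client submits
# its locally trained parameter vector and the size of its local dataset;
# the server returns the size-weighted average as the new global model.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (lists of floats)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with different amounts of local traffic data
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(global_model)  # [2.5, 3.5]
```

Because only these parameter vectors cross the network, the raw 5G traffic that trained them never leaves each operator's domain, which is the privacy property the abstract highlights.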
Graph-Based Spectral Analysis for Detecting Cyber Attacks
Majed Jaber (Laboratory of research of EPITA (LRE), France), Nicolas Boutry (EPITA Research Laboratory (LRE), Le Kremlin-Bicêtre, France., France), Pierre Parrend (EPITA Strasbourg, France)
Full Paper
Spectral graph theory delves into graph properties through their spectral signatures. The eigenvalues of a graph's Laplacian matrix are crucial for grasping its connectivity and overall structural topology. This research capitalizes on the inherent link between graph topology and spectral characteristics to enhance spectral graph analysis applications. In particular, such connectivity information is key to detecting the low signals that betray the occurrence of cyberattacks. This paper introduces SpectraTW, a novel spectral graph analysis methodology tailored for monitoring anomalies in network traffic. SpectraTW relies on four spectral indicators (Connectedness, Flooding, Wiriness, and Asymmetry), derived from network attributes and topological variations, which are defined and evaluated. This method interprets networks as evolving graphs, leveraging the Laplacian matrix's spectral insights to detect shifts in network structure over time. The significance of spectral analysis becomes especially pronounced in the medical IoT domain, where the complex web of devices and the critical nature of healthcare data amplify the need for advanced security measures. Spectral analysis's ability to swiftly pinpoint irregularities and shifts in network traffic aligns well with the medical IoT's requirements for prompt attack detection.
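As a rough sketch of the underlying machinery (not the paper's SpectraTW implementation), the Laplacian spectrum that drives indicators such as Connectedness can be computed directly from an adjacency matrix; the graph and interpretation below are illustrative:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the Laplacian L = D - A of an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)  # sorted in ascending order

# Path graph on 3 nodes: 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
vals = laplacian_spectrum(A)

# The multiplicity of eigenvalue 0 equals the number of connected
# components; the second-smallest eigenvalue (the Fiedler value) grows
# with connectivity, so abrupt changes in it across time windows can
# flag structural shifts in the traffic graph.
print(np.round(vals, 6))  # [0. 1. 3.]
```

Tracking how such spectra evolve as the traffic graph changes over time is the general idea behind treating networks as evolving graphs.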
On the effectiveness of Large Language Models for GitHub Workflows
Xinyu Zhang (Purdue University, United States), Siddharth Muralee (Purdue University, United States), Sourag Cherupattamoolayil (Purdue University, United States), Aravind Machiry (Purdue University, United States)
Full Paper
GitHub workflows or GitHub CI is a popular continuous integration platform that enables developers to automate various software engineering tasks by specifying them as workflows, i.e., YAML files with a list of jobs. However, engineering valid workflows is tedious. They are also prone to severe security issues, which can result in supply chain vulnerabilities. Recent advancements in Large Language Models (LLMs) have demonstrated their effectiveness in various software development tasks. However, GitHub workflows differ from regular programs in both structure and semantics. We perform the first comprehensive study to understand the effectiveness of LLMs on five workflow-related tasks with different levels of prompts. We curated a set of ∼400K workflows and generated prompts with varying detail. We also fine-tuned LLMs on GitHub workflow tasks. Our evaluation of three state-of-the-art LLMs and their fine-tuned variants revealed various interesting findings on the current effectiveness and drawbacks of LLMs.
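For readers unfamiliar with the format, a GitHub workflow is a YAML file declaring triggers and a list of jobs; the following minimal example is illustrative and not drawn from the paper's ~400K-workflow dataset:

```yaml
# .github/workflows/ci.yml (illustrative)
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Even a file this small mixes structure (jobs, steps) with semantics (third-party actions, shell commands), which is why generating and auditing workflows differs from generating regular program code.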
Provably Secure Communication Protocols for Remote Attestation
Johannes Wilson (Sectra Communications; Linköping University, Sweden), Mikael Asplund (Linköping University, Sweden), Niklas Johansson (Sectra Communications; Linköping University, Sweden), Felipe Boeira (Linköping University, Sweden)
Full Paper
Remote Attestation is emerging as a promising technique to ensure that some remote device is in a trustworthy state. This can, for example, be an IoT device that is attested by a cloud service before the device is allowed to connect. However, flaws in the communication protocols associated with the remote attestation mechanism can introduce vulnerabilities into the system design and potentially nullify the added security. Formal verification of protocol security can help to prevent such flaws. In this work we provide a detailed analysis of the necessary security properties for remote attestation, focusing on the authenticity of the involved agents. We extend beyond existing work by considering the possibility of an attestation server (making the attestation process involve three parties) as well as requiring verifier authentication. We demonstrate that some security properties are not met by a state-of-the-art commercial protocol for remote attestation under our strong adversary model. Moreover, we design two new communication protocols for remote attestation that we formally prove fulfil all of the considered authentication properties.
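As a hedged illustration of the kind of challenge-response exchange such protocols formalize (this is not one of the paper's verified protocols; the pre-shared key and HMAC construction are simplifying assumptions, whereas real attestation uses hardware-backed keys and signed measurements):

```python
import hashlib
import hmac

# Toy two-party attestation: the verifier sends a fresh nonce, the prover
# returns a keyed digest ("quote") over the nonce and a hash of its state.
# The nonce is what prevents an attacker from replaying an old quote.

KEY = b"pre-shared-attestation-key"  # placeholder secret

def prover_quote(device_state: bytes, nonce: bytes) -> bytes:
    measurement = hashlib.sha256(device_state).digest()
    return hmac.new(KEY, nonce + measurement, hashlib.sha256).digest()

def verifier_check(expected_state: bytes, nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(KEY, nonce + hashlib.sha256(expected_state).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

nonce = b"A" * 16                       # fresh per session in practice
quote = prover_quote(b"firmware-v1", nonce)
print(verifier_check(b"firmware-v1", nonce, quote))       # True
print(verifier_check(b"firmware-v1", b"B" * 16, quote))   # False: stale nonce
```

Adding a third party (an attestation server) and authenticating the verifier, as the paper does, multiplies the ways such an exchange can go wrong, which is why machine-checked proofs are valuable here.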

ASOD

SoK: Automated Software Testing for TLS Libraries
Ben Swierzy (University of Bonn, Germany), Felix Boes (University of Bonn, Germany), Timo Pohl (University of Bonn, Germany), Christian Bungartz (University of Bonn, Germany), Michael Meier (University of Bonn, Fraunhofer FKIE, Germany)
SoK Paper
Reusable software components, typically integrated as libraries, are a central paradigm of modern software development. 
By incorporating a library into their software, developers trust in its quality and its correct and complete implementation.
Since errors in a library affect all applications using it, there is a need for quality assurance tools such as automated testing that can be used by library and application developers to verify the functionality.
In the past decade, many different systems have been published that focus on the automated analysis of TLS implementations for finding bugs and security vulnerabilities.

However, all of these systems focus only on a few TLS components and lack a common analysis scenario and inter-approach comparisons.
Especially, the amount of manual effort required across the whole analysis process to obtain the root cause of an error is often ignored.
In this paper, we survey and categorize literature on automated testing approaches for TLS libraries.
The results reveal a heterogeneous landscape of approaches with a trade-off between the manual effort required for setup and for result interpretation, along with major deficits in the considered performance metrics.
These imply important future directions which need to be followed to advance the current state of protocol test automation.
Workshop ASOD
Accuracy Evaluation of SBOM Tools for Web Applications and System-Level Software
Andreas Halbritter (Augsburg Technical University of Applied Sciences Institute for Innovative Safety and Security, Germany), Dominik Merli (Augsburg Technical University of Applied Sciences Institute for Innovative Safety and Security, Germany)
Full Paper
Recent vulnerabilities in software like Log4j raise the question of whether the software supply chain is sufficiently secured.
Governmental initiatives in the United States (US) and the European Union (EU) demand a Software Bill of Materials (SBOM) to address this issue. An SBOM is produced using creation tools and has to be accurate and complete. There has been prior research in this field.

However, no detailed investigation of several SBOM-producing tools has been conducted regarding accuracy and reliability. For this reason, this work selects four popular programming languages for web applications and system-level software: Python, C, Rust and TypeScript. They form the basis for four sample software projects and their package managers. To allow manual verification, the software projects are kept small, with a small number of packages and a single dependency. The open-source analysis tools are divided into language-specific and generally usable tools and are run in their standard execution mode on the software projects.

The results were checked for completeness and against the National Telecommunications and Information Administration (NTIA) minimum and recommended elements. No specific tool can be recommended, as none fulfills every requirement; only two tools can be recommended in a limited way. Many tools do not provide a complete SBOM, as they do not depict every test package and dependency. Governmental initiatives should define further specifications for SBOMs, for example regarding their accuracy and depth. Further research in this field, for example on proprietary tools or other programming languages, is desirable.
Workshop ASOD
Enhancing Secure Deployment with Ansible: A Focus on Least Privilege and Automation for Linux
Eddie Billoir (IRIT, Université de Toulouse, CNRS, Toulouse INP, UT3, AIRBUS Protect, France), Romain Laborde (IRIT, Université de Toulouse, CNRS, Toulouse INP, UT3, France), Ahmad Samer Wazan (Zayed University, France), Yves Rutschle (AIRBUS Protect, France), Abdelmalek Benzekri (IRIT, Université de Toulouse, CNRS, Toulouse INP, UT3, France)
Full Paper
As organisations increasingly adopt Infrastructure as Code (IaC), ensuring secure deployment practices becomes paramount. Ansible is a well-known open-source and modular tool for automating IT management tasks. However, Ansible is subject to supply-chain attacks that can compromise all managed hosts.

This article presents a semi-automated process that improves Ansible-based deployments with fine-grained control over the administrative privileges granted to Ansible tasks. We describe the integration of the RootAsRole framework into Ansible. Finally, we analyse the limits of the current implementation.
Workshop ASOD

BASS

Analysis of the Windows Control Flow Guard
Niels Pfau (Institute of IT Security Research, St. Pölten University of Applied Sciences, Austria), Patrick Kochberger (Institute of IT Security Research, St. Pölten University of Applied Sciences, Austria)
Full Paper
The constantly evolving field of cybersecurity demands the continuous development and refinement of defense mechanisms. Memory corruption attacks, including buffer overflows and use-after-free vulnerabilities, have long been a significant threat, especially for web browsers. Microsoft introduced Control Flow Guard (CFG) as a mitigative measure against advanced exploitation techniques, like ROP and use-after-free-based exploits, to address these risks. This paper delves into the internals of CFG: its implementation, effectiveness, and possible bypasses that could undermine its security. A thorough examination of Microsoft’s CFG design principles gives the reader an in-depth understanding of how CFG enforces control flow integrity within a program’s execution. The limitations of this mitigation are highlighted by employing a direct return address overwrite to exploit the ChakraCore JavaScript engine.

Additional potential bypasses are investigated, considering other scenarios wherein CFG might get circumvented. This exploration emphasizes the importance of continued research and development in the field of exploit mitigation, and the chaining of multiple mitigations to address evolving threats and maintain the security and integrity of modern software.

In conclusion, the paper discusses the Windows CFG and its ramifications for memory corruption attacks. It demonstrates CFG's effectiveness against specific exploitation methods while spotlighting limitations and potential bypasses that could jeopardize its security.
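A highly simplified model of the check CFG performs on indirect calls (illustrative Python only; the real Windows mechanism operates on machine addresses via a process-wide bitmap of compiler-marked valid targets):

```python
# Toy model of a CFG-style indirect-call check: before dispatching an
# indirect call, the target is validated against the set of functions
# registered as legitimate call targets.

class ControlFlowGuard:
    def __init__(self):
        self._valid_targets = set()

    def register(self, func):
        """Mark a function as a legitimate indirect-call target."""
        self._valid_targets.add(func)
        return func

    def guarded_call(self, func, *args):
        if func not in self._valid_targets:
            raise RuntimeError("CFG violation: invalid indirect-call target")
        return func(*args)

cfg = ControlFlowGuard()

@cfg.register
def handler(x):
    return x + 1

def injected(x):  # attacker-chosen target, never registered
    return x * 1000

print(cfg.guarded_call(handler, 41))  # 42
# cfg.guarded_call(injected, 1) raises RuntimeError. Note, however, that
# this check only covers indirect *calls*, not return addresses, which is
# exactly the gap a direct return-address overwrite exploits.
```

The final comment mirrors the paper's central observation: CFG constrains forward edges of the control-flow graph, so backward-edge attacks remain possible without complementary mitigations.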
Workshop BASS
If It Looks Like a Rootkit and Deceives Like a Rootkit: A Critical Examination of Kernel-Level Anti-Cheat Systems
Christoph Dorner (St. Pölten University of Applied Sciences, Austria), Lukas Daniel Klausner (St. Pölten University of Applied Sciences, Austria)
Full Paper
Addressing a critical aspect of cybersecurity in online gaming, this paper systematically evaluates the extent to which kernel-level anti-cheat systems mirror the properties of rootkits, highlighting the importance of distinguishing between protective and potentially invasive software. After establishing a definition for rootkits (making distinctions between rootkits and simple kernel-level applications) and defining metrics to evaluate such software, we introduce four widespread kernel-level anti-cheat solutions. We lay out the inner workings of these types of software, assess them according to our previously established definitions, and discuss ethical considerations and the possible privacy infringements introduced by such programs. Our analysis shows two of the four anti-cheat solutions exhibiting rootkit-like behaviour, threatening the privacy and the integrity of the system. This paper thus provides crucial insights for researchers and developers in the field of gaming security and software engineering, highlighting the need for informed development practices that carefully consider the intersection of effective anti-cheat mechanisms and user privacy.
Workshop BASS
Systematic Analysis of Label-flipping Attacks against Federated Learning in Collaborative Intrusion Detection Systems
Léo Lavaur (IMT Atlantique / IRISA-SOTERN / Cyber CNI, France), Yann Busnel (IMT Nord Europe / IRISA-SOTERN, France), Fabien Autrel (IMT Atlantique / IRISA-SOTERN, France)
Full Paper
With the emergence of federated learning (FL) and its promise of privacy-preserving knowledge sharing, the field of intrusion detection systems (IDSs) has seen a renewed interest in the development of collaborative models. However, the distributed nature of FL makes it vulnerable to malicious contributions from its participants, including data poisoning attacks. The specific case of label-flipping attacks, where the labels of a subset of the training data are flipped, has been overlooked in the context of IDSs that leverage FL primitives. This study aims to close this gap by providing a systematic and comprehensive analysis of the impact of label-flipping attacks on FL for IDSs. We show that such attacks can still have a significant impact on the performance of FL models, especially targeted ones, depending on parameters and dataset characteristics. Additionally, the provided tools and methodology can be used to extend our findings to other models and datasets, and to benchmark the efficiency of existing countermeasures.
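The attack itself is simple to state: a malicious FL participant flips some labels before local training, so the poisoned update skews the global model. A minimal sketch of a targeted label-flipping poisoner (class names, flip rate, and API are illustrative, not the paper's tooling):

```python
import random

def flip_labels(labels, src, dst, rate, seed=0):
    """Flip a fraction `rate` of the labels equal to `src` into `dst`."""
    rng = random.Random(seed)
    out = list(labels)
    candidates = [i for i, y in enumerate(out) if y == src]
    for i in rng.sample(candidates, int(len(candidates) * rate)):
        out[i] = dst
    return out

y = ["attack", "benign", "attack", "attack", "benign"]
poisoned = flip_labels(y, src="attack", dst="benign", rate=1.0)
print(poisoned)  # ['benign', 'benign', 'benign', 'benign', 'benign']
```

Because only the resulting model update is shared, the server never sees the flipped labels directly, which is what makes detecting such contributions in FL-based IDSs non-trivial.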
Workshop BASS
Behavioural Modelling for Sustainability in Smart Homes
Luca Ardito (Politecnico di Torino, Italy)
Full Paper
A tool for IoT Firmware Certification
Giuseppe Marco Bianco (Politecnico di Torino, Italy), Luca Ardito (Politecnico di Torino, Italy), Michele Valsesia (Politecnico di Torino, Italy)
Full Paper
The IoT landscape is plagued by security and reliability concerns due to the absence of standardization, rendering devices susceptible to breaches. Certifying IoT firmware offers a solution by enabling consumers to easily identify secure products and incentivizing developers to prioritize secure coding practices, thereby fostering transparency within the IoT ecosystem. This study proposes a methodology centered on ELF binary analysis, aimed at discerning critical functionalities by identifying system calls within firmware. It introduces the manifest-producer tool, developed in Rust, for analyzing ELF binaries in IoT firmware certification. Employing static analysis techniques, the tool detects APIs and evaluates firmware behavior, culminating in the generation of JSON manifests encapsulating essential information. These manifests enable an assessment of firmware compliance with security and reliability standards, as well as alignment with declared device behaviors. Performance analysis using benchmarking tools demonstrates the tool's versatility and resilience across diverse programming languages and file sizes. Future avenues of research include refining API discovery algorithms and conducting vulnerability analyses to bolster IoT device security. This paper underscores the pivotal role of firmware certification in cultivating a safer IoT ecosystem and presents a valuable tool for realizing this objective within academic discourse.
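For illustration, even the first bytes of an ELF header expose the kind of metadata such static analysis starts from (a minimal Python sketch following the ELF specification's field offsets; the actual manifest-producer tool is written in Rust and goes much further, detecting APIs and system calls):

```python
import struct

def elf_info(data: bytes) -> dict:
    """Decode basic identity fields from the start of an ELF file."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    ei_class, ei_data = data[4], data[5]
    endian = "<" if ei_data == 1 else ">"
    e_type, e_machine = struct.unpack_from(endian + "HH", data, 16)
    return {
        "bits": 64 if ei_class == 2 else 32,
        "little_endian": ei_data == 1,
        "type": e_type,        # 2 = executable, 3 = shared object / PIE
        "machine": e_machine,  # 0x3e = x86-64, 0xb7 = AArch64
    }

# Synthetic first 20 bytes of a 64-bit little-endian x86-64 executable
header = b"\x7fELF\x02\x01\x01" + bytes(9) + struct.pack("<HH", 2, 0x3E)
print(elf_info(header))
```

From this entry point, a real analyzer would walk section headers and symbol tables to enumerate the imported functions from which a behavioral manifest is derived.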
Workshop BASS
Image-based detection and classification of Android malware through CNN models
Alessandro Aldini (University of Urbino Carlo Bo, Italy), Tommaso Petrelli (University of Urbino Carlo Bo, Italy)
Full Paper
Convolutional Neural Networks (CNN) are artificial deep learning networks widely used in computer vision and image recognition for their highly efficient capability of extracting input image features. In the literature, such a successful tool has been leveraged for detection/classification purposes in several application domains where input data are converted into images. In this work, we consider the application of CNN models, developed by employing standard Python libraries, to detect and then classify Android-based malware applications. Different models are tested, even in combination with machine learning-based classifiers, on two datasets of 5000 applications each. To emphasize the adequacy of the various CNN implementations, several performance metrics are considered, complemented by a comprehensive comparison with related work.
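A common preprocessing step in this line of work is to reinterpret an application's raw byte stream as a grayscale image before feeding it to a CNN; a minimal sketch (the fixed width and zero-padding scheme are illustrative choices, not necessarily the paper's):

```python
def bytes_to_grayscale(data: bytes, width: int = 8):
    """Reshape a byte stream into rows of pixel intensities (0-255)."""
    pixels = list(data)
    pad = (-len(pixels)) % width
    pixels += [0] * pad  # zero-pad the final row to a full width
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

img = bytes_to_grayscale(bytes(range(10)), width=4)
print(img)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 0]]
```

Structural patterns in the binary (code sections, resources, packed regions) become visual textures in such images, which is what the CNN's feature extraction exploits.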
Workshop BASS
A Web Browser Plugin for Users' Security Awareness
Thomas Hoad (University of Southampton, United Kingdom), Erisa Karafili (University of Southampton, United Kingdom)
Full Paper
Browsing online continues to pose a risk to users’ privacy and security. There is a plethora of existing tools and solutions that aim at ensuring safe and private browsing, but they are not used by the majority of users due to their lack of ease of use or because they are too restrictive. In this work, we present a plugin for Google Chrome that aims to increase users' security awareness regarding the visited websites. We aim to provide the user with simple and understandable information about the security of the visited website. We evaluated our tool through a usability analysis and compared it with existing well-known solutions. Our study showed that our plugin ranked high in ease of use, and in the middle range for clarity, information provided, and overall satisfaction. Overall, our study showed that users would like to use a tool that is easy to use but also provides some simple security information about the visited website.
Workshop BASS

CSA

RMF: A Risk Measurement Framework for Machine Learning Models
Jan Schröder (Fraunhofer FOKUS and HTW Berlin, Germany), Jakub Breier (TTControl GmbH, Austria)
Full Paper
Machine learning (ML) models are used in many safety and security-critical applications nowadays. It is therefore of interest to measure the security of a system that uses ML as its component.

This paper deals with the field of ML security, especially for autonomous vehicles. For this purpose, the concept of a technical framework is described, implemented, and evaluated in a case study. Based on ISO/IEC 27004:2016, risk indicators are utilized to measure and evaluate the extent of damage and the effort required by an attacker. Contrary to the initial assumption, it is not possible to determine a single risk value that represents the attacker's effort; therefore, four different values must be interpreted individually.
Workshop CSA
Analyzing Air-traffic Security using GIS-"blur" with Information Flow Control in the IIIf
Florian Kammueller (Middlesex University London and TU Berlin, United Kingdom)
Full Paper
In this paper, we address the security and privacy of air-traffic control systems. Classically, these systems are closed proprietary systems. However, air-traffic monitoring systems like flight radars are decentralized public applications that risk leaking confidential information, thereby creating security and privacy risks. We propose the use of the Isabelle Insider and Infrastructure framework (IIIf) to facilitate the security specification and verification of air traffic control systems. This paper summarizes the IIIf and then illustrates the use of the framework on the application of a flight path monitoring system. Using the idea of blurring visual data to obfuscate privacy-critical data used in GIS systems, we observe that implicit information flows exist for dynamic systems like flight radars. We propose information hiding as a solution. To show the security of this approach, we present the extension of the IIIf by a formal notion of indistinguishability and prove the central noninterference property for the flight path monitoring application with hiding.
Workshop CSA
Exploring the influence of the choice of prior of the Variational Auto-Encoder on cybersecurity anomaly detection
Tengfei Yang (Software Research Institute, Technological University of the Shannon:Midlands Midwest, Ireland), Yuansong Qiao (Software Research Institute, Technological University of the Shannon:Midlands Midwest, Ireland), Brian Lee (Software Research Institute, Technological University of the Shannon:Midlands Midwest, Ireland)
Full Paper
The Variational Auto-Encoder (VAE) is a popular generative model that performs variational inference in the latent layer; the prior is an important element for improving inference efficiency. This research explored the prior in the VAE by comparing Normal-family distributions and other location-scale family distributions in three aspects (performance, robustness, and complexity) in order to find a suitable prior for cybersecurity anomaly detection. Suitable distributions can improve detection performance, which was verified on the UNSW-NB15 and CIC-IDS2017 datasets.
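The prior enters the VAE objective through the KL regularization term of the ELBO; for the standard normal prior this term has a well-known closed form, and replacing the prior with another location-scale distribution changes exactly this term (a sketch of the standard case only; the paper's experimental setup is not reproduced here):

```python
import math

def kl_to_standard_normal(mu: float, sigma: float) -> float:
    """KL( N(mu, sigma^2) || N(0, 1) ) in closed form, as used in the
    usual Gaussian-prior ELBO of a VAE."""
    return 0.5 * (sigma ** 2 + mu ** 2 - 1.0 - math.log(sigma ** 2))

print(kl_to_standard_normal(0.0, 1.0))            # 0.0: posterior = prior
print(round(kl_to_standard_normal(1.0, 0.5), 4))  # penalty for drifting away
```

In anomaly detection, this term controls how tightly latent codes are pulled toward the prior, so heavier-tailed priors trade reconstruction fidelity against robustness differently, which is the comparison the paper investigates.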
Workshop CSA
A Technical Exploration of Strategies for Augmented Monitoring and Decision Support in Information Warfare
Frida Muñoz Plaza (Indra, Spain), Inés Hernández San Román (Indra, Spain), Marco Antonio Sotelo Monge (Indra, Spain)
Full Paper
The evolving landscape of global security has shifted away from the traditional dynamics of superpower confrontations towards a more complex interaction involving both state and non-state actors. This transition is fueled by factors like globalization, resource competition, and shifts in political and social frameworks, contributing to heightened levels of uncertainty. Simultaneously, there has been an 'information revolution' driven by technologies such as the Internet and mobile phones, ushering in an era dominated by computer-based decision-making. This evolving Information Environment encompasses various components, from the information itself to the actors and systems facilitating its utilization. The capability to influence perceptions, especially among local populations, holds significant strategic importance in military contexts. Additionally, the growing dependence on Information Technology (IT) introduces both opportunities for exploitation and vulnerabilities that require attention, particularly in the dissemination of information and disinformation campaigns via the Internet. In this paper, the authors explore technical enablers that can help to mitigate the downside effects of information warfare targeted against individuals engaged in information warfare campaigns. A three-fold analysis unveils alternatives for monitoring cognitive domain capabilities, analyzing external sources of information (e.g., OSINT sources), and analyzing cognitive patterns. The ultimate goal is to suggest defensive mechanisms that effectively diminish the likelihood of success of an adversarial attack through deterrence based on others' perceptions.
Workshop CSA
Evaluation of Cyber Situation Awareness - Theory, Techniques and Applications
Georgi Nikolov (Royal Military School Brussels, Belgium), Axelle Perez (Université libre de Bruxelles, Belgium), Wim Mees (Royal Military Academy Brussels, Belgium)
Full Paper
In recent years the technology field has grown exponentially, bringing with it new possibilities, but also new threats. This rapid advancement has created fertile ground for new sophisticated cyber attacks exhibiting a high degree of complexity. In an ever-evolving cyber landscape, organizations need to dedicate valuable resources to enhancing their understanding of emergent threats for the purposes of identification, analysis and mitigation. To accomplish this task, they rely on Cyber Situation Awareness (CSA), a framework designed for managing the virtual environment through the perception and comprehension of the behaviors therein, be they benign or malicious, followed by modeling the future state of the environment based on the gathered information. In this paper, we discuss how exactly the theory of Situation Awareness has been applied to the cyber domain. Further on, we present various techniques used by Cyber Situation Operation Centers (CSOCs) for handling large quantities of complex data and managing the dynamic nature of the environment, and discuss in detail a number of methodologies that have been designed for evaluating the level of CSA. Finally, we provide specific examples of simulated scenarios for the application of the CSA assessment techniques.
Workshop CSA
Unlocking the Potential of Knowledge Graphs: A Cyber Defense Ontology for a Knowledge Representation and Reasoning System
José María Jorquera Valero (University of Murcia, Spain), Antonio López Martínez (University of Murcia, Spain), Pedro Miguel Sánchez Sánchez (University of Murcia, Spain), Daniel Navarro Martínez (Indra Digital Labs, Spain), Rodrigo Varas López (Indra Digital Labs, Spain), Javier Ignacio Rojo Lacal (Indra Digital Labs, Spain), Antonio López Vivar (Indra Digital Labs, Spain), Marco Antonio Sotelo Monge (Indra Digital Labs, Spain), Manuel Gil Pérez (University of Murcia, Spain), Gregorio Martínez Pérez (University of Murcia, Spain)
Full Paper
In today's dynamic and complex warfare landscape, characterized by the convergence of traditional and emerging threats, the significance of cybersecurity in shaping modern conflicts cannot be overstated. This trend presents a challenging paradigm shift in how military organizations approach mosaic warfare in the digital age, since new attack vectors and targets appear in their landscapes. In this vein, it is pivotal for military teams to have a clear and concise roadmap for cybersecurity incidents linked to potential mosaic warfare. This manuscript introduces a novel approach to bolstering mosaic warfare strategies by integrating an advanced Knowledge Representation and Reasoning system and a tailored ontology. Motivated by the critical role of cybersecurity in contemporary warfare, the proposed system aims to enhance situational awareness, decision-making capabilities, and operational effectiveness in the face of evolving cyber threats. In this sense, this manuscript entails a new ontology that not only covers the cybersecurity realm but also introduces key concepts related to strategic and operational military levels at the same time. The ad-hoc ontology is also compared against other well-known ones, such as the MITRE, NATO, or UCO approaches, and manifests a significant performance by employing standardized quality metrics for ontologies. Lastly, a realistic mosaic warfare scenario is contextualized to demonstrate the deployment of the proposed system and how it can properly represent all information gathered from heterogeneous data sources.
Workshop CSA
NEWSROOM: Towards Automating Cyber Situational Awareness Processes and Tools for Cyber Defence
Markus Wurzenberger (AIT Austrian Institute of Technology GmbH, Austria), Stephan Krenn (AIT Austrian Institute of Technology GmbH, Austria), Max Landauer (AIT Austrian Institute of Technology, Austria), Florian Skopik (AIT Austrian Institute of Technology, Austria), Cora Perner (Airbus, Germany), Jarno Lötjönen (Jamk University of Applied Sciences, Finland), Jani Päijänen (Jamk University of Applied Sciences, Finland), Georgios Gardikis (Space Hellas S.A., Greece), Nikos Alabasis (Space Hellas S.A., Greece), Liisa Sakerman (Sihtasutus CR14, Estonia), Fredi Arro (Sihtasutus CR14, Estonia), Kristiina Omri (CybExer Technologies OÜ, Estonia), Aare Reintam (CybExer Technologies OÜ, Estonia), Juha Röning (University of Oulu, Finland), Kimmo Halunen (University of Oulu, Finland), Romain Ferrari (ThereSIS, Thales SIX GTS, France), Vincent Thouvenot (ThereSIS, Thales SIX GTS, France), Martin Weise (TU Wien, Austria), Andreas Rauber (TU Wien, Austria), Vasileios Gkioulos (Norwegian University of Science and Technology, Norway), Sokratis Katsikas (Norwegian University of Science and Technology, Norway), Luigi Sabetta (LeonardoLabs (Leonardo spa), Italy), Jacopo Bonato (LeonardoLabs (Leonardo spa), Italy), Rocío Ortíz (INDRA, Spain), Daniel Navarro (INDRA, Spain), Nikolaos Stamatelatos (Logstail, Greece), Ioannis Avdoulas (Logstail, Greece), Rudolf Mayer (University of Vienna, Austria), Andreas Ekelhart (University of Vienna, Austria), Ioannis Giannoulakis (Eight Bells Ltd, Cyprus), Emmanouil Kafetzakis (Eight Bells Ltd, Cyprus), Antonello Corsi (CY4GATE SpA, Italy), Ulrike Lechner (Universität der Bundeswehr München, Germany), Corinna Schmitt (Universität der Bundeswehr München, FI CODE, Germany)
Full Paper
Cyber Situational Awareness (CSA) is an important element in both cyber security and cyber defence to inform processes and activities on strategic, tactical, and operational level. Furthermore, CSA enables informed decision making. The ongoing digitization and interconnection of previously unconnected components and sectors equally affects the civilian and military sector. In defence, this means that the cyber domain is both a separate military domain as well as a cross-domain and connecting element for the other military domains comprising land, air, sea, and space. Therefore, CSA must support perception, comprehension, and projection of events in the cyber space for persons with different roles and expertise. This paper introduces NEWSROOM, a research initiative to improve technologies, methods, and processes specifically related to CSA in cyber defence. For this purpose, NEWSROOM aims to improve methods for attacker behavior classification, cyber threat intelligence (CTI) collection and interaction, secure information access and sharing, as well as human computer interfaces (HCI) and visualizations to provide persons with different roles and expertise with accurate and easy to comprehend mission- and situation-specific CSA. Eventually, NEWSROOM's core objective is to enable informed and fast decision-making in stressful situations of military operations. The paper outlines the concept of NEWSROOM and explains how its components can be applied in relevant application scenarios.
Workshop CSA
Evaluating the impact of contextual information on the performance of intelligent continuous authentication systems
Pedro Miguel Sánchez Sánchez (Department of Information and Communications Engineering, University of Murcia, Spain, Spain), Adrián Abenza Cano (Department of Information and Communications Engineering, University of Murcia, Spain, Spain), Alberto Huertas Celdrán (Communication Systems Group CSG, Department of Informatics, University of Zurich, Switzerland), Gregorio Martínez Pérez (Department of Information and Communications Engineering, University of Murcia, Spain, Spain)
Full Paper
Nowadays, the usage of computers ranges from activities that do not consider sensitive data, such as playing video games, to others managing confidential information, like military operations. Additionally, regardless of the actions performed by subjects, most computers store different pieces of sensitive data, making the implementation of robust security mechanisms a critical and mandatory task. In this context, continuous authentication has been proposed as a complementary mechanism to improve the limitations of conventional authentication methods. However, mainly driven by the evolution of Machine Learning (ML), a series of challenges related to authentication performance and, therefore, the feasibility of existing systems are still open. This work proposes the usage of contextual information related to the applications executed in the computers to create ML models able to authenticate subjects continuously. To evaluate the suitability of the proposed context-aware ML models, a continuous authentication framework for computers has been designed and implemented. Then, a set of experiments with a public dataset with 12 subjects demonstrated the improvement of the proposed approach compared to the existing ones. Precision, recall, and F1-Score metrics are raised from an average of 0.96 (provided by general ML models proposed in the literature) to 0.99-1.
Workshop CSA
On the Application of Natural Language Processing for Advanced OSINT Analysis in Cyber Defence
Florian Skopik (AIT Austrian Institute of Technology, Austria), Benjamin Akhras (AIT Austrian Institute of Technology, Austria), Elisabeth Woisetschlaeger (AIT Austrian Institute of Technology, Austria), Medina Andresel (AIT Austrian Institute of Technology, Austria), Markus Wurzenberger (AIT Austrian Institute of Technology, Austria), Max Landauer (AIT Austrian Institute of Technology, Austria)
Full Paper
Open Source Intelligence (OSINT), in addition to closed military sources, provides timely information on emerging cyber attack techniques, attacker groups, changes in IT products, policy updates, recent events, and much more. Often, dozens of analysts scour hundreds of sources to gather, categorize, cluster, and prioritize news items, delivering the most pertinent information to decision makers. However, the sheer volume of sources and news items is continually expanding, making manual searches increasingly challenging. Moreover, the format and presentation of this information vary widely, with each blog entry, threat report, discussion forum, and mailing list item appearing differently, further complicating parsing and extracting relevant data. The research projects NEWSROOM and EUCINF, under the European Defence Fund (EDF), focus on leveraging Natural Language Processing (NLP) and Artificial Intelligence (AI) to enhance mission-oriented cyber situational awareness. These EDF initiatives are instrumental in advancing Taranis AI, a tool designed to categorize news items using machine learning algorithms and extract pertinent entities like company names, products, CVEs, and attacker groups. This enables the indexing and labeling of content, facilitating the identification of relationships and grouping of news items related to the same events -- a crucial step in crafting cohesive "stories." These stories enable human analysts to swiftly capture the most significant current "hot topics", alleviating them from the task of consolidating or filtering redundant information from various sources. Taranis AI further enhances its capabilities by automatically generating summaries of reports and stories, and implementing a collaborative ranking system, among other features. This paper serves as an introduction to Taranis AI, exploring its NLP advancements and their practical applications. 
Additionally, it discusses lessons learned from its implementation and outlines future directions for research and development.
Workshop CSA
PQ-REACT: Post Quantum Cryptography Framework for Energy Aware Contexts
Marta Irene Garcia Cid (Indra, Spain), Kourtis Michail-Alexandros (National Centre for Scientific Research “DEMOKRITOS”, Greece), David Domingo (Indra Sistemas de Comunicaciones Seguras, Spain), Nikolay Tcholtchev (Fraunhofer Institute for Open Communication Systems, Germany), Vangelos K. Markakis (Hellenic Mediterranean University, Greece), Marcin Niemiec (AGH University, Poland), Juan Pedro Brito Mendez (Universidad Politécnica de Madrid, Spain), Laura Ortiz (Universidad Politécnica de Madrid, Spain), Vicente Martin (Universidad Politécnica de Madrid, Spain), Diego Lopez (Telefonica Investicacion y Desarrollo, Spain), George Xilouris (National Centre for Scientific Research “DEMOKRITOS”, Greece), Maria Gagliardi (Scuola Superiore Sant'Anna, Italy), Jose Gonzalez (MTU Autralo Alplha Lab, Estonia), Miguel Garcia (Splorotech S.L., Spain), Giovanni Comande (SMARTEX SRL, Italy), Nikolai Stoianov (Bulgarian Defence Institute, Bulgaria)
Full Paper
Public key cryptography is nowadays a crucial component of the global communications that are critical to our economy, security, and way of life. Quantum computers are expected to pose a threat, and the widely used RSA, ECDSA, ECDH, and DSA cryptosystems will need to be replaced by quantum-safe cryptography. The main objective of the HORIZON Europe PQ-REACT project is to design, develop, and validate a framework for a faster and smoother transition from classical to quantum-safe cryptography for a wide variety of contexts and usage domains of potential interest for defence purposes. This framework will include Post Quantum Cryptography (PQC) migration paths and cryptographic agility methods, and will develop a portfolio of tools for the validation of post-quantum cryptographic systems using Quantum Computing. A variety of real-world pilots using PQC and Quantum Cryptography, i.e., Smart Grids, 5G, and Ledgers, will be deployed, and a series of open calls for SMEs and other stakeholders will be launched so that they can bring and test their PQC algorithms and external pilots on the PQ-REACT Quantum Computing Infrastructure.
Workshop CSA
Operation Assessment in cyberspace: Understanding the effects of Cyber Deception
Salvador Llopis Sanchez (Universitat Politecnica de Valencia, Spain), David Lopes Antunes (Universitat Politecnica de Valencia, Spain)
Full Paper
Cyber planners face a considerable challenge in finding holistic solutions for a cyber defence decision-support system - a core module of a cyber situation awareness capability. Due to a fast-evolving cyberspace, decision makers assisted by technical staff are prone to carry out qualitative assessments when planning and conducting cyber operations, instead of exclusively relying on quantitative assessments to articulate cyber defence mechanisms. A hybrid setting combining both types of assessments would be key to having the ability to monitor progress, anticipate deviations from initial plans, and evaluate effectiveness towards mission accomplishment. In line with this rationale, the authors propose a thorough analysis and tailoring of the operation assessment framework to the characteristics of cyberspace, with a view to identifying a proper methodology able to regularly assess the situation and provide mitigation measures to fix goal-alignment problems, including measuring the effects of cyber deception. Such goals are considered decisive conditions of the operation design. The results are expected to shed some light on measuring the required performance of action and effectiveness using mission impact and risk calculations, among others.
Workshop CSA

CUING

A Case Study on the Detection of Hash-Chain-based Covert Channels Using Heuristics and Machine Learning
Jeff Schymiczek (University of Helsinki, Finland), Tobias Schmidbauer (Nuremberg Institute of Technology, Germany), Steffen Wendzel (Worms University of Applied Sciences, Germany)
Full Paper
Reversible network covert channels are a security threat that allows their users to restore the carrier object before sending it to the overt receiver, making detection challenging. Some of these covert channels utilize computationally intensive operations, such as the calculation of cryptographic hash chains. Currently, these computationally intensive reversible covert channels are considered difficult to detect.
This paper proposes ways of utilizing shape analysis of packet runtime distributions to detect such computationally intensive covert channels. To this end, we simulated the latency of traffic modified by a hash-chain-based covert channel by adding mock hash-reconstruction runtimes to runtimes of legitimate ping traffic. After qualitatively observing the changes in the empirical probability distribution between modified and natural traffic, we investigated machine learning algorithms for their ability to detect the covert channel's presence. We show that a decision-tree-based AdaBoost classifier using the investigated statistical measures as input vector and a convolutional neural network applied directly to the packet runtime empirical probability distribution are able to classify sets of 50 ping measurements with high accuracy for low- to medium-high-latency connections. Our approach improves significantly over previous work on the detection of computationally intensive covert channels, as it both requires smaller sampling window sizes and achieves significantly higher detection rates on the same reference dataset.
Workshop CUING
How to evade modern web cryptojacking detection tools? A review of practical findings
Pawel Rajba (University of Wroclaw, Poland), Krzysztof Chmiel (University of Wroclaw, Poland)
Full Paper
One of the foundations of cryptocurrencies based on proof-of-work consensus is mining. This activity consumes a lot of computational resources, so malicious actors introduce cryptojacking malware to exploit users' computers and, as a result, their victims' resources. Cryptojacking emerged several years ago, together with the increasing adoption and prevalence of cryptocurrencies. This type of malware comes in several forms, but in this paper we consider malicious scripts embedded into websites. As the threat is real and we regularly hear about affected websites, including major web content providers, we analyzed selected promising detection methods based on techniques more sophisticated than blacklisting, which is the most common way of preventing this kind of attack. The analysis resulted in findings showing that all the considered solutions can be tricked from a controlled server. Fortunately, we also show ways in which the considered solutions can be improved, so that the proposed methods can be effective again.
Workshop CUING
Trustworthiness and explainability of a watermarking and machine learning-based system for image modification detection to combat disinformation
Andrea Rosales (Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya, Spain), Agnieszka Malanowska (Warsaw University of Technology, Poland), Tanya Koohpayeh Araghi (Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya, Spain), Minoru Kuribayashi (Center for Data-driven Science and Artificial Intelligence, Tohoku University, Japan), Marcin Kowalczyk (Warsaw University of Technology, Poland), Daniel Blanche-Tarragó (Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya, Spain), Wojciech Mazurczyk (Warsaw University of Technology, Poland), David Megías (Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya, Spain)
Full Paper
The widespread adoption of digital platforms that prioritize content based on engagement metrics and reward content creators accordingly has contributed to the expansion of disinformation, with all its social and political impact. We propose a verification system to counterbalance disinformation in two stages. First, a system that allows media industries to watermark their image and video content.

Second, a user platform for news consumers to verify whether images and videos on the internet have been modified. However, digital platforms, often developed as black boxes that hide their rationale from users and prioritize investors' interests over ethical and social concerns, have contributed to this disinformation and to a general lack of trust in verification systems. In this paper, we address trustworthiness and explainability in the development of the user platform to increase its trustworthiness and acceptance, based on three iterations of an international user study.
Workshop CUING
ZW-IDS: Zero-Watermarking-based network Intrusion Detection System using data provenance
Omair Faraj (Telecom SudParis, Institut Polytechnique de Paris, France), David Megias (Internet Interdisciplinary Institute, Universitat Oberta de Catalunya, Spain), Joaquin Garcia-Alfaro (Telecom SudParis, Institut Polytechnique de Paris, France)
Full Paper
In the rapidly evolving digital world, network security is a critical concern. Traditional security measures often fail to detect unknown attacks, making anomaly-based Network Intrusion Detection Systems (NIDS) using Machine Learning (ML) vital. However, these systems face challenges such as computational complexity and misclassification errors. This paper presents ZW-IDS, an innovative approach to enhance anomaly-based NIDS performance. We propose a two-layer classification NIDS integrating zero-watermarking with data provenance and ML. The first layer uses Support Vector Machines (SVM) with an ensemble learning model for feature selection. The second layer generates unique zero-watermarks for each data packet using data provenance information. This approach aims to reduce false alarms, improve computational efficiency, and boost NIDS classification performance. We evaluate ZW-IDS using the CICIDS2017 dataset and compare its performance with other multi-method ML and Deep Learning (DL) solutions.
Workshop CUING
Natural Language Steganography by ChatGPT
Martin Steinebach (Fraunhofer, Germany)
Full Paper
Natural language steganography as well as natural language watermarking have been challenging because of the complexity and lack of noise in natural language. But with the advent of LLMs like ChatGPT, controlled synthesis of written language has become available. In this work, we show how ChatGPT can be utilized to generate synthetic texts of a given topic that act as stego covers for hidden messages.
Workshop CUING
Single-image steganalysis in real-world scenarios based on classifier inconsistency detection
Daniel Lerch-Hostalot (Universitat Oberta de Catalunya, Spain), David Megías Jimenez (Universitat Oberta de Catalunya, Spain)
Full Paper
This paper presents an improved method for estimating the accuracy of a model based on images intended for prediction, enhancing the standard Detection of Classifier Inconsistencies (DCI) method. The conventional DCI method typically requires a large enough set of images from the same source to provide accurate estimations, which limits its practicality. Our enhanced approach overcomes this limitation by generating a set of images from a single original image, thereby enabling the application of the standard DCI method without requiring more than one target image. This method ensures that the generated images maintain the statistical properties of the original, preserving any embedded steganographic messages, through the use of non-destructive image manipulations such as flips, rotations, and shifts. Experimental results demonstrate that our method produces results comparable to those of the traditional DCI method, effectively estimating model accuracy with as few as 32 generated images. The robustness of our approach is also confirmed in challenging scenarios involving cover source mismatch (CSM), making it a viable solution for real-world applications.
Workshop CUING
Are Deepfakes a Game-changer in Digital Images Steganography Leveraging the Cover-Source-Mismatch?
Arthur Méreur (Troyes University of Technology, France), Antoine Mallet (Troyes University of Technology, France), Rémi Cogranne (Troyes University of Technology, France)
Full Paper
This work explores the potential of synthetic media generated by AI, often referred to as Deepfakes, as a source of cover-objects for steganography. Deepfakes offer a vast and diverse pool of media, potentially improving steganographic security by leveraging cover-source mismatch, a challenge in steganalysis where training and testing data come from different sources.

More precisely, we assess the impact of Deepfakes on image steganalysis performance in an operational environment. Using a wide range of image generation models and state-of-the-art methods in steganography and steganalysis, we show that Deepfakes can significantly exploit the cover-source mismatch problem, but that mitigation solutions also exist. The empirical findings can inform future research on steganographic techniques that exploit cover-source mismatch for enhanced security.
Workshop CUING
A Comprehensive Pattern-based Overview of Stegomalware
Fabian Strachanski (University of Duisburg-Essen, Germany), Denis Petrov (Worms University of Applied Sciences, Germany), Tobias Schmidbauer (Nuremberg Institute of Technology, Germany), Steffen Wendzel (Worms University of Applied Sciences, Germany)
Full Paper
In recent years, malware has increasingly used steganographic methods (so-called stegomalware) to remain hidden as long as possible. It not only covers its tracks on the infected system, but also tries to hide its communication with adversary infrastructure.

This paper reviews 105 stegomalware cases on the basis of 142 reports, ranging from digital media (audio, video, images) to text and network steganography. For this purpose, the covert channels used by the malware are categorized and introduced using a pattern-based approach. Our survey reveals that only a small set of patterns is used and that the most frequent methods rely on the modulation of states and values. We also analyzed the commonalities of media, text, and network stegomalware and found that least significant bit (LSB) steganography is exclusively utilized for media steganography. Our results indicate that only a small variety of network protocols, media types, and hiding methods is utilized by stegomalware; therefore, research may focus on these to counter malicious activities covered by steganography.
Workshop CUING
No Country for Leaking Containers: Detecting Exfiltration of Secrets Through AI and Syscalls
Marco Zuppelli (Institute for Applied Mathematics and Information Technologies, Italy), Massimo Guarascio (ICAR-CNR, Italy), Luca Caviglione (CNR - IMATI, Italy), Angelica Liguori (ICAR-CNR, Italy)
Full Paper
Containers offer lightweight execution environments for implementing microservices or cloud-native applications. Owing to their ubiquitous diffusion, jointly with the complex interplay of hardware, computing, and network resources, effectively enforcing container security is a difficult task. Specifically, runtime detection of threats poses many challenges, since containers are often immutable (i.e., they cannot be instrumented or inspected) and much malware deploys obfuscation or evasion mechanisms. Therefore, in this work we propose a deep-learning-based approach for identifying the presence of two containers colluding to covertly leak secret information. In more detail, we consider a threat actor trying to exfiltrate a 4,096-bit private TLS key via five different covert channels. To decide whether containers are colluding to leak data, the deep learning model is fed with statistical indicators of the syscalls, which are built starting from simple counters. Results indicate the effectiveness of our approach, even if some adjustments are needed to reduce the number of false positives.
Workshop CUING
Robust and Homomorphic Covert Channels in Streams of Numeric Data
Jörg Keller (FernUniversität in Hagen, Germany), Carina Heßeling (FernUniversität in Hagen, Germany), Steffen Wendzel (Worms University of Applied Sciences, Germany)
Full Paper
A steganographic network storage channel that uses a carrier with a stream of numeric data must consider the possibility that the carrier data is processed before the covert receiver can extract the secret data. A sensor data stream, which we take as an example scenario, may be scaled by multiplication, shifted into a different range by addition, or two streams might be merged by adding their values. This raises the question of whether the storage channel can be made robust against such carrier modifications. On the other hand, if the pieces of secret data are numeric as well, adding and merging two streams each comprising covert data might be exploited to form a homomorphic covert channel. We investigate both problems together, as they are related, and give positive and negative results. In particular, we present the first homomorphic storage covert channel. Moreover, we show that this type of covert channel is not restricted to sensor data streams, but that very different scenarios are possible.
Workshop CUING

EDId

An Identity Key Management System with Deterministic Key Hierarchy for SSI-native Internet of Things
Alice Colombatto (LINKS Foundation, Italy), Luca Giorgino (LINKS Foundation, Italy), Andrea Vesco (LINKS Foundation, Italy)
Full Paper
The key to a secure implementation of the Self-Sovereign Identity (SSI) model in IoT nodes is the Key Management System (KMS). Such a KMS must handle a large number of identity key pairs, be bound to an appropriate combination of the IoT node's hardware and firmware, and possibly run in a Trusted Execution Environment to ensure a high level of trust in the isolation, access control, and validity of key material and cryptographic operations. This paper presents the design of a novel KMS for SSI-native IoT nodes, which adapts the principles of the deterministic key hierarchy used by cryptocurrency wallets to provide trusted key pair generation and usage to any SSI framework.

The implementation of the identity path and identity key derivation algorithm on a constrained IoT node demonstrates the feasibility of the design.
Workshop EDId
Service Provider Accreditation: Enabling and Enforcing Privacy-by-Design in Credential-based Authentication Systems
Stefan More (Graz University of Technology and Secure Information Technology Center Austria (A-SIT), Austria), Jakob Heher (Graz University of Technology and Secure Information Technology Center Austria (A-SIT), Austria), Edona Fasllija (Graz University of Technology and Secure Information Technology Center Austria (A-SIT), Austria), Maximilian Mathie (Graz University of Technology, Austria)
Full Paper
In credential-based authentication systems (wallets), users transmit personally identifiable and potentially sensitive data to Service Providers (SPs). Here, users must often trust that they are communicating with a legitimate SP and that the SP has a lawful reason for requesting the information that it does. In the event of data misuse, identifying and holding the SP accountable can be difficult.

In this paper, we first enumerate the privacy requirements of electronic wallet systems. For this, we explore applicable legal frameworks and user expectations. Based on this, we argue that forcing each user to evaluate each SP individually is not a tractable solution. Instead, we outline technical measures in the form of an SP accreditation system. We delegate trust decisions to an authorized Accreditation Body (AB), which equips each SP with a machine-readable set of data permissions. These permissions are checked and enforced by the user's wallet software, preventing the over-sharing of sensitive data. The accreditation body we propose is publicly auditable. By enabling the detection of misconduct, our accreditation system increases user trust and thereby fosters the proliferation of the system.
Workshop EDId
Long-Lived Verifiable Credentials: Ensuring Durability Beyond the Issuer’s Lifetime
Ricardo Bochnia (HTW Dresden, Germany), Jürgen Anke (HTW Dresden, Germany)
Full Paper
The use of Self-Sovereign Identity (SSI) and Verifiable Credentials (VCs) to digitize physical credentials is gaining momentum. In particular, credentials such as diplomas may need to remain valid for decades, sometimes outliving their issuers. For instance, a university diploma remains valid even if the issuing university merges or dissolves. We are therefore exploring the challenges that Long-Lived Verifiable Credentials (LLVCs) face in maintaining their value and verifiability over the long term. Although verifiers do not directly contact issuers when verifying a VC, they may still rely on an existing issuer, e.g., to verify the credential's revocation state maintained by the issuer. If the issuer dissolves, the SSI trust triangle is broken, and the VC may lose its value, requiring approaches to preserve the longevity of LLVCs. To address these and other challenges of long-lived credentials, we analyze the management and requirements of physical education credentials as a prime example of long-lived physical credentials (LLPCs), leveraging them as a model for designing LLVCs. Our findings suggest a combination of approaches to effectively design LLVCs to address the unique challenges of long-lived credentials. Beyond technical approaches, such as the potential use of ledgers, our research also highlights the need for sustainable governance structures that extend beyond the life of the issuer to ensure that LLVCs achieve durability comparable to their physical counterparts.
Workshop EDId
Towards Post-Quantum Verifiable Credentials
Tim Wood (Digital Catapult, United Kingdom), Keerthi Thomas (Digital Catapult, United Kingdom), Matthew Dean (Digital Catapult, United Kingdom), Swaminathan Kannan (Digital Catapult, United Kingdom), Robert Learney (Digital Catapult, United Kingdom)
Full Paper
Verifiable Credentials (VCs) allow users to assert claims about themselves in a cryptographically verifiable way. In the last few years, several different VC schemes have emerged, offering varying levels of privacy through different cryptographic techniques. Current VC implementations aim for security against attacks that use classical computers, but the cryptography in use is vulnerable to attacks if the full power of quantum computing is ever realised. Addressing this threat is important, as VCs are gaining traction for applications with safety and security implications (e.g. the mobile Driver's License (mDL)). This work examines the cryptographic underpinnings of VCs to discuss quantum-safety, and makes recommendations regarding the next steps in the transition to post-quantum cryptography.
Workshop EDId
Towards Functions for Verifiable Credentials in a 2-Holder Model
Markus Batz (Stadt Köln, Germany), Sebastian Zickau (Stadt Köln, Germany)
Full Paper
The trust model commonly used to describe digital identity ecosystems covers the roles of issuer, holder, and verifier, which in general interact through the activities issue/hold, present/verify, and revoke. The use case "German health certificate" discussed here reveals that processes may incorporate more than one holder and require credential exchange between them. After issuance to one holder, other holders appear which also may, or even must, present the credential in the further course. Therefore, a holder must be able to execute functions on credentials in its wallet such that another holder also holds the credential and is able to present it successfully. To formally describe such functions and the necessary data structures in credentials, the "1-holder" trust triangle is extended to a "2-holder" model with two holders. Based on this extended model, possible and relevant functions and their semantics in terms of verification results are defined. A concept to extend SD-JWT data structures to support these semantics is presented, and its applicability is shown.
Workshop EDId
DistIN: Analysis and Validation of a Concept and Protocol for Distributed Identity Information Networks
Michael Hofmeier (University of the Bundeswehr Munich, Germany), Daniela Pöhn (University of the Bundeswehr Munich, Germany), Wolfgang Hommel (University of the Bundeswehr Munich, Germany)
Full Paper
Identity management enables users to access services around the globe. The user information is managed in some sort of identity management system. With the proposed shift to self-sovereign identities, self-sovereign control is shifted to the individual user. However, this also entails responsibilities, for example in case of incidents, even though individual users typically do not have the capability to handle them. In order to provide users with more control and fewer responsibilities, we unite identity management systems with public key infrastructures. This consolidation allows more flexible and customized trust relationships to be created and validated. This paper explains, analyzes, and validates our novel design for a Distributed Identity Information Network (DistIN) that allows a high degree of decentralization while aiming for high security, privacy, usability, scalability, and sovereignty. The primary advantage of the system lies in its flexibility and ease of use, which also enables smaller organizations or even private individuals to participate in the network with a service. This work compiles categorized requirements from the literature and analyzes the verification and authentication data flows. On this basis, the security analysis and validation follow. This work is an essential step towards the goal of the final web-based DistIN protocol and application.
Workshop EDId

ENS

SoK: A Taxonomy for Hardware-Based Fingerprinting in the Internet of Things
Christian Spinnler (Siemens AG, FAU Erlangen-Nürnberg, Germany), Torsten Labs (Siemens AG, Germany), Norman Franchi (FAU Erlangen-Nürnberg, Chair of Electrical Smart City Systems, AIN, Germany)
Full Paper
In IoT applications, embedded devices acquire and transmit data to control and optimize industrial processes. In order to trust this data, the trustworthiness of the data acquisition system, such as the sensors and the integrated signal processing components, is a crucial requirement. Software authenticity is provided with concepts like measured boot. Expanding authenticity to hardware components requires and motivates new approaches like hardware fingerprinting.

In this paper, we review and systematize current research and trends in hardware fingerprinting. We provide insights into current research directions by reviewing multiple survey and review papers, and derive a common definition of fingerprinting based on the reviewed literature.

We identify three different fingerprinting techniques: Hardware Fingerprinting, Behavior Fingerprinting and Radio Frequency Fingerprinting, which can be used for multiple application scenarios. By decomposing a common embedded system architecture, we provide four trust domains from which we can create a hardware fingerprint: Main Processing Domain, On-Device Communication Domain, Peripheral Domain and Environmental Domain.

With this in mind, a new fingerprinting taxonomy is developed, taking into account different data sources and evaluation techniques. We distinguish between intrinsic and extrinsic data sources and direct and indirect data evaluation.

In order to understand the scope of the fingerprinting techniques w.r.t. their trust domains and application scenarios, a new categorization model is created which binds the data sources to a physical asset of the device, thus making it possible to determine to what extent a device's components can be trusted and in which applications it may be applicable.
Workshop ENS
Identity and Access Management Architecture in the SILVANUS Project
Pawel Rajba (Warsaw University of Technology, Poland), Natan Orzechowski (Warsaw University of Technology, Poland), Karol Rzepka (Warsaw University of Technology, Poland), Przemysław Szary (Warsaw University of Technology, Poland), Dawid Nastaj (Warsaw University of Technology, Poland), Krzysztof Cabaj (Warsaw University of Technology, Poland)
Full Paper
SILVANUS is an EU-funded scientific collaboration project whose goal is to mitigate the growing impact of wildfires caused by global climate change by implementing a comprehensive global fire prevention strategy. Due to the significant complexity and collaborative nature of the project, which involves more than 50 parties, it is a challenge to ensure unified and governed security, especially since the platform is based on a heterogeneous, multi-component architecture. To ensure that the expectations are delivered, different architecture perspectives need to be considered, one of which is identity and access management.

In this paper we describe the identity and access management architecture perspective of the SILVANUS project. We start with a high-level overview supported by requirements expressed as policies, introduce the identity governance and administration as well as access management areas, and then analyze the next level of the IAM architecture based on the XACML concept. We also cover IAM processes and monitoring, which are inherent constituents of the complete solution. Finally, for certain aspects we consider different maturity levels and position the current development stage appropriately.
Workshop ENS
Future-proofing Secure V2V Communication against Clogging DoS Attacks
Hongyu Jin (KTH Royal Institute of Technology, Sweden), Zhichao Zhou (KTH Royal Institute of Technology, Sweden), Panos Papadimitratos (KTH Royal Institute of Technology, Sweden)
Full Paper
Clogging Denial of Service (DoS) attacks have disrupted or disabled various networks, in spite of deployed security mechanisms. External adversaries can severely harm networks, especially when high-overhead security mechanisms are deployed in resource-constrained systems. This can be especially true in the emerging standardized secure Vehicular Communication (VC) systems: mandatory message signature verification can be exploited to exhaust resources and prevent validating information that is often critical for transportation safety. Although efficient message verification schemes and better provisioned devices could serve as potential remedies, we point out the limitations of existing solutions, the challenges to address for scalable and resilient secure VC systems, and, most notably, the need for integrating defense mechanisms against clogging DoS attacks. We argue that the existing secure VC protocols are vulnerable to clogging DoS attacks and recommend symmetric-key-chain-based pre-validation alongside mandatory signature verification to thwart clogging DoS attacks, while maintaining all key security properties, including non-repudiation to enable accountability.
Workshop ENS
Introducing a Multi-Perspective xAI Tool for Better Model Explainability
Marek Pawlicki (Bydgoszcz University of Science and Technology, Poland), Damian Puchalski (ITTI Sp. z o.o., Poland), Sebastian Szelest (ITTI Sp. z o.o., Poland), Aleksandra Pawlicka (ITTI Sp. z o.o., Poland), Rafal Kozik (Bydgoszcz University of Science and Technology, Poland), Michał Choraś (Bydgoszcz University of Science and Technology, Poland)
Full Paper
This paper introduces an innovative tool equipped with a multi-perspective, user-friendly dashboard designed to enhance the explainability of AI models, particularly in cybersecurity. By enabling users to select data samples and apply various xAI methods, the tool provides insightful views into the decision-making processes of AI systems. These methods offer diverse perspectives and deepen the understanding of how models derive their conclusions, thus demystifying the "black box" of AI. The tool’s architecture facilitates easy integration with existing ML models, making it accessible to users regardless of their technical expertise. This approach promotes transparency and fosters trust in AI applications by aligning decision-making with domain knowledge and mitigating potential biases.
Workshop ENS
Leveraging Overshadowing for Time-Delay Attacks in 4G/5G Cellular Networks: An Empirical Assessment
Virgil Hamici-Aubert (IMT Atlantique, IRISA, UMR CNRS 6074, France), Julien Saint-Martin (IMT Atlantique, IRISA, UMR CNRS 6074, France), Renzo E. Navas (IMT Atlantique, IRISA, UMR CNRS 6074, France), Georgios Z. Papadopoulos (IMT Atlantique, IRISA, UMR CNRS 6074, France), Guillaume Doyen (IMT Atlantique, IRISA, UMR CNRS 6074, France), Xavier Lagrange (IMT Atlantique, IRISA, UMR CNRS 6074, France)
Full Paper
Ensuring both reliable and low-latency communications over 4G or 5G Radio Access Network (RAN) is a key feature for services such as smart power grids and the metaverse. However, the lack of appropriate security mechanisms at the lower-layer protocols of the RAN--a heritage from 4G networks--opens up vulnerabilities that can be exploited to conduct stealthy Reduction-of-Quality attacks against the latency guarantees. This paper presents an empirical assessment of a proposed time-delay attack that leverages overshadowing to exploit the reliability mechanisms of the Radio Link Control (RLC) in Acknowledged Mode. By injecting falsified RLC Negative Acknowledgements, an attacker can maliciously trigger retransmissions at the victim User Equipment (UE), degrading the uplink latency of application flows. Extensive experimental evaluations on open-source and commercial off-the-shelf UEs demonstrate the attack's effectiveness in increasing latency, network load, and buffer occupancy. The attack impact is quantified by varying the bitrate representing different applications and the number of injected negative acknowledgments controlling the attack intensity. This work studies a realistic threat against the latency quality of service in 4G/5G RANs and highlights the urgent need to revisit protocol security at the lower-RAN layers for 5G (and beyond) networks.
Workshop ENS
Enhancing Network Security Through Granular Computing: A Clustering-by-Time Approach to NetFlow Traffic Analysis
Mikołaj Komisarek (ITTI Sp. z o.o., Poland), Marek Pawlicki (Bydgoszcz University of Science and Technology, Poland), Salvatore D'Antonio (Naples University Parthenope, Italy), Rafał Kozik (Bydgoszcz University of Science and Technology, Poland), Aleksandra Pawlicka (Warsaw University, Poland), Michał Choraś (Bydgoszcz University of Science and Technology, Poland)
Full Paper
This paper presents a study of the effect of the size of the time window from which network features are derived on the predictive ability of a Random Forest classifier implemented as a network intrusion detection component. The network data is processed using granular computing principles, gradually increasing the time windows to allow the detection algorithm to find patterns in the data at different levels of granularity. Experiments were conducted iteratively with time windows ranging in size from 2 to 1024 seconds. Each iteration involved time-based clustering of the data, followed by splitting into training and test sets at a ratio of 67% - 33%. The Random Forest algorithm was applied as part of a 10-fold cross-validation. Assessments included standard detection metrics: accuracy, precision, F1 score, BCC, MCC and recall. The results show a statistically significant improvement in the detection of cyber attacks in network traffic with a larger time window size (p-value 0.001953125). These results highlight the effectiveness of using longer time intervals in network data analysis, resulting in increased anomaly detection.
Workshop ENS
Trustworthy AI-based Cyber-Attack Detector for Network Cyber Crime Forensics
Damian Puchalski (ITTI Sp. z o.o., Poland), Marek Pawlicki (Bydgoszcz University of Science and Technology, Poland), Rafał Kozik (Bydgoszcz University of Science and Technology, Poland), Rafał Renk (ITTI Sp. z o.o., Poland), Michał Choraś (Bydgoszcz University of Science and Technology, Poland)
Full Paper
In recent years, the increasing sophistication and proliferation of cyberthreats have underscored the necessity for robust network security measures, as well as a comprehensive approach to cyberprotection at large. As cyberthreats grow increasingly complex, and their detection, response and mitigation often involve dealing with big data, the need for novel solutions is also present in law enforcement agency (LEA) and network forensics contexts. Traditional, anomaly-based or signature-based intrusion detection systems (IDS) often face challenges in adapting to the evolving cyberattack landscape. Machine Learning (ML), on the other hand, has emerged as a promising approach, proving its ability to detect complex patterns in big data, including applications such as intrusion detection and classification of threats in the network environment, with high accuracy and precision (i.e., a reduced rate of false positives). In this paper we present the Trustworthy Cyberattack Detector (TCAD), a tool that benefits from machine learning algorithms for the detection and classification of cyberattacks. TCAD can be used for monitoring the network in real time and for offline analysis of collected network data. We believe that TCAD can be successfully applied to the task of detecting and classifying evidence during criminal investigations related to network cyberattacks, and can also be helpful for correlating discovered network-based events over time with other collected non-network evidence.
Workshop ENS

EPESec

Vulnerability management digital twin for energy systems
Jessica B. Heluany (Norwegian University of Science and Technology, Norway), Johannes Goetzfried (Siemens Energy AG - Industrial Cybersecurity, Germany), Bernhard Mehlig (Siemens Energy AG - Industrial Cybersecurity, Germany), Vasileios Gkioulos (Norwegian University of Science and Technology, Norway)
Full Paper
Increasing cyber attacks underscore the importance of addressing system vulnerabilities to reduce security risks. To structure our workflow of vulnerability management, we made use of relevant and widely adopted industrial standards, while also incorporating the concept of digital twins. Therefore, this research suggests a vulnerability management digital twin that aligns with the ISO 23247-2 framework. It specifically emphasizes recommendations for the ‘data collection’ function following the workflow outlined in IEC 62443-2-3, and exemplifying use cases based on a typical automation architecture of energy systems. We evaluated the CVSS framework to prioritize scores and also examined ways to integrate CVSS with other contextual information to develop a mitigation deployment strategy. The goal was to assist asset owners in optimizing resource utilization in addressing vulnerabilities.
Workshop EPESec
Anomaly detection mechanisms for in-vehicle and V2X systems
Alexios Lekidis (University of Thessaly, Greece)
Full Paper
Modern V2X systems have an increasing number of interfaces that allow remote connectivity, but also include the risk of exposure to cyber threats. The attack surface for such threats is hence constantly increasing and in combination with privacy issues that may arise through the presence of sensitive data from users in the V2X ecosystem, this necessitates the requirement for security mechanisms. However, the existing mechanisms to ensure protection against such threats face major hurdles, such as 1) the lack of in-vehicle addressing schemes, 2) the abundance of V2X interfaces and 3) the manufacturer-specific architecture of each vehicle consisting of a variety of different systems. On top of these hurdles, a solution should satisfy the real-time requirements of the resource-constrained in-vehicle architecture by remaining lightweight and highly reliable as well as by avoiding false positive indications and alarms. This article presents a novel anomaly detection solution for addressing the main challenges of security mechanisms by simultaneously keeping a minimal impact on the real-time in-vehicle requirements. The solution is demonstrated through an Electric Vehicle (EV) charging hub testbed that implements anomaly detection schemes to detect proof-of-concept cyber-attacks targeting EV charging profile and causing cascading effects by zeroing the vehicle speed.
Workshop EPESec
The Cyber Safe Position: An STPA for Safety, Security, and Resilience Co-Engineering Approach
Georgios Gkoktsis (Fraunhofer SIT | ATHENE, Germany), Ludger Peters (Fraunhofer SIT | ATHENE, Germany)
Full Paper
Model Based Security Engineering (MBSE) is a growing field of research, which is gaining popularity in the domain of Safety, Security, and Resilience Co-Engineering. The System Theoretic Process Analysis (STPA) is a method for systematically analyzing the behavior of complex systems to investigate their failure modes and the Unsafe Control Actions (UCA) that can lead to those failure modes. This paper expands the methodological scope of STPA by including an iterative Root-Cause Analysis element, which examines the possible emergence of UCAs due to either malfunction or malicious action. The output of the method is the set of attributes and constraints of Resilience Modes of system configuration and operation, named the "Cyber Safe Position" (CSP). The proposed method is applied in the case study of a Photovoltaic Plant connected to a Virtual Power Plant (VPP).
Workshop EPESec
An Analysis of Security Concerns in Transitioning Battery Management Systems from First to Second Life
Julian Blümke (CARISSMA Institute of Electric, Connected and Secure Mobility, Technische Hochschule Ingolstadt, Germany), Kevin Gomez Buquerin (CARISSMA Institute of Electric, Connected and Secure Mobility, Technische Hochschule Ingolstadt, Germany), Hans-Joachim Hof (CARISSMA Institute of Electric, Connected and Secure Mobility, Technische Hochschule Ingolstadt, Germany)
Full Paper
With the ongoing shift to electric vehicles, lithium-ion batteries are becoming essential components of vehicles. Battery management systems manage these batteries. While battery management systems typically used to be placed deep in the vehicle architecture, away from the externally facing surface of vehicles, they are now increasingly connected to backend systems, e.g., to improve the monitoring of battery properties and optimize charging. Hence, battery management systems have moved closer to the attack surface, increasing the risk of security incidents in these systems. Also, batteries will soon be reused in so-called second life applications, e.g., as an energy storage system in a private home. While conventional methods involve removing the battery and reusing it with a new battery management system, modern methods reuse the original battery management system. Security controls already exist in first and second life applications. However, there is a lack of research activities regarding the transition phase. This paper analyzes the phase of transferring the battery management system from the first to the second life, which is of particular relevance for security, privacy, and intellectual property. We try to close this research gap by analyzing the security aspects of a battery management system life cycle and its changing system environment. We define the transition phase, identify the necessary activities, and provide cybersecurity needs for the transition of battery management systems from first to second life.
Workshop EPESec

ETACS

Tackling the cybersecurity workforce gap with tailored cybersecurity study programs in Central and Eastern Europe
Marko Zivanovic (PhD Student, Faculty of Technical Sciences, Novi Sad, Serbia), Imre Lendák (Professor, Faculty of Technical Sciences, Novi Sad, Serbia), Ranko Popovic (Retired Professor, Faculty of Technical Sciences, Novi Sad, Serbia)
Full Paper
The digitalization of society has brought improvements in many aspects of life, but it has also brought new cybersecurity challenges. The number of sophisticated, targeted cyber attacks is increasing, which requires constant improvements in cybersecurity education. Despite this pressing need, the cybersecurity workforce gap is growing. This paper presents a new approach to dynamic cybersecurity curriculum development that uses keyword extraction from various sources, such as job ads, courses, and curricula, together with machine learning to quantify curriculum alignment with cybersecurity industry demands and address the workforce gap. The analysis covers curricula in the Central and Eastern Europe (CEE) region, maps cybersecurity job ads to curricula, and quantifies coverage of course, industry, and reference framework topics based on keyword matching. The case study conducted with curricula from CEE illustrates coverage according to ENISA’s European Cybersecurity Skills Framework (ECSF) roles and the optimization progress after applying adjustments. The results demonstrate the importance of dynamic curriculum updates for academic institutions, including their potential to reduce the cybersecurity workforce gap, and reveal the lack of real progress towards alignment with the ECSF.
Workshop ETACS
Enhancing Cybersecurity Curriculum Development: AI-Driven Mapping and Optimization Techniques
Petr Dzurenda (Brno University of Technology, Czechia), Sara Ricci (Brno University of Technology, Czechia), Marek Sikora (Brno University of Technology, Czechia), Michal Stejskal (Brno University of Technology, Czechia), Imre Lendák (Faculty of Technical Sciences, Serbia), Pedro Adao (Instituto Superior Tecnico, Portugal)
Full Paper
Cybersecurity has become increasingly important, especially during the last decade. The significant growth of information technologies, the Internet of Things, and digitalization in general has increased the interest in cybersecurity professionals significantly. While the demand for cybersecurity professionals is high, there is a significant shortage of such professionals due to the very diverse landscape of knowledge and the complex curriculum accreditation process.

In this article, we introduce a novel AI-driven mapping and optimization solution enabling cybersecurity curriculum development. Our solution leverages machine learning and integer linear programming optimization, offering an automated, intuitive, and user-friendly approach. It is designed to align with the European Cybersecurity Skills Framework (ECSF) released by the European Union Agency for Cybersecurity (ENISA) in 2022. Notably, our innovative mapping methodology enables the seamless adaptation of the ECSF to existing curricula and addresses evolving industry needs and trends. We conduct a case study using the university curriculum from Brno University of Technology in the Czech Republic to showcase the efficacy of our approach. The results demonstrate the extent of curriculum coverage according to ECSF profiles and the optimization progress achieved through our methodology.
Workshop ETACS
Beyond the Bugs: Enhancing Bug Bounty Programs through Academic Partnerships
Andrej Krištofík (CERIT, Faculty of Informatics, and Institute of Law and Technology, Faculty of Law, Masaryk University, Slovakia), Jakub Vostoupal (CERIT, Faculty of Informatics, and Institute of Law and Technology, Faculty of Law, Masaryk University, Czechia), Kamil Malinka (Institute of Computer Science and Faculty of Informatics, Masaryk University, Czechia), František Kasl (CERIT, Faculty of Informatics, and Institute of Law and Technology, Faculty of Law, Masaryk University, Czechia), Pavel Loutocký (CERIT, Faculty of Informatics, and Institute of Law and Technology, Faculty of Law, Masaryk University, Czechia)
Full Paper
This paper explores the growing significance of vulnerability disclosure and bug bounty programs within the cybersecurity landscape, driven by regulatory changes in the European Union. The effectiveness of these programs relies heavily on the expertise of participants, presenting a challenge amid a shortage of skilled cybersecurity professionals, particularly in less sought-after sectors. To address this issue, the paper proposes a collaborative approach between academia and bug bounty issuers.

By integrating bug bounty programs into cybersecurity courses, students gain practical skills and soft skills essential for bug hunting and cybersecurity work. The collaboration benefits both issuers, who gain manageable manpower, and students, who receive valuable hands-on experience. A pilot conducted during the current academic year yielded positive results, indicating the potential of this approach to address the demand for skilled cybersecurity professionals. The insights gained from the pilot inform future considerations and advancements in this collaborative model.
Workshop ETACS
Assessing the Impact of Large Language Models on Cybersecurity Education: A Study of ChatGPT's Influence on Student Performance
Marc Ohm (University of Bonn & Fraunhofer FKIE, Germany), Christian Bungartz (University of Bonn, Germany), Felix Boes (University of Bonn, Germany), Michael Meier (University of Bonn & Fraunhofer FKIE, Germany)
Full Paper
The popularity of chatbots for facilitating day-to-day tasks, including students' study exercises, is on the rise. This paper investigates the extent to which students leverage such tools and the effects on their academic performance. While many other approaches hypothesize and discuss these effects, we measure them empirically. We recorded and compared the performance of cybersecurity students in weekly exercises and final exams over a period of three years.
This allows us to have three groups with varying degrees of ChatGPT influence, namely no access, uncontrolled access, and controlled access. In an anonymous survey, we found that approximately 80% of our students utilized ChatGPT during the weekly assignments in 2023. However, none of them indicated this on their submissions, despite it being a mandatory requirement. Through statistical analysis of the points achieved in our sample groups, we found that students perform similarly on the weekly assignments. However, their performance on the final examination deteriorates.
Workshop ETACS
Event-based Data Collection and Analysis in the Cyber Range Environment
Willi Lazarov (Brno University of Technology, Czechia), Samuel Janek (Brno University of Technology, Czechia), Zdenek Martinasek (Brno University of Technology, Czechia), Radek Fujdiak (Brno University of Technology, Czechia)
Full Paper
The need to educate users on cybersecurity, at least to some extent, is critical due to ever-increasing cyber threats. A number of web presentations, books, and other study materials can be used for this purpose. In contrast to passive learning methods, hands-on training offers a deeper perspective but poses considerable technical challenges to its implementation, which can be resolved using cyber range platforms. However, in order to thoroughly evaluate the training and provide sufficient feedback, data must be collected and analyzed. Our paper addresses this problem by developing an event-based approach to data collection and analysis. The use of events allows us to keep a history of each event and reconstruct it retrospectively, especially for further analysis and evaluation. We validated the implemented approach in a cyber range environment, in which we developed an interactive interface to visualize the analyzed data.
Workshop ETACS

FARES

Enhancing Algorithmic Fairness: Integrative Approaches and Multi-Objective Optimization Application in Recidivism Models
Michael Farayola (Lero Research Centre, School of Computing, Dublin City University, Ireland), Malika Bendechache (Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland), Takfarinas Saber (Lero Research Centre, School of Computer Science, University of Galway, Ireland), Regina Connolly (Lero Research Centre, School of Business, Dublin City University, Ireland), Irina Tal (Lero Research Centre, School of Computing, Dublin City University, Ireland)
Full Paper
The fairness of Artificial Intelligence (AI) has gained tremendous attention within the criminal justice system in recent years, mainly when predicting the risk of recidivism. The primary reason is evidence of bias towards demographic groups when deploying these AI systems. Many proposed fairness-improving techniques applied at each of the three phases of the fairness pipeline, the pre-processing, in-processing and post-processing phases, are often ineffective in mitigating bias while attaining high predictive accuracy. This paper proposes a novel approach that integrates existing fairness-improving techniques, Reweighing, Adversarial Learning, Disparate Impact Remover, Exponentiated Gradient Reduction, Reject Option-based Classification, and Equalized Odds optimization, across the three fairness pipeline phases simultaneously. We evaluate the effect of combining these fairness-improving techniques on enhancing fairness and attaining accuracy. In addition, this study uses multi- and bi-objective optimization techniques to support well-informed decisions when predicting the risk of recidivism. Our analysis found that one of the most effective combinations (i.e., disparate impact remover, adversarial learning, and equalized odds optimization) demonstrates a substantial enhancement and a balanced achievement of fairness across various metrics without a notable compromise in accuracy.
Workshop FARES
Toward a Log-based Anomaly Detection System for Cyber Range Platforms
Francesco Blefari (University of Calabria, Italy), Francesco Aurelio Pironti (University of Calabria, Italy), Angelo Furfaro (University of Calabria, Italy)
Full Paper
Nowadays, the Information Technology landscape is permeated by a multitude of vulnerabilities and threats. The constantly rising number of heterogeneous devices makes a complete mapping of all the threats to which they are exposed difficult or even impossible. Antivirus and anti-malware tools have been developed to quickly detect anomalous software or behaviors. However, these solutions often rely on a knowledge base stored in a database, and they are not effective against unknown attacks, also known as zero-day attacks. By relying on real-time (network/system) log analysis it is possible to detect attacker activities.

Log analysis plays a crucial role against cyber threats, providing an effective tool to detect them rapidly and to build advanced monitoring systems. However, log consultation can often be a challenging and costly task. Over time, useful tools and utilities have been developed to simplify this task for analysts.

This paper presents a system capable of detecting attackers' activities in a Cyber Range platform, enabling the visualization of attackers' activity traces by exploiting the attack graph.
Workshop FARES
SBOM Ouverture: What We Need and What We Have
Gregorio Dalia (University of Sannio, Italy), Corrado Aaron Visaggio (University of Sannio, Italy), Andrea Di Sorbo (University of Sannio, Italy), Gerardo Canfora (University of Sannio, Italy)
Full Paper
A Software Bill of Materials (SBOM) is an inventory of the software components used to build a product, which can help customers track security risks throughout the development lifecycle. The popularity of SBOMs grew in May 2021 when the White House issued an executive order to improve the security of the software supply chain and the transparency of the government’s software inventory.

Despite the growing interest in SBOMs, many open challenges need to be addressed to help reduce exposure to cyber risks and enhance the security of software supply chains. To help industry and research assemble a roadmap towards SBOM adoption in practice, in this paper we analyze the challenges related to the enabling technologies and the open issues that research must investigate. Furthermore, we perform a comparative analysis of existing tools for generating SBOMs, demonstrating that the enabling technologies have not yet reached full automation and maturity.
Workshop FARES
Towards realistic problem-space adversarial attacks against machine learning in network intrusion detection
Marta Catillo (Università degli Studi del Sannio, Italy), Antonio Pecchia (Università degli Studi del Sannio, Italy), Antonio Repola (Università degli Studi del Sannio, Italy), Umberto Villano (Università degli Studi del Sannio, Italy)
Full Paper
Current trends in network intrusion detection systems (NIDS) capitalize on the extraction of features from the network traffic and the use of up-to-date machine and deep learning techniques to infer a detection model; in consequence, NIDS can be vulnerable to adversarial attacks. Differently from the plethora of contributions that apply (and misuse) feature-level attacks envisioned in application domains far from NIDS, this paper proposes a novel approach to adversarial attacks, which consists in a realistic problem-space perturbation of the network traffic. The perturbation is achieved through a traffic control utility. Experiments are based on normal and Denial of Service workloads in both legitimate and adversarial conditions, and the application of four popular techniques to learn the NIDS models. The results highlight the transferability of the adversarial examples generated by the proposed problem-space attack as well as the effectiveness at inducing traffic misclassifications across the NIDS models assessed.
Workshop FARES
The Right to Be Zero-Knowledge Forgotten
Ivan Visconti (DIEM, University of Salerno, Italy)
Full Paper
The main goal of the EU GDPR is to protect personal data of individuals within the EU. This is expressed in several rights and, among them, in this work we focus on the Right to Erasure, more commonly known as the Right to Be Forgotten (RtBF).

There is an intriguing debate about the affordable costs and the actual technical feasibility of satisfying the RtBF in digital platforms. We note that some digital platforms process personal data in order to derive and store correlated data raising two main issues: 1) removing personal data could create inconsistencies in the remaining correlated data; 2) correlated data could also be personal data. As such, in some cases, erasing personal data can trigger an avalanche on the remaining information stored in the platform.

Addressing the above issues can be very challenging in particular when a digital platform has been originally built without embedding in its design specific methodologies to deal with the RtBF.

This work aims at illustrating concrete scenarios where the RtBF is technically hard to guarantee with traditional techniques. On the positive side, we show how zero-knowledge (ZK) proofs can be leveraged to design affordable solutions in various use cases, especially when considered at design time. ZK proofs can be instrumental for compliance with the RtBF, revolutionizing the current approaches to designing compliant systems. Concretely, we show an assessment scheme allowing one to check compliance with the RtBF by leveraging the power of ZK proofs. We analyze the above assessment scheme considering specific hard-to-address use cases.
Workshop FARES
On Implementing Linear Regression on Homomorphically Encrypted Data: A Case-Study
Gianluca Dini (University of Pisa, Italy)
Full Paper
Fully Homomorphic Encryption (FHE) is a key technological enabler for secure computations as it allows a third-party to perform arbitrary computations on encrypted data learning neither the input nor the results of a computation. Notwithstanding the recent theoretical breakthroughs in FHE, building a secure and efficient FHE-based application is still a challenging engineering task where optimal choices are heavily application-dependent.

Taking linear regression as a case study, we investigate programming and configuration solutions for implementing FHE-based applications. We show that, although obviously slower than the non-homomorphic version, the implementation of linear regression on homomorphically encrypted data is viable, provided the programmer adopts appropriate programming expedients and parameter selection.
Workshop FARES
A Systematic Review of Contemporary Applications of Privacy-Aware Graph Neural Networks in Smart Cities
Jingyan Zhang (Dublin City University, Ireland), Irina Tal (Dublin City University, Ireland)
Full Paper
In smart cities, graph embedding technologies, Graph Neural Networks (GNNs), and related variants are extensively employed to address predictive tasks within complex urban networks, such as traffic management, the Internet of Things (IoT), and public safety. These implementations frequently require processing substantial personal information and topological details in graph formats, thereby raising significant privacy concerns. Mitigating these concerns necessitates an in-depth analysis of existing privacy preservation techniques integrated with GNNs in the specific context of smart cities. To this end, this paper provides a comprehensive systematic review of current applications of privacy-aware GNNs in smart cities.

Our research commenced with a methodical literature search that identified 14 pertinent papers and summarized prevalent privacy preservation mechanisms, including federated learning, differential privacy, homomorphic encryption, adversarial learning, and user-trust-based approaches. Subsequent analysis examined how the integration of these technologies with GNNs enhances privacy security and model utility in smart city applications. Further, we proposed an analytical framework for privacy-aware GNNs across the machine learning lifecycle, assessing the challenges of current integration from a practical viewpoint. The paper concluded by suggesting potential directions for future research.
Workshop FARES
Modelling the privacy landscape of the Internet of Vehicles
Ruben Cacciato (University of Catania, Italy), Mario Raciti (IMT School for Advanced Studies Lucca, Italy), Sergio Esposito (University of Catania, Italy), Giampaolo Bella (University of Catania, Italy)
Full Paper
Within the dynamic realm of Intelligent Transportation Systems (ITS), the Internet of Vehicles (IoV) marks a significant paradigm shift. The IoV represents an interconnected network linking vehicles, infrastructures, and the Internet itself, driven by wireless communication technologies. This paper dissects the privacy landscapes of ITS and IoV, exploring gaps in standards and academic literature. Leveraging European Telecommunications Standards Institute (ETSI) ITS G5 standards and IoV analyses in the literature, we build two relational models to depict their current privacy landscape. A contrastive analysis reveals structural disparities and thematic differences. ITS, governed by established standards, exhibits a robust structure, while IoV, in its nascent stage, lacks formalisation. Privacy concerns differ, with IoV emphasising user consent and multi-party privacy. Detailed analysis highlights data collection, sharing, and privacy policy challenges. As ITS transitions to IoV, data volume expands, necessitating enhanced privacy safeguards. Addressing these challenges requires collaborative efforts to develop comprehensive privacy policies, prioritise user awareness, and integrate privacy-by-design principles. This paper offers insights into navigating the evolving landscape of transportation technologies, laying the groundwork for privacy-preserving ITS and IoV ecosystems.
Workshop FARES

GRASEC

NORIA UI: Efficient Incident Management on Large-Scale ICT Systems Represented as Knowledge Graphs
Lionel Tailhardat (Orange, France), Yoan Chabot (Orange, France), Antoine Py (Orange, France), Perrine Guillemette (Orange, France)
Full Paper
Incident management in telecom and computer networks requires correlating and interpreting heterogeneous technical information sources. While knowledge graphs have proven flexible for data integration and logical reasoning, their use in network and cybersecurity monitoring systems (NMS/SIEM) is not yet widespread. In this work, we explore the integration of knowledge graphs to facilitate the diagnosis of complex situations from the perspective of NetOps/SecOps experts who use NMS/SIEMs. Through expert interviews, we identify expectations in terms of ergonomics and decision support functions, and propose a Web-based client-server software architecture using an RDF knowledge graph that describes network systems and their dynamics. Based on a UI/UX evaluation and feedback from a user panel, we demonstrate the need to go beyond simple data retrieval from the knowledge graph. We also highlight the importance of synergistic reasoning and interactive analysis of multi-layered systems. Overall, our work provides a foundation for future designs of knowledge-graph-based NMS/SIEM decision support systems with hybrid logical/probabilistic reasoning.
Workshop GRASEC
A Model-based Approach for Assessing the Security of Cyber-Physical Systems
Hugo Teixeira De Castro (Télécom Sud Paris, France), Ahmed Hussain (KTH Royal Institute of Technology, Sweden), Gregory Blanc (Institut Mines-Télécom, Télécom SudParis, Institut Polytechnique de Paris, France), Jamal El Hachem (Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université de Bretagne Sud (UBS), France), Dominique Blouin (Telecom Paris, France), Jean Leneutre (Telecom Paris, France), Panos Papadimitratos (KTH Royal Institute of Technology, Sweden)
Full Paper
The complexity, automation, and interconnection of Cyber-Physical Systems (CPSs) have been continuously increasing to support new opportunities and functionalities in numerous life-impacting applications, such as e-health, Internet of Things (IoT) devices, or Industrial Control Systems (ICSs). These characteristics introduce new critical security challenges to both industrial practitioners and academics. This work investigates how Model-Based System Engineering (MBSE) and attack graph approaches could be leveraged to model and analyze secure CPS solutions for identifying high-impact attacks at the architecture phase of the secure system development life cycle. To achieve this objective, we propose a new framework that comprises (1) a modeling paradigm for secure CPS representation, easily usable by system architects with limited cybersecurity expertise, (2) an attack-graph-based solution for automatic quantitative CPS security analysis, based on the MulVAL security tool formalisms, and (3) a model-based code generator tool, a set of Model-To-Text (MTT) transformation rules to bridge the gap between the CPS-specific extensions of SysML and MulVAL. We illustrate the ability of our proposed framework to model, analyze, and identify attacks in CPSs through an autonomous ventilation system example. The results confirm that the framework can accurately represent CPSs and their vulnerabilities. Attack scenarios, including a Denial of Service (DoS) attack targeting an industrial communication protocol, were identified and displayed as attack graphs. Furthermore, success probabilities were computed to assess the level of risk quantitatively. In future work, we intend to extend the approach to connect it to dynamic security databases and address challenges such as automatic countermeasure selection.
Workshop GRASEC
FedHE-Graph: Federated Learning with Hybrid Encryption on Graph Neural Networks for Advanced Persistent Threat Detection
Atmane Ayoub Mansour Bahar (École Nationale Supérieure d’Informatique, Algiers, Algeria), Kamel Soaïd Ferrahi (École Nationale Supérieure d’Informatique, Algiers, Algeria), Mohamed-Lamine Messai (Université Lumière Lyon 2, France), Hamida Seba (University Lyon 1, France), Karima Amrouche (École Nationale Supérieure d’Informatique, Algiers, Algeria)
Full Paper
Intrusion Detection Systems (IDS) play a crucial role in safeguarding systems and networks from different types of attacks. However, IDSes face significant hurdles in detecting Advanced Persistent Threats (APTs), which are sophisticated cyber-attacks characterized by their stealth, duration, and advanced techniques. Recent research has explored the effectiveness of Graph Neural Networks (GNNs) in APT detection, leveraging their ability to analyse intricate relationships within graph data. However, existing approaches often rely on local models, limiting their adaptability to evolving APT tactics and raising privacy concerns. In response to these challenges, this paper proposes integrating Federated Learning (FL) into the architectures of GNN-based Intrusion Detection Systems. Federated Learning is a distributed learning paradigm that enables collaborative model training without centralizing sensitive data. By leveraging FL, hosts can contribute to a collective knowledge base while preserving the confidentiality of their local datasets. This approach not only mitigates hardware strain and addresses privacy concerns, but also enhances model robustness by capturing diverse insights from multiple sources. Moreover, our solution includes an enhanced encryption system for the clients’ weights, so that they can be sent safely to the server over the system’s network. This prevents man-in-the-middle (MitM) attacks from intercepting the weights and reconstructing clients’ data using reverse engineering. We evaluate our approach on several datasets, demonstrating promising results in reducing false-positive rates compared to state-of-the-art Provenance-based IDSes (PIDS).
Workshop GRASEC
Advancing ESSecA: a step forward in Automated Penetration Testing
Massimiliano Rak (University of Campania, Luigi Vanvitelli, Italy), Felice Moretta (University of Campania "Luigi Vanvitelli", Italy), Daniele Granata (Università della Campania "Luigi Vanvitelli", Italy)
Full Paper
The growing importance of Information Technology (IT) services is accompanied by a surge in security challenges. While traditional security tests focus on single applications, today's interconnected systems require a broader evaluation. Vulnerability Assessment and Penetration Testing (VAPT) is a method to tackle this, aiming to assess whole systems thoroughly. However, performing VAPT manually is time-consuming and costly, so there is a strong need to automate these processes. In response to these challenges, a novel methodology named ESSecA was built upon existing literature to guide penetration testers during the assessment of a system based on threat intelligence mechanisms. This paper presents enhancements to the ESSecA methodology, including a formal Penetration Test Plan (PTP) model, a taxonomy for Penetration Test phases, and an innovative pattern-matching system integrated with a Tool Catalogue knowledge base used to improve the Expert System. These developments culminated in an algorithm facilitating the automatic generation of Penetration Test Plans, thus advancing the automation of security assessment processes.
Workshop GRASEC
Comparing Hyperbolic Graph Embedding models on Anomaly Detection for Cybersecurity
Mohamed Yacine Touahria Miliani (École Nationale Supérieure d’Informatique, Algeria), Souhail Abdelmouaiz Sadat (École Nationale Supérieure d’Informatique, Algeria), Hamida Seba (University Lyon1, France), Mohammed Haddad (Université Claude Bernard Lyon-1, France)
Full Paper
Graph-based anomaly detection has emerged as a powerful tool in cybersecurity for identifying malicious activities within computer systems and networks. While existing approaches often rely on embedding graphs in Euclidean space, recent studies have suggested that hyperbolic space provides a more suitable geometry for capturing the inherent hierarchical and complex relationships present in graph data. In this paper, we explore the efficacy of hyperbolic graph embedding for anomaly detection in the context of cybersecurity. We conduct a comparison of six state-of-the-art hyperbolic graph embedding methods, evaluating their performance on a well-known intrusion detection dataset. Our analysis reveals the strengths and limitations of each method, demonstrating the potential of hyperbolic graph embedding for enhancing security.
Workshop GRASEC

IMTrustSec

Threat-TLS: A Tool for Threat Identification in Weak, Malicious, or Suspicious TLS Connections
Diana Gratiela Berbecaru (Politecnico di Torino, Italy), Antonio Lioy (Politecnico di Torino, Italy)
Full Paper
The Transport Layer Security (TLS) protocol is widely used nowadays to secure communication channels in various applications running in network, IoT, and embedded systems environments. In the last decade, several attacks affecting the TLS specification, its implementations, cryptographic vulnerabilities, or the deployment of TLS-enabled software have been discovered. Although solutions exist for each class of attacks, an attacker may corrupt the TLS support on an end node (even temporarily), making it vulnerable to attacks. To test the resistance of a TLS server to attacks, several tools or services exist that mainly scan a target host looking for wrong configurations. We propose instead a network-based intrusion detection tool named Threat-TLS, aimed at identifying weak, suspicious, or malicious TLS connections. Attackers might establish such connections to hide and distribute potentially dangerous data content, like malware. Alternatively, weak TLS connections could be opened by (legitimate) systems or servers that have been compromised and are prone to TLS attacks, such as systems whose TLS configuration has been changed to use an old TLS version or outdated cryptographic algorithms. We have tested the proposed tool in a testbed environment, illustrating its performance in detecting some TLS attacks.
Workshop IMTrustSec
Anomaly-Based Intrusion Detection for Blackhole Attack Mitigation
Ashraf Abdelhamid (Nile University, Egypt), Mahmoud Said Elsayed (University College Dublin, Ireland), Heba K. Aslan (Nile University, Egypt), Marianne A. Azer (National Telecommunication Institute, Egypt)
Full Paper
Mobile ad hoc networks (MANETs) are becoming a necessity in the contemporary environment. They are vital in a variety of situations where a network must be set up quickly but infrastructure is unavailable due to limited resources. Ad hoc networks have many applications: education, the front lines of battle, rescue missions, etc. These networks are distinguished by high mobility and constrained computing, storage, and energy capabilities. Lacking infrastructure, they do not use infrastructure-based communication equipment; instead, the nodes rely on one another for routing and communication. Each node in a MANET searches for another node within its communication range and uses it as a hop to relay the message through a subsequent node, and so on. Traditional networks have routers, servers, firewalls, and specialized hardware. In contrast, each node in an ad hoc network has multiple functions; nodes, for instance, manage the routing operation. Consequently, they are more vulnerable to attacks than traditional networks. This study's main goal is to develop an approach for detecting blackhole attacks using anomaly detection based on a Support Vector Machine (SVM). This detection system looks at node activity to scan network traffic for irregularities. In blackhole scenarios, attacking nodes have distinct behavioral characteristics that distinguish them from other nodes. These traits can be efficiently detected by the proposed SVM-based detection system. To evaluate the effectiveness of this approach, traffic under blackhole attack is generated using the OMNeT++ simulator. Based on the classification of the traffic into malicious and non-malicious, the malicious node is then identified. The results of the suggested approach show high accuracy in detecting blackhole attacks.
Workshop IMTrustSec
Analysis of the Capability and Training of Chat Bots in the Generation of Rules for Firewall or Intrusion Detection Systems
Bernardo Louro (Universidade da Beira Interior, Portugal), Raquel Abreu (Universidade da Beira Interior, Portugal), Joana Cabral Costa (Universidade da Beira Interior and Instituto de Telecomunicações, Portugal), João B. F. Sequeiros (Universidade da Beira Interior and Instituto de Telecomunicações, Portugal), Pedro R. M. Inácio (Universidade da Beira Interior and Instituto de Telecomunicações, Portugal)
Full Paper
Large Language Models (LLMs) have the potential to aid in closing the knowledge gap in several specific technical areas, such as cybersecurity, by providing a means to translate instructions defined in natural language into specialized system or software specifications (e.g., firewall rules). The work described herein aims at an evaluation of the capability of LLMs to generate rules for firewall and Intrusion Detection Systems (IDS).

A preliminary evaluation has shown that widely available chat bots have limited capability to generate correct rules and that caution is needed when using their outputs for the aforementioned objective.

This work explores three fine-tuning approaches to address these limitations, each of them with a different objective and achieving different success rates. The first approach aimed at testing how well the model was able to use the knowledge obtained from the prompts when the question was structured differently, achieving a success rate of 89%. The second approach aimed at testing how well the model could link the knowledge obtained from two different prompts and reached a success rate of 61%. The final approach aimed at testing if the model could create complex rules by first learning simple rules, achieving a success rate of 79%.

It can be concluded that fine-tuning is sufficient to enable chat bots to create syntactically and technically correct rules for firewalls and IDSs. The results suggest that a specialized model covering a wide range of attacks, firewalls, and IDSs can indeed be developed.
Workshop IMTrustSec
Acceleration of DICE Key Generation using Key Caching
Dominik Lorych (Fraunhofer SIT | ATHENE, Germany), Lukas Jäger (Fraunhofer SIT | ATHENE, Germany), Andreas Fuchs (Fraunhofer SIT | ATHENE, Germany)
Full Paper
DICE is a Trusted Computing standard intended to secure resource-constrained off-the-shelf hardware. It implements a Root of Trust that can be used to construct a Chain of Trust boot system, with symmetric keys representing firmware integrity and device identity. Based on these, asymmetric keys can be generated, which provide multiple advantages over symmetric ones, especially for updatable systems. However, generating the keys on every boot slows down the boot process significantly, which prevents the adoption of DICE in fields with strict boot time requirements, for example in the automotive context.

Boot times can be accelerated if keys are cached in flash memory. However, the keys must not remain accessible if the state of the system changes, as otherwise they would no longer represent that state. We implement two approaches for this and evaluate them on multiple MCUs with respect to automotive requirements.
Workshop IMTrustSec

IWAPS

Advanced methods for generalizing time and duration during dataset anonymization
Jenno Verdonck (DistriNet, KU Leuven, Belgium), Kevin De Boeck (DistriNet, KU Leuven, Belgium), Michiel Willocx (DistriNet, KU Leuven, Belgium), Vincent Naessens (DistriNet, KU Leuven, Belgium)
Full Paper
Time is an often-recurring quasi-identifying attribute in many datasets. Anonymizing such datasets requires generalizing the time attribute(s) in the dataset. Examples are start dates and durations, which are traditionally generalized in isolation, leading to intervals that do not capture the relation between time attributes. This paper presents advanced methods for creating generalization hierarchies for time data. We propose clustering-based and Mondrian-based techniques to construct generalization hierarchies. These approaches take into account the relation between different time attributes and are designed to improve the utility of the anonymized data. We implemented these methods and conducted a set of experiments comparing them to traditional generalization strategies. The results show that our proposed methods improve the utility of the data for both statistical analysis and machine learning applications, with a significant increase in hierarchy quality and configuration flexibility, demonstrating the potential of our advanced techniques over existing methods.
Workshop IWAPS
ARGAN-IDS: Adversarial Resistant Intrusion Detection Systems using Generative Adversarial Networks
João Costa (INOV INESC Inovação, Portugal), Filipe Apolinário (INOV INESC Inovação, Portugal), Carlos Ribeiro (Universidade de Lisboa, Portugal)
Full Paper
Neural Networks (NNs) are not secure enough to be deployed for security-critical tasks such as Network Intrusion Detection Systems (NIDS). NNs are vulnerable to Adversarial Attacks (AAs), which affect their accuracy in identifying malicious activity by introducing perturbations into network traffic. This work proposes "Adversarial Resistant Intrusion Detection Systems using GANs" (ARGAN-IDS), a method to address these vulnerabilities. ARGAN-IDS is implemented as a Generative Adversarial Network (GAN) trained on network traffic to protect NIDS. ARGAN-IDS greatly mitigates the impact of AAs, achieving results comparable to a non-perturbed execution. We show that GANs have limitations in differentiating between malicious traffic and traffic altered by AAs, and we address this in ARGAN-IDS by training the GAN on network traffic containing malicious packets. This enhancement significantly improves the GAN’s performance, enabling it to identify even highly perturbed adversarial attacks effectively. ARGAN-IDS acts as a neutralizer of perturbations introduced by AAs and mitigates the NIDS vulnerabilities. We have integrated ARGAN-IDS with a state-of-the-art anomaly-based detector, Kitsune. We achieve a reduction of 99.27% in false positives and an improvement of 99.29% in true negatives, leading to an improvement of roughly 36.75% in overall system accuracy while under AAs.
Workshop IWAPS
Multimodal Security Mechanisms for Critical Time Systems using blockchain in Chriss project
Mari-Anais Sachian (BEIA CONSULT INTERNATIONAL, Romania), George Suciu (BEIA CONSULT INTERNATIONAL, Romania), Maria Niculae (BEIA CONSULT INTERNATIONAL, Romania), Adrian Paun (BEIA CONSULT INTERNATIONAL, Romania), Petrica Ciotirnae (BEIA CONSULT INTERNATIONAL, Romania), Ivan Horatiu (BEIA CONSULT INTERNATIONAL, Romania), Cristina Tudor (BEIA CONSULT INTERNATIONAL, Romania), Robert Florescu (BEIA CONSULT INTERNATIONAL, Romania)
Full Paper
This paper presents an in-depth exploration of blockchain architecture within the context of the CHRISS (Critical infrastructure High accuracy and Robustness increase Integrated Synchronization Solutions) project. Specifically, the focus lies on elucidating the design principles, functionalities, and security measures embedded within the blockchain architecture envisioned for CHRISS. The CHRISS project endeavors to revolutionize critical infrastructure, particularly telecommunications networks, by integrating Galileo-based timing distribution with blockchain technology. By leveraging blockchain’s inherent characteristics, such as immutability, decentralization, and cryptographic security, the architecture aims to enhance the resilience and security of time distribution services, thereby mitigating risks associated with GNSS signal interference, jamming, spoofing, and cyber-attacks. This paper delves into the intricacies of the envisioned blockchain architecture, elucidating its functionalities tailored to the specific needs of CHRISS.

Furthermore, it outlines the modalities employed to ensure secure transfer of information between the Timing Synchronization Unit (TSU) and the blockchain, as well as among entities within the blockchain ecosystem. Through a comprehensive analysis of blockchain architecture, this paper not only sheds light on the technical underpinnings of CHRISS but also underscores its potential to revolutionize critical infrastructure by providing robust, secure, and resilient time synchronization solutions.
Workshop IWAPS
Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text
Stephen Meisenbacher (Technical University of Munich, Germany), Florian Matthes (Technical University of Munich, Germany)
Full Paper
The study of Differential Privacy (DP) in Natural Language Processing often views the task of text privatization as a rewriting task, in which sensitive input texts are rewritten to hide explicit or implicit private information. In order to evaluate the privacy-preserving capabilities of a DP text rewriting mechanism, empirical privacy tests are frequently employed. In these tests, an adversary is modeled, who aims to infer sensitive information (e.g., gender) about the author behind a (privatized) text. Looking to improve the empirical protections provided by DP rewriting methods, we propose a simple post-processing method based on the goal of aligning rewritten texts with their original counterparts, where DP rewritten texts are rewritten again. Our results show that such an approach not only produces outputs that are more semantically reminiscent of the original inputs, but also texts which score better on average in empirical privacy evaluations. Therefore, our approach raises the bar for DP rewriting methods in their empirical privacy evaluations, providing an extra layer of protection against malicious adversaries.
Workshop IWAPS
PAKA: Pseudonymous Authenticated Key Agreement without bilinear cryptography
Raphael Schermann (Institute of Technical Informatics, Graz University of Technology, Austria), Simone Bussa (Department of Control and Computer Engineering, Politecnico di Torino, Italy), Rainer Urian (Infineon Technologies AG, Augsburg, Germany), Roland Toegl (Infineon Technologies Austria AG, Austria), Christian Steger (Institute of Technical Informatics, Graz University of Technology, Austria)
Full Paper
Anonymity and pseudonymity are important concepts in the domain of the Internet of Things. The existing privacy-preserving key agreement schemes are only concerned with maintaining the privacy of the communicated data that appears on the channel established between two honest entities. However, privacy should also include anonymity or pseudonymity of the device identity. This means there should not exist any correlation handle to associate different communications done by the device.

This paper proposes a privacy-preserving key agreement method, called Pseudonymous Authenticated Key Agreement Protocol (PAKA), that also provides device unlinkability across different domains. This protocol is based on an Elliptic-Curve Diffie-Hellman using standard cryptographic primitives and curves, i.e., no pairing-based cryptography or other computationally intensive cryptography is necessary.

For the security analysis, we provide a mathematical proof and an automated cryptographic protocol verification using ProVerif. Lastly, we show the integration with the Trusted Platform Module and a proof-of-concept implementation.
Workshop IWAPS
SYNAPSE - An Integrated Cyber Security Risk & Resilience Management Platform, With Holistic Situational Awareness, Incident Response & Preparedness Capabilities
Panagiotis Bountakas (Sphynx Technology Solutions, Switzerland), Konstantinos Fysarakis (Sphynx Technology Solutions, Switzerland), Thomas Kyriakakis (Dienekes SI IKE, Greece), Panagiotis Karafotis (Dienekes SI IKE, Greece), Sotiropoulos Aristeidis (AEGIS IT RESEARCH GmbH, Germany), Maria Tasouli (Insuretics Limited, Cyprus), Cristina Alcaraz (University of Malaga, Spain), George Alexandris (Nodalpoint Systems, Greece), Vassiliki Andronikou (Nodalpoint Systems, Greece), Tzortzia Koutsouri (Cyberalytics Limited, Cyprus), Romarick Yatagha (Framatome, Germany), George Spanoudakis (Sphynx Technology Solutions, Switzerland), Sotiris Ioannidis (Dienekes SI IKE, Greece), Fabio Martinelli (Consiglio Nazionale delle Ricerche, Italy), Oleg Illiashenko (Consiglio Nazionale delle Ricerche, Italy)
Full Paper
In an era of escalating cyber threats, the imperative for robust and comprehensive cybersecurity measures has never been more pressing. To address this challenge, SYNAPSE presents a pioneering approach by conceptualising, designing, and delivering an Integrated Cyber Security Risk & Resilience Management Platform. This platform embodies a holistic framework, synthesising key elements of situational awareness, incident response, and preparedness (i.e., cyber range), augmented by advanced AI capabilities. Through its holistic approach, SYNAPSE aims to elevate cyber resilience by not only mitigating threats but also fostering a culture of proactive defence, informed decision-making, and collaborative response within organisations and across industries.
Workshop IWAPS
Towards 5G Advanced network slice assurance through isolation mechanisms
Alexios Lekidis (University of Thessaly, Greece)
Full Paper
The sixth generation of telecommunication networks (6G) offers even faster data rates, lower latency, greater reliability, and higher device density than the currently available 5G infrastructure. Nevertheless, these advancements come with several challenges in different domains that substantially slow the transition to 6G. Hence, 3GPP opts to tackle these challenges gradually in a second-phase 5G Advanced release. One of the most significant challenges lies in the constantly increasing threat landscape resulting from the use of Network Function Virtualization (NFV) technologies for offering services over a shared mobile infrastructure. A mechanism that allows protection against attacks over established network slices is network isolation. This paper proposes isolation schemes to tackle the threats that arise in 5G slices. Such schemes are integrated into a Slice Manager component, responsible for the fully automated orchestration and lifecycle management of network slices as well as their individual network segments. The schemes are implemented through Quality of Service (QoS) policies in an Electric Vehicle (EV) charging infrastructure, which includes the EV charging stations, the management platform, a Slice Manager on the edge segment, as well as the orchestration components in an Ultra-Reliable Low Latency Communications (URLLC) network slice.
Workshop IWAPS
Entity Recognition on Border Security
George Suciu (Beia Consult Int, Romania), Mari-Anais Sachian (Beia Consult Int, Romania), Razvan Bratulescu (Beia Consult Int, Romania), Kejsi Koci (Beia Consult Int, Romania), Grigor Parangoni (Beia Consult Int, Romania)
Full Paper
Entity recognition, also known as named entity recognition (NER), is a fundamental task in natural language processing (NLP) that involves identifying and categorizing entities within text. These entities, such as names of people, organizations, locations, dates, and numerical values, provide structured information from unstructured text data. NER models, ranging from rule-based to machine learning-based approaches, decode linguistic patterns and contextual information to extract entities effectively. This article explores the roles of entities, tokens, and NER models in NLP, detailing their significance in various applications like information retrieval and border security. It delves into the practices of implementing NER in legal document analysis, travel history analysis, and document verification, showcasing its transformative impact in streamlining processes and enhancing security measures. Despite challenges such as ambiguity and data scarcity, ongoing research and emerging trends in multilingual NER and ethical considerations promise to drive innovation in the field. By addressing these challenges and embracing new developments, entity recognition is poised to continue advancing NLP capabilities and powering diverse real-world applications.
Workshop IWAPS
Integrating Hyperledger Fabric with Satellite Communications: A Revolutionary Approach for Enhanced Security and Decentralization in Space Networks
Anastassios Voudouris (University of Piraeus, Greece), Aristeidis Farao (University of Piraeus, Greece), Aggeliki Panou (University of Piraeus, Greece), John Polley (School of Communication, University of Southern California, United States), Christos Xenakis (University of Piraeus, Greece)
Full Paper
This paper explores the integration of blockchain technology, specifically Hyperledger Fabric, with satellite communications to enhance the security and reliability of global navigation satellite systems (GNSS). Given the inherent vulnerabilities in satellite systems, such as the susceptibility to various cyberattacks and the risk posed by GNSS signal attacks, this research proposes a novel security framework. By leveraging the decentralized and immutable nature of blockchain, the paper aims to fortify the integrity and verification of GNSS data. This is achieved through a consensus mechanism that aims to prevent unauthorized data alterations, as well as to provide robust anti-spoofing and anti-jamming capabilities. The integration of blockchain with satellite communications not only ensures data security but also fosters a transparent and decentralized operational model by enhancing the trustworthiness of satellite-derived data. This paper also outlines the current state of the art, the architecture of the proposed solution, and discusses the potential challenges and future research directions in optimizing blockchain for space applications.
Workshop IWAPS
AIAS: AI-ASsisted cybersecurity platform to defend against adversarial AI attacks
Georgios Petihakis (University of Piraeus, Greece), Aristeidis Farao (University of Piraeus, Greece), Panagiotis Bountakas (University of Piraeus, Greece), Athanasia Sabazioti (Department of Tourism Studies, University of Piraeus, Greece), John Polley (School of Communication, University of Southern California, United States), Christos Xenakis (University of Piraeus, Greece)
Full Paper
The increasing integration of Artificial Intelligence (AI) in critical sectors such as healthcare, finance, and cybersecurity has simultaneously exposed these systems to unique vulnerabilities and cyber threats. This paper discusses the escalating risks associated with adversarial AI and outlines the development of the AIAS framework. AIAS is a comprehensive, AI-driven security solution designed to enhance the resilience of AI systems against such threats. We introduce the AIAS platform that features advanced modules for threat simulation, detection, mitigation, and deception, using adversarial defense techniques, attack detection mechanisms, and sophisticated honeypots. The platform leverages explainable AI (XAI) to improve the transparency and effectiveness of threat countermeasures. Through meticulous analysis and innovative methodologies, AIAS aims to revolutionize cybersecurity defenses, enhancing the robustness of AI systems against adversarial attacks while fostering a safer deployment of AI technologies in critical applications. The paper details the components of the AIAS platform, explores its operational framework, and discusses future research directions for advancing AI security measures.
Workshop IWAPS
NITRO: an Interconnected 5G-IoT Cyber Range
Aristeidis Farao (University of Piraeus, Greece), Christoforos Ntantogian (Ionian University - Department of Informatics, Greece), Stylianos Karagiannis (Ionian University - Department of Informatics, Greece), Emmanouil Magkos (Ionian University - Department of Informatics, Greece), Alexandra Dritsa (University of Piraeus, Greece), Christos Xenakis (University of Piraeus, Greece)
Full Paper
In the contemporary digital landscape, the convergence of Fifth Generation (5G) wireless technology and the Internet of Things (IoT) has ushered in an era of unprecedented connectivity and innovation. This synergy promises to revolutionize industries ranging from healthcare and transportation to manufacturing and agriculture. However, with the proliferation of connected devices and the exponential growth of data transmission, the cybersecurity landscape faces increasingly complex challenges. One of the primary rationales for the implementation of a 5G-IoT Cyber Range lies in the imperative need for comprehensive training programs tailored to the unique characteristics of 5G and IoT technologies. Unlike traditional networks, 5G infrastructure introduces novel architectural paradigms, including network slicing and edge computing, which demand specialized skill sets among cybersecurity professionals. Moreover, the heterogeneity and sheer volume of IoT devices exacerbate the attack surface, rendering conventional cybersecurity methodologies inadequate. Challenges such as interoperability issues, resource constraints, and the dynamic nature of IoT deployments further compound the complexity of securing 5G-enabled IoT ecosystems.
Workshop IWAPS
Immutability and non-repudiation in the exchange of key messages within the EU IoT-Edge-Cloud Continuum
Salvador Cuñat (Universitat Politècnica de València, Spain), Raúl Reinosa (Universitat Politècnica de València, Spain), Ignacio Lacalle (Universitat Politècnica de València, Spain), Carlos E. Palau (Universitat Politècnica de València, Spain)
Full Paper
The work reflects on the importance of trust in data exchanges in the context of ever-increasing distributed computing ecosystems. It proposes the utilisation of an open-source technology that implements a directed acyclic graph incorporating peer nodes to validate messages in a decentralised network. The tool, IOTA, promises to solve the hindrances of blockchain solutions in highly heterogeneous, IoT-assimilable scenarios, adopting a more lightweight approach and removing the need for mining. The article explores the functioning of IOTA in distributed computing continuum cases, examining the figures and mechanisms that govern the process. The authors link those reflections to the direct transfer into a research project, aerOS, that uses such a tool as an intrinsic part of an IoT-Edge-Cloud continuum framework, enabling the immutability and non-repudiation of key messages in such environments. The authors conclude by analysing which next steps might follow to evolve from a not-fully decentralised implementation with the next releases of the tool, and the adaptations for the studied application.
Workshop IWAPS
Open V2X Management Platform Cyber-Resilience and Data Privacy Mechanisms
Alexios Lekidis (University of Thessaly, Greece), Hugo Morais (Universidade de Lisboa, Portugal)
Full Paper
Vehicle-to-Everything (V2X) technologies have recently been introduced to provide enhanced connectivity between the different smart grid segments as well as Electric Vehicles (EVs). The EVs draw power from or supply power to the grid and may be used as an energy flexibility resource for households and buildings. The increased number of interconnections is substantially augmenting the cyber-security and data privacy threats that may occur in the V2X ecosystem. In this paper, such threats are categorized into cyber-attack classes which serve as a basis to derive Tactics, Techniques and Procedures (TTPs) for the V2X ecosystem. Additionally, the sensitive data that are exchanged in charging and discharging scenarios are reviewed. Then, an analysis of the existing cyber-security mechanisms is provided and further mechanisms/tools are proposed for detecting/preventing the categorized threats, which are being developed in an Open V2X Management Platform (O-V2X-MP) within the EV4EU project. These mechanisms will provide security-by-design in the O-V2X-MP offered services as well as ensure protection in the V2X interactions.
Workshop IWAPS

PCSCI

PROGRESS: the sectoral approach to cyber resilience
Lior Tabansky (Tel Aviv University, Israel), Eynan Lichterman (LIACOM, Israel)
Full Paper
Resilience in complex systems is an emergent property arising from interactions between components. Complex socio-technical-economic systems (STES), not singular assets, provide essential services. Yet a fundamental discrepancy remains: cybersecurity practice and maturity models still focus on the robustness of separate components. The Promoting Global Cyber Resilience for Sectors (PROGRESS) Cyber-Capability Maturity Model (CCMM) incorporates the science of complex systems, cybersecurity frameworks, and over two decades of Critical Infrastructure Protection (CIP) operations to enable comprehensive and systematic improvement of capabilities through a sector-wide vision. The sector, defined as a coordinated group of organizations that provide a particular service in a defined region, is the optimal unit of analysis.

We present the model architecture, using the financial sector as an illustration. The expected value of the sectoral approach was proven across eleven countries in Africa and Asia through full-scale model implementation in four critical sectors – health, financial services, electricity, and digital infrastructure. The implementation projects have resulted in actionable recommendations outlining ways to mature specific cyber capabilities. Real-life experience emphasizes the benefits of the sectoral approach to cyber resilience development guidance: creating feedback loops within the sector, integrating supply chain and third-party risks, and weighing the links and processes between authority levels in cybersecurity issues.

PROGRESS CCMM is the first framework that recognizes how resilience emerges in complex systems of sector agents.
Workshop PCSCI
Towards Availability of Strong Authentication in Remote and Disruption-Prone Operational Technology Environments
Mohammad Nosouhi (Deakin Cyber Research and Innovation Centre, Deakin University, Geelong, Australia, Australia), Divyans Mahansaria (Tata Consultancy Services (TCS) Ltd., Kolkata, India, India), Zubair Baig (Deakin Cyber Research and Innovation Centre, Deakin University, Geelong, Australia, Australia), Lei Pan (Deakin Cyber Research and Innovation Centre, Deakin University, Geelong, Australia, Australia), Robin Doss (Deakin Cyber Research and Innovation Centre, Deakin University, Geelong, Australia, Australia), Keshav Sood (Deakin Cyber Research and Innovation Centre, Deakin University, Geelong, Australia, Australia), Debi Prasad Pati (Tata Consultancy Services (TCS) Ltd., Kolkata, India, India), Praveen Gauravaram (Tata Consultancy Services (TCS) Ltd., Brisbane, Australia, Australia)
Full Paper
Implementing strong authentication methods in a network requires stable connectivity between the service providers deployed within the network (i.e., applications that users of the network need to access) and the Identity and Access Management (IAM) server located at the core segment of the network. This becomes challenging for Operational Technology (OT) systems deployed in remote areas, as they often get disconnected from the core segment of the network owing to unavoidable network disruptions. As a result, weak authentication methods and shared-credential approaches are still adopted in these OT environments, exposing system vulnerabilities to increasingly sophisticated cyber threats. In this work, we propose a solution to enable highly available multi-factor authentication (MFA) services for OT environments. The proposed solution is based on Proof-of-Possession (PoP) tokens generated by an IAM server for registered users. The tokens are securely linked to user-specific parameters (e.g., physical security keys, biometrics, PIN, etc.), enabling strong user authentication during disconnection time through token validation. We used the Tamarin Prover toolkit to verify the security of the proposed authentication scheme. For performance evaluation, we implemented the designed solution in real-world settings. The results of our analysis and experiments confirm the efficacy of the proposed solution.
Workshop PCSCI
SOVEREIGN - Towards a Holistic Approach to Critical Infrastructure Protection
Georg Becker (DCSO GmbH, Germany), Thomas Eisenbarth (Universität zu Lübeck, Germany), Hannes Federrath (Universität Hamburg, Germany), Mathias Fischer (Universität Hamburg, Germany), Nils Loose (Universität zu Lübeck, Germany), Simon Ott (Fraunhofer AISEC, Germany), Joana Pecholt (Fraunhofer AISEC, Germany), Stephan Marwedel (Airbus Commercial Aircraft, Germany), Dominik Meyer (Helmut Schmidt Universität, Germany), Jan Stijohann (Langlauf Security Automation, Germany), Anum Talpur (Universität Hamburg, Germany), Matthias Vallentin (Tenzir GmbH, Germany)
Full Paper
In the digital age, cyber-threats are a growing concern for individuals, businesses, and governments alike. These threats range from data breaches and identity theft to large-scale attacks on critical infrastructure. The consequences of such attacks can be severe, leading to financial losses, threats to national security, and the loss of lives. This paper presents a holistic approach to increasing the security of critical infrastructures. To that end, we propose an open, self-configurable, AI-based automated cyber-defense platform that runs on specifically hardened devices and its own hardware, can be deeply embedded in critical infrastructures, and provides full visibility of networks, endpoints, and software. Starting from a thorough analysis of related work, we describe the vision of our SOVEREIGN platform in the form of an architecture, discuss its individual building blocks, and evaluate it qualitatively with respect to our requirements.
Workshop PCSCI

SecIndustry

Vulnerability detection tool in source code by building and leveraging semantic code graph
Sabine Delaitre (Bosonit group, Spain), José Maria Pulgar Gutiérrez (DocExploit, Spain)
Full Paper
The DocExploit team creates innovative, high-quality cybersecurity solutions to meet the increasing security needs of the digital transformation process and Industry 4.0.

DocExploit's activity focuses on developing tools to ensure the security of software applications and container environments. The first and core tool is DocSpot, which detects vulnerabilities in application source code; Docdocker scans for vulnerabilities in containers; and SirDocker manages and monitors containers efficiently and securely. In addition, we plan to develop DocIoT (part of firmware), DocAPI (secure API), and DocAir (runtime security) to offer a comprehensive cybersecurity suite over the software supply chain and to support developers in holding security as a key component throughout the software development life-cycle.

To prevent cybersecurity attacks, DocExploit wants to improve the quality and security of software mainly by leveraging knowledge graph technology. We design reliable tools by building a semantic graph-based abstraction of the code from the compiler state, and we reach high accuracy by developing different static code analyzers that optimize the detection of software vulnerabilities in source code and dependencies. These mechanisms drastically reduce false positives.

In this workshop paper, we introduce the different tools composing the suite we are developing to foster developers' autonomy and security automation over the software supply chain. The vulnerability detection tool for source code, which leverages knowledge graph technology, is detailed. The related work comes from BALDER, a national R&D project. Finally, we describe the contributions to improving security in software and IoT applications and present the expected benefits.
Workshop SecIndustry
Gateway to the Danger Zone: Secure and Authentic Remote Reset in Machine Safety
Sebastian N. Peters (Technical University of Munich & Fraunhofer AISEC, Germany), Nikolai Puch (Technical University of Munich & Fraunhofer AISEC, Germany), Michael P. Heinl (Technical University of Munich & Fraunhofer AISEC, Germany), Philipp Zieris (Technical University of Munich & Fraunhofer AISEC, Germany), Mykolai Protsenko (Fraunhofer AISEC, Germany), Thorsten Larsen-Vefring (TRUMPF Werkzeugmaschinen SE + Co. KG, Germany), Marcel Ely Gomes (TRUMPF Werkzeugmaschinen SE + Co. KG, Germany), Aliza Maftun (Siemens AG, Germany), Thomas Zeschg (Siemens AG, Germany)
Full Paper
The increasing digitization of modern flexible manufacturing systems has opened up new possibilities for higher levels of automation, paving the way for innovative concepts such as Equipment-as-a-Service. Concurrently, remote access has gained traction, notably accelerated by the COVID-19 pandemic. While some areas of manufacturing have embraced these advancements, safety applications remain localized. This work aims to enable the remote reset of local safety events. To identify the necessary requirements, we conducted collaborative expert workshops and analyzed relevant standards and regulations. These requirements serve as the foundation for a comprehensive security and safety concept built around a Secure Gateway. It uses secure elements, crypto-agility, PQC, and certificates for secure and authentic communication. To show the applicability, we implemented a prototype which utilizes a gateway, cameras, and light barriers to monitor the danger zone of a robot and thus enable remote reset via the public Internet. The real-world limitations we faced were used to refine our requirements and concept iteratively. Ultimately, we present a secure and safe solution that enables the remote acknowledgment of safety-critical applications.
Workshop SecIndustry
A SOAR platform for standardizing, automating operational processes and a monitoring service facilitating auditing procedures among IoT trustworthy environments
Vasiliki Georgia Bilali (Institute of Communication & Computer Systems (ICCS), Greece), Eustratios Magklaris (Institute of Communication & Computer Systems (ICCS), Greece), Dimitrios Kosyvas (Institute of Communication & Computer Systems (ICCS), Greece), Lazaros Karagiannidis (Institute of Communication & Computer Systems (ICCS), Greece), Eleftherios Ouzounoglou (Institute of Communication & Computer Systems (ICCS), Greece), Angelos Amditis (Institute of Communication & Computer Systems (ICCS), Greece)
Full Paper
The Advanced Threat Intelligence Orchestrator (ATIO) is a sophisticated middleware solution designed to enhance unified threat management (UTM) monitoring processes through Security Orchestration, Automation and Response (SOAR) capabilities. This paper provides a detailed overview of ATIO, highlighting its multitasking capabilities in coordinating information from different types of tools, which usually bring with them different types of data. It also gives details on the system implementation and some indicative operational workflows. Central to ATIO's functionality is its ability to concurrently or sequentially automate the execution and processing steps of multiple workflows while adhering to cyber security standards, organization policies, and regulations. The design of ATIO is flexible, accommodating various interconnected services and tools to meet specific requirements, as well as diverse infrastructure interfaces with different specifications, while adhering to standardized formats and Cyber Threat Information (CTI) languages such as STIX 2.1. This integration enhances interoperability and expands the scope of cyber-threat intelligence operations by enabling connectivity with various systems and diversified data types. Moreover, ATIO's automated nature boosts detection and acknowledgment efficiency and responsiveness in threat intelligence operations. It enables users to alter and filter workflow steps, preparing information for correlation and tracking CTI effectively. Additionally, ATIO includes robust mechanisms for monitoring user actions within the system, ensuring accountability and providing valuable insights into operational activities.
Workshop SecIndustry
An IEC 62443-security oriented domain specific modelling language
Jolahn Vaudey (Inria, France), Stéphane Mocanu (Grenoble INP, France), Gwenaël Delaval (Université Grenoble alpes, France), Eric Rutten (Inria, France)
Full Paper
As the historically isolated industrial control systems become increasingly connected, the threat posed by cyberattacks soars. To remedy this issue, industrial standards dedicated to the cybersecurity of ICS have been developed in the last twenty years, namely the IEC 62443 series. These standards provide guidelines for the creation and maintenance of a secure ICS, from the concept phase to its eventual disposal. The standard notably assumes a specific Zone/Conduit model for systems as a basis for building the security program. This model currently lacks computer-aided design tools, which are essential to the adoption of a standard. In this paper, we present a domain specific modeling language able to describe IEC 62443 compliant systems. Our main contributions are the DSL's syntax, which formalizes the informal model found in the standard, and the validation rules applied to it, which ensure the described installations are secure by design according to a set of hypotheses.
Workshop SecIndustry
EmuFlex: A Flexible OT Testbed for Security Experiments with OPC UA
Alexander Giehl (Fraunhofer, Germany), Michael P. Heinl (Fraunhofer AISEC, Germany), Victor Embacher (Fraunhofer AISEC, Germany)
Full Paper
Protocols for the Industrial Internet of Things (IIoT) like the Open Platform Communications Unified Architecture (OPC UA) were developed with security in mind. However, their correct implementation in operational technology (OT) environments is often neglected due to a lack of appropriate monetary and human resources, especially among small and medium-sized enterprises. We present a flexible, inexpensive, and easy-to-use testbed enabling OT operators to experiment with different security scenarios. Our testbed is purely virtual, so that procurement and construction of physical or hybrid test environments is not required. It can be operated as a web-hosted service and leverages Docker as well as OPC UA. The testbed therefore combines usability and support for modern technologies, enabling future-oriented security studies as well as flexible usage across verticals and company boundaries.
Workshop SecIndustry
Using Artificial Intelligence in Cyber Security Risk Management for Telecom Industry 4.0
Ijeoma Ebere-Uneze (Royal Holloway, University of London, United Kingdom), Syed Naqvi (Liverpool John Moores University, United Kingdom)
Full Paper
The intensity and sophistication of cyberattacks have informed the need for artificial intelligence (AI) solutions for cyber security risk management (CSRM). We have studied the impact of using AI for CSRM in Telecommunication Industry 4.0 (TI4.0). This case study is used to develop an AI-enabled approach for enhanced protection of TI4.0. The services and infrastructure provided by TI4.0 are characterized by complexities due to the rapid evolution of the associated technologies. This has continued to increase the attack surface and expose the industry to more cyber security risks. This article shows how the use of AI impacts CSRM in TI4.0. Our work provides insights into the application of AI in mitigating cyber security risks. We have found that AI can enhance CSRM and that its effectiveness is determined by the quality of the data it was trained with, the training it received, and the security of the AI solution itself.
Workshop SecIndustry

SP2I

Quantum-Resistant and Secure MQTT Communication
Lukas Malina (Brno University of Technology, Czechia), Patrik Dobias (Brno University of Technology, Czechia), Petr Dzurenda (Brno University of Technology, Czechia), Gautam Srivastava (Brandon University, Canada)
Full Paper
In this paper, we deal with the deployment of Post-Quantum Cryptography (PQC) in the Internet of Things (IoT). Concretely, we focus on the MQTT (Message Queuing Telemetry Transport) protocol that is widely used in IoT services. The paper presents our novel quantum-resistant security proposal for the MQTT protocol that supports secure broadcast. Our solution omits TLS with its delay-causing handshake and is suitable for sending irregular short messages. Finally, we show how our solution can practically affect concrete use cases through performance results of the proposed solution.
Workshop SP2I
Identification of industrial devices based on payload
Ondrej Pospisil (Brno University of Technology, Faculty of Electrical Engineering and Communication, Department of Telecommunication, Czechia), Radek Fujdiak (Brno University of Technology, Faculty of Electrical Engineering and Communication, Department of Telecommunication, Czechia)
Full Paper
The identification of industrial devices based on their behavior in network communication is important from a cybersecurity perspective in two areas: attack prevention and digital forensics. In both areas, device identification falls under asset management or asset tracking. Due to the impact of active scanning on industrial networks, particularly in terms of latency, it is important to rely on passive scanning. For passive identification, statistical learning algorithms are nowadays the most appropriate. The aim of this paper is to demonstrate the potential for passive identification of PLC devices using statistical learning based on network communication, specifically the payload of the packet. Individual statistical parameters from 15 minutes of traffic based on payload entropy were used to create the features. Three scenarios were performed, and the XGBoost algorithm was used for evaluation. In the best of the scenarios, the model achieved an accuracy score of 83% in identifying individual devices.
Workshop SP2I
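The abstract above does not spell out the feature pipeline. As a hedged sketch of the general idea, per-packet payload entropy can be aggregated into per-window statistics that a classifier such as XGBoost could consume; the function names and the exact feature set here are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def payload_entropy(payload: bytes) -> float:
    """Shannon entropy (bits per byte) of a single packet payload."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    total = len(payload)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window_features(payloads: list[bytes]) -> dict:
    """Aggregate per-packet entropies of one capture window (e.g. 15 min
    of traffic) into statistical features for a downstream classifier."""
    ents = [payload_entropy(p) for p in payloads]
    n = len(ents)
    mean = sum(ents) / n
    var = sum((e - mean) ** 2 for e in ents) / n
    return {
        "entropy_mean": mean,
        "entropy_std": math.sqrt(var),
        "entropy_min": min(ents),
        "entropy_max": max(ents),
    }
```

One such feature vector per device and window would then be labeled with the device identity and fed to the classifier.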
Lattice-based Multisignature Optimization for RAM Constrained Devices
Sara Ricci (Brno University of Technology, Czechia), Vladyslav Shapoval (Brno University of Technology, Czechia), Petr Dzurenda (Brno University of Technology, Czechia), Peter Roenne (University of Luxembourg, Luxembourg), Jan Oupicky (University of Luxembourg, Luxembourg), Lukas Malina (Brno University of Technology, Czechia)
Full Paper
In the era of growing threats posed by the development of quantum computers, ensuring the security of electronic services has become fundamental. The ongoing standardization process led by the National Institute of Standards and Technology (NIST) emphasizes the necessity for quantum-resistant security measures. However, the implementation of Post-Quantum Cryptographic (PQC) schemes, including advanced schemes such as threshold signatures, faces challenges due to their large key sizes and high computational complexity, particularly on constrained devices. This paper introduces two microcontroller-tailored optimization approaches, focusing on enhancing the DS2 threshold signature scheme. These optimizations aim to reduce memory consumption while maintaining security strength, specifically enabling the implementation of DS2 on microcontrollers with only 192 KB of RAM. Experimental results and security analysis demonstrate the efficacy and practicality of our solution, facilitating the deployment of DS2 threshold signatures on resource-constrained microcontrollers.
Workshop SP2I
DECEPTWIN: Proactive Security Approach for IoV by Leveraging Deception-based Digital Twins and Blockchain
Mubashar Iqbal (University of Tartu, Institute of Computer Science, Estonia), Sabah Suhail (Queen's University Belfast, United Kingdom), Raimundas Matulevičius (University of Tartu, Institute of Computer Science, Estonia)
Full Paper
The proliferation of security threats in connected systems necessitates innovative approaches to enhance security resilience. The Internet of Vehicles (IoV) presents a rapidly evolving and interconnected ecosystem that raises unprecedented security challenges, including remote hijacking, data breaches, and unauthorized access. Digital Twin (DT) and blockchain-based deception can emerge as a promising approach to enhance the security of the IoV ecosystem by creating a secure, realistic, dynamic, and interactive deceptive environment that can deceive and disrupt malicious actors. In accordance with this, we propose a proactive security approach for IoV by leveraging DECEPtion-based digiTal tWins and blockchaIN (DECEPTWIN) that entails hunting for security threats and gaps in IoV security posture before an incident or breach occurs.
Workshop SP2I
Secure and Privacy-Preserving Car-Sharing Systems
Lukas Malina (Brno University of Technology, Czechia), Petr Dzurenda (Brno University of Technology, Czechia), Norbert Lövinger (Brno University of Technology, Czechia), Ijeoma Faustina Ekeh (University of Tartu, Estonia), Raimundas Matulevicius (University of Tartu, Estonia)
Full Paper
With increasing smart transportation systems and services, potential security and privacy threats are growing. In this work, we analyze privacy and security threats in car-sharing systems, and discuss the problems with the transparency of services, users' personal data collection, and how the legislation manages these issues. Based on analyzed requirements, we design a compact privacy-preserving solution for car-sharing systems. Our proposal combines digital signature schemes and group signature schemes, in order to protect user privacy against curious providers, increase security and non-repudiation, and be efficient even for systems with restricted devices. The evaluation of the proposed solution demonstrates its security and a practical usability for constrained devices deployed in vehicles and users' smartphones.
Workshop SP2I
DDS Security+: Enhancing the Data Distribution Service With TPM-based Remote Attestation
Paul Georg Wagner (Fraunhofer IOSB, Germany), Pascal Birnstill (Fraunhofer IOSB, Germany), Tim Samorei (Karlsruhe Institute of Technology, Germany), Jürgen Beyerer (Karlsruhe Institute of Technology, Germany)
Full Paper
The Data Distribution Service (DDS) is a widely accepted industry standard for reliably exchanging data over the network using a publish-subscribe model. While DDS already includes basic security features such as participant authentication and access control, the possibilities of leveraging Trusted Platform Modules (TPMs) to increase the security and trustworthiness of DDS-based applications have not been sufficiently researched yet. In this work, we show how TPM-based remote attestation can be effectively integrated into the existing DDS security architecture. This enables application developers to verify the code integrity of remote DDS participants during the operation of the distributed system. Our solution transparently extends the DDS secure channel handshake, while cryptographically binding the established communication channels to the attested software stacks. We show the security properties of our proposal by formally verifying the resulting remote attestation protocol using the Tamarin theorem prover. We also implement our solution as a fork of the popular eProsima FastDDS library and evaluate the resulting performance impact when conducting TPM-based remote attestations of DDS applications.
Workshop SP2I
Comparison of Multiple Feature Selection techniques for Machine Learning-Based Detection of IoT Attacks
Viet Anh Phan (Brno University of Technology, Czechia), Jan Jerabek (Brno University of Technology, Czechia), Lukas Malina (Brno University of Technology, Czechia)
Full Paper
The practicality of IoT is becoming more and more apparent, including smart homes, autonomous vehicles, environmental monitoring, and the internet everywhere. The rapid spread has also led to a large number of cybersecurity threats, such as Denial of Service attacks and information-stealing attacks. Machine learning techniques have proved to be a valuable tool for detecting network threats in IoT. Feature selection has been proven to overcome an excess of dataset features in the feature reduction phase, which helps reduce computational costs while keeping the generalization of the machine learning model. However, most existing studies have focused on using only a limited number of methods for feature selection (typically one). Moreover, there is very little research evaluating which technique is the most effective across various datasets and can be used as a best-choice method in general. Therefore, this work tests 5 feature selection techniques: Random Forest, Recursive Feature Elimination, Logistic Regression, XGBoost Regression, and Information Gain. The new CIC-IoT 2023 dataset is applied to evaluate the performance of those feature selection methods. This study also performs IoT attack detection based on 5 machine learning models: Decision Tree (DT), Random Forest (RF), k-Nearest Neighbours (k-NN), Gradient Boosting (GB), and Multi-layer Perceptron (MLP). We look at metrics such as accuracy, precision, recall, and F1-score to evaluate the performance of each technique over three actual datasets. Overall, the research shows that Recursive Feature Elimination stands out as the top feature selection method, achieving an average accuracy of 95.55%, as well as the highest accuracy of 99.57% when used in combination with RF in the case of 30 selected features.
Workshop SP2I
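As a hedged illustration of one of the techniques compared above, Information Gain scoring can be sketched in a few lines; the helper names and the discrete-feature simplification are assumptions, and the paper's experiments rely on full-scale datasets and library implementations rather than this toy version:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for one discrete feature column."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def top_k_features(columns, labels, k):
    """Rank feature columns (name -> values) by information gain, keep k."""
    scored = sorted(columns.items(),
                    key=lambda kv: information_gain(kv[1], labels),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```

A feature that perfectly predicts the label scores the full label entropy, while a constant feature scores zero, which is what makes the ranking usable for pruning.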

SPETViD

STAM

A Multi-layer Approach through Threat Modelling and Attack Simulation for Enhanced Cyber Security Assessment
Eider Iturbe (TECNALIA Research Innovation, Basque Research and Technology Alliance (BRTA), Spain), Javier Arcas (TECNALIA Research Innovation, Basque Research and Technology Alliance (BRTA), Spain), Erkuden Rios (TECNALIA Research Innovation, Basque Research and Technology Alliance (BRTA), Spain)
Full Paper
There is growing concern about the escalating landscape of cyber security threats and the need to improve defence capabilities against emerging sophisticated incidents. In response, this paper presents a solution called the Cyber Incident Simulation System, which enables system security engineers to simulate cyber-physical attacks and incidents without the requirement to affect or disrupt the ongoing business operation of the system. Leveraging graph-based threat modelling and AI-generated incident data, the system empowers professionals to predict the effect of an incident within the system under study. The synthetic data is used by anomaly-based Intrusion Detection Systems (IDSs) and other additional security controls to improve their detection algorithms and enhance their accuracy and effectiveness. The Cyber Incident Simulation System is designed to enhance cyber security measures through the simulation of various incident scenarios.
Workshop STAM
Automated Passport Control: Mining and Checking Models of Machine Readable Travel Documents
Stefan Marksteiner (AVL List Gmbh, Austria / Mälardalen University, Sweden), Marjan Sirjani (Mälardalen University, Sweden), Mikael Sjödin (Mälardalen University, Sweden)
Full Paper
Passports have been part of critical infrastructure for a very long time. More recently, they have also become automatically processable information devices, through the ISO/IEC 14443 (Near-Field Communication - NFC) protocol. For obvious reasons, it is crucial that the information stored on these devices is sufficiently protected. The International Civil Aviation Organization (ICAO) specifies exactly what information should be stored on electronic passports (also Machine Readable Travel Documents - MRTDs) and how and under which conditions it can be accessed. We propose a model-based approach for checking conformance with this specification in an automated and very comprehensive manner: we use automata learning to learn a full model of passport documents and use equivalence checking techniques (trace equivalence and bisimilarity) to check conformance with an automaton modeled after the ICAO standard. The result is an automated (non-interactive), yet very thorough test for compliance. This approach can also be used with other applications for which a specification automaton can be modeled and is therefore broadly applicable.
Workshop STAM
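The conformance-checking idea in the entry above, comparing a learned model against a specification automaton, can be illustrated in miniature. The sketch below is not the paper's tooling or the ICAO model: it checks trace equivalence of two toy DFAs (with invented states and symbols) by breadth-first search over their product.

```python
# Toy trace-equivalence check: does the "learned" automaton accept the
# same language as the "specification" automaton?
from collections import deque

# DFA = (initial state, accepting states, transitions {(state, symbol): state}).
# Missing edges are treated as self-loops so both DFAs are complete.
spec = ("locked", {"open"}, {
    ("locked", "auth"): "open",
    ("locked", "read"): "locked",   # reads before authentication are refused
    ("open", "read"): "open",
})
learned = ("s0", {"s1"}, {
    ("s0", "auth"): "s1",
    ("s0", "read"): "s0",
    ("s1", "read"): "s1",
})

def equivalent(a, b, alphabet=("auth", "read")):
    (ia, fa, ta), (ib, fb, tb) = a, b
    seen, queue = {(ia, ib)}, deque([(ia, ib)])
    while queue:                       # BFS over the product automaton
        p, q = queue.popleft()
        if (p in fa) != (q in fb):
            return False               # one accepts this trace, the other doesn't
        for sym in alphabet:
            nxt = (ta.get((p, sym), p), tb.get((q, sym), q))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(equivalent(spec, learned))   # → True
```

A real conformance harness would learn `learned` from the device via automata learning and use richer equivalences (the paper also uses bisimilarity), but the product-search core is the same.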
A Framework for In-network Inference using P4
Huu Nghia Nguyen (Montimage, France), Manh-Dung Nguyen (Montimage, France), Edgardo Montes de Oca (Montimage, France)
Full Paper
Machine learning (ML) has been widely used in network security monitoring. However, its application to highly data-intensive use cases and those requiring ultra-low latency remains challenging. This is due to the large amounts of network data and the need to transfer data to a central location hosting analysis services.

In this paper, we present a framework to perform in-network analysis by offloading ML inference tasks from end servers to P4-capable programmable network devices. This helps reduce transfer latency and thus allows faster attack detection and mitigation. It also improves privacy, since the data is processed at the networking devices.

The paper also presents an experimental use case of the framework to classify network traffic, and to detect early and rapidly mitigate malicious IoT traffic.
Workshop STAM
AI-Powered Penetration Testing using Shennina: From Simulation to Validation
Stylianos Karagiannis (Ionian University - Department of Informatics, PDM, Greece), Camilla Fusco (University of Naples Federico II, Italy), Leonidas Agathos (PDM, Portugal), Wissam Mallouli (Montimage, France), Valentina Casola (University of Naples Federico II, Italy), Christoforos Ntantogian (Ionian University - Department of Informatics, Greece), Emmanouil Magkos (Department of Informatics, Ionian University, Corfu, Greece, Greece)
Full Paper
Artificial intelligence has advanced greatly in recent years, providing innovative approaches in cybersecurity for both offensive and defensive tactics. AI can be utilized specifically to automate and conduct penetration testing, a task that is usually time-intensive, involves high costs, and requires cybersecurity professionals with high expertise. This research paper employs an AI penetration testing framework to validate and identify the impact and potential benefits of using AI in this context. More specifically, the research involves a validation process and tests the approach in a realistic environment to collect information and the relevant datasets. The research analyzes the behavior of the AI penetration testing framework in order to adapt and upgrade it further. Finally, the research demonstrates the importance of using such frameworks to generate datasets and provides a methodology to retrieve detailed information about the attack simulation.
Workshop STAM
A comprehensive evaluation of interrupt measurement techniques for predictability in safety-critical systems
Daniele Lombardi (University of Naples Federico II Department of Electrical Engineering and Information Technologies, Italy), Mario Barbareschi (University of Naples Federico II Department of Electrical Engineering and Information Technologies, Italy), Salvatore Barone (Università degli Studi di Napoli - Federico II Department of Electrical Engineering and Information Technologies, Italy), Valentina Casola (University of Naples Federico II Department of Electrical Engineering and Information Technologies, Italy)
Full Paper
In the last few decades, the increasing adoption of computer systems for monitoring and control applications has fostered growing attention to real-time behavior, i.e., the property that ensures predictable reaction times to external events. From this perspective, the performance of interrupt management mechanisms is among the most relevant aspects to be considered. Therefore, the service latency of interrupts is one of the metrics considered while assessing the predictability of such systems. To this purpose, there are different techniques to estimate it, including the use of on-board timers, oscilloscopes and logic analyzers, or even real-time tracers. Each of these techniques, however, is affected by some degree of inaccuracy, and choosing one over the other has pros and cons. In this paper, we review methodologies for measuring interrupt latency from the scientific literature and, for the first time, we define an analytical model that we exploit to quantify the measurement errors incurred. Finally, we prove the effectiveness of the model relying on measurements taken from Xilinx MPSoC devices and present a case study whose purpose is to validate the proposed model.
Workshop STAM
AI4SOAR: A Security Intelligence Tool for Automated Incident Response
Manh-Dung Nguyen (Montimage EURL, France), Wissam Mallouli (Montimage EURL, France), Ana Rosa Cavalli (Montimage EURL, France), Edgardo Montes de Oca (Montimage EURL, France)
Full Paper
The cybersecurity landscape is fraught with challenges stemming from the increasing volume and complexity of security alerts. Traditional manual or semi-automated approaches to threat analysis and incident response often result in significant delays in identifying and mitigating security threats. In this paper, we address these challenges by proposing AI4SOAR, a security intelligence tool for automated incident response. AI4SOAR leverages similarity learning techniques and integrates seamlessly with the open-source SOAR platform Shuffle. We conduct a comprehensive survey of existing open-source SOAR platforms, highlighting their strengths and weaknesses. Additionally, we present a similarity-based learning approach to quickly identify suitable playbooks for incoming alerts. We implement AI4SOAR and demonstrate its application through a use case for automated incident response against SSH brute-force attacks.
Workshop STAM
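The similarity-based playbook selection described in the AI4SOAR abstract above can be sketched as follows. This is an illustrative toy, not the AI4SOAR implementation: the playbook names, feature encoding, and prototype vectors are all invented for the example, and a real system would learn these representations from historical alerts.

```python
# Toy similarity matching: pick the response playbook whose prototype
# alert vector is closest (by cosine similarity) to an incoming alert.
import math

# Hypothetical prototypes; features might be [auth_failures, http_errors,
# source_ip_bad_reputation], each normalised to [0, 1].
playbooks = {
    "block_ip":        [1.0, 0.0, 0.9],
    "reset_password":  [0.9, 0.1, 0.2],
    "quarantine_host": [0.1, 1.0, 0.8],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(alert):
    # Return the playbook with the highest similarity to the alert.
    return max(playbooks, key=lambda name: cosine(alert, playbooks[name]))

# An SSH brute-force alert: many auth failures, bad source reputation.
ssh_bruteforce_alert = [0.95, 0.05, 0.85]
print(match(ssh_bruteforce_alert))   # → block_ip
```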
Transfer Adversarial Attacks through Approximate Computing
Valentina Casola (University of Naples Federico II, Italy), Salvatore Della Torca (Università degli Studi di Napoli Federico II, Italy)
Full Paper
Convolutional Neural Networks (CNNs) have demonstrated remarkable performance across a range of domains, including computer vision and healthcare. However, they face challenges related to increasing resource demands and their susceptibility to adversarial attacks. Despite the significance of these challenges, they are often addressed independently in the scientific literature, which has led to conflicting findings.

In addressing the issue of resource demands, approaches have been developed that leverage the inherent error resilience of DNNs. The Approximate Computing (AxC) design paradigm reduces the resource requirements of DNNs by introducing controlled errors. In the security domain, the objective is to develop precise adversarial attacks.

This paper introduces a novel technique for transferring adversarial attacks from CNNs approximated through the AxC design paradigm (AxNNs) to other CNNs, regardless of their architecture and implementation. AxNNs are created by replacing components that require significant resources with approximate ones. Subsequently, adversarial attacks are generated targeting AxNNs and transferred to new CNNs.

The experimental results indicate that it is possible to transfer adversarial samples from an AxNN to target CNNs, especially when the source AxNN has either a high accuracy or an architecture deeper than those of the target CNNs.
Workshop STAM
NERO: Advanced Cybersecurity Awareness Ecosystem for SMEs
Charalambos Klitis (eBOS Technologies Ltd, Cyprus), Ioannis Makris (METAMIND INNOVATIONS IKE, Greece), Pavlos Bouzinis (METAMIND INNOVATIONS IKE, Greece), Dimitrios Christos Asimopoulos (METAMIND INNOVATIONS IKE, Greece), Wissam Mallouli (MONTIMAGE EURL, France), Kitty Kioskli (TRUSTILIO BV, Netherlands), Eleni Seralidou (TRUSTILIO BV, Netherlands), Christos Douligeris (UNIVERSITY OF PIRAEUS RESEARCH CENTER, Greece), Loizos Christofi (eBOS Technologies Ltd, Cyprus)
Full Paper
NERO represents a sophisticated Cybersecurity Ecosystem comprising five interconnected frameworks designed to deliver a Cybersecurity Awareness initiative, advocated by ENISA as the optimal method for cultivating a security-centric mindset among employees to mitigate the impact of cyber threats. It integrates activities, resources, and training to nurture a culture of cybersecurity. NERO primarily equips SMEs with a repository of Cyber Immunity Toolkits, a Cyber Resilience Program, and Gamified Cyber Awareness Training, all accessible through a user-friendly Marketplace. The efficacy and performance of this concept will be affirmed through three distinct use case demonstrations across various sectors: Improving Patient Data Security in Healthcare with Cybersecurity Tools, Enhancing Supply Chain Resilience in the Transportation and Logistics Industry through Cybersecurity Awareness, and Elevating Financial Security via Enhanced Cybersecurity Awareness and Tools.
Workshop STAM
Towards the adoption of automated cyber threat intelligence information sharing with integrated risk assessment
Valeria Valdés Ríos (Université Paris-Saclay - Montimage, France), Fatiha Zaidi (Université Paris-Saclay, CNRS, ENS Paris-Saclay, Laboratoire Méthodes Formelles, France), Ana Rosa Cavalli (Institut Polytechnique, Telecom SudParis - Montimage, France), Angel Rego (Tecnalia, Basque Research and Technology Alliance (BRTA), Spain)
Full Paper
In the domain of cybersecurity, effective threat intelligence and information sharing are critical operations for ensuring an appropriate and timely response against threats, but current platforms are limited in automation, standardization, and user-friendliness. This paper introduces a Cyber Threat Intelligence (CTI) Information Sharing platform, designed for critical infrastructures and cyber-physical systems. Our platform integrates existing cybersecurity tools and leverages digital twin technology, enhancing threat analysis and mitigation capabilities. It features an automated process for disseminating standardized and structured intelligence, utilizing the Malware Information Sharing Platform (MISP) for effective dissemination. A significant enhancement is the integration of risk assessment tools, which enriches the shared intelligence with detailed risk information, supporting informed decision-making. The platform encompasses a user-friendly dashboard and a robust backend, streamlining the threat intelligence cycle and transforming raw data from diverse sources into actionable insights. Overall, the CTI4BC platform presents a solution to overcome challenges in CTI sharing, contributing to a more resilient cybersecurity domain.
Workshop STAM
The PRECINCT Ecosystem Platform for Critical Infrastructure Protection: Architecture, Deployment and Transferability
Djibrilla Amadou Kountche (AKKODIS Research, France), Jocelyn Aubert (Luxembourg Institute of Science and Technology, Luxembourg), Manh Dung Nguyen (Montimage, France), Natalia Kalfa (ATTD, Greece), Nicola Durante (ENGINEERING, Italy), Cristiano Passerini (LEPIDA, Italy), Stephane Kuding (KONNECTA, Greece)
Full Paper
Critical infrastructures (CIs) are equipped with sensors and actuators which communicate using open (e.g., MQTT, AMQP, CoAP, Modbus, DNP3) or commercially licensed protocols (LoRa, IEC 60870-5-101, Profibus) to share data and commands. The management of these systems is also built on Information and Communication Technologies (ICT), which are considered Critical Information Infrastructure (CII). As identified by a recent European Union Agency for Cybersecurity (ENISA) study, the software used in CIs is subject to supply-chain compromise of software dependencies, human error (misconfigurations), ransomware attacks, Artificial Intelligence abuse, and the usage of legacy systems inside cyber-physical systems within CIs. This paper presents an approach to re-use ICT tools for Critical Infrastructure Protection (CIP), exploiting the Topology and Orchestration Specification for Cloud Applications (TOSCA), reference architectures and ICT automation tools to describe, deploy and orchestrate them. Our proposed approach will thus help in the re-usability of the outcomes of CIP research projects and the transferability of knowledge gained during these projects, and help researchers identify human errors, ease system updates and recovery, and identify conceptual errors in CI software architectures.
Workshop STAM
Automating Side-Channel Testing for Embedded Systems: A Continuous Integration Approach
Philipp Schloyer (Technical University of Applied Sciences Augsburg, Germany), Peter Knauer (Technical University of Applied Sciences Augsburg, Germany), Bernhard Bauer (Uni Augsburg, Germany), Dominik Merli (Technical University of Applied Sciences Augsburg, Germany)
Full Paper
Software testing is vital for strengthening the security of embedded systems by identifying and rectifying code errors, flaws and vulnerabilities. This is particularly significant when addressing vulnerabilities associated with side-channel attacks, given that they introduce a distinctive class of vulnerabilities, primarily subject to manual testing procedures. Manual testing remains prevalent despite advances in automation, posing challenges, particularly for complex environments. This research aims to automate embedded software testing on hardware in a modular and scalable manner, addressing the limitations of manual testing. We present a system designed to automate testing, including Side-Channel Analysis (SCA), in Continuous Integration (CI) environments, emphasizing accessibility and collaboration through open-source tools. Our evaluation setup based on GitLab, Jenkins and the ChipWhisperer framework shows that automating and integrating SCA in CI environments is possible in an efficient way.
Workshop STAM
A Framework Towards Assessing the Resilience of Urban Transport Systems
Gérald Rocher (Université Côte d'Azur (UniCA), Centre National de la Recherche Scientifique (CNRS, I3S), France), Jean-Yves Tigli (Université Côte d'Azur (UniCA), Centre National de la Recherche Scientifique (CNRS, I3S), France), Stéphane Lavirotte (Université Côte d'Azur (UniCA), Centre National de la Recherche Scientifique (CNRS, I3S), France), Nicolas Ferry (Université Côte d'Azur (UCA), Institut national de recherche en sciences et technologies du numérique (INRIA, Kairos), France)
Full Paper
As critical cyber-physical systems, urban transport systems are vulnerable to natural disasters and deliberate attacks. Ensuring their resilience is crucial for sustainable operations and includes the ability to withstand, absorb and recover efficiently from disruptions. Assessing the resilience of such systems requires a comprehensive set of performance indicators covering social, economic, organisational, environmental and technical concerns. In addition, the interdependence of the different modes of transport and the resulting human activities requires the inclusion of the spatial dimension to capture potential cascading failures. Furthermore, the integration of both aleatory (data) and epistemic (modelling) uncertainties is essential for robust performance indicators.

Current methods for assessing the resilience of transport systems lack standardised performance indicator systems and assessment methods, making comparative analysis and benchmarking of disruption management strategies difficult. This paper proposes a unified framework for modelling and assessing performance indicators for urban transport systems. The framework is demonstrated using a simulated scenario in Eclipse SUMO and paves the way for future research in this area.
Workshop STAM

TRUSTBUS

Individual privacy levels in query-based anonymization
Sascha Schiegg (University of Passau, Germany), Florian Strohmeier (University of Passau, Germany), Armin Gerl (HM University of Applied Sciences Munich, Germany), Harald Kosch (University of Passau, Germany)
Full Paper
Artificial intelligence systems like large language models (LLMs) source their knowledge from large datasets. Systems like ChatGPT therefore rely on shared data to train on. For enterprises, releasing data to the public domain requires anonymization as soon as an individual is identifiable. While multiple privacy models exist that guarantee a specific level of distortion applied to a dataset to mitigate re-identification, e.g. k-anonymity, the required level is in general defined by the data processor. We propose the idea of combining individual privacy levels defined by the data subjects themselves with a privacy language such as LPL (Gerl et al., 2018) to get a more fine-granular understanding of the effectively required privacy level. Queries targeting subsets of the dataset to be released can profit from lower privacy requirements set by data subjects, as these response subsets may not contain users with high privacy requirements, which can then lead to more utility. By analyzing the results of different queries directed at a privacy-aware data-transforming database system, we demonstrate the characteristics needed for this assumption to actually take effect. For a more realistic evaluation we also take changes of the underlying data sources into consideration.
Workshop TrustBus
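The core observation of the entry above, that a query's result subset only needs to satisfy the strictest privacy requirement among the subjects it actually touches, can be shown with a toy example. This is not LPL or the paper's system; the records and per-subject minimum-k values are invented.

```python
# Toy model of individual privacy levels: each data subject declares
# their own minimum k for k-anonymity.
records = [
    {"age": 25, "city": "Passau", "min_k": 2},
    {"age": 31, "city": "Passau", "min_k": 5},
    {"age": 47, "city": "Munich", "min_k": 10},
    {"age": 52, "city": "Munich", "min_k": 3},
]

def required_k(subset):
    # A release covering this subset must meet the highest individual
    # requirement among the subjects it contains.
    return max(r["min_k"] for r in subset)

# A query that only touches younger subjects avoids the strict k=10
# subject, so its result can be released with less distortion.
young = [r for r in records if r["age"] < 40]
print(required_k(young))     # → 5
print(required_k(records))   # → 10 for the full release
```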
Aligning eIDAS and Trust Over IP: A Mapping Approach
Cristian Lepore (IRIT, France), Romain Laborde (IRIT, France), Jessica Eynard (University Toulouse Capitole, France)
Full Paper
On 29 February 2024, the European Parliament approved the amendment of the eIDAS Regulation. The revision introduces new elements and a new EU Digital Identity Wallet, expected to be ready by the end of 2026. Even after the wallet is released, the numerous digital identity schemes operating within the Member States will continue to function for some time. The introduction of the new wallet and the coexistence of numerous digital identity schemes will pose challenges for service providers, who will need to adapt to support various means of identity, including the EU wallet, for their services. In response to this challenge, this study examines how to plan interoperability between eIDAS and existing frameworks. First, we organize the eIDAS components in a knowledge graph that encodes information through entities and their relations. While doing this, we highlight various design patterns and use a graph entity alignment method to map components of eIDAS and the Trust Over IP.
Workshop TrustBus
A Unified Framework for GDPR Compliance in Cloud Computing
Argyri Pattakou (Dept. of Cultural Technology and Communication, University of the Aegean, Lesvos, Greece), Vasiliki Diamantopoulou (Dept. of Information and Communication Systems Engineering, University of the Aegean, Samos, Greece), Christos Kalloniatis (Dept. of Cultural Technology and Communication, University of the Aegean, Lesvos, Greece), Stefanos Gritzalis (Department of Digital Systems, University of Piraeus, Piraeus, Greece)
Full Paper
In parallel with the rapid development of Information and Communication technologies and the digitization of information in every aspect of daily life, the enforcement of the GDPR, in May 2018, brought significant changes to the processes that organisations should follow when collecting, processing, and storing personal data, and revealed the immediate need to integrate the Regulation’s requirements into organisational activities that process personal and sensitive data. On the other hand, cloud computing is a cutting-edge technology that is widely used to support most, if not all, organisational activities. As a result, such infrastructures constitute huge pools of personal data and, in this context, careful consideration and implementation of the rules imposed by the Regulation is crucial. In this paper, after highlighting the need to consider GDPR requirements when designing cloud-based systems, we determine those GDPR compliance controls that should be incorporated at the early stages of the system design process. As a next step, these compliance controls are integrated into a holistic framework that considers both the security and privacy aspects of a cloud-based system as well as the requirements arising from the Regulation during the design of such systems.
Workshop TrustBus
A Framework for Managing Separation of Duty Policies
Sebastian Groll (University of Regensburg, Germany), Sascha Kern (Nexis GmbH, Germany), Ludwig Fuchs (Nexis GmbH, Germany), Günther Pernul (Universität Regensburg, Germany)
Full Paper
Separation of Duty (SoD) is a fundamental principle in information security. Especially large and highly regulated companies have to manage a huge number of SoD policies. These policies need to be maintained in an ongoing effort in order to remain accurate and compliant with regulatory requirements. In this work we develop a framework for managing SoD policies that pays particular attention to policy comprehensibility. We conducted seven semi-structured interviews with SoD practitioners from large organizations in order to understand the requirements for managing and maintaining SoD policies. Drawing from the obtained insights, we developed a framework, which includes the relevant stakeholders and tasks, as well as a policy structure that aims to simplify policy maintenance. We anchor the proposed policy structure in a generic IAM data model to ensure compatibility and flexibility with other IAM models. We then show exemplary how our approach can be enforced within Role-Based Access Control. Finally, we evaluate the proposed framework with a real-world IAM data set provided by a large finance company.
Workshop TrustBus
Further Insights: Balancing Privacy, Explainability, and Utility in Machine Learning-based Tabular Data Analysis
Wisam Abbasi (Informatics and Telematics Institute (IIT) of National Research Council, Italy), Paolo Mori (IIT-CNR, Italy), Andrea Saracino (Consiglio Nazionale delle Ricerche, Italy)
Full Paper
In this paper, we present further contributions to the field of privacy-preserving and explainable data analysis applied to tabular datasets. Our approach defines a comprehensive optimization criterion that balances the key aspects of data privacy, model explainability, and data utility. By carefully regulating the privacy parameter and exploring various configurations, our methodology identifies the optimal trade-off that maximizes privacy gain and explainability similarity while minimizing any adverse impact on data utility. To validate our approach, we conducted experiments using five classifiers on a binary classification problem using the well-known Adult dataset, which contains sensitive attributes. We employed (epsilon, delta)-differential privacy with generative adversarial networks as a privacy mechanism and incorporated various model explanation methods. The results showcase the capabilities of our approach in achieving the dual objectives of preserving data privacy and generating model explanations.
Workshop TrustBus
Article 45 of the eIDAS Directive Unveils the need to implement the X.509 4-cornered trust model for the WebPKI
Ahmad Samer Wazan (Zayed University, United Arab Emirates), Romain Laborde (Université Toulouse 3 Paul Sabatier, France), Abdelmalek Benzekri (Université Toulouse 3 Paul Sabatier, France), Imran Taj (Zayed University, United Arab Emirates)
Full Paper
Article 45 of the new eIDAS Directive (eIDAS 2.0) is causing a bit of shock on the Internet as it gives European governments the power to make EU-certificated web certificates accepted without the approval of web browsers/OS, which are considered to be the current gatekeepers of the WebPKI ecosystem. This paper goes beyond the current debate between the WebPKI gatekeepers and the European Commission (EC) about the implications of Article 45. It shows how both approaches do not provide full protection to web users. We propose a better approach that Europe can follow to regulate web X.509 certificates: Rather than regulating the issuance of web X.509 certificates, the EC can play the role of a validator that recommends the acceptance of certificates at the web scale.

Workshop TrustBus
Create, Read, Update, Delete: Implications on Security and Privacy Principles regarding GDPR
Michail Pantelelis (University of the Aegean, Greece), Christos Kalloniatis (Department of Cultural Technology and Communication-University of the Aegean, Greece)
Full Paper
Create, Read, Update and Delete operations (CRUD) are a well-established abstraction to model data access in software systems of different architectures. Most system requirements, generated during the specification phase, will be realized by combining these operations on different entities of the system under development. The majority of these requirements express business operations and objectives. Security requirements come on top of business requirements in a mostly network-connected world, and neglecting them risks the existence of a software system as a business. Through the enforcement of privacy laws, modern systems must also legally comply with privacy requirements or face the possibility of high fines. While there is great interest in methodologies to elicit security and privacy requirements, little has been done to practically apply those requirements during the software development phase. This paper investigates the implications of these four basic operations regarding security and privacy principles as implied by the law. The analysis findings aim to raise awareness among developers about privacy when implementing high-level business requirements, and result in a bottom-up compliance procedure regarding privacy and the GDPR by proposing a systematic approach in this direction.
Workshop TrustBus
The Trade-off Between Privacy & Quality for Counterfactual Explanations
Vincent Dunning (Netherlands Organisation for Applied Scientific Research (TNO), Netherlands), Dayana Spagnuelo (Netherlands Organisation for Applied Scientific Research (TNO), Netherlands), Thijs Veugen (Netherlands Organisation for Applied Scientific Research (TNO), University of Twente, Netherlands), Sjoerd Berning (Netherlands Organisation for Applied Scientific Research (TNO), Netherlands), Jasper van der Waa (Netherlands Organisation for Applied Scientific Research (TNO), Netherlands)
Full Paper
Counterfactual explanations are a promising direction of explainable AI in many domains such as healthcare. These explanations produce a counterexample from the dataset that shows, for example, what should change about a patient to reduce their risk of developing diabetes type 2. However, this poses a clear privacy risk when the dataset contains information about people. Recent literature shows that this risk can be mitigated by using k-anonymity to generalise the explanation, such that it is not about a single person. In this paper, we investigate the trade-offs between privacy and explanation quality in the medical domain. Our results show that for around 40% of the explained cases, the real gain in privacy is limited as the generalisation increases while the explanations continue decreasing in quality.

These findings suggest that this can be an unsuitable strategy in some situations, as its effectiveness depends on characteristics of the underlying dataset.
Workshop TrustBus
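The privacy/quality tension in the entry above can be made concrete with a small sketch of k-anonymity by generalisation: widening the bins of a numeric attribute grows the equivalence classes (larger k, more privacy) but makes a counterfactual like "lower your age into this range" less specific. The data below is invented, not the paper's medical dataset.

```python
# Toy k-anonymity generalisation: bin a numeric attribute and measure k
# as the size of the smallest equivalence class.
from collections import Counter

ages = [26, 27, 29, 34, 36, 38, 44, 46, 52, 57]

def generalise(value, bin_width):
    lo = (value // bin_width) * bin_width
    return f"{lo}-{lo + bin_width - 1}"

for width in (5, 10, 25):
    bins = Counter(generalise(a, width) for a in ages)
    k = min(bins.values())   # smallest equivalence class = achieved k
    print(f"bin width {width}: k = {k}, bins = {dict(bins)}")
```

Wider bins can only merge equivalence classes here (the bin boundaries are nested), so k never decreases, while a counterfactual expressed as a 25-year range is plainly less actionable than a 5-year one.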
Deployment of Cybersecurity Controls in the Norwegian Industry 4.0
Kristian Kannelønning (NTNU, Norway), Sokratis Katsikas (Norwegian University of Science and Technology, Norway)
Full Paper
Cybersecurity threats and attacks on industry are increasing, and the outcome of a successful cyber-attack can be severe for organizations. A successful cyber-attack on an industry where Cyber-Physical Systems are present can be particularly devastating, as such systems could cause harm to people and the environment if they malfunction. This paper reports on the results of a survey investigating what security measures organizations implement within the industry to strengthen their security posture. The survey instrument was developed using the NIST Special Publication "Guide to Operational Technology (OT) Security" and contained 70 questions to determine the level of security controls deployed within the Norwegian industry. The results show that the average usage of the different security controls is 63%, and 53% of the organizations have a security controls usage of 60% or more. The most used security control is backup of critical software, whereas the two least used are OT-specific cybersecurity training and response planning. Both are highlighted as areas for improvement. Dedicated OT security standards have not been found to influence the level of security controls used. However, employees within an organization following a dedicated security standard have higher cybersecurity knowledge.
Workshop TrustBus
Trust-minimizing BDHKE-based e-cash mint using secure hardware and distributed computation
Antonín Dufka (Masaryk University, Czechia), Jakub Janků (Masaryk University, Czechia), Petr Švenda (Masaryk University, Czechia)
Full Paper
The electronic cash (or e-cash) technology based on the foundational work of Chaum is emerging as a scalability and privacy layer atop expensive and traceable blockchain-based currencies. Unlike trustless blockchains, e-cash designs inherently rely on a trusted party with full control over the currency supply. Since this trusted component cannot be eliminated from the system, we aim to minimize the trust it requires.

We approach this goal from two angles. Firstly, we employ misuse-resistant hardware to mitigate the risk of compromise via physical access to the trusted device. Secondly, we divide the trusted device's capabilities among multiple independent devices, in a way that ensures unforgeability of its currency as long as at least a single device remains uncompromised. Finally, we combine both these approaches to leverage their complementary benefits.

In particular, we surveyed blind protocols used in e-cash designs with the goal of identifying those suitable for misuse-resistant, yet resource-constrained devices. Based on the survey, we focused on the BDHKE-based construction, which is suitable for implementation on devices with limited resources. Next, we proposed a new multi-party protocol for distributing the operations needed in BDHKE-based e-cash and analyzed its security. Finally, we implemented the protocol for the JavaCard platform and demonstrated the practicality of the approach by measuring its performance on a physical smartcard.
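The blind signing flow at the heart of BDHKE (blind Diffie–Hellman key exchange) can be sketched as follows. This is our own toy illustration over a multiplicative group modulo a small prime rather than the elliptic-curve groups real deployments use; all names and parameters are invented, and the code is not secure for real use.

```python
import hashlib
import secrets

# Toy group: multiplicative group mod a Mersenne prime (demo only, NOT secure).
P = 2**61 - 1
G = 3

def hash_to_group(msg: bytes) -> int:
    # Stand-in for hash-to-curve: deterministically map a message to a group element.
    e = int.from_bytes(hashlib.sha256(msg).digest(), "big") % (P - 1)
    return pow(G, e, P)

# Mint key pair: private k, public K = g^k.
k = secrets.randbelow(P - 2) + 1
K = pow(G, k, P)

# Client blinds the token secret x with random r: B' = H(x) * g^r.
x = b"token-secret"
Y = hash_to_group(x)
r = secrets.randbelow(P - 2) + 1
B_blind = (Y * pow(G, r, P)) % P

# Mint signs the blinded value without learning Y: C' = (B')^k.
C_blind = pow(B_blind, k, P)

# Client unblinds: C = C' / K^r = Y^k, a valid mint signature on x.
C = (C_blind * pow(pow(K, r, P), -1, P)) % P
assert C == pow(Y, k, P)
```

The identity C' / K^r = Y^k · g^rk / g^kr = Y^k holds in any cyclic group, which is why the mint never sees the unblinded token it signs.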
Workshop TrustBus
Elevating TARA: A Maturity Model for Automotive Threat Analysis and Risk Assessment
Manfred Vielberth (Continental Engineering Services GmbH, Germany), Kristina Raab (University of Regensburg, Germany), Magdalena Glas (University of Regensburg, Germany), Patrick Grümer (Continental Engineering Services GmbH, Portugal), Günther Pernul (University of Regensburg, Germany)
Full Paper
The importance of automotive cybersecurity is increasing in tandem with the evolution of more complex vehicles, fueled by trends like V2X or over-the-air updates. Regulatory bodies are trying to cope with this problem with the introduction of ISO 21434, which standardizes automotive cybersecurity engineering. One piece of the puzzle for compliant cybersecurity engineering is the creation of a TARA (Threat Analysis and Risk Assessment) for identifying and managing cybersecurity risks. The more time security experts invest in creating a TARA, the more detailed and mature it becomes. Thus, organizations must balance the benefits of a more mature TARA against the costs and resources required to achieve it. However, there is a lack of guidance on determining the appropriate level of effort. In this paper, we propose a data-driven maturity model as a management utility facilitating the decision on the maturity-cost trade-off for creating TARAs. To evaluate the model, we conducted interviews with seven automotive cybersecurity experts from the industry.
Workshop TrustBus
What Johnny thinks about using two-factor authentication on GitHub: A survey among open-source developers
Agata Kruzikova (Masaryk University, Czechia), Jakub Suchanek (Masaryk University, Czechia), Milan Broz (Masaryk University, Czechia), Martin Ukrop (Red Hat, Czechia), Vashek Matyas (Masaryk University, Czechia)
Full Paper
Several security issues in open-source projects demonstrate that developer accounts get misused or stolen if weak authentication is used. Many services have started to enforce two-factor authentication (2FA) for their users. This is also the case for GitHub, the largest open-source development platform. We surveyed 110 open-source developers on GitHub to explore how they perceive the importance of authentication on GitHub. Our participants perceived secure authentication as important as other security mechanisms (e.g., commit signing) to improve open-source security. 2FA usage by the project owner was perceived as one of the most important mechanisms.

Around half of the participants (51%) were aware of the planned 2FA enforcement on GitHub. Their perception of this enforcement was rather positive. They agreed with enforcing 2FA for new devices and new locations, but they were slightly hesitant to use it after some time. They also tended to agree that various user groups on GitHub should be required to use 2FA. Our participants also perceived GitHub authentication methods positively with respect to their usability and security. Most of our participants (68%) reported that they had enabled 2FA on their GitHub accounts.
Workshop TrustBus
A Trust and Reputation System for Examining Compliance with Access Control
Thomas Baumer (Nexis GmbH, Germany), Johannes Grill (Universität Regensburg, Germany), Jacob Adan (Universität Regensburg, Germany), Günther Pernul (Universität Regensburg, Germany)
Full Paper
Trust is crucial when a truster allows a trustee to carry out desired services. Regulatory authorities thus set requirements for organizations under their jurisdiction to ensure a basic trust level. Trusted auditors periodically verify the auditee's compliance with these requirements. However, the quality of the auditees' compliance and the auditors' verification performance often remain unclear and unavailable to the public. In this work, we examine the regulations of Identity and Access Management (IAM) and identify typical patterns. We enhance these patterns to include trust measurements for the auditee providing services and the auditors verifying compliance. We demonstrate the feasibility of this approach for an application utilizing decentralized blockchain technologies and discuss the implications, potential, and benefits of this architecture.
Workshop TrustBus
OOBKey: Key Exchange with Implantable Medical Devices Using Out-Of-Band Channels
Mo Zhang (University of Birmingham, United Kingdom; University of Melbourne, Australia), Eduard Marin (Telefonica Research, Spain), Mark Ryan (University of Birmingham, United Kingdom), Vassilis Kostakos (University of Melbourne, Australia), Toby Murray (University of Melbourne and Data61, Australia), Benjamin Tag (Monash University, Australia), David Oswald (University of Birmingham, School of Computer Science, United Kingdom)
Full Paper
Implantable Medical Devices (IMDs) are widely deployed today and often use wireless communication. Establishing a secure communication channel to these devices is challenging in practice. To address this issue, researchers have proposed IMD key exchange protocols, particularly ones that leverage an Out-Of-Band (OOB) channel such as audio, vibration and physiological signals. While these solutions have advantages over traditional key exchange, they are often proposed in an ad-hoc manner and lack a systematic evaluation of their security, usability and deployability properties. In this paper, we provide an in-depth analysis of existing OOB-based solutions for IMDs and, based on our findings, propose a novel IMD key exchange protocol that includes a new class of OOB channel based on human bodily motions. We implement prototypes and validate our designs through a user study (N = 24). The results demonstrate the feasibility of our approach and its unique features, establishing a new direction in the context of IMD security.
Workshop TrustBus
DealSecAgg: Efficient Dealer-Assisted Secure Aggregation for Federated Learning
Daniel Demmler (ZAMA, Germany), Joshua Stock (Universität Hamburg, Germany), Henry Heitmann (Universität Hamburg, Germany), Janik Noel Schug (Universität Hamburg, Germany)
Full Paper
Federated learning eliminates the necessity of transferring private training data and instead relies on the aggregation of model updates. Several publications on privacy attacks show how these individual model updates are vulnerable to the extraction of sensitive information. State-of-the-art secure aggregation protocols provide privacy for participating clients, yet they are constrained by high computation and communication overhead.

We propose the efficient secure aggregation protocol DealSecAgg. The cryptographic scheme is based on a lightweight single-masking approach and allows the aggregation of the global model under encryption. DealSecAgg utilizes at least one additional dealer party to outsource the aggregation of masks and to reduce the computational complexity for mobile clients. At the same time, our protocol is scalable and resilient against client dropouts.

We provide a security proof and experimental results regarding the performance of DealSecAgg. The experimental evidence on the CIFAR-10 data set confirms that, using our protocol, model utility remains unchanged compared to federated learning without secure aggregation. Furthermore, the results show how our work outperforms other state-of-the-art masking strategies both in the number of communication rounds per training step and in computational costs, which grow linearly in the number of active clients. By employing our protocol, runtimes can be reduced by up to 87.8% compared to related work.
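The dealer-assisted single-masking idea can be illustrated with a minimal sketch. This is our own simplification: in the real protocol the masks are protected cryptographically and client dropouts are handled, both of which this toy omits.

```python
import random

DIM, MOD = 4, 2**32  # vector length and modulus for masking (illustrative values)

def mask_vector(update, rng):
    # Add a fresh random mask to the update; the server only ever sees the sum.
    m = [rng.randrange(MOD) for _ in range(DIM)]
    masked = [(u + mi) % MOD for u, mi in zip(update, m)]
    return masked, m

rng = random.Random(0)
updates = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]  # clients' model updates

masked_updates, masks = [], []
for u in updates:
    mu, m = mask_vector(u, rng)
    masked_updates.append(mu)   # sent to the aggregation server
    masks.append(m)             # sent (encrypted, in the real protocol) to the dealer

# Server sums the masked updates; the dealer sums the masks and reveals only the total,
# so neither party learns any individual update.
server_sum = [sum(col) % MOD for col in zip(*masked_updates)]
dealer_mask_sum = [sum(col) % MOD for col in zip(*masks)]
aggregate = [(s - d) % MOD for s, d in zip(server_sum, dealer_mask_sum)]
assert aggregate == [15, 18, 21, 24]
```

The design point this captures is that aggregating the masks is outsourced to the dealer, so clients do one cheap masking step instead of pairwise key agreement.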
Workshop TrustBus

WSDF & COSH

Forensic Analysis of Artifacts from Microsoft’s Multi-Agent LLM Platform AutoGen
Clinton Walker (Louisiana State University, United States), Taha Gharaibeh (Louisiana State University, United States), Ruba Alsmadi (Louisiana State University, United States), Cory Hall (MITRE, United States), Ibrahim Baggili (Louisiana State University, United States)
Full Paper
Innovations in technology bring new challenges that need to be addressed, especially in the field of technical artifact discovery and analysis that enables digital forensic practitioners. Digital forensic analysis of these innovations is a constant challenge for digital investigators. In the rapidly evolving landscape of Artificial Intelligence (AI), keeping up with the digital forensic analysis of each new tool is a difficult task. New, advanced Large Language Models (LLMs) can produce human-like artifacts because of their complex textual processing capabilities. One of the newest innovations is a multi-agent LLM framework by Microsoft called AutoGen. AutoGen enables the creation of a team of specialist LLM-backed agents in which the agents "chat" with each other to plan, iterate, and determine when a given task is complete. Typically, one of the agents represents the human user while the other agents work autonomously after the human gives each agent a responsibility on the team. Thus, from a digital forensics perspective, it is necessary to determine which artifacts are created by the human user and which are created by the autonomous agents. Analysis in this work indicates that the current implementation of AutoGen leaves few artifacts for attribution outside of particular memory artifacts, yet shows strong indicators of usage in disk and network artifacts. Our research provides the initial account of the digital artifacts of the LLM technology AutoGen and the first artifact examination of an LLM framework.
Workshop WSDF
Forensic Investigation of Humanoid Social Robot: A Case Study on Zenbo Robot
Joseph Brown (Louisiana State University, United States), Abdur Rahman Onik (Mr, United States), Ibrahim Baggili (Louisiana State University, United States)
Full Paper
The Internet of Things (IoT) plays a significant role in our daily lives, as interconnection and automation positively impact our societal needs. In contrast to traditional devices, IoT devices require connectivity and data sharing to operate effectively. This interaction necessitates that data resides on multiple platforms and often across different locations, posing challenges from a digital forensic investigator's perspective. Recovering a full trail of data requires piecing together elements from various devices and locations. IoT-based forensic investigations involve an increasing quantity of objects of forensic interest, uncertainty about device relevance in terms of digital artifacts or potential data, blurry network boundaries, and edgeless networks, each of which poses new challenges for the identification of significant forensic artifacts. One example of the positive societal impact of IoT devices is that of humanoid robots, with applications in public spaces such as assisted living, medical facilities, and airports. These robots use the IoT to provide varying functionality but rely heavily on supervised learning to adapt their use of the IoT to various environments. A humanoid robot can be a rich source of sensitive data about individuals and environments, and this data may assist in digital investigations, delivering additional information during a crime investigation. In this paper, we present our case study on the Zenbo humanoid robot, exploring how Zenbo could be a witness to a crime. In our experiments, a forensic examination was conducted on the robot to locate all useful evidence from multiple locations, including root-level directories, using logical acquisition.
Workshop WSDF
Blue Skies from (X’s) Pain: A Digital Forensic Analysis of Threads and Bluesky
Joseph Brown (Louisiana State University, United States), Abdur Rahman Onik (Mr, United States), Ibrahim Baggili (Louisiana State University, United States)
Full Paper
This paper presents a comprehensive digital forensic analysis of the social media platforms Threads and Bluesky, juxtaposing their unique architectures and functionalities against X. This research fills a gap in the extant literature by offering a novel forensic analysis of Threads and Bluesky, based on established techniques. Mobile forensic analysis of both platforms yielded few results. Network analysis produced a variety of artifacts for Bluesky, including plaintext passwords. Threads proved to be robust, and a presentation of its security and API flow is presented. A detailed depiction of the forensic analysis performed for this paper is presented to aid future investigators.
Workshop WSDF
Give Me Steam: A Systematic Approach for Handling Stripped Symbols in Memory Forensics of the Steam Deck
Ruba Alsmadi (Louisiana State University, United States), Taha Gharaibeh (Louisiana State University, United States), Andrew Webb (Louisiana State University, United States), Ibrahim Baggili (Louisiana State University, United States)
Full Paper
The Steam Deck, developed by Valve, combines handheld gaming with desktop functionality, creating unique challenges for digital forensics due to its Linux-based SteamOS and its stripped symbol tables. This research addresses how to conduct reliable memory forensics on the Steam Deck. Employing LiME (Linux Memory Extractor) and Volatility 3, we acquire and analyze volatile memory, a process complicated by SteamOS's stripped symbol table, which obscures forensic reconstruction of memory structures. Our approach reconstructs these symbols and adapts forensic tools to the Steam Deck's architecture. Our results include the successful generation and validation of symbol tables and the patching of profiles to align with system configurations. During gameplay, we observed a significant increase in platform-related and game-related processes, highlighting the system's dynamic operation while gaming. These findings contribute to improving forensic methodologies for similar Linux-based devices, enhancing our capability to extract valuable forensic data from modern gaming consoles.
Workshop WSDF
Don’t, Stop, Drop, Pause: Forensics of CONtainer CheckPOINTs (ConPoint)
Taha Gharaibeh (Louisiana State University, United States), Steven Seiden (Louisiana State University, United States), Mohamed Abouelsaoud (Louisiana State University, United States), Elias Bou-Harb (Louisiana State University, United States), Ibrahim Baggili (Louisiana State University, United States)
Full Paper
In the rapidly evolving landscape of cloud computing, containerization technologies such as Docker and Kubernetes have become instrumental in deploying, scaling, and managing applications. However, these containers pose unique challenges for memory forensics due to their ephemeral nature. As memory forensics is a crucial aspect of incident response, our work combats these challenges by acquiring a deeper understanding of containers, leading to the development of a novel, scalable tool for container memory forensics. Through experimental and computational analyses, our work investigates the forensic capabilities of container checkpoints, which capture a container's state at a specific moment in time. We introduce ConPoint, a tool created for the collection of these checkpoints. We focused on three primary research questions: What is the most forensically sound approach for checkpointing a container's memory and filesystem? How long does the volatile memory evidence reside in memory? And how long does the checkpoint process take on average to complete? Our proposed approach allowed us to successfully take checkpoints and recover all intentionally planted artifacts (i.e., artifacts generated at runtime) from the tested container checkpoints. Our experiments determined the average time for checkpointing a container to be 0.537 seconds, based on a total of (n = 45) checkpoints acquired from containers running different databases. The proposed work demonstrates the pragmatic feasibility of implementing checkpointing as an overarching strategy for container memory forensics and incident response.
Workshop WSDF
Sustainability in Digital Forensics
Sabrina Friedl (University of Regensburg, Germany), Charlotte Zajewski (Universität Regensburg, Germany), Günther Pernul (Universität Regensburg, Germany)
Full Paper
Sustainability has become a crucial aspect of modern society and research. The emerging fusion of digital spaces with societal functions highlights the importance of sustainability. With digital technologies becoming essential, cybersecurity and digital forensics are gaining prominence. While cybersecurity's role in sustainability is recognized, sustainable practices in digital forensics are still in their early stages. This paper presents a holistic view of innovative approaches for the sustainable design and management of digital forensics concerning people, processes, and technology. It outlines how these aspects contribute to sustainability, which aligns with the core principles of economic viability, social equity, and environmental responsibility. As a result, this approach provides novel perspectives on the development of sustainability in the field of digital forensics.
Workshop WSDF
ScaNeF-IoT: Scalable Network Fingerprinting for IoT Device
Tadani Nasser Alyahya (University of Southampton School of Electronics and Computer Science, United Kingdom), Leonardo Aniello (University of Southampton School of Electronics and Computer Science, United Kingdom), Vladimiro Sassone (University of Southampton School of Electronics and Computer Science, United Kingdom)
Full Paper
Recognising IoT devices through network fingerprinting contributes to enhancing the security of IoT networks and supporting forensic activities. Machine learning techniques have been extensively utilised in the literature to optimise IoT fingerprinting accuracy. Given the rapid proliferation of new IoT devices, a current challenge in this field is how to make IoT fingerprinting scalable, which involves efficiently updating the machine learning model in use to enable the recognition of new IoT devices. Some approaches have been proposed to achieve scalability, but they all suffer from limitations such as large memory requirements to store training data and decreasing accuracy for older devices.

In this paper, we propose ScaNeF-IoT, a novel scalable network fingerprinting approach for IoT devices based on online stream learning and features extracted from fixed-size session payloads. Employing online stream learning allows the model to be updated without retaining training data. This, alongside relying on fixed-size session payloads, enables scalability without deteriorating recognition accuracy. We implement ScaNeF-IoT by analysing TCP/UDP payloads and utilising the aggregated Mondrian forest as the online stream learning algorithm. We provide a preliminary evaluation of ScaNeF-IoT's accuracy and how it is affected as the model is updated iteratively to recognise new IoT devices. Furthermore, we compare ScaNeF-IoT's accuracy with other IoT fingerprinting approaches, demonstrating that it is comparable to the state of the art and does not worsen as the classifier model is updated, despite not retaining any training data for older IoT devices.
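The online-stream-learning property the abstract relies on — updating the model per sample without retaining past training data — can be sketched with a toy incremental classifier. A hand-rolled nearest-centroid learner stands in here for the Mondrian-forest model; all class names and feature values are invented for illustration.

```python
class IncrementalCentroids:
    """Toy online learner: keeps only running per-class sums, never the samples."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn_one(self, x, label):
        # Update running feature sums for the class; the sample itself is discarded.
        s = self.sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict_one(self, x):
        # Classify by squared distance to each class centroid (sum / count).
        def dist(label):
            c = self.counts[label]
            return sum((v - s / c) ** 2 for v, s in zip(x, self.sums[label]))
        return min(self.sums, key=dist)

clf = IncrementalCentroids()
# Feature vectors could be derived from fixed-size session payloads.
clf.learn_one([0.1, 0.9], "camera")
clf.learn_one([0.2, 0.8], "camera")
clf.learn_one([0.9, 0.1], "plug")
# A new device class can be added later without retraining on old data.
clf.learn_one([0.5, 0.5], "thermostat")
assert clf.predict_one([0.15, 0.85]) == "camera"
```

The scalability claim follows from the same shape: adding a device class touches only that class's running statistics, so memory stays constant in the number of past samples.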
Workshop WSDF
Timestamp-based Application Fingerprinting in NTFS
Michael Galhuber (Wittur Group, Austria), Robert Luh (St. Pölten University of Applied Sciences, Austria)
Full Paper
The NTFS file system contains crucial (meta-)information that plays a significant role in forensic analysis. Among these details are the eight file timestamps, which serve as the foundation for constructing a reliable timeline. However, beyond their temporal significance, these timestamps also harbor valuable clues. Specifically, the patterns of file handling by user programs are reflected in these timestamps. By analyzing these "fingerprint" patterns, it becomes possible to identify the applications responsible for creating and editing files. This discovery facilitates event reconstruction in digital forensics investigations.

In this study, we explore the extent to which timestamp patterns can be harnessed for application fingerprinting. Our approach involves creating classification models based on neural networks and evaluating their performance using established machine learning metrics. The results demonstrate that analyzing user-file timestamps allows us to associate and narrow down potential user programs for specific file types and applications. By automating this process, we significantly reduce the duration of the analysis phase in forensic investigations, providing relief to resource-constrained IT forensic experts. This novel application fingerprinting method enables swift initial assessments of programs involved in cybercrime incidents.
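The timestamp-pattern idea can be made concrete with a small sketch: different applications update different subsets of a file's eight NTFS timestamps, so the pattern of which stamps moved forms a fingerprint. The field names and the editor behaviour below are our own illustrative assumptions, not taken from the paper.

```python
from datetime import datetime, timedelta

# The eight NTFS timestamps: four in $STANDARD_INFORMATION, four in $FILE_NAME.
STAMPS = ["si_created", "si_modified", "si_mft", "si_accessed",
          "fn_created", "fn_modified", "fn_mft", "fn_accessed"]

def change_pattern(before, after):
    # Binary fingerprint: which of the eight timestamps changed.
    return tuple(int(after[s] != before[s]) for s in STAMPS)

t0 = datetime(2024, 1, 1, 12, 0, 0)
before = {s: t0 for s in STAMPS}

# Hypothetical editor that rewrites the file in place: the $STANDARD_INFORMATION
# modified/MFT/accessed stamps move, while the $FILE_NAME stamps stay put.
after = dict(before,
             si_modified=t0 + timedelta(seconds=5),
             si_mft=t0 + timedelta(seconds=5),
             si_accessed=t0 + timedelta(seconds=5))

assert change_pattern(before, after) == (0, 1, 1, 1, 0, 0, 0, 0)
```

A classifier such as the neural networks used in the paper would consume vectors like this (plus timestamp deltas) and map them to candidate applications.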
Workshop WSDF
Manipulating the Swap Memory for Forensic Investigation
Maximilian Olbort (FernUniversität in Hagen, Germany), Daniel Spiekermann (FH Dortmund, Germany), Jörg Keller (FernUniversität in Hagen, Germany)
Full Paper
Swap memory plays a critical role in modern operating systems' memory management. This paper explores the potential for manipulating swap memory to alter memory content at runtime and thereby control the behaviour of the target system. While conventional memory security techniques typically focus on preventing runtime manipulation of memory pages, they often overlook the moment when pages are swapped and later reloaded into memory. Therefore, we investigate the feasibility of manipulating swap memory and describe the necessary steps of extracting involved memory areas as well as techniques to force swapping of relevant processes. We verify this theoretical concept with a prototype implementing a manipulation of memory of a given program.
Workshop WSDF
Using DNS Patterns for Automated Cyber Threat Attribution
Cristoffer Leite (Eindhoven University of Technology, Netherlands), Jerry Den Hartog (Eindhoven University of Technology, Netherlands), Daniel Ricardo dos Santos (Forescout Technologies, Netherlands)
Full Paper
Linking attacks to the actors responsible is a critical part of threat analysis. Threat attribution, however, is challenging. Attackers try to avoid detection and avert attention to mislead investigations. The trend of attackers using malicious services provided by third parties also makes it difficult to discern between attackers and providers. Besides that, manual-only analysis can overwhelm a security team's analysts. As a result, the effective use of any trustworthy information for attribution is paramount, and automating this process is valuable. For this purpose, we propose an approach to perform automated attribution using a currently underutilised source of reliable information: the DNS patterns used by attackers. Our method creates recommendations based on similar patterns observed between a new incident and already attributed attacks and then generates a list of the most similar attacks. We show that our approach can, at ten recommendations, achieve 0.8438 precision and 0.7378 accuracy. We also show that DNS patterns have a short lifespan, which allows them to remain useful even in more recent knowledge bases.
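The recommendation step described above can be sketched as scoring a new incident's DNS patterns against already-attributed attacks and returning the top-k most similar actors. The pattern labels, actor names, and the use of Jaccard similarity are our own illustrative choices, not the paper's method.

```python
def jaccard(a: set, b: set) -> float:
    # Set-overlap similarity between two pattern sets.
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical knowledge base of attributed attacks and their DNS patterns.
attributed = {
    "APT-A": {"dga-v1", "fast-flux", "ns-self-host"},
    "APT-B": {"dga-v2", "wildcard-subdomains"},
    "crimeware-C": {"fast-flux", "wildcard-subdomains"},
}

def recommend(incident_patterns: set, k: int = 2) -> list:
    # Rank known actors by similarity to the new incident's patterns.
    scores = {actor: jaccard(incident_patterns, pats)
              for actor, pats in attributed.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

assert recommend({"dga-v1", "fast-flux"})[0] == "APT-A"
```

In practice the knowledge base would be built from threat intelligence, and the similarity measure would be tuned to the precision/accuracy figures the paper reports.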
Workshop WSDF
A Quantitative Analysis of Inappropriate Content, Age Rating Compliance, and Risks to Youth on the Whisper Platform
Jeng-Yu Chou (University of Massachusetts Amherst, United States), Brian Levine (University of Massachusetts Amherst, United States)
Full Paper
We perform an in-depth, quantitative examination of a prominent app by studying the content it sends to users, including minors. Whisper is a popular app that encourages interactions among anonymous users posting short confessional-style texts overlaid on images. We instrumented a system to collect Whisper data over a nine-week period, consisting of 23,516 unique posts. We trained classifiers to detect sexual content appearing in the text of these posts, estimating that 23% contain sexual content, including requests to meet up for sex with strangers. Whisper's lowest age rating is set for children 13 and older. Our characterization of the collected Whisper data yielded insight into the content circulating on the social media platform, such as the frequency of posts with detected sexual content, community behavior, and age rating compliance. Our data collection and annotation methodology yielded insight into the limitations of accurately detecting age-inappropriate content and the potential dangers apps may pose to children.
Workshop COSH

IWSECC & SecHealth (joint session)

Proxy Re-Encryption for Enhanced Data Security in Healthcare: A Practical Implementation
Pablo Cosio (i2CAT Foundation, Spain)
Full Paper
In the rapidly evolving digital healthcare landscape, the imperative for robust, flexible, and scalable data protection solutions has never been more critical. The advent of sophisticated cyber threats, coupled with the increasing complexity of healthcare IT infrastructures, underscores the necessity for advanced security mechanisms that can adapt to a wide range of challenges without compromising the accessibility or integrity of sensitive healthcare data. Within this context, our work introduces the SECANT Privacy Toolkit, a pioneering approach that harnesses the power of Proxy Re-Encryption (PRE) to redefine healthcare data security. We present an implementation prototype that not only serves as a baseline for the quantitative evaluation of healthcare data protection but also exemplifies the SECANT Toolkit's capability to enhance interoperability across disparate healthcare systems, strengthen authentication mechanisms, and ensure scalability amidst the growing data demands of modern healthcare networks. This prototype underscores our commitment to addressing the multifaceted security needs of the healthcare sector by providing a solution that is both comprehensive and adaptable to the dynamic landscape of digital health information security. By integrating cutting-edge cryptographic technologies, including Attribute-Based Encryption (ABE) and Searchable Encryption (SE), with the flexibility and control offered by PRE, the SECANT Privacy Toolkit stands at the forefront of secure and efficient healthcare data management.
This integration facilitates not only the secure exchange of data across decentralized networks but also empowers healthcare providers with tools for fine-grained access control and privacy-preserving data searches, thereby addressing key challenges such as data interoperability, cybersecurity threats, and regulatory compliance. Our exploration reveals the toolkit's potential to revolutionize the way healthcare data is protected, shared, and accessed, providing a scalable, efficient, and user-friendly platform for healthcare providers, patients, and stakeholders. The SECANT Privacy Toolkit not only aligns with current healthcare data security requirements but also anticipates future challenges, ensuring that it remains a vital asset in the ongoing effort to safeguard sensitive healthcare information. This work contributes significantly toward enhancing the security and privacy of healthcare data, offering a robust framework for interoperability, authentication, and scalability that responds to the evolving needs of the healthcare industry. Through the deployment of our prototype and the subsequent evaluation, we aim to demonstrate the practicality, effectiveness, and transformative potential of the SECANT Privacy Toolkit in advancing healthcare data protection.
Workshop SecHealth
The State of Boot Integrity on Linux - a Brief Review
Robert Haas (Institute of IT Security Research, St.Pölten University of Applied Sciences, Austria), Martin Pirker (Institute of IT Security Research, St.Pölten University of Applied Sciences, Austria)
Full Paper
With the upcoming generational change from Windows 10 to Windows 11, the Trusted Platform Module as a security-supporting component will be a requirement for every common PC. While the TPM has already seen use in some applications, its near-future ubiquitous presence in all PCs motivates an updated review of TPM-supporting software. This paper focuses on the software ecosystem that supports secure boot, a chain of measurements for integrity assessments, and challenges in remote attestation. A brief reflection on the state of the various projects gives a rough overview, but is not an exhaustive and in-depth survey. Still, this short paper contributes to the ongoing adoption and reflection of TPM v2's features and opportunities.
Workshop IWSECC
Telemetry data sharing based on Attribute-Based Encryption schemes for cloud-based Drone Management system
Alexandr Silonosov (Blekinge Institute of Technology, Sweden), Lawrence Henesey (Blekinge Institute of Technology, Sweden)
Full Paper
The research presented in the paper evaluates practices of Attribute-Based Encryption, leading to a proposed end-to-end encryption strategy for a cloud-based drone management system. Though extensively used for efficiently gathering and sharing video surveillance data, these systems also collect telemetry information containing sensitive data.

This paper presents a study addressing the current state of knowledge, methodologies, and challenges associated with supporting cryptographic agility for End-to-End Encryption (E2EE) for telemetry data confidentiality.

To enhance cryptographic agility performance, a new metric has been introduced for cryptographic library analysis that improves the methodology by considering Attribute-Based Encryption (ABE) with a conventional key-encapsulation mechanism in OpenSSL. A comprehensive series of experiments are undertaken to simulate cryptographic agility within the proposed system, showcasing the practical applicability of the proposed approach in measuring cryptographic agility performance.
Workshop IWSECC

IWCC & EPIC-ARES (joint session)

Detection of AI-Generated Emails - A Case Study
Paweł Gryka (Warsaw University of Technology, Poland), Kacper Gradoń (Warsaw University of Technology, Poland), Marek Kozłowski (Warsaw University of Technology, Poland), Miłosz Kutyła (Warsaw University of Technology, Poland), Artur Janicki (Warsaw University of Technology, Poland)
Full Paper
This is a work-in-progress paper on detecting whether a text was written by humans or generated by a language model. In our case study, we focused on email messages. For experiments, we used a mixture of publicly available email datasets and our in-house data, containing in total over 10k emails. We then generated their "copies" using large language models (LLMs) with specific prompts. We experimented with various classifiers and feature spaces. We achieved encouraging results, with F1-scores of almost 0.98 for email messages in English and over 0.92 for those in Polish, using Random Forest as a classifier. We found that the detection model relied strongly on typographic and orthographic imperfections of the analyzed emails and on statistics of sentence lengths. We also observed inferior results for Polish, highlighting the need for further research in this direction.
Workshop IWCC
Unveiling the Darkness: Analysing Organised Crime on the Wall Street Market Darknet Marketplace using PGP Public Keys
Shiying Fan (Fraunhofer SIT, Germany), Paul Moritz Ranly (Fraunhofer SIT, Germany), Lukas Graner (Fraunhofer SIT, Germany), Inna Vogel (Fraunhofer SIT, Germany), Martin Steinebach (Fraunhofer SIT, Germany)
Full Paper
Darknet marketplaces (DNMs) are digital platforms for e-commerce that are primarily used to trade illegal and illicit products. They incorporate technological advantages for privacy protection and contribute to the growth of cybercriminal activities. In the past, researchers have explored methods to investigate multiple identities of vendors covering different DNMs. Leaving aside phenomena such as malicious forgery of identities or Sybil attacks, usernames and their corresponding PGP public keys are used to build brands around users and are considered a trusted method of vendor authentication across DNMs.

This paper demonstrates a forensic method for linking users on a DNM called the Wall Street Market using shared PGP public keys. We developed a trading reputation system to evaluate the transaction behaviour of each user group sharing PGP keys (i.e., PGP groups). Based on the reputation indicators we introduced, we compared PGP groups with high, medium and low reputation levels. Our research suggests that the observed PGP groups exhibit organisational structures that vary with their reputation levels, ranging from more organised, dense cooperation to looser forms of cooperation. As this paper provides an in-depth understanding of PGP-key-based user networks on a DNM, it is of particular interest for the detection of organised criminal groups on DNMs.
Workshop IWCC
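The core linking step described above — grouping users who share a PGP public key — can be sketched as a union-find over (username, key-fingerprint) observations. The code below is an illustrative sketch with hypothetical data, not the authors' implementation.

```python
from collections import defaultdict

def pgp_groups(observations):
    """Group usernames that share at least one PGP public-key fingerprint.
    `observations` is an iterable of (username, fingerprint) pairs.
    Returns a list of frozensets, one per connected group."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Tag nodes by kind so a username can never collide with a fingerprint.
    for user, fpr in observations:
        union(("u", user), ("k", fpr))

    groups = defaultdict(set)
    for node in list(parent):
        kind, name = node
        if kind == "u":
            groups[find(node)].add(name)
    return [frozenset(g) for g in groups.values()]
```

Two usernames end up in the same group whenever a chain of shared fingerprints connects them, which matches the transitive linking the paper exploits.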
An Exploratory Case Study on Data Breach Journalism
Jukka Ruohonen (University of Southern Denmark, Denmark), Kalle Hjerppe (University of Turku, Finland), Maximilian von Zastrow (University of Southern Denmark, Denmark)
Full Paper
This paper explores the novel topic of data breach journalism and data breach news through the case of databreaches.net, a news outlet dedicated to data breaches and related cyber crime. Motivated by the issues in traditional crime news and crime journalism, the case is explored by means of text mining. According to the results, the outlet has kept a steady publishing pace, mainly focusing on plain and short reporting but with generally high-quality source material for the news articles. Despite these characteristics, the news articles exhibit fairly strong sentiments, which is partially expected due to the emotionally laden nature of crime and the long history of sensationalism in crime news. The site has also covered the full scope of data breaches, although many of these are fairly traditional, exposing personal identifiers and financial details of the victims. Hospitals and the healthcare sector also stand out. With these results, the paper advances the study of data breaches by considering them from the perspective of media and journalism.
Workshop IWCC
ParsEval: Evaluation of Parsing Behavior using Real-world Out-in-the-wild X.509 Certificates
Stefan Tatschner (Fraunhofer AISEC; University of Limerick, Germany), Sebastian N. Peters (Fraunhofer AISEC; Technical University of Munich, Germany), Michael P. Heinl (Fraunhofer AISEC; Technical University of Munich, Germany), Tobias Specht (Fraunhofer AISEC, Germany), Thomas Newe (University of Limerick, Ireland)
Full Paper
X.509 certificates play a crucial role in establishing secure communication over the internet by enabling authentication and data integrity. Equipped with a rich feature set, the X.509 standard is defined by multiple, comprehensive ISO/IEC documents. Due to its internet-wide usage, there are different implementations in multiple programming languages, leading to a large and fragmented ecosystem. This work addresses the research question “Are there user-visible and security-related differences between X.509 certificate parsers?”. Relevant libraries offering APIs for parsing X.509 certificates were investigated and an appropriate test suite was developed. From 34 libraries, 6 were chosen for further analysis. The X.509 parsing modules of the chosen libraries were called with 186,576,846 different certificates from a real-world dataset, and the observed error codes were investigated. This study reveals an anomaly in wolfSSL’s X.509 parsing module and shows that there are fundamental differences in the ecosystem. While related studies nowadays mostly focus on fuzzing techniques that produce artificial certificates, this study confirms that available X.509 parsing modules differ considerably and yield different results, even for real-world out-in-the-wild certificates.
Workshop EPIC-ARES
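The cross-library comparison described above can be sketched as a differential-testing harness that feeds the same DER-encoded certificate bytes to several parser callables and compares their outcomes. The harness below is a generic sketch; the parser stand-ins are hypothetical and not the six libraries the study actually tested.

```python
def differential_parse(cert_der: bytes, parsers: dict) -> dict:
    """Feed the same certificate bytes to each parser and record the
    outcome: ("ok", result) on success, ("error", message) otherwise."""
    outcomes = {}
    for name, parse in parsers.items():
        try:
            outcomes[name] = ("ok", parse(cert_der))
        except Exception as exc:
            outcomes[name] = ("error", str(exc))
    return outcomes

def is_divergent(outcomes: dict) -> bool:
    """True if the parsers disagree on whether the input is parseable --
    the kind of user-visible difference the study searches for."""
    statuses = {status for status, _ in outcomes.values()}
    return len(statuses) > 1
```

Run over a large certificate corpus, inputs flagged by `is_divergent` are exactly the cases where implementations disagree and warrant manual investigation.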