Keynote Speakers

We are proud to announce the confirmed keynote speakers for 2022.
The day, time and room of each keynote will be announced prior to the conference.

ARES Keynotes

Steven Furnell
University of Nottingham, United Kingdom

The strange world of the password

Despite years of evidence of poor practice, people continue to choose weak passwords and continue to be allowed to do so. Normally, if something is broken then the answer is to fix or replace it. However, with passwords the problem seems able to persist unchecked and we continue to use them extensively despite the flaws. Adding further evidence of the issue, this presentation reports on the fifth run of a study into the provision of password guidance and the enforcement of password rules by a series of leading websites. The investigation has been conducted every 3-4 years since 2007 and the latest findings continue to reveal areas of notable weakness. This includes many sites still offering little or no meaningful guidance, and still permitting users to choose passwords that ought to be blocked at source. It seems that while we remain ready to criticise users for making poor choices, we repeatedly fail to take steps that would help them to do better.

Steven Furnell is a professor of cyber security at the University of Nottingham in the United Kingdom. He is also an Adjunct Professor with Edith Cowan University in Western Australia and an Honorary Professor with Nelson Mandela University in South Africa. His research interests include usability of security and privacy, security management and culture, and technologies for user authentication and intrusion detection. He has authored over 350 papers in refereed international journals and conference proceedings, as well as various books, book chapters and industry reports. Prof. Furnell is the UK representative to Technical Committee 11 (security and privacy) within the International Federation for Information Processing, as well as the editor-in-chief of Information and Computer Security, and a Fellow and board member of the Chartered Institute of Information Security.

CD-MAKE Keynotes

Alexander Jung
Assistant Professor, Aalto University, Finland
Associate Editor, IEEE Signal Processing Letters

Explainable Empirical Risk Minimization

The successful application of machine learning (ML) methods increasingly depends on their interpretability or explainability. Designing explainable ML systems is instrumental to ensuring transparency of automated decision-making that targets humans. The explainability of ML methods is also an essential ingredient for trustworthy artificial intelligence. A key challenge in ensuring explainability is its dependence on the specific human user (“explainee”).
The users of machine learning methods might have vastly different background knowledge about machine learning principles. One user might have a university degree in machine learning or a related field, while another might have received no formal training beyond high-school mathematics. We measure explainability via the conditional entropy of the predictions, given some user signal. This user signal might be obtained from user surveys or biophysical measurements.
We propose the explainable empirical risk minimization (EERM) principle, which learns a hypothesis that optimally balances subjective explainability against risk.
The EERM principle is flexible and can be combined with arbitrary machine learning models. We present several practical implementations of EERM for linear models and decision trees. Numerical experiments demonstrate the application of EERM to detecting the use of inappropriate language on social media.
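As a rough illustration of how such a balance might be set up, the sketch below combines a standard empirical risk term for a linear model with a penalty that approximates the conditional entropy of the predictions given a user signal (via a binned conditional-variance proxy, under a Gaussian assumption). The function name eerm_objective, the binning scheme, and the trade-off parameter lambda_reg are assumptions made for illustration only; this is not the implementation presented in the talk.

    import numpy as np

    def eerm_objective(w, X, y, user_signal, lambda_reg=1.0, n_bins=5):
        """Empirical risk plus a subjective-explainability penalty (illustrative sketch).

        The penalty approximates the conditional entropy of the predictions given
        the user signal: under a Gaussian assumption this is, up to constants, the
        log of the conditional variance, estimated here by binning the user signal
        and averaging the within-bin variance of the predictions.
        """
        preds = X @ w
        risk = np.mean((preds - y) ** 2)                # empirical risk (squared loss)

        # bin the user signal into quantile bins and average prediction variance per bin
        bins = np.quantile(user_signal, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.digitize(user_signal, bins[1:-1]), 0, n_bins - 1)
        cond_var = np.mean([preds[idx == b].var() for b in range(n_bins)
                            if np.any(idx == b)])
        penalty = 0.5 * np.log(cond_var + 1e-12)        # conditional-entropy proxy

        return risk + lambda_reg * penalty

    # toy usage with synthetic data and a crude search over random candidate weights
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
    u = X[:, 0] + rng.normal(scale=0.5, size=200)       # hypothetical user signal
    candidates = [rng.normal(size=3) for _ in range(50)]
    best_w = min(candidates, key=lambda w: eerm_objective(w, X, y, u, lambda_reg=0.1))

Larger values of lambda_reg push the learner toward hypotheses whose predictions are easier to anticipate from the user signal, at the cost of a higher empirical risk.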

Alexander Jung received the Ph.D. degree (sub auspiciis) in 2012 from the Technical University of Vienna (TU Vienna). After post-doctoral periods at TU Vienna and ETH Zurich, he joined Aalto University as an Assistant Professor for Machine Learning in 2015. He leads the group “Machine Learning for Big Data”, which studies explainable machine learning in network-structured data. Prof. Jung first-authored a paper that won a Best Student Paper Award at IEEE ICASSP 2011. He received an AWS Machine Learning Research Award and was named “Computer Science Teacher of the Year” at Aalto University in 2018. He currently serves as an associate editor for the IEEE Signal Processing Letters and as the chair of the IEEE Finland Joint Chapter on Signal Processing and Circuits and Systems. He authored the textbook Machine Learning: The Basics (Springer, 2022).


Matthew E. Taylor
Director, Intelligent Robot Learning Lab, Associate Professor & Graduate Admissions Chair, Computing Science
Fellow and Fellow-in-Residence, Alberta Machine Intelligence Institute
Canada CIFAR AI Chair, Amii

Reinforcement Learning in the Real World: Challenges and Opportunities for Human-Agent Interaction

While reinforcement learning (RL) has had many successes in video games and toy domains, recent success in high-impact problems shows that this mature technology can be useful in the real world. This talk will highlight some of these successes, with an emphasis on how RL is making an impact in commercial settings, as well as what problems remain before it can become plug-and-play like many supervised learning technologies. Further, we will argue that RL, like all current AI technology, is fundamentally a human-in-the-loop paradigm. This framing helps motivate why additional fundamental research on the interaction between humans and RL agents is critical to moving RL out of the lab and into the hands of non-academic practitioners.

Matt Taylor is an Associate Professor of Computing Science at the University of Alberta, where he directs the Intelligent Robot Learning Lab. He is also a Fellow and Fellow-in-Residence at Amii (the Alberta Machine Intelligence Institute). His current research interests include fundamental improvements to reinforcement learning, applying reinforcement learning to real-world problems, and human-AI interaction. His book “Reinforcement Learning Applications for Real-World Data”, co-authored with Osborne and Singh, is aimed at practitioners without degrees in machine learning and has an expected release date of Summer 2022.

Workshop Keynotes

SP2I

Dr. Xiaolu Hou
Faculty of Informatics and Information Technologies, Slovak University of Technology

Artificial Intelligence-Assisted Side Channel Attacks

Deep neural networks (DNN) have gained popularity in the last decade owing to advances in available computational resources. In particular, side-channel attacks (SCA) have received the most attention: since SCA is essentially a classification problem, DNN come as a natural candidate. In this talk, we will first provide the basics of SCA and explain how they can be used to recover the secret key of a cryptographic implementation. Then, we will present the recent literature on applications of DNN to SCA. As a demonstration, we will detail a work that proposes a general framework supporting the user through the overall DNN-aided trace analysis, minimizing the need for architecture adjustments by the user.
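To make the classification framing concrete, the snippet below is a hypothetical sketch of a profiled, DNN-assisted attack on one key byte, not the framework presented in the talk: a small MLP scores each power trace against 256 assumed leakage classes (e.g., an S-box output value), and the per-trace log-probabilities are accumulated into a score for every key-byte guess. The trace length, network shape, and placeholder S-box table are all assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    N_SAMPLES = 700        # points per power trace (assumed)
    N_CLASSES = 256        # one class per predicted intermediate (e.g., S-box output) value

    # Small MLP mapping a raw trace to scores over the 256 intermediate values.
    # In the profiling phase it would be trained on traces from a device with a
    # known key; here it is left untrained purely for illustration.
    model = nn.Sequential(
        nn.Linear(N_SAMPLES, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, N_CLASSES),
    )

    def recover_key_byte(model, attack_traces, plaintext_bytes, sbox):
        """Accumulate per-trace log-probabilities into a score for each key-byte guess."""
        with torch.no_grad():
            log_probs = torch.log_softmax(model(attack_traces), dim=1).numpy()
        scores = np.zeros(256)
        for guess in range(256):
            classes = sbox[plaintext_bytes ^ guess]     # predicted leakage class per trace
            scores[guess] += log_probs[np.arange(len(classes)), classes].sum()
        return int(np.argmax(scores))                   # highest-scoring key-byte guess

    # toy usage with random data (a real attack needs measured traces and the AES S-box)
    sbox = np.arange(256, dtype=np.uint8)               # placeholder lookup table
    traces = torch.randn(100, N_SAMPLES)
    plaintexts = np.random.randint(0, 256, size=100, dtype=np.uint8)
    key_guess = recover_key_byte(model, traces, plaintexts, sbox)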

Dr. Xiaolu Hou is currently an Assistant Professor at Slovak University of Technology. She received her Ph.D. degree in Mathematics from Nanyang Technological University, Singapore, in 2017. Her current research focus is on fault injection and side-channel attacks on both cryptographic implementations and neural networks. She also has research experience in AI-assisted cryptanalysis, location privacy, and multiparty computation.

Assoc. Prof. Gabriele Costa
IMT School for Advanced Studies Lucca
https://www.imtlucca.it/en

Security-by-Design in Intelligent Infrastructures: the HAII-T orchestrator

In recent years, Security-by-Design has emerged as the main methodology for securing the life cycle of software and systems. Its effectiveness is the result of a strong integration with all the development phases, from the earliest conceptualization and design to the final disposal. Large-scale, critical infrastructures can benefit the most from this approach. Nevertheless, they also carry an extreme degree of complexity that must be dealt with. In this talk we will consider the SPARTA perspective on the definition and implementation of a secure orchestrator for making intelligent infrastructures Secure-by-Design.

Gabriele Costa is an associate professor in the SySMA group of the IMT School for Advanced Studies Lucca. He received his M.Sc. in Computer Science in 2007 and his Ph.D. in Computer Science in 2011, both from the University of Pisa. He was a member of the cybersecurity group of the Istituto di Informatica e Telematica (IIT) of the CNR. His appointments include a period as a visiting researcher at ETH Zurich in 2016-2017. He was a co-founder of the Computer Security Laboratory (CSec) at DIBRIS (the Computer Science and Computer Engineering Department of the University of Genoa). He is a co-founder and the CRO of Talos, a spin-off of DIBRIS focused on cybersecurity. His main focus is on studying and applying formal methods for the automatic verification and security testing of mobile and modular systems.