Seminar series hosted by DCC/UFMG features national and international speakers

On June 11, 12, and 13 (Tuesday through Thursday), the Department of Computer Science (DCC) at UFMG will host, under the coordination of Professor Heitor Soares Ramos Filho, a series of talks by national and international researchers in room 2077 of the Instituto de Ciências Exatas (ICEx). The event is open to UFMG students, external students, and professionals interested in the topics covered. With a rich and diverse program, the event aims to bring participants into contact with these topics and to foster productive discussion with researchers who are references in their fields. Admission is free and no prior registration is required.

Tuesday (June 11) – 1 p.m.

Speaker: Fabrício Murai

Title: Devil in the Noise: Detecting Advanced Persistent Threats with Backbone Extraction

Abstract:

In the rapidly evolving field of cybersecurity, the detection and differentiated analysis of system attacks represent a constant challenge. While conventional methods primarily analyze raw data to detect anomalies, data provenance has shown promising results in advancing host intrusion detection systems. However, detecting slow-and-low attacks such as APT campaigns still poses a challenge. This work therefore presents backbone extraction as a crucial preprocessing step, filtering out irrelevant edges to detect residuals with distinctive node and edge distributions that indicate security threats. By applying our methodology to state-of-the-art benchmark datasets, we observed an increase in the performance of one-class classifiers of up to 62% in F1-score and 48% in recall on the Streamspot dataset, and of up to 40% in F1-score and 33% in recall on the DARPA3 THEIA dataset. Moreover, our results indicate mitigation of the dependency explosion problem and underscore the ability of our methodology to improve the detection landscape by shrinking graph sizes without losing the essential aspects that characterize attacks.

This talk is based on a paper recently accepted for publication at the IEEE Symposium on Computers and Communications (ISCC) to be held in Paris this year.
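As a concrete illustration of the preprocessing idea described above, the sketch below applies one classic backbone-extraction technique, the disparity filter of Serrano et al. (2009), to a toy weighted graph. The talk's own method may well differ; the graph, weights, and threshold here are hypothetical stand-ins for a provenance graph.

```python
import networkx as nx

def disparity_backbone(g: nx.Graph, alpha: float = 0.05) -> nx.Graph:
    """Keep an edge if its weight is statistically significant for either
    endpoint under the disparity filter's null model of uniform weights."""
    backbone = nx.Graph()
    for node in g:
        k = g.degree(node)
        if k < 2:                      # leaf edges carry no disparity signal
            continue
        strength = sum(d["weight"] for _, _, d in g.edges(node, data=True))
        for _, nbr, d in g.edges(node, data=True):
            p = d["weight"] / strength
            if (1 - p) ** (k - 1) < alpha:   # edge p-value below threshold
                backbone.add_edge(node, nbr, **d)
    return backbone

# Toy weighted graph standing in for a provenance graph.
g = nx.Graph()
g.add_weighted_edges_from([("a", "b", 10.0), ("a", "c", 0.1),
                           ("a", "d", 0.1), ("b", "c", 5.0)])
print(disparity_backbone(g, alpha=0.3).edges(data=True))  # keeps a-b and b-c
```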

Biography:

Fabricio Murai is an Assistant Professor in Computer Science & Data Science at Worcester Polytechnic Institute and a tenured faculty member in Computer Science at the Universidade Federal de Minas Gerais (Brazil). He received his Ph.D. in Computer Science from the University of Massachusetts Amherst in 2016. His research focuses on developing innovative AI/ML techniques that (i) learn from interconnections among real-world entities, (ii) enhance our comprehension of technological and techno-social systems through the analysis of data, and (iii) ensure equitable outcomes in high-stakes applications. Recently, he has been working on new deep learning models for a variety of applications and data modalities, such as text, images, time series, and graphs. He has published in flagship conferences in his field, such as AAAI, SDM, and ASONAM, as well as in prestigious scientific journals including Data Mining and Knowledge Discovery, ACM TKDD, and PLOS ONE. He serves as a TPC member for ACM SIGKDD, ECML-PKDD, and WWW.

Website: https://www.wpi.edu/people/faculty/fmurai

Wednesday (June 12) – 10 a.m.

Speaker: Daniela Seabra

Title: Human-Centric Computer Usage Profiling for Continuous Authentication: Feasibility, Challenges, and Applications

Abstract:

In this talk I will discuss an investigation my collaborators and I conducted to understand whether computer usage profiles composed of process-, network-, mouse-, and keystroke-related events are unique and consistent over time in a naturalistic setting, and I will discuss the challenges and opportunities of using such profiles for continuous authentication (CA). In this study, we collected ecologically valid computer usage profiles from 31 Microsoft Windows 10 users over 8 weeks and subjected the data to a comprehensive machine learning (ML) analysis involving a diverse set of online and offline classifiers. Although we found evidence of temporal consistency for most of the profiles within the study period (most of them recurring every 24 hours), our results suggest that the profiles varied over time due to factors such as days off. This suggests that online ML models, which allow for periodic retraining, might be better suited to a profile-based CA tool than offline models, where training occurs only once. The network domains accessed by users were more relevant for recognizing them than keyboard and mouse activity. Overall, our data and analysis suggest that binary classifiers (online and offline) can accurately recognize the profiles, indicating that computer usage profiling can uniquely characterize computer users.
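A minimal sketch of the online-learning idea mentioned above: a classifier updated incrementally as new behavioral events arrive, so a drifting profile (e.g., after days off) can be absorbed without retraining from scratch. The feature vectors below are hypothetical stand-ins for the study's process-, network-, mouse-, and keystroke-related events.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])            # 0 = other users, 1 = the genuine user

for day in range(7):                  # one mini-batch of event windows per day
    X_day = rng.normal(loc=0.1 * day, size=(50, 8))  # slowly drifting profile
    y_day = rng.integers(0, 2, size=50)
    clf.partial_fit(X_day, y_day, classes=classes)   # incremental update

# Score a fresh event window against the up-to-date profile model.
print(clf.predict_proba(rng.normal(size=(1, 8))))
```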

Biography:

Daniela Oliveira is a Program Director at the National Science Foundation (NSF), in the Directorate for Computer and Information Science and Engineering (CISE), Division of Computer and Network Systems (CNS), Computer Systems Research Cluster. Prior to joining NSF, she was a Professor of Cyber Security at the University of Florida, which she joined in 2014 as part of the UF Rising to Preeminence Hiring Program (Electrical and Computer Engineering Department), and before that an Assistant Professor of Computer Science at Bowdoin College. She received her B.S. and M.S. degrees in Computer Science from the Federal University of Minas Gerais in Brazil and her Ph.D. in Computer Science from the University of California, Davis. Her research interest is multidisciplinary computer security, in which she employs successful ideas from other fields to make computer systems more secure. Her current research spans systems security (kernel protection, dynamic information flow tracking, and malware analysis and detection) and socio-technical aspects of cyber security, such as cyber social engineering. She received a National Science Foundation CAREER Award in 2012 for her innovative research into operating systems' defense against attacks using virtual machines, the 2014 Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, and the 2017 Google Security, Privacy and Anti-Abuse Award. She is a National Academy of Sciences Kavli Fellow and a National Academy of Engineering Frontiers of Engineering Symposium alumna. Her research has been sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the National Institutes of Health (NIH), the MIT Lincoln Laboratory, and Google.

Website: https://danielaseabraoliveira.com/

Thursday (June 13) – 9 a.m. to 10:30 a.m.

Speaker: Alejandro Frery

Title: Introduction to Signal Analysis with Ordinal Patterns

Abstract:

Ordinal Patterns are a non-parametric transformation of subsequences. They were proposed by Bandt & Pompe in 2002 as a means to unveil the underlying dynamics that govern signals. Ordinal Patterns have several interesting properties, among them resistance to outliers and invariance under strictly increasing transformations. With more than 2600 citations, they are a powerful tool for analysing time series, images, and networks. This talk is an overview of Ordinal Patterns, ways of analysing them, applications, and future research directions.
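A minimal sketch of the Bandt & Pompe construction: each length-d window of a series is mapped to the permutation that sorts it, and the histogram of patterns yields the normalized permutation entropy. Because only the rank order of the values matters, the result is invariant under strictly increasing transformations, as noted above. The embedding dimension d = 3 and the test signals are illustrative choices.

```python
import math
from collections import Counter

import numpy as np

def ordinal_patterns(x, d=3):
    """Ordinal pattern (as a tuple) of each length-d sliding window."""
    return [tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)]

def permutation_entropy(x, d=3):
    """Shannon entropy of the pattern histogram, normalized to [0, 1]."""
    counts = Counter(ordinal_patterns(x, d))
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(d))

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(np.arange(500) / 5)))  # low: regular dynamics
print(permutation_entropy(rng.normal(size=500)))        # near 1: white noise
```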

Biography:

Alejandro C. Frery (S’91–M’94–SM’14) was born in Mendoza, Argentina, in 1960. In 1983, he received a B.Sc. in Electronic and Electrical Engineering from the Universidad de Mendoza. He received his M.Sc. in Applied Mathematics (Statistics) from the Instituto de Matemática Pura e Aplicada (IMPA, Rio de Janeiro, 1990) and his Ph.D. in Applied Computing from the Instituto Nacional de Pesquisas Espaciais (INPE, São José dos Campos, Brazil, 1993). His research interests are data visualisation, statistical computing, and stochastic modelling, with applications in signal and image processing and networks. He has held a “Huashan Scholar” position at Xidian University, Xi’an, China, since 2019. Since May 2020, he has been a Professor of Statistics and Data Science at the School of Mathematics and Statistics, Victoria University of Wellington, New Zealand. In 2018, he received the IEEE GRSS Regional Leader Award. After serving as Associate Editor for over five years, Prof. Frery was the Editor-in-Chief of the IEEE Geoscience and Remote Sensing Letters from 2014 to 2018. He was an IEEE Geoscience and Remote Sensing Society (GRSS) Distinguished Lecturer from 2015 to 2019 and served as a member of the Society’s AdCom (Advisory Committee), in charge of Future Publications, Plagiarism and Regional Symposia, from 2019 to 2022. Since 2023, he has been the Society’s Vice President of Publications.

Website: https://people.wgtn.ac.nz/alejandro.frery

10:30 a.m. – 12 p.m.

Speaker: Amir Houmansadr

Title: Revisiting Poisoning Attacks on Federated Learning

Abstract:

Federated learning (FL) is increasingly adopted by various distributed platforms: in particular, Google’s Gboard and Apple’s Siri use FL to train next-word prediction models, and WeBank uses FL for credit risk prediction. A key feature that makes FL highly attractive in practice is that it allows models to be trained collaboratively among mutually untrusted clients, e.g., Android users or competing banks. Unfortunately, this makes FL susceptible to a threat known as poisoning: a small fraction of (malicious) FL clients, who are either owned or controlled by an adversary, may act maliciously during the FL training process in order to corrupt the jointly trained global model.
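A minimal numpy sketch of the threat just described, under simplifying assumptions: the server aggregates client models by plain FedAvg (averaging), and a single malicious client submits a scaled, sign-flipped update. Real attacks and production aggregators are far more sophisticated; all shapes and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
global_w = np.zeros(4)                 # current global model parameters

def honest_update(w):
    """Benign local training, modeled as a small random step."""
    return w + rng.normal(scale=0.1, size=w.shape)

def malicious_update(w, boost=10.0):
    """Crafted update: flip the direction of a benign step and amplify it."""
    return w - boost * rng.normal(scale=0.1, size=w.shape)

client_models = [honest_update(global_w) for _ in range(9)]
client_models.append(malicious_update(global_w))

# FedAvg: the server averages client models; it cannot tell which
# contribution came from the adversary.
global_w = np.mean(client_models, axis=0)
print(global_w)                        # dominated by the single attacker
```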

In this talk, I will take a critical look at the existing literature on (mainly untargeted) poisoning attacks under practical production FL environments, by carefully characterizing the set of realistic threat models and adversarial capabilities. I will discuss some rather surprising findings: contrary to established belief, we show that FL is highly robust in threat models that represent production systems. I will also discuss some promising directions for defending against poisoning, in particular the use of supermask learning.

Biography:

Amir Houmansadr is an Associate Professor of Computer Science at UMass Amherst. He received his Ph.D. from the University of Illinois at Urbana-Champaign in 2012 and spent two years at the University of Texas at Austin as a postdoctoral scholar. Amir is broadly interested in the security and privacy of networked/AI systems. To that end, he designs and deploys privacy-enhancing technologies, analyzes network protocols and services (e.g., messaging apps and machine learning APIs) for privacy leakage, and performs theoretical analysis to derive bounds on privacy (e.g., using game theory and information theory). Amir has received several awards, including an NSF CAREER Award in 2016, a Google Faculty Research Award in 2015, a DARPA Young Faculty Award (YFA) in 2022, an ACM CCS Distinguished Paper Award in 2023, the 2023 Best Practical Paper Award from the FOCI community, a Best Paper Award at the CSAW 2023 Applied Research Competition, and a 2024 Applied Networking Research Prize (ANRP).

Website: https://people.cs.umass.edu/~amir/

EXTRA ACTIVITY – June 12, 2 p.m.

Doctoral Thesis Proposal Defense

Pedro Henrique Silva Souza Barros

Title: Uncertainty Quantification in Adversarial Federated Learning

Abstract:
This thesis investigates novel methodologies in Federated Learning (FL). This paradigm enables multiple devices to collaboratively develop a shared machine learning model while keeping all training data localized, thus enhancing client privacy: local models are trained on individual devices and then aggregated on a central server. Despite its advantages, FL is vulnerable to model poisoning attacks, in which malicious nodes inject fake model updates that jeopardize the integrity of the global model. The thesis introduces novel approaches to improve the privacy and security of distributed machine learning models against such threats.

To achieve this goal, we explore three distinct methods for quantifying uncertainty in FL models. The first, a Laplace approximation using the Hessian matrix of neural networks, is applied to detect Distributed Denial of Service (DDoS) attacks within FL settings. This method leverages the second-order derivatives of the loss function to approximate the uncertainty in model predictions, providing a refined understanding of model confidence in the presence of adversarial attacks and enhancing the detection and mitigation of DDoS attacks. The second method introduces an ad-hoc approach using a deep metric learning technique, namely SMELL. This method defines a similarity space (S-Space) to represent data more effectively by mapping pairs of elements from the original feature space into this new auxiliary space. The similarity between data pairs is quantified using markers within the S-Space, allowing for intuitive and flexible detection of anomalies and potential threats in FL environments. The third method extends the ad-hoc approach by employing Bayesian neural networks with variational inference. This extension uses Bayesian principles to model uncertainty by treating network weights as distributions rather than point estimates, allowing for a probabilistic interpretation of model outputs and improved resilience against malicious attacks.
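A minimal sketch of the first method, Laplace-approximation uncertainty, on a logistic-regression stand-in for the thesis's neural networks: fit a MAP estimate, take the posterior covariance as the inverse Hessian of the negative log-posterior, and read predictive uncertainty off the spread of sampled predictions. The data, prior strength, and step sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy binary-classification data standing in for FL traffic features.
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (sigmoid(X @ w_true) > rng.uniform(size=200)).astype(float)

# 1. MAP estimate by gradient descent (Gaussian prior with strength lam).
lam, w = 1.0, np.zeros(3)
for _ in range(1000):
    grad = X.T @ (sigmoid(X @ w) - y) + lam * w
    w -= 0.5 * grad / len(y)

# 2. Laplace approximation: posterior covariance = inverse Hessian of the
#    negative log-posterior at the MAP estimate.
p = sigmoid(X @ w)
H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(3)
cov = np.linalg.inv(H)

# 3. Predictive uncertainty: sample weights from N(w_map, cov); a large
#    spread of predictions flags low-confidence (possibly adversarial) inputs.
preds = sigmoid(rng.multivariate_normal(w, cov, size=1000) @ rng.normal(size=3))
print(f"mean={preds.mean():.3f}  std={preds.std():.3f}")
```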

By integrating these uncertainty quantification methods, the thesis aims to mitigate the risks of model poisoning attacks, thereby enhancing the robustness, reliability, and security of FL applications. Experimental results demonstrate the effectiveness of these approaches in reinforcing the integrity of distributed machine learning models under adversarial conditions.
