Fabio Roli

Researcher at University of Cagliari

Publications - 398
Citations - 21,672

Fabio Roli is an academic researcher at the University of Cagliari. He has contributed to research on topics including biometrics and the random subspace method, has an h-index of 71, and has co-authored 383 publications receiving 18,681 citations. His previous affiliations include Northwestern Polytechnical University and the University of Genoa.

Papers
Book Chapter

Evasion attacks against machine learning at test time

TL;DR: This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks.
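
The gradient-based evasion idea summarized above can be illustrated with a minimal sketch: starting from a sample the classifier flags as malicious, repeatedly step against the gradient of a differentiable discriminant function until the sample crosses the decision boundary, subject to a bound on the total perturbation. The linear model, step size, and budget below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def evade(x, w, b, step=0.1, budget=1.0, max_iter=100):
    """Gradient-descent evasion against a linear discriminant f(x) = w.x + b.

    Perturbs x to push f(x) below 0 (the benign side) while keeping the
    L2 perturbation within `budget`. Model and parameters are assumptions
    for illustration, not the paper's exact setup.
    """
    x_adv = x.astype(float).copy()
    for _ in range(max_iter):
        if w @ x_adv + b < 0:                  # already evades the classifier
            return x_adv
        x_adv -= step * w / np.linalg.norm(w)  # step against the gradient of f
        delta = x_adv - x
        dist = np.linalg.norm(delta)
        if dist > budget:                      # project back onto the L2 ball
            return x + delta * (budget / dist)
    return x_adv

# Toy usage: a point on the malicious side of the boundary w.x + b = 0.
w, b = np.array([1.0, 1.0]), -1.0
x = np.array([0.8, 0.8])
print(evade(x, w, b))  # drifts toward and past the decision boundary
```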
Book

Multiple Classifier Systems

TL;DR: Novel computational approaches for deep learning of behaviors, as opposed to just static patterns, are presented, based on structured nonnegative matrix factorizations of matrices that encode observation frequencies of behaviors.
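
As a rough illustration of the nonnegative matrix factorization mentioned in that summary, the sketch below runs plain multiplicative updates on a toy behavior-frequency matrix. The structured variants the summary refers to impose additional constraints that this minimal form omits; the matrix, rank, and iteration count are assumptions.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain NMF via multiplicative updates: V ~= W @ H with W, H >= 0.

    V is a nonnegative (behavior x observation) frequency matrix. The
    structured factorizations mentioned above add constraints on W and H
    that this minimal sketch leaves out.
    """
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H holding W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # then update W holding H fixed
    return W, H

V = np.array([[5., 3., 0.],
              [4., 0., 0.],
              [1., 1., 5.]])     # toy observation-frequency matrix
W, H = nmf(V, rank=2)
print(np.round(W @ H, 2))        # low-rank reconstruction of V
```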
Proceedings Article

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

TL;DR: A thorough overview of the evolution of this research area over the last ten years and beyond is provided, starting from pioneering early work on the security of non-deep learning algorithms and moving up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks.
Proceedings Article

Wild patterns: Ten years after the rise of adversarial machine learning half-day tutorial

TL;DR: This tutorial introduces the fundamentals of adversarial machine learning to the security community, and presents recently proposed techniques to assess the performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks.
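
A common way to assess classifier performance under attack, as the tutorial describes, is a security evaluation curve: accuracy measured while sweeping the attacker's perturbation budget. For a linear classifier the worst-case L2 attack has a closed form (a sample is misclassified once the budget exceeds its signed margin), which the sketch below exploits; the data, model, and budgets are illustrative assumptions.

```python
import numpy as np

def accuracy_under_attack(X, y, w, b, eps):
    """Accuracy of sign(w.x + b) when each sample may be perturbed by at
    most `eps` in L2 norm by a worst-case attacker.

    For a linear model the optimal attack moves a point straight toward
    the boundary, so a sample survives iff its signed margin
    y * (w.x + b) / ||w|| exceeds eps.
    """
    margins = y * (X @ w + b) / np.linalg.norm(w)
    return float(np.mean(margins > eps))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.5, (100, 2)),   # toy two-class data
               rng.normal(-1.0, 0.5, (100, 2))])
y = np.r_[np.ones(100), -np.ones(100)]
w, b = np.array([1.0, 1.0]), 0.0                  # a fixed linear model

for eps in (0.0, 0.5, 1.0, 1.5, 2.0):             # security evaluation curve
    print(f"eps={eps:.1f}  accuracy={accuracy_under_attack(X, y, w, b, eps):.2f}")
```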