Newsletter #5

Back to the roots: anomaly detection and outlier detection have always been the most exciting areas of machine learning for me. Although the field comes across as niche, its principles are rather general.

At its core, it is about density (level-set) estimation, unsupervised learning, and, increasingly important given the deep learning hype, representation learning. It is not restricted to a specific data type (continuous, discrete, time series, trees, graphs, you name it), method, or application. Moreover, there are so many different ideas on how to approach the problem (proximity-based, angle-based, extreme-value, kernel-based, Bayesian, information-theoretic, etc.) that it never gets boring…
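To make the proximity-based idea concrete, here is a minimal sketch (my own illustrative code, not from any of the papers below) of the classic k-nearest-neighbor anomaly score: each point is scored by its distance to its k-th nearest neighbor, so points in low-density regions get large scores.

```python
import numpy as np

def knn_anomaly_scores(X, k=5):
    """Score each point by the distance to its k-th nearest neighbor.

    Larger scores indicate lower local density, i.e. more anomalous
    points. Brute-force O(n^2) pairwise distances, which is fine for
    small illustrative data sets.
    """
    X = np.asarray(X, dtype=float)
    # Pairwise Euclidean distance matrix via broadcasting.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # After sorting each row, column 0 is the self-distance (0),
    # so the k-th neighbor distance sits in column k.
    return np.sort(d, axis=1)[:, k]

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(100, 2))   # a Gaussian cluster
outlier = np.array([[8.0, 8.0]])                # one far-away point
X = np.vstack([inliers, outlier])

scores = knn_anomaly_scores(X, k=5)
print(scores.argmax())  # → 100, the index of the injected outlier
```

The same distance-to-neighbor statistic is exactly what the nearest-neighbor analysis paper listed below studies in a rigorous statistical framework.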

Traditionally, KDD—where I am a program committee member—has been the main top-tier conference for anomaly detection research. However, NeurIPS has had its share of exciting papers as well. So, here is a list of some recent AD papers and resources that I find interesting.
Our picks of the week
Recent work

Deep Variational Semi-Supervised Novelty Detection
T. Daniel, T. Kurutach, A. Tamar, arXiv:1911.04971, 2019

GANomaly: Semi-supervised Anomaly Detection via Adversarial Training
Samet Akcay, Amir Atapour-Abarghouei, Toby P. Breckon

Transfer Anomaly Detection by Inferring Latent Domain Representations 
Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara

Statistical Analysis of Nearest Neighbor Methods for Anomaly Detection 
Xiaoyi Gu, Leman Akoglu*, Alessandro Rinaldo

*Leman Akoglu (Associate Professor, CMU) is one of the organizers of the ODD workshop on outlier detection, co-located with KDD, and a key figure in anomaly detection research.

PIDForest: Anomaly Detection via Partial Identification 
Parikshit Gopalan, Vatsal Sharan, Udi Wieder

PyOD: A Python Toolbox for Scalable Outlier Detection 
Yue Zhao, Zain Nasrullah, Zheng Li, JMLR, 2019 
