In these Google Colab notebooks I share my code and experiment results for some published algorithms.
Fast Adversarial Training Algorithms
In this notebook, I will present two fast adversarial training algorithms that appeared at NeurIPS 2019: Free-m and YOPO-m-n. The algorithms are implemented with Keras and TensorFlow. Experiments show that both algorithms are faster than standard adversarial training, since they require far fewer training epochs, while achieving comparable performance.
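The core idea of Free-m is to replay each minibatch m times and reuse every gradient computation to update both the model weights and the adversarial perturbation, so robustness comes almost "for free". Below is a minimal NumPy sketch of that idea on a logistic-regression model; the function name `free_m_step`, the learning rate, and the toy model are illustrative assumptions, not the notebook's actual Keras/TensorFlow implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_m_step(w, x, y, delta, epsilon, lr=0.1, m=4):
    """One Free-m minibatch (illustrative sketch): replay the batch m times,
    each replay updating the weights w (gradient descent) and the input
    perturbation delta (FGSM-style ascent, projected into the epsilon-ball)."""
    for _ in range(m):
        x_adv = x + delta                 # perturbed inputs
        p = sigmoid(x_adv @ w)            # predicted probabilities
        err = p - y                       # dLoss/dlogits for cross-entropy
        grad_w = x_adv.T @ err / len(y)   # gradient w.r.t. the weights
        grad_x = np.outer(err, w)         # gradient w.r.t. the inputs
        w = w - lr * grad_w               # descend on the weights
        delta = np.clip(delta + epsilon * np.sign(grad_x),
                        -epsilon, epsilon)  # ascend on the perturbation, project
    return w, delta
```

Because one pass over the data performs m weight updates, Free-m trains for roughly 1/m as many epochs as standard PGD adversarial training; the perturbation `delta` is also carried over between minibatches rather than reset to zero.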
In this talk, I discuss how to understand noise in machine learning. In the first section, I discuss the existence of critical learning periods: noise during these periods seriously jeopardizes model training. In the second section, I present a paper proving that neural networks can fit random labels, but much more slowly than clean labels; the paper also proposes a dataset complexity measure that affects both optimization and generalization. Finally, I introduce several tools that can help us analyze the effect of noise, and conclude with an open problem.