Hi! PARIS Reading Groups: “AI Robustness & Security”
The Hi! PARIS reading groups study a topic through scientific articles, from both a theoretical and a practical point of view. They are an opportunity for interaction between our corporate donors and our affiliated academic teams around selected topics of interest.
Each edition is planned for 2-3 sessions covering one topic through 3-4 research papers. Each session combines a presentation of the mathematical models and theoretical advances by a researcher with simulations in a Python notebook by an engineer.
Registration
Please register for the event using your professional email address to receive your personal conference link. Please do not share your personalised link with others; it is unique to you. You will receive an email regarding your registration status.
AI Robustness & Security
AI robustness refers to a system’s ability to maintain reliable, accurate behavior even when faced with unexpected inputs, noisy data, distribution shifts, or deliberate adversarial attacks. A robust AI model generalizes well to new or slightly different conditions, is not overly sensitive to small changes in input, and continues to perform safely in real‑world settings such as healthcare, autonomous vehicles, or finance. Ensuring robustness often involves techniques like adversarial training, data augmentation, and formal robustness analysis to reduce the risk of brittle or unreliable behavior as AI becomes more deeply embedded in critical applications.
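To make the notion of adversarial examples concrete ahead of the sessions, here is a minimal sketch of a single-step attack (FGSM, Goodfellow et al., 2015) in PyTorch. The model, inputs, and epsilon value are illustrative placeholders, not material from any session notebook.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.1) -> torch.Tensor:
    """One gradient-sign step of size epsilon (FGSM).

    `model`, `x`, `y`, and `epsilon` are illustrative placeholders.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, mentioned above, augments training with such perturbed inputs so the model learns to classify them correctly.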
Session 1/2
Tuesday, April 14, 2026 – 2:00-3:30 PM (Online)
- Speaker: Quentin Bouniot (Hi! PARIS Chair Holder), Télécom Paris
- Title: Towards Deep Learning Models Resistant to Adversarial Attacks
- Abstract: Recent work shows that deep neural networks are vulnerable to adversarial examples, inputs nearly indistinguishable from natural data yet misclassified by the network. These vulnerabilities may be an inherent weakness of deep learning. We study adversarial robustness through the lens of robust optimization, providing a broad, principled view that unifies prior work and yields reliable, universal methods for training and attacking networks. This approach offers concrete security guarantees against any adversary and enables training of models significantly more resistant to attacks. It also introduces the notion of security against a first-order adversary as a natural, broad guarantee, an important step toward fully robust deep learning. Code and pre-trained models are available for MNIST and CIFAR10. (A minimal sketch of the paper's attack loop follows the reference below.)
Paper: “Towards Deep Learning Models Resistant to Adversarial Attacks”, Madry et al., ICLR 2018.
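For readers who want a preview of the paper's core idea: it casts robust training as a saddle-point problem, minimizing over the model parameters θ the expected worst-case loss, max over ||δ||∞ ≤ ε of L(θ, x + δ, y), and approximates the inner maximization with projected gradient descent (PGD). The sketch below is a minimal PGD attack in PyTorch; the hyperparameter values (epsilon, alpha, steps) are illustrative, not the paper's exact settings for any particular dataset.

```python
import torch
import torch.nn as nn

def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 8 / 255, alpha: float = 2 / 255,
               steps: int = 10) -> torch.Tensor:
    """Projected gradient descent within an L-infinity ball of radius epsilon.

    Hyperparameters here are placeholders, not the paper's exact settings.
    """
    # Random start inside the epsilon-ball, as recommended in the paper.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Ascent step on the loss, then project back into the epsilon-ball.
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()
```

Adversarial training in the paper then trains on these PGD examples in place of the clean inputs at every step.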
Session 2/2
Tuesday, May 12, 2026 – 2:00-3:30 PM (Online)
- Speaker: Gianni Franchi, ENSTA
Details for this session will be announced soon. Stay tuned for updates!