VOL_17
13 November 2025 at 6.00 pm
online

Robustness and Efficiency in Large-Scale AI

In the 17th edition of Better AI Meetup, we will look at robustness and efficiency in large-scale AI. As foundation models become more powerful, the dual challenges of ensuring their security and optimizing their computational footprint have never been more critical.

​To tackle these issues head-on, join us for our first-ever joint event. We’re bringing our communities together for a special Czecho-Slovak evening of deep dives and quality networking, featuring Stanislav Fort (AISLE™) and Vladimír Macko (GrizzlyTech) covering both sides of the coin: from the high-level challenge of adversarial attacks right down to the metal with GPU-aware optimization.

​Supported by the Slovak Diaspora Project under slovaks.ai, we’re excited to meet the Slovak AI community in person in Prague. While an online option is available (the link to the stream will be sent on November 12th), we hope to see as many of you as possible on-site for the full experience.

_Supported by

_Recording

_Speakers


Stanislav Fort

Founder and Chief Scientist of AISLE

Dr Stanislav Fort is the Chief Scientist of an AI research startup, specializing in robustness, interpretability, safety, and cybersecurity. He received his PhD in 2022 from Stanford University, where he worked in the Neural Dynamics and Computation Laboratory with Prof. Surya Ganguli. Previously, Stanislav was an AI Resident at Google Brain, worked on the Claude model at Anthropic, and led the language model team at Stability AI. He received his Bachelor’s and Master’s degrees in theoretical physics from the University of Cambridge. Dr Fort has published over 35 academic papers, which have received over 10,000 citations.

Talk: Security, robustness and interpretability of large-scale AI models

Adversarial attacks pose a significant challenge to the robustness, reliability, alignment and interpretability of deep neural networks, from simple computer vision models to hundred-billion-parameter language models. Despite their ubiquitous nature, our theoretical understanding of their character and ultimate causes, as well as our ability to successfully defend against them, is noticeably lacking. This talk examines the robustness of modern deep learning methods and the surprising scaling of attacks on them, and showcases several practical examples of transferable attacks on the largest closed-source vision-language models available. I will conclude with a direct analogy between the problem of adversarial examples and the much larger task of general AI alignment.
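For readers new to the topic, the canonical example of an adversarial attack is the fast gradient sign method (FGSM): perturb the input in the direction of the loss gradient, bounded in the L-infinity norm. A minimal NumPy sketch on a toy linear classifier (the function and variable names are illustrative, not from the talk):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM attack on a logistic-regression classifier.

    Moves x by eps in the sign of the loss gradient, so the
    perturbation is bounded by eps in the L-infinity norm.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad = (p - y) * w                      # d(logistic loss)/dx for label y
    return x + eps * np.sign(grad)

rng = np.random.default_rng(1)
w = rng.normal(size=8)
x = rng.normal(size=8)
adv = fgsm(x, w, 0.0, 1.0, eps=0.1)
print(np.max(np.abs(adv - x)))  # perturbation stays within eps
```

Real attacks on vision-language models are far more sophisticated (iterative, transferable, often optimized against surrogate models), but the core idea of following the loss gradient is the same.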


Vladimír Macko

Founder of GrizzlyTech

With over a decade of experience in machine learning, Vladimír began his journey in the field working with startups. He then joined Google AI as a Machine Learning Researcher, working on large-scale ML for optimization problems and AutoML. Over the past six years, Vladimír has collaborated with a wide range of organizations to bring their machine learning visions to life. Among other projects, he contributed to privacy-preserving authentication systems for a biometric company, time series classification for clinical studies, a resume grading system for a career portal, and a top-50 placement in NIST benchmarks for a facial recognition client. Currently, Vladimír focuses on pruning neural networks, making them smaller and faster, at his company GrizzlyTech.

Talk: Model compression and properties of modern GPUs

The rapid scaling of deep learning models has fueled breakthroughs across domains, but has also created a widening gap between algorithmic innovation and hardware efficiency. Model compression through pruning, quantization, distillation, and related techniques offers a principled way to bridge this gap, but its effectiveness is tightly coupled to the evolving properties of modern GPUs. This talk explores the interplay between compression techniques and GPU architectures, highlighting both theoretical insights and practical considerations. We will discuss:

  • How compression impacts computational patterns such as memory bandwidth utilization, sparsity support, and tensor core efficiency.
  • What modern GPU design choices (e.g., mixed precision, high-bandwidth memory, specialized interconnects) imply for compressed models.
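As a toy illustration of one of the compression techniques the talk covers, here is a minimal magnitude-pruning sketch in NumPy: zero out the smallest-magnitude fraction of a weight matrix (the function name is illustrative, not from GrizzlyTech's tooling). Whether such sparsity actually translates into speedups depends on exactly the GPU properties discussed above, e.g. hardware support for structured sparsity patterns.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
p = magnitude_prune(w, 0.5)
print(f"zeros: {np.mean(p == 0):.0%}")  # roughly half the weights removed
```

Unstructured sparsity like this shrinks the model but rarely speeds up dense GPU kernels; hardware-friendly patterns (e.g. block or 2:4 structured sparsity) are what tensor cores can exploit.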

_Agenda

Language: English

6:00 pm
_Introduction
6:30 pm
_Stanislav Fort, AISLE: Security, robustness and interpretability of large-scale AI models
7:00 pm
_Vladimír Macko, GrizzlyTech: Model compression and properties of modern GPUs
7:30 pm
_Networking
