Tim Salimans is a Research Scientist Team Lead at Google DeepMind in Amsterdam with over a decade of experience advancing generative modeling, semi- and unsupervised learning, and reinforcement learning. He is widely known for seminal contributions to GANs (including the Inception score and semi-supervised GANs), early VAE reparameterization work that earned the 2014 Lindley Prize, and for practical techniques like distillation and classifier-free guidance that improve large generative models. Beyond theory, he has a strong engineering footprint — contributing to foundational tooling such as Theano (including GPU-era scalar ops), OpenAI’s improved-gan codebase, and gradient-checkpointing work that enables very large models to fit in memory. An entrepreneur and practitioner as well as a researcher, he founded Aidence (medical imaging diagnostics), ran a boutique data-science consultancy, is a multi-time Kaggle winner, and even has experience as a quantitative market maker, demonstrating rare breadth from production systems to high-stakes applied ML.
12 years of coding experience
6 years of employment as a software developer
BSc (Hons), Liberal Arts and Sciences (Magna Cum Laude), Major in Mathematics & Physics at University College Utrecht
PhD, Econometrics at Erasmus University Rotterdam
Exchange Semester in Australia, Science at Monash University
Code for the paper "Improved Techniques for Training GANs"
Role in this project:
ML Engineer
Contributions: 23 commits, 2 PRs, 5 pushes in 1 year 11 months
Contributions summary: Tim contributed to the `openai/improved-gan` repository, which implements improved techniques for training GANs. His commits primarily modify `nn.py`, suggesting he worked on the core neural network components: adding, removing, and refactoring network layers and helper functions. He also updated the training scripts, indicating involvement in the model training process.
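For context, one of the stabilization techniques described in that paper is feature matching: rather than training the generator to directly fool the discriminator, it is trained to match the statistics of an intermediate discriminator layer on real versus generated batches. The sketch below illustrates the idea in TensorFlow; `feature_matching_loss` and `feature_extractor` are hypothetical names used for illustration, not code from the repository.

```python
import tensorflow as tf

def feature_matching_loss(feature_extractor, real_images, fake_images):
    """Generator objective via feature matching: match the mean activation
    of an intermediate discriminator layer on real vs. generated batches."""
    real_features = feature_extractor(real_images)  # [batch, d]
    fake_features = feature_extractor(fake_images)  # [batch, d]
    real_mean = tf.reduce_mean(real_features, axis=0)
    fake_mean = tf.reduce_mean(fake_features, axis=0)
    return tf.reduce_sum(tf.square(real_mean - fake_mean))

# Toy usage with a stand-in for the discriminator's intermediate layers.
extractor = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
])
real = tf.random.normal([8, 32, 32, 3])
fake = tf.random.normal([8, 32, 32, 3])
loss = feature_matching_loss(extractor, real, fake)
```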
Contributions: 28 commits, 3 PRs, 10 pushes in 3 months
Contributions summary: Tim's work here focuses on improving the memory efficiency of gradient computation in TensorFlow. His contributions modify existing code and implement gradient checkpointing, including automatic checkpoint-selection strategies, changes to the gradient computation itself, and adjustments to improve test coverage and correctness. This work directly affects the ability to train large neural networks within memory constraints.
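The underlying idea of gradient checkpointing is to trade compute for memory: instead of storing every intermediate activation for the backward pass, only selected activations are kept, and the rest are recomputed on the fly during backpropagation. Below is a minimal sketch of the same idea using TensorFlow 2's built-in `tf.recompute_grad`, not the repository's own API.

```python
import tensorflow as tf

# Four dense layers; without checkpointing, every intermediate activation
# is kept in memory until the backward pass consumes it.
layers = [tf.keras.layers.Dense(1024, activation="relu") for _ in range(4)]

def block(x):
    for layer in layers:
        x = layer(x)
    return x

x = tf.random.normal([32, 1024])
block(x)  # build the layer variables once, outside the recomputed region

# tf.recompute_grad drops the block's intermediate activations after the
# forward pass and recomputes them during backpropagation, trading extra
# compute for a lower peak-memory footprint.
checkpointed_block = tf.recompute_grad(block)

variables = [v for layer in layers for v in layer.trainable_variables]
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(checkpointed_block(x))
grads = tape.gradient(loss, variables)
```

Applied to a deep network split into several such blocks, checkpointing roughly reduces activation memory from linear in depth to the square root of depth at the cost of one extra forward pass, which is the trade-off that automatic checkpoint-selection strategies aim to optimize.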
memory, deep-learning, nets, neural-networks, fit