Ansh Radhakrishnan

New York, New York, United States

Summary

Ansh Radhakrishnan is a Member of Technical Staff at Anthropic, based in New York, with eight years of software engineering and machine learning experience. His background spans research roles at Redwood Research, production engineering at Google, a Yale degree in Statistics & Data Science (3.85 GPA), and completion of an ML for Alignment bootcamp. Ansh is an active open-source contributor: his work on the HumanCompatibleAI/imitation library improved documentation, added default CNN policy configurations, and fixed flaky tests, demonstrating a knack for making ML tooling more usable and reliable. He excels at bridging research and production, surfacing subtle reliability issues early and improving testability across complex ML systems.
8 years of coding experience

Github Skills (9)

pytorch (10)
pytest (10)
python (10)
documentations (9)
documentation (9)
fasterrcnn (9)
mask-rcnn (9)
faster-rcnn (9)
gymnasium (8)

Programming languages (1)

Python

Github contributions (5)

HumanCompatibleAI/imitation

Nov 2022 - Nov 2022

Clean PyTorch implementations of imitation and reward learning algorithms
Role in this project:
ML Engineer & QA Engineer
Contributions: 26 reviews, 5 commits, 9 PRs in 7 days
Contributions summary: Ansh contributed significantly to the documentation and testing of the `imitation` library, focusing on the "What is Imitation" section and the integration of CNN policies. He added default configurations for CNN policies and fixed broken and flaky tests, improving the library's usability and reliability.
pytorch, implementations, reinforcement-learning, clean, machine-learning
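The "default configurations for CNN policies" mentioned above can be illustrated with a minimal, library-free sketch. This is not the `imitation` library's actual code; it only shows the kind of default-selection logic such a contribution adds, and the policy names follow the stable-baselines3 convention that `imitation` builds on.

```python
def default_policy_for(observation_shape: tuple) -> str:
    """Return a sensible default policy name for an observation shape.

    3-D shapes (height, width, channels) are treated as image observations
    and get a CNN-based policy; flat shapes fall back to an MLP policy.
    Illustrative sketch only, not the library's real implementation.
    """
    if len(observation_shape) == 3:
        return "CnnPolicy"
    return "MlpPolicy"
```

For example, an Atari-style `(84, 84, 3)` observation would select `"CnnPolicy"`, while a flat `(8,)` state vector would select `"MlpPolicy"`.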
anshradh/trl_custom

Apr 2022 - May 2022

Applying Reinforcement Learning from Human Feedback to language models to teach them to write short story responses to writing prompts.
Contributions: 53 commits, 74 pushes, 1 branch in 18 days
prompts, reinforcement-learning, short, language-models, teach
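The RLHF workflow this project applies can be sketched conceptually: sample candidate responses, score them with a reward model, and upweight the best one. The sketch below is a toy, library-free illustration; `toy_reward` and `rlhf_iteration` are hypothetical names, and the real project used PPO via a customized `trl` setup rather than this simplified scheme.

```python
def toy_reward(response: str) -> float:
    """Stand-in reward model: favors short, on-topic story responses."""
    on_topic = 1.0 if "story" in response else 0.0
    return on_topic - 0.01 * len(response)  # length penalty keeps it short

def rlhf_iteration(candidates: list, weights: dict) -> tuple:
    """One simplified RLHF step: score candidates, upweight the best.

    A real PPO-based loop updates model parameters via a policy-gradient
    objective; here we just bump a weight to show the feedback signal.
    """
    best = max(candidates, key=toy_reward)
    weights[best] = weights.get(best, 0.0) + 1.0
    return best, weights
```

The key idea the project exercises is that the reward model, not a fixed loss on reference text, steers which generations the policy learns to prefer.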