Sam Bowman is a Member of Technical Staff at Anthropic in San Francisco, managing AI safety and evaluation research teams while on leave from his role as an associate professor in data science, linguistics, and computer science at NYU. He studies artificial neural network models for natural language understanding, combining faculty-level linguistic expertise with production ML engineering experience gained through four Google Brain internships and advisory roles at startups. A hands-on open-source contributor, he improved checkpointing, evaluation robustness, and ELMo model saving in the widely used jiant NLP toolkit, helping prevent the accidental evaluation of untrained models. Sam also engages with the effective-altruism community as a Giving What We Can participant, reflecting a commitment to impact beyond academia and industry.
Contributions: 6 releases, 9 reviews, 140 commits in 2 years 3 months
Contributions summary: Sam primarily contributed to checkpointing and evaluation logic in the jiant NLP toolkit, focusing on ensuring that the correct model checkpoints from pretraining and target-task training runs were loaded during evaluation. He also implemented error handling to prevent the evaluation of untrained models, fixed issues with ELMo model saving, and contributed to the results script, demonstrating expertise in model loading.
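The guard described above can be sketched in a few lines. This is a minimal illustration, not jiant's actual code: the file-naming convention (`model_state_*.best.th`) and the helper name `find_best_checkpoint` are assumptions chosen for the example; the real toolkit's layout may differ.

```python
import os


def find_best_checkpoint(run_dir, prefix="model_state_"):
    """Return the path to the latest 'best' checkpoint in run_dir.

    Raises FileNotFoundError if no checkpoint was ever saved, so a
    caller cannot silently evaluate a randomly initialized model.
    (Hypothetical naming scheme: files like 'model_state_epoch_3.best.th'.)
    """
    candidates = [
        name
        for name in os.listdir(run_dir)
        if name.startswith(prefix) and name.endswith(".best.th")
    ]
    if not candidates:
        raise FileNotFoundError(
            f"No checkpoint matching '{prefix}*.best.th' found in {run_dir}; "
            "refusing to evaluate an untrained model."
        )
    # Sort lexicographically and take the last entry as the most recent.
    return os.path.join(run_dir, sorted(candidates)[-1])
```

Failing loudly here is the key design choice: a missing checkpoint otherwise produces near-chance evaluation scores that can be mistaken for a real (bad) result.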
Contributions: 25 commits, 1 PR, 51 pushes in 1 year 2 months
Tags: javascript, react, jekyll, workshop