Hagay Lupesko

San Francisco, California, United States

Summary

Hagay Lupesko is a senior technology leader in AI inference, currently SVP of AI Inference at Cerebras Systems and based in San Francisco, with a decade of experience building teams and shipping large-scale training and inference products. He led engineering organizations at MosaicML (acquired by Databricks) and Meta AI, and earlier built deep learning tooling at AWS and product teams at Amazon Music. Hagay combines executive strategy with hands-on deployment experience: he contributed operational improvements to awslabs/multi-model-server, refining Docker/GPU configurations, install scripts, and nginx setups to make model serving more reliable. Known for hiring and scaling high-performing teams, he bridges research and production to accelerate real-world adoption of deep learning.
10 years of coding experience

Github Skills (10)

dockerce (10)
docker (10)
ci-cd (10)
dockers (10)
nginx (9)
gunicorn (9)
deep-learning (8)
inference (8)
python (8)
ai (8)

Programming languages (11)

TypeScript, Java, Shell, C++, C, JavaScript, HTML, XSLT

Github contributions (5)

awslabs/multi-model-server

Oct 2017 - Jun 2018

Multi Model Server is a tool for serving neural net models for inference
Role in this project: DevOps Engineer
Contributions: 2 releases, 26 commits, 16 PRs in 8 months
Contributions summary: Hagay primarily focused on updating and refining the project's build and deployment process. His contributions involved modifying Docker configurations for both CPU and GPU environments, updating the install scripts to include necessary packages, and configuring the nginx setup. He also bumped the project's version in setup.py and updated the model server configuration file, reflecting his role in managing the overall deployment lifecycle, and fixed a multi-file download issue.
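The serving stack described above (nginx fronting a gunicorn-backed model server) is a common pattern; a minimal sketch follows. The port numbers, upstream name, and route are illustrative assumptions, not taken from the actual multi-model-server configuration.

```nginx
# Sketch: nginx reverse proxy forwarding inference requests to a
# gunicorn worker pool. Ports and paths are hypothetical.
upstream model_server {
    server 127.0.0.1:8080;  # gunicorn bound locally
}

server {
    listen 80;

    location /predictions/ {
        proxy_pass http://model_server;
        proxy_read_timeout 120s;  # allow for slow model inference
    }
}
```

In this arrangement nginx handles connection buffering and timeouts while gunicorn manages the Python worker processes that run inference.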
pytorch, mxnet, serving, deep-learning, inference
lupesko/incubator-mxnet

Aug 2017 - Jan 2019

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
Contributions: 8 pushes, 11 branches in 1 year 4 months
python, scheduler, dataflow, mutation, orchestration