Summary
Gunnar Lund is a machine learning engineer specializing in Responsible AI and NLP. He assesses, derisks, and improves AI systems used by over 100 million users, addressing ethics, bias, toxicity, and privacy. Based in San Francisco, he translates Responsible AI policy into concrete technical solutions, combining hands-on ML development with governance and cross-functional collaboration. He is currently a Machine Learning Engineer in Responsible AI at Apple; previously he worked at Grammarly and led research projects at Harvard University on semantic and syntactic theory, along with pedagogical initiatives in linguistics. He holds a PhD in Linguistics from Harvard and a BA with honors in Linguistics and Philosophy from Boston College, with additional studies at Uppsala. Gunnar brings rigorous linguistic theory to practical, production-grade AI systems; his work sits at the intersection of linguistics, policy, and engineering, making complex ethical considerations actionable in modern language technologies.
Experience: 10 years of coding experience; 7 years of employment as a software developer
Education: Harvard University
Languages: English, French, Swedish, Turkish