Grégory Chatel is a seasoned AI leader and researcher with 11 years of experience, currently guiding R&D and data science at DISAITEK while driving AI innovation at Intel and teaching machine learning at Université Gustave Eiffel. He holds a PhD in computer science with a focus on combinatorial algebra, and a deep-rooted passion for deep learning informs both his research and his production-grade work. An active open-source contributor, he has refined PyTorch-based transformer models, added modular heads for classification and similarity, and improved dataset encoding and loss computation in HuggingFace transformers and the OpenAI LM integration. His academic role as a machine learning teacher complements his industry positions, enabling him to translate complex theory into practical AI solutions across finance and tech. Based in Gagny, Île-de-France, he combines research excellence, robust software engineering, and a track record of shipping scalable ML systems.
🐥 A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI
Role in this project:
ML Engineer
Contributions: 25 commits, 11 PRs, 27 comments in 2 months
Contributions summary: Grégory primarily focused on refining and extending the functionality of a PyTorch-based transformer language model. His contributions included refactoring existing code for clarity, introducing new head modules for tasks such as classification and similarity, and modifying the model's architecture to accommodate different task types. He also adjusted the data encoding and loss computation processes. The changes aimed to improve flexibility and support for various downstream applications.
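As a rough illustration of the kind of modular head modules described above, the sketch below attaches a classification head and a similarity head to a transformer's hidden states. The class names, pooling choice (last token, as in GPT-style models), and dimensions are illustrative assumptions, not the project's actual code:

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Hypothetical linear head mapping pooled hidden states to class logits."""
    def __init__(self, hidden_dim: int, n_classes: int, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(hidden_dim, n_classes)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_dim); pool on the last token,
        # as in GPT-style (left-to-right) models.
        pooled = hidden[:, -1]
        return self.proj(self.dropout(pooled))

class SimilarityHead(nn.Module):
    """Hypothetical head scoring a pair of pooled sequence representations."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
        # Element-wise product of pooled states, then a scalar score per pair.
        return self.proj(h1[:, -1] * h2[:, -1]).squeeze(-1)

# Fake transformer output: batch of 4, sequence length 16, hidden size 768.
hidden = torch.randn(4, 16, 768)
logits = ClassificationHead(768, 2)(hidden)        # shape (4, 2)
score = SimilarityHead(768)(hidden, hidden)        # shape (4,)
```

Keeping heads separate from the shared transformer body is what lets one pre-trained model serve several downstream task types.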
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Role in this project:
ML Engineer
Contributions: 23 commits, 7 PRs, 44 comments in 7 months
Contributions summary: Grégory primarily focused on developing and refining code for the SWAG (Situations With Adversarial Generations) dataset, a multiple-choice task used for evaluating language models. His contributions included defining the `SwagExample` class, implementing code to read the dataset, creating the `convert_examples_to_features` function, and integrating it with a `BertForMultipleChoice` model. He also corrected comments and improved the code structure for clarity and readability. This work integrated BERT models with a new multiple-choice dataset.
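The core idea behind a SWAG-style pipeline is to encode each (context, ending) pair separately, so a multiple-choice model sees input of shape (num_choices, seq_len) per example. The sketch below illustrates that flattening step; the field names, the toy whitespace tokenizer, and the padding scheme are simplifying assumptions, not the actual `transformers` implementation:

```python
from dataclasses import dataclass

@dataclass
class SwagExample:
    # Illustrative fields; the real class in the transformers example
    # script carries more metadata (example id, two-part context, etc.).
    context: str
    endings: list
    label: int

def convert_example_to_features(example, tokenize, max_len=32, pad_id=0):
    """Encode each (context, ending) pair into a fixed-length id sequence.

    A multiple-choice model such as BertForMultipleChoice scores each
    choice independently, so the output is one row per ending.
    """
    features = []
    for ending in example.endings:
        ids = tokenize(example.context) + tokenize(ending)
        ids = ids[:max_len] + [pad_id] * max(0, max_len - len(ids))
        features.append(ids)
    return features, example.label

# Toy whitespace tokenizer standing in for a real WordPiece tokenizer.
vocab = {}
def tokenize(text):
    return [vocab.setdefault(tok, len(vocab) + 1) for tok in text.lower().split()]

ex = SwagExample(
    context="She opens the door",
    endings=["and walks in.", "and flies away.", "then sings.", "then sleeps."],
    label=0,
)
feats, label = convert_example_to_features(ex, tokenize)  # 4 rows of length 32
```

Stacking `feats` across a batch yields the (batch, num_choices, seq_len) tensor that multiple-choice heads expect before they reshape it to (batch * num_choices, seq_len) for the encoder.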
Tags: python, bert, speech-recognition, state-of-the-art, flax