Software Engineer, Technical Lead, Inference
This job posting is no longer accepting applications.
Mistral AI
Date posted: 1 month ago
Experience level: N/A
Contract type: Full-time
Job category: Software Engineering / Web Development
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
Role summary
As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment. You will lead the acquisition and automation of benchmarks, collaborate with cross-functional teams, and innovate solutions to enhance our AI-powered applications.
What you will do
• Architect and optimize the inference stack for high-volume, low-latency, and high-availability environments.
• Lead the acquisition and automation of benchmarks at both micro and macro scales.
• Introduce new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack.
• Build tools to identify bottlenecks and sources of instability, and design solutions to address them.
• Collaborate with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production.
• Optimize code and infrastructure to maximize hardware utilization and efficiency.
• Mentor and guide team members, fostering a culture of collaboration, innovation, and continuous learning.
About you
• Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization.
• Deep understanding of modern ML architectures and experience with performance optimization for inference.
• Proven track record with large-scale distributed systems, particularly performance-critical ones.
• Familiarity with PyTorch, TensorRT, CUDA, NCCL.
• Strong grasp of infrastructure, continuous integration, and continuous deployment (CI/CD) principles.
• Ability to lead and mentor team members, driving projects from concept to implementation.
• Results-oriented mindset with a bias towards flexibility and impact.
• Passion for staying ahead of emerging technologies and applying them to AI-driven solutions.
• Humble attitude, eagerness to help colleagues, and a desire to see the team succeed.
Our Culture
We're driven to build a strong company culture and are looking for individuals who align strongly with the following values:
• Reason with rigor
• Are you audacious enough?
• Make our customers succeed
• Ship early and accelerate
• Leave your ego aside
Location & Remote
This role is primarily based at one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside in Paris or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team.
In certain cases, we will also consider remote candidates based in one of the countries listed in this job posting (currently France, UK, Germany, Belgium, Netherlands, Spain and Italy). In that case, we ask all new hires to visit our Paris office:
• for the first week of their onboarding (accommodation and travel covered)
• then at least 3 days per month
What we offer
Competitive salary and equity
Health insurance
Transportation allowance
Sport allowance
Meal vouchers
Private pension plan
Generous parental leave policy
Visa sponsorship
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
JOB SUMMARY
Software Engineer, Technical Lead, Inference — Mistral AI, Paris
Posted 1 month ago · Full-time
This job posting is no longer accepting applications.