Development & Research Engineer @ Alpes-Grenoble: Deep and Shallow Parallel Data Processing on Supercomputers
Inria
Date posted: 2 days ago
Experience level: N/A
Contract type: Full-time
Job category: Data / Big data
Location: Saint-Martin-d'Hères
About the centre or functional department
The Centre Inria de l'Université de Grenoble groups together almost 600 people in 22 research teams and 7 research support departments.
Staff are present on three campuses in Grenoble, working in close collaboration with other research and higher education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, etc.), as well as with key economic players in the area.
The Centre Inria de l'Université Grenoble Alpes is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The centre is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.
Context and advantages of the position
The candidate will join the DataMove Inria team, located on the Univ. Grenoble Alpes campus near Grenoble. The DataMove team is a friendly and stimulating group with strong international visibility, gathering professors, researchers, PhD students, and Master's students, all pursuing research on high-performance computing.
This position will give you skills, ranging from high-performance computing to deep learning, that are in high demand.
The work is part of a collaboration with international academic partners, giving you the opportunity to work in an international context.
The hiring date is flexible, starting as early as February 2025. The initial contract runs until the end of 2026, with possibilities for extension.
The city of Grenoble is surrounded by the Alps, offering a high quality of life and access to all kinds of mountain-related outdoor activities.
Main activities
Dask ( https://www.dask.org/ ) and Ray ( https://www.ray.io/ ) are open source frameworks for distributing the execution of Python tasks and actors on supercomputers and clouds. They provide seamless parallelization of classical data processing libraries such as NumPy ( https://numpy.org/ ), Pandas ( https://pandas.pydata.org/ ), and Scikit-learn ( https://scikit-learn.org ). Ray and Dask also make it possible to deploy classical AI stacks such as PyTorch ( https://pytorch.org/ ) and JAX ( https://jax.readthedocs.io/ ), through actors on multiple GPUs, for training or inference. This makes these frameworks very popular for advanced high-performance data processing in the scientific and machine learning communities.
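As a minimal illustration of this programming model, the sketch below uses the public Dask API to run a NumPy-style computation as a graph of parallel tasks (the array shape and chunk sizes are arbitrary choices for the example):

```python
import dask.array as da

# A 40,000 x 40,000 array split into 2,000 x 2,000 chunks; each chunk
# is a regular NumPy array that can be processed as an independent task.
x = da.random.random((40_000, 40_000), chunks=(2_000, 2_000))

# The familiar NumPy-style API builds a lazy task graph...
y = (x - x.mean(axis=0)) / x.std(axis=0)

# ...which Dask schedules across the available workers on .compute().
print(y.mean().compute())
```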
Classical numerical solvers for scientific computing have been central to the development of supercomputers, as they can require up to millions of cores to simulate high-resolution systems or phenomena.
Today there is a strong need to combine large-scale solvers with such data processing tools and run them in a coupled mode on supercomputers.
We developed the open source library Deisa (code: https://github.com/GueroudjiAmal/deisa - PhD: https://theses.hal.science/tel-04194958 ) to extend Dask to classical parallel solvers based on MPI. The data produced by each MPI process of the solver are routed, as soon as they are available, to Dask workers that can execute tasks to process them. These data are exposed to the user as Dask Arrays, a distributed extension of NumPy arrays, so the user can conveniently rely on the classical Python NumPy API to process the data in parallel (operations on Dask Arrays are split into tasks that are automatically distributed to the workers).
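The resulting programming model looks roughly like the sketch below. Note that this is not Deisa's actual API: it mocks the per-rank solver output with NumPy blocks and assembles them with plain Dask, whereas Deisa streams the blocks from the running MPI solver instead of materializing them like this. It is only meant to show how a domain-decomposed field becomes one distributed array that ordinary NumPy-style code can analyze:

```python
import numpy as np
import dask.array as da

# Stand-in for the sub-domain each MPI rank would produce; here we
# fake a 2 x 2 domain decomposition with random data.
def rank_block(i, j):
    rng = np.random.default_rng(seed=i * 2 + j)
    return rng.random((1_000, 1_000))

# Assemble the per-rank blocks into a single distributed Dask Array,
# one chunk per rank, mirroring how a coupled field is exposed.
field = da.block([[rank_block(i, j) for j in range(2)] for i in range(2)])

# Analysis is then ordinary NumPy-style code; Dask turns it into
# tasks that run on the workers holding the corresponding chunks.
anomaly = field - field.mean()
print(anomaly.std().compute())
```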
We are looking for an engineer who will join our team to extend this work into a consolidated framework and participate in the development of advanced analysis scenarios:
- Performance improvements. We aim to deploy Deisa with large applications on the European exascale supercomputers.
- Support novel features by integrating AI frameworks such as JAX ( https://github.com/google/jax ) and PyTorch ( https://pytorch.org/ ); see the sketch after this list.
- Refine the programming environment by developing new APIs or algorithms to ease code coupling.
- Develop prototype data processing pipelines for two specific applications: Gysela (plasma simulation code - https://gyselax.github.io/ ) and ParFlow (water flow simulation - https://parflow.org/ ). The required data processing ranges from classical linear algebra to shallow machine learning and deep neural networks.
- Run experiments on a variety of supercomputers.
- Participate in the research activity, possibly leading to publications.
- Collaborate with other European partners, as this work is part of the European project EoCoE-III ( https://www.eocoe.eu/ ).
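As one plausible shape for the AI-framework integration mentioned above (illustrative only, not the project's actual design), a Ray actor can hold a PyTorch model and serve inference on data produced by the pipeline; with num_gpus=1 in the decorator, Ray would pin each actor to its own GPU:

```python
import ray
import torch

ray.init()

# An actor holding a small PyTorch model (a toy MLP for the example);
# several such actors can serve inference requests in parallel.
@ray.remote
class InferenceActor:
    def __init__(self):
        self.model = torch.nn.Sequential(
            torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
        )
        self.model.eval()

    def predict(self, batch):
        with torch.no_grad():
            return self.model(torch.as_tensor(batch, dtype=torch.float32)).numpy()

# Fan batches of simulation outputs over several actors in parallel.
actors = [InferenceActor.remote() for _ in range(4)]
batches = [torch.randn(8, 16).numpy() for _ in range(4)]
results = ray.get([a.predict.remote(b) for a, b in zip(actors, batches)])
```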
Through this work the candidate will gain strong expertise in high-performance computing and high-performance data analysis, join a dynamic research team, and have the opportunity to work in an international context.
Skills
We welcome candidates with a Master's degree (or equivalent) in computer science and experience with parallel programming, distributed data processing, deep learning, or numerical solvers.
Expected technical skills include Linux, Python, and some C/C++ programming practice; mastery of development processes (git, continuous integration, containers, etc.) is a plus.
No previous work experience is required, as long as you are motivated and ready to train yourself to complement your skills.
Experienced candidates are very welcome, with salary adjusted to experience.
Candidates with a PhD who are looking to complement their experience are also welcome.
A reasonable level of English is required. French is not mandatory, and Inria will provide French classes if needed.
To apply, submit your CV, references, recent grades, and, if available, your latest internship report or Master's thesis manuscript. With your application, provide any element (GitHub account, code snippets, etc.) that could help us assess your skills beyond your academic record, as well as a few references of people we can contact for feedback on your qualities.
Benefits
- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (90 days/year) and flexible organization of working hours (except for internships)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage under conditions
Remuneration
From 2,692 € (depending on experience and qualifications).