About Pathway
Pathway is building live AI systems that think and learn in real time as humans do. Our mission is to deeply understand how and why LLMs work, fundamentally changing the way models think.
The team is made up of AI luminaries. Pathway's CTO, Jan Chorowski, co-authored papers with Geoff Hinton and Yoshua Bengio and was one of the first people to apply attention to speech. Our CSO, Adrian Kosowski, received his PhD in Theoretical Computer Science at the age of 20 and has made significant contributions across numerous scientific fields, including AI and quantum information. He also served as a professor and a coach for competitive programmers at École Polytechnique. The team also includes several of the world's top scientists and competitive programmers, alongside seasoned Silicon Valley executives.
Pathway has strong investor backing. To date, we have raised over $15M; our latest reported round was our seed. Our offices are located in Palo Alto, CA, as well as Paris, France, and Wroclaw, Poland.
The Opportunity
This is an R&D position in attention-based models.
We are currently searching for one or two R&D Engineers with a strong track record in research on machine learning models.
This is an extremely ambitious foundational project. It comes with a flexible GPU budget, guaranteed to be at least in the seven-digit range.
You Will
- perform (distributed) model training
- help improve/adapt model architectures based on experiment results
- design new tasks and experiments
- optionally: oversee the activities of team members involved in data preparation
The results of your work will play a crucial role in the success of the project.
Requirements
Cover letter
It's always a pleasure to say hi! If you could leave us 2-3 lines, we'd really appreciate it.
You are expected to meet at least one of the following criteria:
- You have published at least one paper at NeurIPS, ICLR, or ICML, either as lead author or with significant conceptual and code contributions
- You have contributed significantly to an LLM training effort that became newsworthy (topped a Hugging Face benchmark, best-in-class model, etc.), preferably using multiple GPUs
- You have spent at least 6 months working at a leading machine learning research center (e.g. Google Brain / DeepMind, Apple, Meta, Anthropic, Nvidia, Mila)
- You were an ICPC World Finalist, or an IOI, IMO, or IPhO medalist in high school
You Are
- A deep learning researcher with a track record in language models and/or RL (candidates with a vision or robotics ML background are also welcome to apply)
- Interested in improving foundational architectures and creating new benchmarks
- Experienced at hands-on experiments and model training (PyTorch, JAX, or TensorFlow)
- Knowledgeable about GPU architecture, memory design, and communication
- Comfortable with graph algorithms
- Familiar with model monitoring, git, build systems, and CI/CD
- Respectful of others
- Fluent in English
Bonus Points
- Knowledge of approaches used in distributed training
- Familiarity with Triton
- Successful track record in algorithms & data science contests
- A code portfolio you can share
Why You Should Apply
- Join an intellectually stimulating work environment
- Be a pioneer: you get to work on a new type of "Live AI" challenge involving long sequences and changing data
- Be part of an early-stage AI startup that believes in impactful research and foundational changes
Benefits
- Type of contract: Full-time, permanent
- Preferable joining date: immediate. The positions are open until filled, so please apply as soon as possible
- Compensation: six-digit annual salary based on profile and location + Employee Stock Option Plan
- Location: Remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, UK, United States, or Canada will be considered
If you meet our broad requirements but are missing some experience, don't hesitate to reach out to us.