Maincode is building sovereign AI models in Australia. We are training foundation models from scratch, designing new reasoning architectures, and deploying them on state-of-the-art GPU clusters. This is not fine-tuning someone else’s work. This is building from first principles.
As an AI/ML Engineer, you’ll be part of the team that makes this real. You’ll work on both sides of the problem: how models are trained and how they run in the world. You’ll design and build the systems that power large-scale training runs and efficient inference. You’ll work closely with our AI Researchers to implement the latest algorithms and ideas.
This is a deep engineering role. You’ll be writing a lot of code, instrumenting systems, optimizing performance, and debugging weird, messy edge cases in distributed training and model serving. If you love figuring out how foundation models really work under the hood, this is your team.
We’re expanding our global network of Maincoders by establishing a new Seattle office. Join us in shaping this hub from day one while collaborating closely with our Melbourne headquarters to push forward the frontier of AI research and engineering.
What You’ll Do
- Build and scale training pipelines for sovereign foundation models, including large language models and other architectures
- Design efficient inference systems that run these models in real-world environments
- Optimize data pipelines, tokenization, batch preparation, and distributed training at multi-node scale
- Build deep observability into training and inference, improving performance, correctness, and efficiency
- Debug and troubleshoot the hardest parts of model training and deployment, including distributed failures, GPU bugs, data inconsistencies, and scaling limits
- Work closely with AI Researchers to translate cutting-edge algorithms into working systems
- Establish strong ModelOps practices to ensure reproducibility, reliability, and continuous improvement across the full lifecycle of foundation models, from initial experiments to production deployment
Who You Are
- Passionate about how models are built, trained, and run, especially large-scale foundation models
- Driven by curiosity about model internals, training dynamics, and system-level performance
- Excited to work across both training and inference, not just one side of the ML stack
- Skilled in Python and ML frameworks such as PyTorch or JAX; familiarity with distributed compute (CUDA, Triton, NCCL, etc.) is a bonus, not a requirement
- Constantly learning, whether it’s reading open-source repos, replicating research ideas, or designing your own tools to explore a problem
- Hands-on and determined. You like writing code, running experiments, and figuring things out
- Motivated to help build sovereign AI capability for Australia
Why Maincode
We are a small team building some of the most advanced AI systems in Australia. We are creating new foundation models from scratch, not just using what’s already out there.
We operate our own GPU clusters, run large-scale training, and work closely across research and engineering to push the frontier of what’s possible.
You’ll be surrounded by people who:
- Care about model internals, not just outputs
- Build things that work, at scale
- Take pride in learning, experimenting, and shipping
- Want to help Australia build independent, world-class AI systems