About
This is a well-funded deep tech company (>$100m raised) building a new compute paradigm based on photonic AI accelerators. The goal is to push beyond the physical limits of wafer-scale GPUs and redefine how large-scale AI workloads are run.
The team is around 60 people, including engineers from leading AI hardware and systems companies. They operate across the UK and North America, with a strong in-office culture and tight collaboration between hardware, systems, and software.
You’d join at a critical stage: translating breakthrough photonic hardware into usable, high-performance AI infrastructure.
What you'll do
- Bridge hardware and software by building systems that expose photonic accelerators to real AI workloads
- Identify performance bottlenecks across the stack and design solutions spanning compiler, runtime, and kernel layers
- Work closely with hardware engineers to co-design efficient execution pathways
- Optimise model execution for throughput, latency, and power efficiency on a novel accelerator architecture
- Contribute to core infrastructure that enables large-scale training and inference on next-generation hardware
- Help shape technical direction as the company scales from prototype to production systems
What you'll need
- Strong systems engineering background in high-performance computing or ML infrastructure
- Experience working with modern compiler stacks such as MLIR, TVM, XLA, or similar intermediate representations
- Familiarity with performance modelling, tiling, scheduling, and low-level optimisation
- Exposure to AI accelerators (GPUs, TPUs, or custom silicon) and an understanding of how models map to hardware
- Experience diagnosing and optimising performance across complex hardware–software stacks
- Strong C++ skills and the ability to work close to the metal
Shortlisted candidates will be contacted within 48 hours.