AI Infrastructure Engineer, Model Serving Platform
About Scale AI
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.
About the Role
The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.
Qualifications
Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).
Experience with LLM serving and routing fundamentals (e.g., rate limiting, token streaming, load balancing, and budget enforcement).
Experience with LLM capabilities and concepts such as reasoning, tool calling, and prompt templates.
Experience with containers and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).
Proven ability to solve complex problems and work independently in fast-moving environments.
Responsibilities
Build an internal platform that enables teams to discover and evaluate LLM capabilities.
Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.
Conduct architecture and design reviews to uphold best practices in system design and scalability.
Develop monitoring and observability solutions to ensure system health and performance.
Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.