
AI Infrastructure Engineer, Model Serving Platform

Scale AI
On-site
San Francisco, CA, United States
Full-time
$175,000 - $220,000

About Scale AI

At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.

About the Role

As an AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of LLMs. Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.

The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.

Qualifications

4+ years of experience building large-scale, high-performance backend systems.

Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).

Experience with LLM serving and routing fundamentals (e.g., rate limiting, token streaming, load balancing, and budget enforcement).

Experience with LLM capabilities and concepts such as reasoning, tool calling, and prompt templates.

Experience with containers and orchestration tools (e.g., Docker, Kubernetes).

Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).

Proven ability to solve complex problems and work independently in fast-moving environments.
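To give a flavor of the serving fundamentals listed above, here is a minimal token-bucket rate limiter sketch. This is purely illustrative: the class and parameter names are hypothetical and do not reflect Scale's actual implementation.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical names)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if a request of the given cost may proceed."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A serving layer would typically keep one bucket per API key or tenant and reject (or queue) requests when `allow` returns False.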

Responsibilities

Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.

Build an internal platform that enables discovery of LLM capabilities.

Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.

Conduct architecture and design reviews to uphold best practices in system design and scalability.

Develop monitoring and observability solutions to ensure system health and performance.

Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.