Software Engineer II - AI Platform
About Uber
The idea for Uber was born on a snowy night in Paris in 2008, and ever since then our DNA of reimagination and reinvention carries on. We’ve grown into a global platform powering flexible earnings and the movement of people and things in ever-expanding ways. We’ve gone from connecting rides on 4 wheels to 2 wheels to 18-wheel freight deliveries. From takeout meals to daily essentials to prescription drugs to just about anything you need at any time, along with new ways to earn. From drivers with background checks to real-time verification, safety is a top priority every single day. At Uber, the pursuit of reimagination is never finished, never stops, and is always just beginning.
About the Role
We are looking for a strong Software Engineer to be a part of a high-velocity, high-impact team at the intersection of AI fundamentals and AI infrastructure, with an emphasis on systems design. In this role, you’ll build the foundational platform capabilities that power high-impact AI products at Uber. This includes evaluation frameworks and automated integration and load testing embedded in CI/CD pipelines; managed services such as memory, chat history, agentic runtime environments, and agent and tool registries; and streamlined developer workflows that support fast iteration from experimentation to production.
Qualifications
Proficiency in Python and/or Go, with a proven track record of shipping production services.
4+ years of software engineering experience in building scalable, high-quality systems.
Demonstrated experience in system design, service reliability, and scalability; comfortable evaluating trade-offs for real-world systems.
Excellent communication skills, with the ability to articulate technical decisions.
Proficiency with AI assistant tools to accelerate the development process, and a strong intuition for AI capabilities.
Hands-on experience with the end-to-end AI agent development lifecycle using popular frameworks such as LangChain, CrewAI, or AutoGen.
Understanding of LLM-based systems; familiarity with prompt engineering, fine-tuning, or embedding-based retrieval frameworks; familiarity with the challenges surrounding AI agent evaluation.
Responsibilities
Design evaluation frameworks and automated testing integrated into CI/CD pipelines to ensure agent quality, reliability, and performance.
Develop managed platform services such as memory, chat history, and agent runtime environments that support reliable multi-agent systems.
Create agent, tool, and skills registries that make capabilities easy to discover, reuse, and operate across teams.
Deliver intuitive developer experiences across code-first and no-code workflows, enabling rapid iteration from experimentation to production.