
Sr. Engineering Manager, AI/ML Serving Platform

Pinterest

San Francisco, CA, US; Remote, US
Full Time · Senior

Job Description

About Pinterest: Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product. Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.

At Pinterest, AI isn't just a feature; it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI. Throughout our interview process, what matters most is that you can always explain your approach, showing us not just what you know but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

Sr. Engineering Manager, AI/ML Serving Platform

The AI/ML Serving Platform team provides the foundational tools and infrastructure used by hundreds of AI/ML engineers across Pinterest, including recommendations, ads, visual search, growth/notifications, and trust and safety. We aim to ensure that AI/ML systems are efficient, healthy (production-grade quality) and fast (for modelers to iterate on). Pinterest is seeking a Sr. Engineering Manager to lead the team that builds the serving and deployment infrastructure for all AI/ML models at Pinterest. Systems include:

- An ultra-high-performance C++ model inference engine for production recommendations and content-ranking systems
- TorchScript + CUDA Graph models on GPU inference, serving 500M+ inferences/second
- A production GenAI & LLM model inference stack for emerging use cases
- Model routing, deployment, and monitoring
- Kubernetes-based provisioning
- Feature fetching, caching, and logging

What you'll do:

- Lead the team to deliver continual improvements in advanced model architectures, cost-efficient resource utilization, and AI/ML developer productivity
- Set technical direction for the team based on company and org priorities
- Coach and develop talent on the team

What we're looking for:

- Experience managing platform engineering teams with many cross-organizational customers
- Experience leading the development of large-scale distributed serving systems
- Experience with AI/ML inference

Read original posting

Required Skills

Rust, C++, R, Kubernetes, REST, LLM