AI Analytics Engineer (AI & Analytics Platform)

🇺🇸 Airtable

San Francisco, CA; Austin, TX; New York, NY
Full Time · Mid-level

Job Description

Airtable is the no-code app platform that empowers the people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done.

Airtable is building the infrastructure that makes AI-powered analytics trustworthy and scalable, and we're looking for an AI Analytics Engineer to help define what that looks like from the ground up.

This is a new role on a new team. Our Data Science & Analytics org is standing up an AI & Analytics Platform function to own the context layer, evaluation frameworks, and adoption strategy behind our internal AI analytics tools, including our natural-language-to-SQL capabilities, Claude, and Omni Analytics. The goal: shift from a world where analysts are the bottleneck for every data question to one where the organization can self-serve with confidence.

You'll be one of the first hires shaping this discipline. That means you won't just use AI tools: you'll build the systems that make them accurate, design the workflows that make them trustworthy, and partner across the business to drive adoption. If you're excited about working at the intersection of data engineering, LLM tooling, and business enablement, and you want to help define what the analytics engineer role becomes in an AI-native world, this is the role.

Note on leveling: We're particularly interested in candidates with 1–4 years of professional experience.

What you'll do

- Build and maintain context infrastructure: Translate institutional business knowledge into structured formats (business glossaries, dbt model enrichment, semantic layer definitions in Omni Analytics) so that AI tools can answer questions accurately, not just confidently.
- Design and run evaluation frameworks: Develop predefined test cases, accuracy benchmarks, and validation workflows that measure whether AI-generated insights are trustworthy. Own the feedback loop between eval results and context improvements (see the eval-harness sketch after this list).
- Build and orchestrate AI agent systems: Help design, build, and iterate on the agent architectures that power our analytics tools, including prompt pipelines, tool orchestration, query routing logic, and the guardrails that determine when AI should answer autonomously versus escalate for human validation (see the routing sketch after this list).
- Experiment and evaluate: Test prompt configurations, agent behaviors, and model outputs across different use cases, using eval results and accuracy metrics to drive continuous improvement.
- Develop internal AI tooling and workflows: Build tools and automations that improve DS&A's own efficiency, identifying opportunities where AI can accelerate the team's work and executing on them.
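To make the evaluation-framework bullet concrete, here is a minimal sketch of a predefined-test-case harness for natural-language-to-SQL. Everything in it is an illustrative assumption, not Airtable's actual stack: generate_sql stands in for the LLM prompt pipeline, TEST_CASES and the schema are invented, and execution-match accuracy (generated SQL passes if its result set equals that of a reference query) is just one possible metric.

```python
# Hypothetical NL-to-SQL eval harness: predefined test cases scored by
# execution match against reference queries. All names are illustrative.
import sqlite3

TEST_CASES = [
    # (natural-language question, reference SQL encoding the ground truth)
    ("How many workspaces were created in 2024?",
     "SELECT COUNT(*) FROM workspaces WHERE created_year = 2024"),
    ("What is the average number of seats per account?",
     "SELECT AVG(seats) FROM accounts"),
]

def generate_sql(question: str) -> str:
    """Stand-in for the LLM call (prompt pipeline returning a SQL string)."""
    raise NotImplementedError

def run(conn: sqlite3.Connection, sql: str):
    """Execute a query and return its full result set."""
    return conn.execute(sql).fetchall()

def evaluate(conn: sqlite3.Connection) -> float:
    """Execution-match accuracy over the predefined test cases."""
    passed = 0
    for question, reference_sql in TEST_CASES:
        try:
            candidate_sql = generate_sql(question)
            if run(conn, candidate_sql) == run(conn, reference_sql):
                passed += 1
        except Exception:
            pass  # malformed or failing SQL counts as a miss
    return passed / len(TEST_CASES)
```

Eval results like these feed the "feedback loop" the posting describes: low-scoring question clusters point to gaps in the glossary or semantic layer that the context infrastructure work then fills.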
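And a similarly hedged sketch of the answer-vs-escalate guardrail idea from the agent-systems bullet: route a generated query to a human analyst unless it clears simple checks. The fields, the check names, and the 0.9 threshold are all assumptions for illustration.

```python
# Hypothetical guardrail: decide whether an AI-generated answer ships
# autonomously or escalates for human validation. Fields and the
# threshold are invented for this sketch.
from dataclasses import dataclass

@dataclass
class QueryResult:
    sql: str
    eval_accuracy: float      # historical eval accuracy for this question cluster
    outside_semantic_layer: bool  # query touches tables the context layer doesn't cover

def route(result: QueryResult, accuracy_floor: float = 0.9) -> str:
    """Return 'answer' when guardrails pass, else 'escalate' to an analyst."""
    if result.outside_semantic_layer:
        return "escalate"
    if result.eval_accuracy < accuracy_floor:
        return "escalate"
    return "answer"
```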

Required Skills

Go, Rust, Scala, R, Rails, REST, LLM