Design, develop, and deploy AI agent workflows for use cases that require engineering support, including tool integrations, API connections, and RAG pipelines
Maintain shared infrastructure that multiple teams can reuse, such as LLM API access, vector databases, and prompt templates
Help individual teams build their own AI automations by providing technical direction, code templates, and troubleshooting support
Handle complex integrations — connect AI tools to backend systems, internal APIs, and regional platforms that individual teams cannot access independently
Optimize for efficiency — manage LLM costs, monitor system performance, and ensure reliability of deployed agents
Document what you build so others can maintain, reuse, and build on top of it
Support AI adoption — help teams across the company integrate AI into their daily workflows, making adoption easy and accessible
Optimize data pipelines and table structures to improve query performance and reduce resource costs
Monitor and maintain data infrastructure health — identify inefficiencies, resolve bottlenecks, and ensure reliability for downstream BI reporting
Requirements
2+ years of engineering experience in AI/ML systems, backend development, or automation — or a fresh graduate with strong fundamentals and a genuine hunger to learn
AI/LLM fundamentals — working knowledge of LLM frameworks (LangChain, LangGraph, or similar), prompt engineering, and RAG systems
Python proficiency — comfortable building and deploying Python-based tools, APIs, and automation scripts
API and integration skills — experience connecting systems via APIs, handling data pipelines, working with third-party services, and scraping data from various sources
Builder mentality — able to ship working prototypes quickly, iterate based on feedback, and take projects from idea to production
Communication skills — able to explain technical concepts simply to non-technical business stakeholders
Self-starter — comfortable working with ambiguity, owning initiatives independently, and collaborating across teams
Strong SQL knowledge — able to write complex queries, optimize table structures, and understand how data flows across pipelines
Nice to Have
Experience with observability tools and production monitoring for AI systems
Familiarity with vector databases (Pinecone, ChromaDB, or similar)