
Indonesia

Data Engineer

5-7 Years
  • Posted a month ago

Job Description

We are looking for a Data Engineer to build and maintain the company's analytics platform by designing robust ETL pipelines and enabling advanced analytics capabilities.

Job Responsibilities:

  • Design, implement, and maintain scalable Extract, Transform, Load (ETL) / Extract, Load, Transform (ELT) pipelines on Amazon Web Services (AWS) using services such as Glue, Lambda, and Fargate.
  • Model, transform, and curate data in cloud data warehouses, including Amazon Redshift, Snowflake, and BigQuery.
  • Integrate and normalize order data from multiple aggregator APIs such as Shopee, GoFood, and Grab into centralized data schemas.
  • Build and optimize data ingestion pipelines from transactional databases (RDS/PostgreSQL), real-time event streams (Kinesis), and third-party APIs.
  • Implement data quality checks, monitoring, and alerting using tools such as Great Expectations for data validation and AWS CloudWatch for real-time monitoring and notifications.
  • Collaborate with analysts and data scientists to provision feature stores, support machine learning training data, and enable ad hoc analysis.
  • Design and implement automated reporting pipelines and interactive dashboards using tools such as QuickSight and Metabase to support data-driven decision making.
  • Establish and promote best practices for data governance, documentation, and data security.
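The aggregator-integration responsibility above can be sketched in plain Python. This is a minimal illustration only; every field name and payload shape below is an assumption, since the real Shopee, GoFood, and Grab APIs each return their own formats:

```python
from datetime import datetime, timezone

# Hypothetical per-platform field mappings; real aggregator payloads differ.
FIELD_MAPS = {
    "shopee": {"order_sn": "order_id", "total_amount": "amount", "create_time": "ordered_at"},
    "gofood": {"orderId": "order_id", "grandTotal": "amount", "createdAt": "ordered_at"},
    "grab":   {"orderID": "order_id", "totalPrice": "amount", "orderTime": "ordered_at"},
}

def normalize_order(platform: str, payload: dict) -> dict:
    """Map a platform-specific order payload onto one centralized schema."""
    mapping = FIELD_MAPS[platform]
    record = {target: payload[source] for source, target in mapping.items()}
    record["platform"] = platform
    # Some sources send epoch seconds, others ISO strings; coerce to ISO 8601 UTC.
    ts = record["ordered_at"]
    if isinstance(ts, (int, float)):
        record["ordered_at"] = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return record

# Example: a Shopee-shaped payload (field names are illustrative only).
row = normalize_order(
    "shopee",
    {"order_sn": "A-100", "total_amount": 52000, "create_time": 1_700_000_000},
)
```

In a production pipeline the same mapping step would typically run inside a Glue or Lambda job, with the normalized records landing in the warehouse tables described above.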

Job Requirements:

  • Bachelor's degree (minimum) in Information Technology, Information Systems, or a related field.
  • 5+ years of professional experience in a Data Engineering role.
  • Strong proficiency with the AWS data stack: Glue, Lambda, S3, Redshift (or equivalent warehouse), IAM, CloudWatch.
  • Expertise in Python and SQL for data transformation and pipeline development.
  • Experience designing and maintaining reliable, incremental data pipelines (Airflow, Step Functions, or similar).
  • Solid understanding of data modeling (star/snowflake schemas) and performance tuning.
  • Hands-on with API integration, batch and streaming ingestion patterns.
  • Familiarity with data quality frameworks and monitoring tools.
  • Advanced analytics / data science experience (statistical modeling, customer segmentation, demand forecasting).
  • Experience with containerized data workflows (Docker, Kubernetes, Fargate).
  • Knowledge of machine learning pipelines (SageMaker, TensorFlow Extended).
  • Experience with infrastructure as code (Terraform, CloudFormation).
  • Prior exposure to F&B or e-commerce data challenges (order volumes, seasonality, multi-channel integration).
  • Excellent communication skills and the ability to collaborate with non-technical stakeholders.
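The data-quality requirement listed above can be illustrated with a small plain-Python sketch. It stands in for a framework such as Great Expectations; the column names and rules are assumptions chosen to match the order data described in this posting:

```python
def check_orders(rows: list[dict]) -> list[str]:
    """Run basic data-quality checks and return a list of failure messages."""
    failures = []
    ids = [r.get("order_id") for r in rows]
    # Uniqueness: every order should appear exactly once.
    if len(ids) != len(set(ids)):
        failures.append("order_id is not unique")
    # Completeness: order_id must never be null or empty.
    if any(i in (None, "") for i in ids):
        failures.append("order_id has nulls")
    # Validity: amounts must be non-negative numbers.
    if any(not isinstance(r.get("amount"), (int, float)) or r["amount"] < 0 for r in rows):
        failures.append("amount must be a non-negative number")
    return failures

good = [{"order_id": "A1", "amount": 10.0}, {"order_id": "A2", "amount": 0}]
bad = [{"order_id": "A1", "amount": 10.0}, {"order_id": "A1", "amount": -5}]
```

In practice a check like this would run as a pipeline step, with failures pushed to CloudWatch to trigger the alerting described in the responsibilities.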

Job ID: 141556813