
Acclime

Group Data Engineer

3-7 Years
Posted 11 days ago

Job Description

At Acclime, we look for people who care about delivering honest, professional, and quality advice to our extended business community. We value diversity, inclusion, and trust, and we search for extraordinary individuals who are excited about working in an international corporate service practice.

Our organizational culture is at the core of what we do. Our values define our identity, and our staff are part of the journey towards growth, development and success together with our clients and partners. We know companies with a strong culture and identity thrive the most in the long run, which is why our culture is a priority.

Now, let's dive right into the job summary and explore the essentials of this exciting role.

Challenge yourself. Job summary.

In this exciting and challenging role, you will join our Group IT Team as Group Data Engineer, based in our Jakarta office.

Job Description

We're building out the data foundation that powers internal solutions and integrates with enterprise systems, enabling secure and reliable analytics at scale. As our Data Engineer, you will design, build, and operate end-to-end data pipelines and integration services, partnering with product, automation, cybersecurity, and BI teams to make clean, governed, and timely data available across the organization.

You will be hands-on with AWS, orchestrators, event streaming, and modern ELT/ETL tooling, and you will help us formalize data standards, lineage, and quality so analysts and stakeholders can trust the numbers. The ideal candidate communicates fluently in English and has a genuine passion for data, demonstrated by hands-on experience building and sustaining the secure ecosystems in which intelligent data workflow solutions run.

Key Responsibilities

Data Pipelines & Orchestration

  • Design, implement, and maintain ETL/ELT pipelines from multiple sources (apps, databases, APIs, files) into data lake/warehouse environments.
  • Use workflow/orchestration tools to schedule, monitor, and recover jobs; implement SLAs and alerting.

System Integrations

  • Build and harden integrations between internal/external systems, including REST/GraphQL APIs, webhooks, and message queues.
  • Collaborate with application teams (e.g., AI automation platforms) to provide stable data ingress/egress patterns and shared components.

Data Modeling & Storage

  • Design star/snowflake schemas and semantic layers; optimize partitioning, indexing, and lifecycle policies across lake/warehouse.
  • Ensure models reflect business definitions in partnership with BI/analytics (e.g., ACV, cohort analysis, PMR, revenue bridges).

Data Quality, Lineage & Governance

  • Implement data validation (unit tests, expectations), lineage tracking, and reproducibility; maintain documentation and runbooks.
  • Enforce data classification, access controls, and least-privilege policies; support audits and compliance needs.

Performance & Cost Optimization

  • Profile and tune pipeline performance (IO, compute, SQL); right-size infrastructure and leverage caching/materializations.

Security & Reliability

  • Partner with Cybersecurity to apply secure transport/encryption standards, secrets management, and incident playbooks.
  • Design for observability (logs, metrics, traces) and resilience (idempotency, retries, backfills).

Collaboration & Enablement

  • Work closely with Software Engineers, Data Analysts/BI, Automation/AI, and product teams to prioritize the backlog and deliverables.
  • Provide reusable components, conventions, and documentation to accelerate new integrations.

Key Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 3-5+ years in data engineering (or equivalent software/data roles).
  • Strong Python (ETL, data frameworks) and SQL (analytical, optimization).
  • Hands-on with AWS (S3, Glue/Lambda, IAM, Secrets Manager, CloudWatch; plus RDS/Redshift or Snowflake/BigQuery on cloud).
  • Experience with Airflow (or similar), Kafka/streaming, and API integrations.
  • Solid understanding of data modeling, ELT/ETL patterns, and data warehousing.
  • Proven track record implementing data quality, testing, and observability in pipelines.
  • Familiarity with CI/CD (Git, branching, pipelines) and Infrastructure-as-Code concepts.
  • Experience integrating internal platforms or ERP/finance systems (e.g., Business Central/Dynamics) via APIs.
  • Exposure to automation platforms and building robust data handoffs to workflow engines.
  • Knowledge of dbt (data build tool), Spark, or distributed processing.
  • Practical experience with role-based access control, column-level security, and data privacy/compliance.
  • Background in supporting BI tools (Power BI, Tableau) and semantic layer design.
  • Excellent written and verbal communication skills in English, plus strong documentation skills.
  • Strong problem-solving skills and the ability to work independently or as part of a team.
  • Experience working in an Agile/Scrum development process.

Job ID: 138017929