Job Description

Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes to support analytics and business intelligence initiatives
  • Implement and optimize data storage solutions, ensuring data quality, accessibility, and security
  • Collaborate with Data Scientists, Analytics teams, and other stakeholders to understand data requirements and deliver effective solutions
  • Build and maintain data warehouses and data lakes, ensuring efficient data organization and retrieval
  • Write clean, maintainable, and well-documented code following team standards and best practices
  • Participate in code reviews and provide constructive feedback to team members
  • Monitor and optimize data pipeline performance and efficiency
  • Create and maintain comprehensive documentation for data processes and architectures
  • Implement data validation and quality control measures
  • Contribute to technical discussions and architectural planning sessions
  • Share knowledge with team members and participate in mentoring activities

Requirements

Required Skills

  • Bachelor's degree in Computer Science, Data Engineering, or a related field
  • Strong foundation in SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
  • Proficiency in at least one programming language (e.g., Python, Java, Scala)
  • Experience with ETL tools and data pipeline development
  • Understanding of data warehouse concepts and dimensional modeling
  • Familiarity with version control systems (Git) and collaborative development workflows
  • Basic knowledge of data security practices and compliance requirements
  • Strong problem-solving skills and analytical thinking abilities
  • Excellent communication skills in both technical and non-technical contexts
  • Demonstrated interest in data engineering through projects or work experience

Preferred Skills

  • Experience with big data technologies (e.g., BigQuery, Spark)
  • Knowledge of streaming and message-queue technologies (e.g., Kafka, RabbitMQ)
  • Experience with data modeling and optimization techniques
  • Knowledge of data governance principles
  • Understanding of machine learning pipelines and their data requirements
  • Experience with data visualization tools
  • Understanding of CI/CD practices for data pipelines
  • Knowledge of shell scripting for automation (e.g., Bash)
  • Familiarity with Python web development (e.g., FastAPI, Flask) is a plus
  • Proficiency with AI-powered coding assistants (e.g., GitHub Copilot, Cursor) is a plus
  • Familiarity with data warehouse tools or relevant Apache frameworks (e.g., Airflow, Spark)

Job ID: 141436035
