Title:  Associate Data Engineer

Job Requisition ID:  323610
Location: 

Bangalore, Karnataka, India, 560087

Category:  Data
Description: 

Job Purpose and Impact

The Associate Data Engineer assists with the design, building and maintenance of routine data systems that enable data analysis and reporting. Under close supervision, this role collaborates to ensure that large sets of data are efficiently processed and made accessible for decision making.

Key Accountabilities

  • DATA & ANALYTICAL SOLUTIONS: Assists with the development of basic data products and solutions using big data and cloud-based technologies, supporting scalable, sustainable and robust designs.
  • DATA PIPELINES: Collaborates on the development of basic streaming and batch data pipelines that ingest data from various sources, transform it into usable information and move it to data stores such as data lakes and data warehouses.
  • DATA SYSTEMS: Assists with the implementation of existing data systems and architectures in support of improvement and optimization activities.
  • DATA INFRASTRUCTURE: Supports the preparation of data infrastructure for the efficient storage and retrieval of data.
  • DATA FORMATS: Helps implement appropriate data formats to improve data usability and accessibility across the organization.
  • STAKEHOLDER MANAGEMENT: Gathers requirements from cross-functional partners, assisting the team to ensure that data solutions meet partners' functional and non-functional needs.
  • DATA FRAMEWORKS: Conducts basic testing of new concepts and assists with the implementation of data engineering frameworks and architectures to support the improvement of data processing capabilities and analytics initiatives.
  • AUTOMATED DEPLOYMENT PIPELINES: Collaborates on the implementation of automated deployment pipelines to improve the efficiency of code deployments with fit-for-purpose governance.
  • DATA MODELING: Performs basic data modeling aligned with the datastore technology to ensure sustainable performance and accessibility.
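The ingest-transform-load pattern running through the accountabilities above can be sketched in plain Python. This is only an illustrative sketch, not the team's actual stack: the column names (`order_id`, `amount`, `region`) and the in-memory "warehouse" dictionary are hypothetical stand-ins for a real source system and data store.

```python
import csv
import io

def ingest(raw_csv: str) -> list[dict]:
    """Read raw records from a CSV source (here, an in-memory string)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Normalize types and drop incomplete rows before loading."""
    cleaned = []
    for row in records:
        if not row.get("order_id"):
            continue  # skip rows missing the primary key
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": round(float(row["amount"]), 2),
            "region": row["region"].strip().upper(),
        })
    return cleaned

def load(records: list[dict], store: dict) -> None:
    """Upsert records into a keyed store (stand-in for a warehouse table)."""
    for row in records:
        store[row["order_id"]] = row

# Usage: run the pipeline end to end on a small sample;
# the row with no order_id is filtered out during transform.
raw = "order_id,amount,region\n1,10.5,apac \n,3.0,emea\n2,7.25,apac\n"
warehouse: dict[int, dict] = {}
load(transform(ingest(raw)), warehouse)
```

In a production pipeline each stage would typically be a separate job (e.g. a Kafka consumer feeding a Spark transform that writes Parquet), but the shape of the work is the same.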

Qualifications

  • Bachelor's degree with 2 or more years of relevant experience.
  • CLOUD ENVIRONMENTS: Basic familiarity with major cloud platforms (AWS, GCP, Azure) and interest in learning how cloud services support data pipelines and storage.
  • DATA ARCHITECTURE: Introductory understanding of modern data architectures such as data lakes and lakehouses, with exposure to concepts like ingestion, governance, and basic data modeling.
  • DATA INGESTION: Hands-on experience or coursework using data ingestion tools (e.g., Kafka, AWS Glue) and awareness of common data storage formats like Parquet or Iceberg.
  • DATA STREAMING: Foundational understanding of streaming concepts and exposure to tools such as Kafka or Flink.
  • DATA MODELING: Experience writing SQL and supporting data transformation tasks. Familiarity with modeling concepts (e.g., SCDs, schema evolution) and introductory experience with tools like dbt, Airflow, or AWS Glue.
  • DATA TRANSFORMATION: Basic experience using Spark or similar frameworks for data processing, with a willingness to learn more advanced topics like performance tuning and debugging.
  • PROGRAMMING: Proficiency in at least one programming language (typically Python) and ability to write clean, reusable code. Comfortable with SQL basics and working toward stronger query optimization skills.
  • DEVOPS: General awareness of DevOps practices such as version control (Git) and basic CI/CD concepts. Interest in learning deployment and automation workflows.
  • DATA GOVERNANCE: Foundational understanding of data quality, security, and privacy principles. Awareness of best practices for handling data responsibly.
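By way of illustration, the slowly changing dimension (SCD) concept named in the modeling qualification above can be sketched in plain Python. This is a minimal Type 2 update (expire the current row, append a new one); the table, key, and column names are hypothetical, and real implementations would usually live in SQL or dbt.

```python
from datetime import date

def scd2_upsert(dim: list[dict], key: str, incoming: dict, as_of: date) -> None:
    """Apply a Type 2 slowly-changing-dimension update in place:
    expire the current row for the key if its attributes changed,
    then append the incoming record as the new current row."""
    for row in dim:
        if row[key] == incoming[key] and row["is_current"]:
            if all(row.get(k) == v for k, v in incoming.items()):
                return  # no attribute change; keep the existing row
            row["is_current"] = False
            row["valid_to"] = as_of
            break
    dim.append({**incoming, "valid_from": as_of,
                "valid_to": None, "is_current": True})

# Usage: a customer moves region; the old row is closed, history preserved.
customers = [{"customer_id": 7, "region": "APAC",
              "valid_from": date(2023, 1, 1),
              "valid_to": None, "is_current": True}]
scd2_upsert(customers, "customer_id",
            {"customer_id": 7, "region": "EMEA"}, date(2024, 6, 1))
```

The same merge logic is what tools like dbt snapshots automate: compare incoming attributes against the current row, close it out, and open a new versioned row.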
