Title:  Data Engineer, Manufacturing & Supply Chain

Job Requisition ID:  316330
Location: 

Atlanta, Georgia, United States, 30340

Category:  Data
Description: 

Cargill’s size and scale allows us to make a positive impact in the world. Our purpose is to nourish the world in a safe, responsible and sustainable way. We are a family company providing food, ingredients, agricultural solutions and industrial products that are vital for living. We connect farmers with markets so they can prosper. We connect customers with ingredients so they can make meals people love. And we connect families with daily essentials — from eggs to edible oils, salt to skincare, feed to alternative fuel. Our 160,000 colleagues, operating in 70 countries, make essential products that touch billions of lives each day. Join us and reach your higher purpose at Cargill.

Job Summary

The Data Engineering Professional designs, builds and maintains moderately complex data systems that enable data analysis and reporting. With limited supervision, this role collaborates across teams to ensure that large sets of data are efficiently processed and made accessible for decision making.

Essential Functions

  • DATA & ANALYTICAL SOLUTIONS: Develops moderately complex data products and solutions using advanced data engineering and cloud based technologies, ensuring they are designed and built to be scalable, sustainable and robust. 
  • DATA PIPELINES: Maintains and supports the development of streaming and batch data pipelines that ingest data from various sources, transform it into usable information and load it into data stores such as data lakes and data warehouses. 
  • DATA SYSTEMS: Reviews existing data systems and architectures to identify and implement areas for improvement and optimization. 
  • DATA INFRASTRUCTURE: Helps prepare data infrastructure to support the efficient storage and retrieval of data. 
  • DATA FORMATS: Implements appropriate data formats to improve data usability and accessibility across the organization. 
  • STAKEHOLDER MANAGEMENT: Partners with multi-functional data and advanced analytics teams to collect requirements and ensure that data solutions meet the functional and non-functional needs of various stakeholders. 
  • DATA FRAMEWORKS: Builds moderately complex prototypes to test new concepts and implements data engineering frameworks and architectures to support the improvement of data processing capabilities and advanced analytics initiatives. 
  • AUTOMATED DEPLOYMENT PIPELINES: Implements automated deployment pipelines to support improving efficiency of code deployments with fit for purpose governance. 
  • DATA MODELING: Performs moderately complex data modeling aligned with the datastore technology to ensure sustainable performance and accessibility.

 

Qualifications

Minimum requirement of 2 years of relevant work experience. Typically reflects 3 years or more of relevant experience.

 

Preferred Qualifications

 

  • 2+ years of professional proficiency in SQL 
  • 2+ years of professional experience working with Spark using a programming language (Python or Scala) 
  • 2+ years of professional experience working with AWS services such as S3, Secrets Manager and CloudWatch 
  • 2+ years of professional experience with CI/CD tools and a code repository 
  • 2+ years of professional experience with Kafka and Snowflake (highly preferred) 

 

#LI-NS7 

Equal Opportunity Employer, including Disability/Vet.


Nearest Major Market: Atlanta