Data Engineer
Full Time
27 Oct 2025
Verified by Turrior
Content + Source + Freshness • 17 Dec 2025 • 95% confidence
75 / 100
Offer value
Attractive offer for data engineers, particularly due to the emphasis on modern technologies and strong collaboration within teams.
- Work with cutting-edge data technologies
- Collaboration-focused role in data solutions
- Significant growth potential in data engineering
Pros
- Ability to work with latest data technologies (GCP, BigQuery)
- Strong team-based collaboration on data solutions
- Good growth prospects in the field of data engineering
Cons
- Lack of detailed compensation information
- Expectations for a broad skill set can be challenging
- Onboarding could be demanding due to rapid environment changes
Who it's for
Mid-level • Onsite
Good fit
- Data engineers with cloud experience
- Individuals who thrive in collaborative work environments
- Professionals looking to develop data solutions
Not recommended for
- Entry-level data enthusiasts without experience
- Those who prefer isolated work environments
- Candidates not interested in technology-driven roles
Motivation fit
- Desire to develop expertise in data engineering
- Interest in working on extensive data projects
- Willingness to learn and adapt in innovative environments
Key skills
- Data pipeline development
- ETL processes
- Cloud technologies
- Data quality assurance
Score: 75/100 (AI-verified analysis)
About the job
Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Google BigQuery, Microsoft SQL Server, Google Cloud Data Services, GitHub
Good to have skills : NA
Minimum experience : 3 years
Educational Qualification : 15 years of full-time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.

Additional Project Role : Analytics and Modeler
Project Role Description : Analyze and model client, market, and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills : Google BigQuery
Good to have skills : No Technology Specialization

Roles & Responsibilities:
- Dataproc, Pub/Sub, Dataflow, Kafka streaming, Looker, SQL (No FLEX)
- Proven track record of delivering data integration and data warehousing solutions
- Strong hands-on SQL skills (No FLEX)
- Experience with data integration and migration projects
- Proficiency in the BigQuery SQL language (No FLEX)
- Understanding of cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes
- Experience with cloud solutions, mainly data platform services; GCP certifications
- Experience in shell scripting, Python (No FLEX), Oracle, and SQL

Professional & Technical Skills:
- Expert in Python (No FLEX)
- Strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree); experience with pytest and code coverage preferred
- Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes (No FLEX)
- Proficiency with tools for automating Azure DevOps (AZDO) CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
- Open mindset and ability to adopt new technologies quickly
- Performance tuning of BigQuery SQL scripts
- GCP certification preferred
- Experience working in an agile environment

Professional Attributes:
- Good communication skills
- Ability to collaborate with different teams and suggest solutions
- Ability to work independently with little supervision, or as part of a team
- Good analytical and problem-solving skills
- Good team-handling skills

Educational Qualification : 15 years of full-time education
Additional Information : The candidate should be ready for Shift B and work as an individual contributor.
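The core loop the posting describes (extract, apply a data-quality check, load into a warehouse table) can be sketched in plain Python. This is a minimal, illustrative sketch only: SQLite stands in for BigQuery, and the table, column names, and sample rows are hypothetical, not from the posting.

```python
import sqlite3

# Hypothetical source rows, standing in for data extracted from an upstream system.
raw_rows = [
    {"order_id": 1, "amount": "120.50", "region": "EMEA"},
    {"order_id": 2, "amount": "95.00", "region": "APAC"},
    {"order_id": 3, "amount": None, "region": "EMEA"},  # fails the quality check
]

def transform(rows):
    """Keep only rows passing a basic data-quality rule, casting amount to float."""
    clean = []
    for row in rows:
        if row["amount"] is None:  # quality rule: amount must be present
            continue
        clean.append((row["order_id"], float(row["amount"]), row["region"]))
    return clean

def load(rows, conn):
    """Load transformed rows into a warehouse table (SQLite stands in for BigQuery)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_rows), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # → (2, 215.5)
```

In a real GCP pipeline the load step would typically go through the BigQuery client library or a Dataflow job rather than a local database, and the quality rules would be far richer; the shape of the extract-transform-load flow, however, is the same.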
