Job Description
Role Purpose
The purpose of the role is to support process delivery by overseeing the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialist team.
We are looking for an experienced Azure Databricks Data Engineer to build and optimize large‑scale ETL pipelines on Azure. You will work on end‑to‑end data engineering workflows using Databricks, PySpark, SQL, Delta Lake, and Azure Data Factory (ADF).
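The sketch below gives a flavor of the kind of pipeline this role builds. It is a minimal illustration only, assuming a Databricks notebook where `spark` is predefined; the storage paths, file layout, and column names are hypothetical.

```python
from pyspark.sql import functions as F

# Hypothetical ADLS locations; an ADF copy activity is assumed to land raw files here.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
DELTA_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders"

# Extract: read the raw CSV files
raw_df = spark.read.format("csv").option("header", "true").load(RAW_PATH)

# Transform: cast types, derive a partition column, drop bad rows
orders_df = (
    raw_df
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").isNotNull())
)

# Load: write a partitioned Delta Lake table for downstream consumers
(orders_df.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save(DELTA_PATH))
```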
Must‑Have Skills
- 5–10+ years in Data Engineering / ETL with strong PySpark + SQL
- Strong hands‑on experience with Azure Databricks
- Strong experience in Delta Lake
- Experience with Azure Data Factory (ADF) for ingestion and orchestration
- Strong understanding of ETL development (extract, transform, load) and data validation (see the validation sketch after this list)
- Comfortable working with large datasets and multiple data formats
- Ability to work in a fast‑paced environment and proactively communicate risks/issues
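As a concrete illustration of the data-validation skill above, here is one possible shape of a validation helper; the thresholds and column names are illustrative assumptions, not a prescribed standard.

```python
from pyspark.sql import functions as F

def validate(df, required_cols, max_null_rate=0.01):
    """Fail fast if the frame is empty, misses columns, or exceeds a null-rate budget."""
    total = df.count()
    if total == 0:
        raise ValueError("validation failed: empty DataFrame")
    missing = [c for c in required_cols if c not in df.columns]
    if missing:
        raise ValueError(f"validation failed: missing columns {missing}")
    for c in required_cols:
        null_rate = df.filter(F.col(c).isNull()).count() / total
        if null_rate > max_null_rate:
            raise ValueError(f"validation failed: {c} null rate {null_rate:.2%}")

# Example (hypothetical columns): validate(orders_df, ["order_id", "order_ts", "amount"])
```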
Key Responsibilities
- Develop, test, and maintain ETL pipelines using Azure Databricks & ADF
- Optimize Spark jobs and SQL queries for performance and scalability
- Implement data quality, validation, and reconciliation checks in pipelines/notebooks (a reconciliation sketch follows this list)
- Orchestrate end‑to‑end workflows integrating multiple data sources
- Build reusable PySpark modules/framework components for ETL pipelines
- Collaborate with product and QA teams to deliver reliable data solutions
- Ensure secure, scalable, and maintainable engineering practices
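To make the reconciliation responsibility concrete, here is a minimal sketch of a post-load count check of the kind a pipeline notebook might run; the source format, path, and table name are hypothetical.

```python
def reconcile_counts(spark, source_path, target_table):
    """Compare source extract and Delta target row counts after a load."""
    source_count = spark.read.format("parquet").load(source_path).count()
    target_count = spark.table(target_table).count()
    if source_count != target_count:
        raise ValueError(
            f"reconciliation failed: source={source_count}, target={target_count}"
        )
    return source_count

# Example: reconcile_counts(spark, "abfss://raw@examplelake.dfs.core.windows.net/orders/", "curated.orders")
```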
Good to Have
- Spark performance tuning best practices (see the broadcast-join sketch below)
- Experience with enterprise data governance and secure pipeline patterns
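One common tuning pattern behind the first bullet is broadcasting the small side of a join so the large table is never shuffled; a sketch with hypothetical table names, again assuming a Databricks-provided `spark` session:

```python
from pyspark.sql import functions as F

orders = spark.table("curated.orders")           # large fact table (assumed)
customers = spark.table("curated.dim_customer")  # small dimension table (assumed)

# Broadcast the dimension so the join runs map-side, avoiding a shuffle of the fact table
enriched = orders.join(F.broadcast(customers), on="customer_id", how="left")
```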
Deliver
| No | Performance Parameter | Measure |
| --- | --- | --- |
| 1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT |
| 2 | Team Management | Productivity, efficiency, absenteeism |
| 3 | Capability Development | Triages completed, technical test performance |
Experience: 5–8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.