Job Description
Role Purpose
The purpose of the role is to support process delivery by monitoring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the team.
We are seeking an experienced Azure Databricks Engineer to design and build scalable data pipelines. The role requires strong hands‑on experience in PySpark, SQL, and Databricks, working with large datasets in a fast‑paced environment.
Responsibilities:
- Develop and optimize ETL pipelines using Azure Databricks
- Build data workflows using PySpark and SQL
- Work with Delta Lake and Azure Data Factory
- Handle large‑scale datasets and multiple data formats
- Deliver solutions within tight timelines
Required Skills:
- Strong hands‑on experience in PySpark, SQL, Azure Databricks
- Experience with Delta Lake and Azure Data Factory
- Solid understanding of ETL processes
- Experience working in fast‑paced delivery environments
Deliver
| No | Performance Parameter | Measure |
| --- | --- | --- |
| 1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT |
| 2 | Team Management | Productivity, efficiency, absenteeism |
| 3 | Capability Development | Triages completed, Technical Test performance |
Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.