Job Description
- Define best practices for the end-to-end Databricks platform.
- Work with Databricks and internal teams to evaluate new Databricks features (private preview/public preview).
- Hold ongoing discussions with Databricks product teams on product features.
- Design scalable ETL/ELT data pipelines using Databricks, Azure Data Factory, and Synapse Pipelines.
- Develop data ingestion workflows for structured and semi-structured data sources, including IoT streams and batch files.
- Define and enforce data modeling standards for the raw, curated, and semantic layers in ADLS Gen2.
- Create PySpark jobs and reusable transformation frameworks for cleansing, validation, and enrichment.
- Oversee data partitioning, versioning, and metadata strategies to ensure high performance and maintainability.
- Coordinate with ML engineers to support feature engineering pipelines and training datasets.
- Mentor data engineers, conduct code reviews, and enforce best practices in CI/CD workflows.
- Implement monitoring and alerting for data pipelines, including performance metrics and SLA tracking.
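To illustrate the "reusable transformation frameworks" responsibility above, here is a minimal sketch of composable cleansing, validation, and enrichment steps chained into one pipeline. This is purely illustrative: plain Python dicts stand in for Spark DataFrames, and all names (`strip_strings`, `require_fields`, `pipeline`, etc.) are hypothetical; in PySpark each step would accept and return a DataFrame instead.

```python
from typing import Callable

# Plain-Python stand-ins: a Record plays the role a DataFrame row would in PySpark.
Record = dict
Transform = Callable[[Record], Record]

def strip_strings(rec: Record) -> Record:
    """Cleansing step: trim whitespace from all string fields."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}

def require_fields(*fields: str) -> Transform:
    """Validation step factory: fail fast if a required field is missing or empty."""
    def check(rec: Record) -> Record:
        for f in fields:
            if not rec.get(f):
                raise ValueError(f"missing required field: {f}")
        return rec
    return check

def add_source_tag(source: str) -> Transform:
    """Enrichment step factory: stamp each record with its source system."""
    def tag(rec: Record) -> Record:
        return {**rec, "source": source}
    return tag

def pipeline(*steps: Transform) -> Transform:
    """Compose steps left-to-right into one reusable transformation."""
    def run(rec: Record) -> Record:
        for step in steps:
            rec = step(rec)
        return rec
    return run

# A reusable pipeline for a hypothetical IoT ingestion source.
iot_clean = pipeline(strip_strings, require_fields("device_id"), add_source_tag("iot"))
print(iot_clean({"device_id": " d-42 ", "temp": 21.5}))
# → {'device_id': 'd-42', 'temp': 21.5, 'source': 'iot'}
```

The same composition pattern maps directly onto PySpark via chained `DataFrame.transform` calls, which keeps individual steps small, testable, and shareable across pipelines.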
Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro: an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention of themselves, their careers, and their skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.