Long Description
- Design, build, and optimize scalable, high-performance data pipelines on GCP.
- Develop and maintain end-to-end ETL/ELT workflows using BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Write complex SQL queries for large-scale data transformation, analysis, and performance tuning.
- Develop reusable Python and PySpark components for automation and data processing.
- Implement data quality checks, validation rules, and monitoring frameworks.
- Collaborate with cross-functional teams, including analysts, architects, and data scientists.
- Ensure robust documentation, metadata management, and data lineage for all pipelines.
- Troubleshoot and resolve pipeline failures and ensure high reliability of data workflows.
- Adhere to GCP security best practices, governance policies, and cost-optimization techniques.
Deliver
| No. | Performance Parameter | Measure |
| --- | --- | --- |
| 1. | Analyzes data sets and provides relevant information to the client | No. of automations delivered, on-time delivery, CSAT score, zero customer escalations, data accuracy |
Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.