About the Role:
We are looking for an experienced DevOps Engineer to design, build, and optimize our data infrastructure, enabling high-performance, reliable, and scalable data workflows. The ideal candidate has deep expertise in the modern data ecosystem (Druid, Databricks, dbt, Redshift, etc.), a strong understanding of distributed systems, and a proven track record in managing data pipelines and platforms at scale. In addition, strong programming skills are essential for building automation, custom integrations, and advanced data solutions.
Key Responsibilities:
- Design, implement, and maintain highly available and scalable data pipelines leveraging tools such as Druid, Databricks, dbt, and Amazon Redshift
- Manage and optimize distributed data systems for real-time, batch, and analytical workloads
- Develop custom scripts and applications in Python, Scala, or Java to enhance data workflows and automation
- Implement automation for deployment, monitoring, and alerting of data workflows
- Collaborate with data engineering, analytics, and platform teams to deliver reliable and performant data services
- Monitor data quality, reliability, and cost efficiency across platforms
- Build and enforce data governance, lineage, and observability practices
- Work with cloud platforms (AWS/Azure/GCP) to provision and maintain data infrastructure
- Apply CI/CD and Infrastructure-as-Code (IaC) principles to data workflows
Required Skills & Experience:
- 5+ years of experience in DataOps, Data Engineering, DevOps Engineering, or related roles
- Strong hands-on experience with Druid, Databricks, dbt, and Redshift (experience with Snowflake, BigQuery, or similar is a plus)
- Solid understanding of distributed systems architecture and data infrastructure at scale
- Proficiency in SQL and strong programming skills in at least one language (Python, Scala, or Java)
- Experience with orchestration tools (Airflow, Dagster, Prefect, etc.)
- Familiarity with cloud-native services on AWS, Azure, or GCP
- Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.)
- Strong problem-solving, debugging, and performance-tuning skills
Preferred Qualifications:
- Experience with real-time streaming platforms (Kafka, Kinesis, Pulsar)
- Knowledge of containerization/orchestration (Docker, Kubernetes)
- Experience with Infrastructure-as-Code (Terraform, CloudFormation)
Experience: 5-8 years.
The expected compensation for this role ranges from $60,000 to $135,000.
Final compensation will depend on various factors, including your geographical location, minimum wage obligations, skills, and relevant experience. Depending on the position, the role is also eligible for Wipro's standard benefits, including a full range of medical and dental benefit options, disability insurance, paid time off (inclusive of sick leave), and other paid and unpaid leave options.
Applicants are advised that employment in some roles may be conditioned on successful completion of a post-offer drug screening, subject to applicable state law.
Wipro provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Applications from veterans and people with disabilities are explicitly welcome.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.