Job Description
Role Purpose
The purpose of the role is to create exceptional architectural solution designs, provide thought leadership, and enable delivery teams to achieve outstanding client engagement and satisfaction.
Databricks Architect
Key Responsibilities:
· Design, develop, and optimize ETL/ELT pipelines in Databricks.
· Implement real-time and batch data processing solutions using Apache Spark and Delta Lake.
· Develop PySpark/Scala-based data transformation scripts for large-scale data processing (a minimal batch sketch follows this list).
· Ensure data quality, performance tuning, and cost optimization within Databricks.
· Implement CI/CD pipelines for Databricks workflows using Terraform, GitHub Actions, or Azure DevOps.
· Monitor, troubleshoot, and optimize Databricks clusters, jobs, and queries.
· Collaborate with Data Architects, Business Analysts, and DevOps teams to align solutions with business needs.

Required Skills:
· Strong expertise in Databricks (AWS) and Apache Spark.
· Proficiency in PySpark for data engineering workflows.
· Experience with Delta Lake, Unity Catalog, and Databricks SQL.
· Hands-on experience with Kafka, APIs, and streaming data processing (see the streaming sketch after this list).
· Proficiency in SQL for querying and performance tuning.
· Experience in DevOps and CI/CD pipelines for Databricks.
· Good understanding of Data Governance, Security, and Access Control in Databricks.

Good to Have:
· Experience with AWS Glue, Azure Data Factory, or Snowflake.
· Familiarity with Terraform, Databricks CLI, and automation frameworks.
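To illustrate the kind of work described above, here is a minimal PySpark batch transformation that reads raw files, applies basic cleansing, and writes a partitioned Delta table. The paths, table name, and columns (order_id, amount, order_ts) are hypothetical placeholders, not taken from this posting.

```python
# Minimal batch ETL sketch. All names and paths below are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() simply reuses it.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV input (placeholder path).
orders = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Basic cleansing: deduplicate, drop null amounts, derive a date column.
cleaned = (
    orders.dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

# Persist as a Delta table, partitioned by date for partition pruning.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```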
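And a minimal Structured Streaming sketch for the Kafka requirement: reading a topic and appending to a Delta table with checkpointing. The broker address, topic name, and checkpoint path are assumptions; on Databricks the Kafka connector ships with the runtime, while outside Databricks the spark-sql-kafka package must be on the classpath.

```python
# Minimal Kafka-to-Delta streaming sketch. Broker, topic, and paths
# are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Append to a Delta table; the checkpoint lets the stream recover
# its position across restarts.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .toTable("analytics.events_raw")  # requires Spark 3.1+
)
query.awaitTermination()
```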
Experience: 8-10 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.