Job Description: Data Engineer (Snowflake & Python)
Position Overview:
We are seeking a highly skilled and motivated Data Engineer with expertise in Snowflake and Python to join our dynamic team. The ideal candidate will be responsible for building and optimizing data pipelines, managing cloud-based data environments, and supporting data integration efforts. You will play a key role in ensuring that data solutions are scalable, reliable, and aligned with business needs.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes using Snowflake and Python (see the sketch after this list).
- Build and optimize data architectures to support business intelligence, analytics, and machine learning initiatives.
- Collaborate with data analysts, data scientists, and stakeholders to understand data requirements and ensure smooth data flows.
- Manage and administer Snowflake data warehouses, including schema design, performance tuning, and data security.
- Write efficient, reusable, and maintainable code for data processing and transformation tasks using Python.
- Implement data quality checks and validation processes to maintain data integrity.
- Automate data workflows and improve data reliability and performance.
- Troubleshoot and resolve data-related issues in a timely manner.
- Maintain and document data engineering solutions, best practices, and coding standards.
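
To make the day-to-day work concrete, here is a minimal, illustrative sketch of the kind of pipeline step described above, written with the snowflake-connector-python package. It loads staged files into a table and runs a basic data quality check. All account, user, table, and stage names are hypothetical placeholders, not details of this role.

    import snowflake.connector

    def load_daily_orders(conn):
        """Copy staged files into the target table, then run a basic quality check."""
        cur = conn.cursor()
        try:
            # Load newly staged files; assumes the stage and table already exist.
            cur.execute("COPY INTO analytics.orders FROM @analytics.orders_stage")
            # Simple data quality check: the load should leave the table non-empty.
            cur.execute("SELECT COUNT(*) FROM analytics.orders")
            (row_count,) = cur.fetchone()
            if row_count == 0:
                raise ValueError("Data quality check failed: analytics.orders is empty")
            return row_count
        finally:
            cur.close()

    if __name__ == "__main__":
        conn = snowflake.connector.connect(
            account="my_account",   # hypothetical account identifier
            user="etl_user",        # hypothetical service account
            password="...",         # in practice, pull from a secrets manager
            warehouse="ETL_WH",
            database="ANALYTICS",
            schema="PUBLIC",
        )
        try:
            print(f"Load complete; row count is {load_daily_orders(conn)}")
        finally:
            conn.close()

In production, a step like this would typically run inside an orchestrator such as Airflow or Prefect rather than as a standalone script.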
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer with hands-on expertise in Snowflake and Python.
- Strong proficiency in SQL for querying and manipulating data within Snowflake.
- Knowledge of Snowflake architecture, data sharing, cloning, and security features.
- Experience in developing and managing ETL pipelines and workflows.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data storage solutions.
- Proficient in data modeling, data warehousing concepts, and database optimization techniques.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Strong problem-solving and debugging skills with attention to detail.
Preferred Skills:
- Experience with data orchestration tools like Airflow or Prefect.
- Knowledge of other big data technologies such as Databricks, Spark, or Kafka.
- Familiarity with REST APIs and data integration from external sources.
- Exposure to machine learning pipelines and AI workflows is a plus.
Experience: 5-8 years.
The expected compensation for this role ranges from $60,000 to $135,000.
Final compensation will depend on various factors, including your geographical location, minimum wage obligations, skills, and relevant experience. Depending on the position, the role is also eligible for Wipro's standard benefits, including a full range of medical and dental benefit options, disability insurance, paid time off (inclusive of sick leave), and other paid and unpaid leave options.
Applicants are advised that employment in some roles may be conditioned on successful completion of a post-offer drug screening, subject to applicable state law.
Wipro provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Applications from veterans and people with disabilities are explicitly welcome.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.