Job Description
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at the client end, and to ensure they meet 100% of the quality assurance parameters.
Optus T&M JD
Python
3-5 years of experience building software applications with the Python programming language, along with a strong understanding of core Python concepts, OOP principles, and data structures. The desired roles and responsibilities are outlined below:
· Design, develop, and implement well-tested, reusable, and maintainable Python code.
· Utilize various Python libraries and frameworks (e.g., FastAPI, Django, Flask, Pandas, NumPy) to implement functionalities.
· Integrate various data sources (APIs, databases) to manipulate and analyze data.
· Optimize code for performance, scalability, and security.
· Write unit and integration tests to ensure code coverage and stability (a minimal sketch follows this list).
· Collaborate with designers and other developers to translate requirements into efficient solutions.
· Participate in code reviews, providing constructive feedback to improve code quality.
· Stay up to date with the latest Python trends, libraries, and best practices.
· Debug and troubleshoot complex issues to ensure optimal application performance.
· Proactively suggest improvements and optimizations to existing codebase.
Databricks
A minimum of 5 years of IT experience, with 3+ years of hands-on experience in designing, developing, implementing, and maintaining data solutions with Databricks. This role requires close collaboration with various teams to gather data requirements, build scalable data pipelines, and ensure the overall solution is reliable, available, and optimized for high performance. Detailed skills are outlined below:
· Proven expertise with the Lakehouse architecture and Delta tables: schema evolution, ACID compliance, history capture (time travel), etc. (see the sketch after this list).
· Experience integrating Databricks with various ETL/orchestration tools and performing the required operations.
· Strong expertise in fundamentals such as the Spark ecosystem, the DataFrame API, and Spark SQL, with Python/Scala and SQL as the working languages.
· Hands-on knowledge of Databricks-native features such as Workflows, Delta Live Tables, and Unity Catalog.
· Expertise in workspace and cluster configuration, performance tuning, cost optimization, data security and log monitoring.
· Experience with Spark Structured Streaming jobs using Auto Loader, integrating with different streaming platforms.
· Experience building optimal data pipeline architectures, applying knowledge and best practices from the Databricks platform.
· Fair understanding of data modelling, including defining conceptual, logical, and physical data models.
· Nice to have: exposure to AI/Gen AI skills and/or any associate- or professional-level Databricks or cloud certifications.
Experience: 3-5 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.