Job description
As an L3 resource of his/her team, he/she:
- takes on technical tasks and delegates technical issues within the team,
- animates the team to encourage collaboration and the sharing of best practices,
- supports new technologies and leverages them to provide consistency of service across streams,
- proposes service improvements for all Big Data services supported throughout the organization,
- documents, reviews, maintains and shares relevant technical information within the team,
- provides technical knowledge and supports services both proactively and reactively to maintain the availability and reliability of the system infrastructure in accordance with the SLA,
- works in line with policies based on LEAN-Client best practices,
- actively engages during any high-severity issue and drives it to resolution,
- reviews technology changes to identify potential risks.
As an experienced professional in Big Data Services, he/she:
- supports his/her team during diagnosis when technical issues arise within his/her scope of expertise,
- is aware of the global IT structure so that he/she anticipates interrelationships within the organization,
- engages with technical peers, development teams, service managers, architects and project teams on technology roadmaps and projects,
- facilitates transformation projects and suggests future directions for new areas of improvement and change,
- guarantees the production readiness and license to operate of new projects and solutions,
- is available and able to technically drive any complex or high-severity incidents that occur within the scope of his/her role,
- actively engages to understand new technologies and technology trends and reviews them with a view to incorporating them into CACIB operations,
- actively assists in identifying the most technically skilled candidates for open roles,
- technically coaches and develops partner resources to improve quality and productivity.
Job specific environment and/or organization
- Fluent English - written and spoken
- French language is desirable
- Working hours will primarily match Europe business hours
- On-call support will be expected on a rotational basis
Candidate profile
Mandatory track record
- Minimum 8 years in a typical system administration role, performing system monitoring, storage capacity management, performance tuning, and system infrastructure development.
- Minimum 1-2 years of experience deploying and administering large Hadoop clusters.
- Experience in the financial and banking industry.
- Must hold a bachelor's or engineering degree.
- Candidates with development experience in languages such as Java, Python, Perl, PHP, CSS or HTML will be given preference.
- Experience with Spark development is an added advantage.
- Working knowledge of the Hadoop ecosystem (Hadoop, Hive, Pig, Oozie, HBase, Flume, Sqoop) using both automated toolsets and manual processes.
- Ability to isolate and troubleshoot Hadoop service issues using a combination of system and Hadoop logs and monitoring/alerting systems. Experience with Apache Ambari is a plus.
- Preferred: experience administering Hadoop 2.0 clusters, including YARN and YARN-based applications such as Spark.
- Linux administration skills will be an added advantage.
- Detailed knowledge of basic OS administration tasks such as configuring PAM authentication, disk quota, ulimit, etc. and managing security patches.
- Experience coordinating rolling OS-level changes with cluster administration tools.
- Experience configuring and using cluster-wide monitoring tools to diagnose cluster issues and propose operational enhancements.
- Expertise in writing shell scripts, Perl/Python scripts and debugging existing scripts.
- Ability to quickly learn new technologies and enable/train other analysts.
- Ability to work independently as well as in a team environment on moderately to highly complex issues.
- High technical aptitude and demonstrated progression of technical skills - continuous improvement.
- Ability to automate software/application installations and configurations hosted on Linux servers.
- Network and system administration experience, preferably in large-scale data center environments.
- Excellent communication, interpersonal and logical skills.
- Customer service oriented and a strong team player.
- Ability to work under pressure and a commitment to solving issues.
- French language proficiency would be seen as an advantage.
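As a rough illustration of the scripting expertise expected above (shell, Perl or Python) applied to storage capacity management, here is a minimal Python sketch of a filesystem usage check. The function name and threshold are hypothetical examples, not part of the role description:

```python
import shutil

def check_disk_usage(path="/", threshold_pct=80):
    """Return (used_pct, alert) for the filesystem containing `path`.

    `alert` is True when usage meets or exceeds `threshold_pct`
    (a hypothetical alerting threshold for this sketch).
    """
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return used_pct, used_pct >= threshold_pct

if __name__ == "__main__":
    pct, alert = check_disk_usage("/")
    status = "ALERT" if alert else "OK"
    print(f"{status}: root filesystem at {pct:.1f}% used")
```

In practice a check like this would typically run from cron and feed a monitoring system rather than print to stdout.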
Primary Responsibilities
- Build and maintain Hadoop stack infrastructure
- Install Hadoop updates, patches, and version upgrades as required
- Cluster maintenance and deployments including creation and removal of nodes.
- Performing HDFS backups and restores
- User management from the Hadoop perspective, including setting up users in Linux, Kerberos setup, and access to other Hadoop stack components.
- HDFS Support and maintenance
- File system management and monitoring
- Manage and review Hadoop log files
- Monitor Hadoop cluster connectivity and security
- Kafka Administration and Operations
- Ensure cluster and MapReduce routines are tuned for optimal performance
- Proper configuration and screening of cluster jobs
- Capacity Planning
- Working knowledge of ElasticSearch Stack
- Working transversally with other teams to guarantee high data quality and availability.
- Be the link between developers and build/architecture teams.
- Document best practices and maintain the knowledge base
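The log review duty above can be sketched in a few lines of Python. The regex and sample lines below are illustrative only, assuming standard log4j-style Hadoop log output:

```python
import re
from collections import Counter

# Typical Hadoop log4j line:
# "2023-01-15 10:32:01,123 WARN [main] org.apache...: message"
LEVEL_RE = re.compile(r"\b(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\b")

def summarize_log_levels(lines):
    """Tally log levels across an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Hypothetical sample lines for illustration
sample = [
    "2023-01-15 10:32:01,123 INFO  [main] o.a.h.hdfs.server.namenode.NameNode: STARTUP_MSG",
    "2023-01-15 10:32:05,456 WARN  [main] o.a.h.hdfs.DFSClient: Slow ReadProcessor",
    "2023-01-15 10:32:07,789 ERROR [main] o.a.h.hdfs.DFSClient: Failed to connect",
]
print(summarize_log_levels(sample))  # one count per level found
```

A real review script would read rotated log files under the Hadoop log directory and alert on ERROR/FATAL spikes; this sketch only shows the parsing core.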
SKILLS MATRIX

Technical skills
Skills | Critical / Essential / Desirable
Linux Administration | Critical
Hadoop Administration (HDFS, YARN and other components) | Critical
Knowledge of Hortonworks Data Platform | Critical
Scripting knowledge (any language, but preferably shell, Perl or Python) | Critical
Kafka Administration | Essential
ElasticSearch Administration | Desirable
Knowledge of access control and security mechanisms like PAM and Kerberos | Critical
Hardware configuration and setup (racks, disk topology and RAID) | Essential
Knowledge of virtual machines (deployment & configuration) | Essential
Knowledge of the JVM | Essential
Proficiency in Python, Java or Scala; proficiency in SQL, Hive or another SQL-on-Hadoop tool | Desirable
Experience with ETL processes/software | Desirable

Other skills
Capacity to develop others | Desirable
Capacity to cooperate / work across disciplines | Essential
Capacity to have a vision and work on continuous improvement | Desirable

Language skills
English | Critical
French | Desirable
Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.