Title: Technical Lead
Minimum of 6 years' experience, including at least 3 years' experience in real-time Big Data implementation.
· Code and unit test with Hive, HDFS, Kafka, and Spark (Scala/PySpark).
· Build libraries, user-defined functions, and frameworks around Hadoop/Spark.
· Exposure to cloud platforms such as AWS or equivalent is desired.
· Develop user-defined functions to provide custom Hive, HDFS, Kafka, and Spark capabilities.
· Follow best practices and develop code per defined standards.
· Strong understanding of Hadoop internals.
· Experience with databases such as SQL Server or Oracle.
· Experience with performance/scalability tuning, algorithms, and computational complexity.
· Experience with, or at least familiarity with, data warehousing, dimensional modeling, and ETL development.
· Excellent communication skills.