Service Engineer (Hadoop/Kafka), Multinational Commerce Retailing Company, 10 Years
Job Description:
You will play a key role in administering, monitoring, and troubleshooting our current Big Data platform, and in helping us engineer our next-generation platform based on cutting-edge open-source technology. If you are a determined self-starter committed to continuous improvement in a team-based setting and have the following skills and experience, we are confident the Cloud Platform Department would make a rewarding home.
Responsibilities:
* Designing, developing, and tuning our Data Streaming Platform.
* Testing new solutions and functions for the Data Streaming Platform and bringing them into production.
* Helping to keep our Data Streaming Platform stable.
* Troubleshooting issues with our Data Streaming Platform.
* Automating everything, always.
* Providing best-in-class user support for the Data Streaming Platform.
Minimum Qualifications:
* 6+ years of professional experience in the field of Information Technology.
* 2+ years of hands-on experience with Linux and shell scripting.
* 2+ years of programming experience using Java, Scala, or Python.
* Deep understanding of Data Streaming Platform architecture and components.
* Hands-on experience with and knowledge of Data Streaming Platform components.
* Experience with automation tools such as Chef, Ruby, Puppet, or Ansible.
* Experience with DevOps tools and technologies such as Gradle, Maven, Jenkins, Git, IntelliJ, or Eclipse.
* 3+ years of working experience as an administrator of a Data Streaming Platform (Kafka, NiFi, ELK, etc.).
Preferred Qualifications:
* Knowledge of and/or experience with Docker and/or Kubernetes.
* Experience with streaming frameworks such as Spark Streaming and/or Flink.
* Experience with a major Hadoop distribution (HDFS, Hive/Hive LLAP, MapReduce, Spark on YARN).
* Experience working with a BI platform.
* Experience with key-value stores (KVS) such as HBase, Couchbase, Cassandra, or Redis.