10 May 2021

Data Engineer (Kafka) - Japanese Multinational Fortune 500 Commerce Retailer

Responsibilities:
- Administration and maintenance of the data pipeline system that transfers and wrangles terabytes of data from various services using ELK, Apache Kafka, and Apache NiFi (see the sketch after this list).
- Collaboration with the SRE team in Japan to implement an automated operation system.
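For context only, here is a minimal sketch of the kind of Kafka consumer work this pipeline role involves, written against the standard Apache Kafka Java client. The broker address, consumer group, and topic name are illustrative placeholders, not details taken from this posting.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PipelineConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker and group id; a production cluster would list several brokers.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-sketch");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // "service-events" is a hypothetical topic name for this sketch.
                consumer.subscribe(Collections.singletonList("service-events"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // A real pipeline would transform and forward these records
                        // (for example into Elasticsearch or NiFi); here they are only printed.
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                }
            }
        }
    }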
Requirements:
- Excellent hands-on experience with Linux (at least 3 years).
- Must have experience administering and maintaining a large-scale Kafka cluster in production (at least 1 year).
- Hands-on experience with Apache Kafka (at least 1 year).
- Excellent experience with DevOps tools and technologies such as Gradle, Maven, Jenkins, Git, IntelliJ, and Eclipse (at least 3 years).
- Excellent programming skills in Java (or Scala), Python, or shell scripting (at least 3 years).
- Must have experience administering and maintaining large-scale data applications and pipelines in production (at least 1 year).
- Must be self-organized and persistent in continuously improving the platform.
- Must be a self-starter and a strong collaborator with good communication skills.
Preferred Knowledge, Skills and Abilities:
- Hands-on experience with either ELK or Apache NiFi.
- Hands-on experience with Hadoop (HDFS, Hive/Hive LLAP, MapReduce, Spark on YARN).
- Hands-on experience with streaming frameworks such as Spark Streaming or Flink.
- Hands-on experience with, or strong knowledge of, Docker and Kubernetes.
- Fluent or business-level Japanese.

Email: EXPIRED


