15 July 2021

Senior Software Engineer - Leading Multinational Fortune 500 Organisation

Summary

At Walmart Global Tech India, if you’re thinking ‘scale’, think bigger and don’t stop there. Take a regular day at Walmart Global Tech and match that with 260 million customers a week, 11,695 stores under 59 banners in 28 countries, and e-commerce websites in 11 countries. That’s Walmart Global Tech India for you! Through our products and engineering services across all Walmart properties (stores, app, and online), we help customers live better by saving them time and money. Our teams are engaged in cutting-edge engineering and product development to support Walmart’s strategy of offering customers an anywhere, anytime shopping experience.

Our Team

Walmart’s Global Cloud Platform team, also known as GTP (Global Tech Platform), helps thousands of developers save time and code better, so that millions of Walmart associates can help hundreds of millions of customers to save money and live better.

We sustain millions of transactions per second, process petabytes of data, and enable tens of thousands of production deployments per day. We simplify the complexities of scale and unify software development across all aspects of the business, digital and physical.

We are developers’ developers. We provide and foster a cloud-native culture in our organization. Our team owns the whole gamut of platforms, services, libraries, tools, and frameworks, built on cutting-edge technologies to solve hard, complex, high-scale business problems. The Infrastructure as a Service team provides private and public cloud infrastructure: compute (VMs, CPUs, GPUs, and bare metal), storage (Ceph), SDN, and a container platform. The Developer Environment (DE) team provides world-class CI/CD pipelines, workflow automation, repositories, and the OneOps deployment tool, giving developers full DevOps power for continuous delivery. The data platforms team provides multi-tenant managed services for app data (RDBMS, NoSQL, search, cache), data in motion (audit and reconciliation, JMS, Kafka, AMQ, stream processing frameworks, a unified data pipeline, etc.), and data analytics (Hadoop, Spark, etc.).

Responsibilities

· Apply strong programming skills in core Java, with a solid command of algorithms, data structures, and design patterns.
· Develop and maintain the Apache Druid and Presto platforms and services.
· Work with customers to optimise their Apache Druid and Presto queries and data ingestion performance.
· Use standard tools to tune, profile, and debug the Java Virtual Machine (JVM).
· Contribute to the design, implementation, and maintenance of systems across data platforms and tooling.
· Troubleshoot and triage issues in the data ETL framework, metadata, workflow-management tools, data quality, and open-source database solutions such as Presto and MySQL, as well as cloud-native components, to ensure SLA adherence.
· Own production incidents and respond to infrastructure incidents and alerts.
· Package and deploy application updates and patches.
· Build tools that continuously monitor platform components and raise alerts (a minimal JVM-probe sketch follows this list).
· Continually improve CI/CD tools, processes, and procedures.
· Build microservices using the Spring Boot framework (optional).
· Write and maintain infrastructure documentation.
· Work with distributed teams in a collaborative and productive manner.
· Identify the right open-source tools to improve the product through research, POCs/pilots, and/or engagement with open-source forums.
· Support an on-call 12x7 rotation when needed.
· Promote and support company policies, procedures, mission, values, and standards of ethics and integrity.
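
The JVM tuning and monitoring duties above can be made concrete with a minimal sketch, assuming nothing beyond the JDK’s standard java.lang.management API; the class name, alert threshold, and output format are illustrative and not part of the role description.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/*
 * Minimal JVM health probe (illustrative, not Walmart code): prints heap
 * usage and cumulative GC activity for the current JVM. A production
 * monitor would export these readings to an alerting system instead.
 */
public class JvmHealthProbe {
    private static final double HEAP_ALERT_RATIO = 0.85; // assumed threshold

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // getMax() can be -1 when undefined; assumed configured here.
        double usedRatio = (double) heap.getUsed() / heap.getMax();
        System.out.printf("heap: %d/%d MiB (%.0f%%)%n",
                heap.getUsed() >> 20, heap.getMax() >> 20, usedRatio * 100);

        // One line per collector: name, collection count, total pause time.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %s: count=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }

        if (usedRatio > HEAP_ALERT_RATIO) {
            System.err.println("ALERT: heap usage above threshold");
        }
    }
}

In practice a probe like this would run inside the target JVM (or attach over JMX remoting) and ship its readings to the team’s alerting pipeline rather than printing them.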

Qualifications

· Good academic record and a B.E./B.Tech/M.E./M.Tech degree.
· 4+ years of industry experience focused on solving data challenges.
· Deep exposure to Apache Druid and Presto internals and troubleshooting (a connect-and-EXPLAIN sketch follows this list).
· Strong-to-expert Apache Druid skills, with multiple examples of implementing or supporting the software.
· Experience managing data platforms and ETL pipelines, with the ability to debug at the code level.
· Strong understanding of the internals of at least one distributed processing framework such as MapReduce, Hive, or Spark (Java/Python/SQL).
· Understanding of Linux/Unix operating systems and network protocols (TCP/IP, DNS, HTTP), and use of system diagnostic tools.
· Experience in one or more programming languages such as Java or Python.
· Experience building monitoring tools and automation for managing production systems.
· Hands-on experience running proofs of concept driven by business and technology needs, and communicating technology direction to senior management.
· Experience with source code repositories (Git), CI tools (Jenkins, Maven), and software provisioning and deployment automation tools (Ansible).
· Exposure to Azure or Google Cloud, including orchestration, deployments, and CI/CD using Ansible and Terraform.
· Exposure to deploying and managing Kubernetes and container technologies such as Docker is an added advantage.
· Exposure to at least one database such as MongoDB, MySQL, or Teradata, or to big data stores, is required.
· Experience building scalable, highly available distributed systems in production is desirable.
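
To make the Presto troubleshooting expectation above concrete, here is a minimal sketch that fetches a query plan over Presto’s JDBC driver; the coordinator host, catalog, schema, table, and user names are placeholders, and the driver artifact (e.g. com.facebook.presto:presto-jdbc) is an assumption about the deployment in use.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/*
 * Minimal Presto troubleshooting sketch (illustrative): connect over JDBC
 * and print the plan for a sample query. Host, catalog, schema, table,
 * and user below are placeholders, not values from this job description.
 */
public class PrestoExplainDemo {
    public static void main(String[] args) throws Exception {
        // Assumes a Presto JDBC driver (e.g. com.facebook.presto:presto-jdbc)
        // is on the classpath; it registers itself with DriverManager.
        String url = "jdbc:presto://presto-coordinator.example.com:8080/hive/default";
        try (Connection conn = DriverManager.getConnection(url, "analyst", null);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "EXPLAIN SELECT order_id, total FROM orders WHERE ds = '2021-07-15'")) {
            // EXPLAIN returns the distributed plan one row at a time.
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

Reading the EXPLAIN output (join order, partition pruning, exchange stages) is typically the first step in the query-optimisation work described in the responsibilities, before any change to data layout or ingestion.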
