Big Data Software Engineer

Job Expired

Summary

Posted: 
Role Number: 200208629
Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there’s no telling what you could accomplish! Join Apple, and help us leave the world better than we found it. The Big Data Engineering team under Data Services manages state-of-the-art open-source technologies including Kafka, Hadoop, Spark, and AI/ML. We look forward to your experience developing and deploying large-scale Kafka clusters, applications, and services. The ideal candidate will bring energy, initiative, and excitement, and will be ready to learn and explore leading-edge technologies.
Key Qualifications
  • At least 5 years (preferably 8 years) of experience in a professional programming position
  • Strong knowledge of and experience with core Java programming, including performance, multi-threading, and garbage collection
  • Solid experience and knowledge in the deployment, design, and architecture of Apache Kafka and Apache ZooKeeper
  • Experience optimizing and tuning Kafka brokers and clusters based on performance metrics (see the illustrative sketch after this list)
  • Experience setting up best practices, standards, and automation processes for onboarding, monitoring, and healing of Kafka clusters
  • Expertise in the lifecycle management of Kafka clusters, including security patching, adding and removing brokers in a cluster, and restarting brokers without disrupting applications
  • Experience processing large volumes of data
  • Source-code-level knowledge of the Kafka implementation is highly desirable
  • Strong education in Computer Science, Software Engineering, Algorithms, Operating Systems, Networking, etc.
  • Strong knowledge of Linux, Systems/Application Design & Architecture
  • Experience with Python and/or Go development highly desirable
  • Experience with public clouds (GCP & AWS) highly desirable
  • Knowledge of deployments in Kubernetes containers highly desirable
  • Knowledge of other Big Data Technologies such as Hadoop, Spark, etc. highly desirable
  • Experience handling architectural and design considerations such as performance, scalability, reusability, and flexibility
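As an illustrative sketch of the kind of client-side Java and Kafka tuning referenced above, a minimal producer using the standard Apache Kafka Java client might look like the following; the bootstrap address, topic name, and specific tuning values are hypothetical placeholders, not prescriptions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TunedProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical bootstrap address; a real deployment supplies its own endpoints.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Settings commonly adjusted against throughput and latency metrics.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);   // larger batches for throughput
            props.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // brief wait to fill batches
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // cheaper network and disk I/O
            props.put(ProducerConfig.ACKS_CONFIG, "all");             // favor durability over latency

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
                producer.flush();
            }
        }
    }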
Description
We seek a highly motivated, detail-oriented, energetic individual with excellent written and oral communication skills who is not afraid to think outside the box and question assumptions. In this role, you will be part of a fast-growing, cohesive team with many exciting responsibilities related to Big Data, including:
  • Setting up Kafka brokers, Kafka MirrorMaker, and Apache ZooKeeper on a combination of bare-metal systems, VMs, and containers
  • Setting up Hadoop clusters and related technologies on a combination of bare-metal systems, VMs, and containers
  • Defining and developing Big Data technologies, platforms, and applications
  • Developing scalable, robust systems that are highly adaptable to changing business needs
  • Architecting, improving, and scaling applications to the next level
  • Interfacing with application owners, developers, and project managers
  • Recommending and deploying tools and processes that enable rapid development, deployment, and operation of Big Data solutions
  • Serving as the local expert for application teams faced with architectural decisions or complex technical problems, such as scaling and tuning
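As a minimal sketch of the cluster-visibility checks involved in monitoring Kafka clusters and verifying rolling broker restarts, the standard Kafka AdminClient can report the controller, the registered brokers, and the topic inventory; the bootstrap address below is a hypothetical placeholder:

    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.DescribeClusterResult;

    public class ClusterCheckSketch {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            // Hypothetical bootstrap address; a real cluster supplies its own endpoints.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Basic inventory checks of the kind run while monitoring a cluster
                // or confirming that a rolling broker restart left all brokers registered.
                DescribeClusterResult cluster = admin.describeCluster();
                System.out.println("Controller: " + cluster.controller().get());
                System.out.println("Brokers registered: " + cluster.nodes().get().size());
                System.out.println("Topics: " + admin.listTopics().names().get());
            }
        }
    }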
Education & Experience
BS in Computer Science or equivalent experience
