Data Engineer (Hadoop)
233 South Wacker Drive Suite 4300 Chicago, IL 60606
As a Hadoop Administrator, you’ll administer data workflows in an evolving, modern Hadoop-based environment. You’ll also:
- Consult and educate internal users on Hadoop technologies and assist them in finding and effectively utilizing the best solutions for their problem space.
- Improve the performance of financial analytics platforms built around the Hadoop ecosystem.
WHAT MAKES IT FUN?
- We are on the cutting edge of financial applications of Hadoop, processing terabytes of data daily for mission critical trading systems.
- We operate at the bleeding edge of technology. If something new can potentially bring an advantage, we will adopt and incorporate it.
- The landscape is always changing, creating new and exciting challenges. What we focus on today is very different from what we focused on two years ago.
- We really believe in sharing knowledge and technology between our different offices. Much of our technology stack is shared globally, and we provide opportunities to travel between regions, both for personal growth and to assist where it has the biggest impact.
- Working at our organization is a great way to gain exposure to and learn about financial markets and technology. We know from experience that many people enjoy learning about a field beyond their immediate area of expertise; it’s one of the things that makes this job more interesting than others.
- We employ a broad range of people with varying backgrounds. What they have in common is superior technical expertise, extraordinary smarts, and a collaborative approach.
WHO YOU ARE:
- 3+ years of experience working with Hadoop 2 (YARN), cluster management experience preferable
- 3+ years of experience with Hadoop SQL interfaces, including Hive and Impala
- 2+ years of experience developing solutions using Spark
- Strong statistical analysis skills
- Strong systems background, preferably including Linux administration
- Linux/Unix scripting experience (bash, tcsh, zsh, python, etc.)
- Experience with DevOps tools such as Salt and Puppet as part of a CI/CD development and deployment process
- Demonstrated ability to troubleshoot and conduct root-cause analysis
Experience with the below (not required, but definitely desired):
- Developing with Apache Kafka
- Java, SQL, and Python development skills
- Containerization and Docker
- OSS scheduling tools, preferably Luigi
- Developing solutions in the machine learning space, with an emphasis on change/anomaly detection