Data Engineer

Chicago, IL

Post Date: 05/17/2017 Job ID: 8979 Category: Data

Data Engineer – Hadoop
 
Trading today happens in a highly competitive technological landscape; the best trading idea alone doesn't cut it anymore. Only ideas enabled by robust, scalable, and fast technology win.
Do you enjoy problem solving: spotting areas for improvement, then iterating and innovating to make them better? Are you driven by curiosity and a desire to learn?
 
DATA ENGINEERING


As a data engineer, you'll build and administer data workflows in an evolving, modern Hadoop-based environment (a brief, illustrative sketch follows the list below). You'll also:
  • Develop and extend in-house data toolkits based in Python and Java.
  • Consult and educate internal users on Hadoop technologies and assist them in finding and effectively utilizing the best solutions for their problem space.
  • Improve the performance of financial analytics platforms built around the Hadoop ecosystem.
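
To give a concrete flavor of that work, here is a minimal, purely illustrative sketch in Python of the kind of Spark-on-Hadoop workflow involved. The table, column, and job names are hypothetical, not taken from the client's systems.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  # Hypothetical workflow: aggregate daily traded volume from a Hive table.
  spark = (
      SparkSession.builder
      .appName("daily_volume")       # illustrative job name
      .enableHiveSupport()           # lets the job read/write Hive tables
      .getOrCreate()
  )

  trades = spark.table("trades.executions")        # assumed Hive table
  daily_volume = (
      trades.groupBy("trade_date", "symbol")
            .agg(F.sum("quantity").alias("total_quantity"))
  )
  daily_volume.write.mode("overwrite").saveAsTable("analytics.daily_volume")

In practice, an in-house toolkit would wrap boilerplate like session configuration and table naming so that analysts can focus on the query logic itself.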

 
WHAT MAKES IT FUN?
  • Our Client is on the cutting edge of financial applications of Hadoop, processing terabytes of data daily for mission-critical trading systems.
  • They operate at the bleeding edge of technology. If something new can potentially bring an advantage, they will adopt and incorporate it.
  • The landscape is always changing, creating new and exciting challenges. What they focus on today is very different from what they focused on two years ago.
  • They believe strongly in sharing knowledge and technology between offices. Much of their technology stack is shared globally across offices, and they provide opportunities to travel between regions, both for personal growth and to assist where it will have the biggest impact.

WHO YOU ARE:
  • 3+ years of experience working with Hadoop 2 (YARN); cluster management experience preferred
  • 3+ years of experience with Hadoop SQL interfaces, including Hive and Impala
  • 2+ years of experience developing solutions using Spark
  • Experience with common data-science toolkits, Python-based preferred
  • Strong Java, SQL, and Python development skills
  • Strong statistical analysis skills
  • Strong systems background, preferably including Linux administration
  • Unix scripting experience (bash, tcsh, zsh, Python, etc.)
  • Experience with DevOps tools such as Salt and Puppet as part of a CI/CD development and deployment process
  • Demonstrated ability to troubleshoot and conduct root-cause analysis

Experience with the following (not required, but definitely desired):
  • Developing with Apache Kafka
  • Containerization and Docker
  • OSS scheduling tools, preferably Luigi (a brief sketch follows this list)
  • Developing solutions in the machine-learning space, with an emphasis on change/anomaly detection
  • Building Cube/Cube-like products
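
To make the scheduling item concrete, below is a minimal sketch of a two-step Luigi pipeline. The task names, file paths, and placeholder logic are hypothetical; the point is only to show how Luigi declares dependencies and outputs.

  import luigi

  class ExtractTrades(luigi.Task):
      # Hypothetical upstream step: land one day of raw trade data.
      date = luigi.DateParameter()

      def output(self):
          return luigi.LocalTarget(f"data/raw/trades_{self.date}.csv")

      def run(self):
          with self.output().open("w") as out:
              out.write("symbol,quantity\n")    # placeholder extract

  class DailyVolumeReport(luigi.Task):
      # Depends on the extract; a real task would aggregate it into a report.
      date = luigi.DateParameter()

      def requires(self):
          return ExtractTrades(date=self.date)

      def output(self):
          return luigi.LocalTarget(f"data/reports/volume_{self.date}.csv")

      def run(self):
          with self.input().open() as src, self.output().open("w") as out:
              out.write(src.read())             # placeholder aggregation

  if __name__ == "__main__":
      luigi.run()

Because each task declares its output, Luigi skips steps whose targets already exist, which is convenient when backfilling days of data on a shared cluster.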

 
OUR CULTURE:


Our Client is at its core a trading firm; however, they value trading and technology equally and believe that cooperation between traders and technologists is one of their great strengths.

 
Hadoop, Python, Java

Evan Pollock

