Data Engineer - Hadoop Eco System

Job Title: Data Engineer - Hadoop Eco System
Contract Type: Permanent
Location: Sydney
Salary: 140,000
REF: 2823932
Job Published: over 1 year ago

Job Description

Big Data, Hadoop, Kafka, HDFS, Spark, Pig, Hive, ZooKeeper, Impala, Storm, RabbitMQ, Flume, Architect

An excellent opportunity has arisen for a Big Data Engineer to work for a software company based in Sydney CBD.

The ideal candidate will have outstanding knowledge of technologies for collecting, storing, processing and analysing huge data sets in near real time. The role involves the design, development, implementation and support of a new real-time data platform, which includes data streams, a data lake and the enterprise data warehouse.

Key skills/experience required:
  • Proficient understanding of distributed computing principles.
  • Expert knowledge in Java/Scala programming languages.
  • Expert knowledge of the core Hadoop distribution stack, including Hadoop v2, HDFS, Spark, Pig, Hive, Impala and ZooKeeper.
  • Expert knowledge of stream-data processing, including Kafka, Kafka Streams, Spark Streaming and Storm.
  • Expert knowledge of data integration and messaging platforms, including Kafka, Apache NiFi, Flume, Sqoop and RabbitMQ.
  • Good understanding of relational databases such as Oracle, SQL Server and MySQL.
  • Good understanding of NoSQL databases such as HBase, Cassandra and MongoDB.
  • Good knowledge of data model design techniques including OLTP and Dimensional Modelling.
  • Experience with popular distributions such as Cloudera, MapR and Hortonworks.

This is a great opportunity to join the company as it enters another period of growth. After a number of significant wins, as well as opening offices in locations around the globe, they are now looking to fill several key roles in the business.

On offer is a highly competitive package, a great team culture, the latest technology, and challenging, high-profile clients and projects.