
Southern California

Competitive

A global investment management company is looking to hire an experienced Big Data Administrator with strong knowledge of the Hadoop/Spark stack. They are seeking a dedicated administrator to be responsible for day-to-day operations and to work across teams with data scientists.

Responsibilities

  • Capacity planning of infrastructure
  • Performance tuning
  • Upkeep of the Hadoop ecosystem
  • Optimizing MapR clusters and databases
  • Data integration
  • Disaster recovery


Skills & Qualifications 

  • Strong UNIX/Linux skills
  • Proficient understanding of distributed computing principles
  • Management of Hadoop cluster and ecosystem, MapR or Cloudera a plus
  • Hadoop performance tuning and capacity planning 
  • Experience with Hadoop, MapReduce, HDFS, Spark, YARN
  • Experience with Spark and stream-processing systems
  • Experience with Big Data querying tools: Impala & Hive
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O 
  • Experience with Oracle BDA/Cloudera/MapR a plus
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Excellent verbal & written communication skills
  • Financial industry experience is not necessary


If you would like to be considered for the Big Data (Hadoop) Engineer position, or wish to discuss the role further, please leave your details below. Your resume will be held in confidence until you connect with a member of our team.
