Hadoop Administrator

Job ID:
2514815
Location:
Tempe, AZ
Category:
Information Technology, Telecommunications
Salary:
$120,000.00 per year
Zip Code:
85280
Employment Type:
Full time
Posted:
11.09.2018

Job Description:

Prestigious Fortune 500 Company is currently seeking a Big Data Engineer with strong Hadoop administration experience. The candidate will work in an agile environment, interacting with multiple technology and business areas to design and develop next-generation analytics platforms and applications. The candidate will be responsible for the strategy and design of complex projects as well as hands-on coding, and will also support project planning and mentoring.

Responsibilities:

Work closely with the data science, database, network, BI, and application teams to ensure that all big data applications are highly available and performing as expected

Administer Hadoop, HDFS, YARN, Spark, Sentry/Ranger, HBase, and ZooKeeper

Design, install, and maintain big data analytics platforms (on-prem/cloud), including security, capacity planning, cluster setup, and performance tuning

Manage public and private cloud infrastructure.

Qualifications:

Deep understanding of the distributed Hadoop ecosystem, network connectivity, and I/O throughput, along with other factors that affect distributed system performance

Expert in configuring and troubleshooting all components of the Hadoop ecosystem: MapReduce, YARN, Pig, Hive, HBase, Sqoop, Flume, ZooKeeper, and Oozie (understanding of all of these)

Experience installing and configuring cluster monitoring tools such as Cloudera Manager/Ambari, Ganglia, or Nagios (at least one of these)

Hands-on scripting experience with Bash, Perl, Ruby, or Python (at least one of these)

Working knowledge of hardening Hadoop with Kerberos, TLS/SSL, and HDFS encryption

Working knowledge of Jenkins, Git, and AWS

Good understanding of automation tools (e.g., Puppet, Ansible)

Expert in configuring and troubleshooting Hadoop ecosystem components such as Spark, Solr, Scala, and Kafka

Company Info
Request Technology - Craig Johnson