Cloudera Big Data administrator, Resource Informatics Group, Reston, VA



Job description

Client Company: CareFirst FEPOC
Position Title: Cloudera Big Data administrator

Location: Reston, VA

Duration: 12-month contract with possibility of extension

Experience Summary:

This is a Cloudera Big Data administrator position, not a developer position. Experience building Cloudera clusters and setting up NiFi, Solr, HBase, and Kafka. Setting up high availability for services such as Hue, Hive, HBase REST, Solr, and Impala on all new clusters built on the BDPaaS platform. Able to write shell scripts that monitor the health of Hadoop daemon services and respond to any warning or failure conditions. Monitoring the health of all services running in the production cluster using Cloudera Manager. Accessing databases and metastore tables and writing Hive and Impala queries using Hue. Responsible for monitoring the health of the services on all clusters. Working closely with teams such as application development, security, and platform support to identify and implement the configuration changes needed on the cluster for better service performance. Experience with CDP Public Cloud is a plus.
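
As a rough sketch of the health-check scripting described above (not part of the posting's requirements), the snippet below checks that expected Hadoop daemons are running on a node and raises an alert when one is missing; the daemon list, log path, and alert address are assumed placeholders, and the mail command depends on what is installed on the host.

#!/usr/bin/env bash
# Sketch: verify expected Hadoop daemons are running and alert on failures.
# Daemon names, log path, and recipient below are assumptions for illustration.
DAEMONS="NameNode DataNode ResourceManager NodeManager"
ALERT_TO="hadoop-admins@example.com"
LOG=/var/log/hadoop-healthcheck.log

for d in $DAEMONS; do
    # jps lists running JVMs by class name; absence means the daemon is down
    if ! jps | grep -qw "$d"; then
        echo "$(date '+%F %T') WARNING: $d is not running on $(hostname)" \
            | tee -a "$LOG" \
            | mail -s "Hadoop daemon alert: $d down on $(hostname)" "$ALERT_TO"
    fi
done

# Append a brief HDFS health summary (run as a user with HDFS admin rights)
hdfs dfsadmin -report | grep -E 'Live datanodes|Under replicated' >> "$LOG"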

Required Skills:

Cloudera CDP v7.x

Apache Kafka - strong administration & troubleshooting skills

Kafka Streams API

Stream processing with KStreams & KTables

Kafka integration with MQ

Kafka broker management

Topic/offset management (see the command sketch after this list)

Apache NiFi - administration

Flow management

Registry server management

Controller service management

NiFi to Kafka/HBase/Solr integration

HBase - administration

Database management

Troubleshooting

Solr - administration

Managing logging levels

Managing shards & high availability

Collection management

Rectifying resource-intensive & long-running Solr queries
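
To illustrate the broker and topic/offset management items above, a few representative commands from the stock Kafka CLI are sketched below; the broker address, topic name, and consumer group are placeholders, and on some distributions the scripts are named kafka-topics.sh and kafka-consumer-groups.sh instead.

# Describe a topic's partitions, leaders, and replica assignments
kafka-topics --bootstrap-server broker1:9092 --describe --topic claims-events

# Create a topic with an assumed partition/replication layout
kafka-topics --bootstrap-server broker1:9092 --create --topic claims-events \
  --partitions 6 --replication-factor 3

# Inspect a consumer group's committed offsets and lag
kafka-consumer-groups --bootstrap-server broker1:9092 --describe --group claims-consumer

# Rewind the group's offsets for a topic (preview with --dry-run, apply with
# --execute; the group must be inactive for the reset to take effect)
kafka-consumer-groups --bootstrap-server broker1:9092 --group claims-consumer \
  --topic claims-events --reset-offsets --to-earliest --execute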

Additional Skills include:
Ensure the Cloudera installation and configuration are at optimal specifications (CDP, CDSW, Hive, Spark, NiFi).
Perform critical data migrations from CDH to CDP (see the DistCp sketch after this list).
Design and implement big data pipelines and automated data flows using Python/R and NiFi.
Assist and provide expertise as it pertains to automating the entire project lifecycle.
Perform incremental updates and upgrades to the Cloudera environment.
Assist with new use cases (e.g., analytics/Client, data science, data ingest and processing) and infrastructure (including new cluster deployments, cluster migration, expansion, major upgrades, COOP/DR, and security).
Assist in testing, governance, data quality, training, and documentation efforts.
Move data and use YARN to allocate resources and schedule jobs.
Manage job workflows with Oozie and Hue.
Implement comprehensive security policies across the Hadoop cluster using Ranger.
Configure and manage Cloudera Data Science Workbench using Cloudera Manager.
Troubleshoot potential issues with Kerberos, TLS/SSL, Models, and Experiments, as well as other workload issues that data scientists might encounter once the application is running.
Support the Big Data/Hadoop databases throughout the development and production lifecycle.
Troubleshoot and resolve database integrity, performance, blocking and deadlocking, replication, log shipping, connectivity, and security issues, and perform performance tuning and query optimization using monitoring and troubleshooting tools.
Create, test, and implement scripting for automation support.
Experience working with the Kafka ecosystem (Kafka brokers, Connect, ZooKeeper) in production is ideal.
Implement and support streaming technologies such as Kafka, Spark, and Kudu.
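
As an illustration of the CDH-to-CDP migration item above, one common approach is an HDFS DistCp copy between the two clusters; the NameNode hosts, ports, and paths below are placeholders, and a real migration would also need to handle Kerberos cross-realm trust, Hive metadata, and cutover validation.

# Copy a dataset from the legacy CDH cluster to the new CDP cluster.
# -update copies only changed files, -p preserves file attributes,
# -m caps the number of parallel map tasks.
hadoop distcp -update -p -m 20 \
  hdfs://cdh-namenode:8020/data/warehouse \
  hdfs://cdp-namenode:8020/data/warehouse

# Compare directory, file, and byte counts on both sides after the copy
hdfs dfs -count hdfs://cdh-namenode:8020/data/warehouse
hdfs dfs -count hdfs://cdp-namenode:8020/data/warehouse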


Employment Type: Full-time
