Pavankumar
Contact: +91-8151857859
Email: tdpavandata@gmail.com
Summary:
Technically sophisticated Big Data professional with 4+ years of total experience.
Around 3 years of experience in Hadoop administration and its ecosystem components:
Sqoop, Hive, Oozie, Zookeeper, YARN, Spark, Impala and NiFi.
Hands-on experience in installing, configuring, supporting and managing Hadoop clusters.
Hadoop cluster capacity planning, performance tuning, cluster monitoring and troubleshooting.
Proficient in using Cloudera Manager, an end-to-end tool for managing Hadoop operations.
Commissioning and decommissioning nodes on a running Hadoop cluster.
Expertise in Hadoop cluster planning and providing summarized reports.
Backup configuration and recovery from a NameNode failure.
Installation of various Hadoop ecosystem components and Hadoop daemons.
Knowledge of cluster coordination services using a Zookeeper quorum.
Experience monitoring and troubleshooting Linux issues across CPU, OS, storage and network.
Good experience designing, configuring and managing backups for Hadoop data.
Configured property files such as core-site.xml, hdfs-site.xml and mapred-site.xml based on
job requirements.
Expertise in Hadoop job schedulers such as FIFO, the Fair Scheduler and the Capacity Scheduler.
In-depth understanding of Hadoop architecture and its components, including HDFS,
JobTracker, TaskTracker, NameNode, DataNode and MapReduce concepts.
Knowledge of Rack Awareness and Speculative Execution.
1.6 years of experience as a WebSphere administrator.
Excellent knowledge of Hortonworks administration.
Ability to play a key role in the team and communicate across teams.
Self-motivated and organized, with excellent interpersonal, communication and presentation skills.
Believe in continuous learning and possess an innovative approach.
Capable of adapting to new and fast-changing technologies.
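The property-file configuration mentioned above can be illustrated with a minimal, hypothetical hdfs-site.xml fragment; the values and paths here are illustrative only, not taken from the clusters described:

```xml
<!-- Hypothetical hdfs-site.xml fragment; values are illustrative -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- default block replication factor -->
    <value>3</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <!-- nodes listed in this file are decommissioned on refreshNodes -->
    <value>/etc/hadoop/conf/dfs.exclude</value>
  </property>
</configuration>
```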
ACADEMICS:
ORGANIZATIONS WORKED:
Currently working as a Hadoop Administrator at Tech Mahindra.
TECHNICAL PROFICIENCY:
Hadoop Framework : HDFS, Hive, Sqoop, Zookeeper, Oozie, Impala, NiFi.
Hadoop Distribution : Cloudera Manager, Hortonworks.
Databases : MS SQL Server 2008 R2 and NoSQL (HBase).
Operating Systems : Windows, Linux.
Scripting Languages : Shell scripting.
Packages : MS Office (Word, Excel).
PROFESSIONAL EXPERIENCE:
#Project 1:
Role: Hadoop Administrator
Job Responsibilities:
Collaborating with multiple teams on the design and implementation of Hadoop clusters.
Responsible for commissioning and decommissioning nodes in clusters.
Importing and exporting data into HDFS using Sqoop.
Maintaining cluster health and HDFS space for better performance.
Configured property files such as core-site.xml, hdfs-site.xml and mapred-site.xml based on
job requirements.
Moving data efficiently between clusters.
Rebalancing the Hadoop cluster.
Working on NameNode high availability, customizing Zookeeper services.
Allocating name and space quotas to users in case of space problems.
Participating in 24x7 support of the Big Data environment.
Installing various Hadoop ecosystem components and Hadoop daemons.
Involved in installing and configuring Kerberos for authentication of users and Hadoop daemons.
Experienced in managing and reviewing Hadoop log files.
Environment: HDFS, Hive, Impala, Spark, Sqoop, Oozie and NIFI.
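The log-review responsibility above can be sketched with a minimal shell one-liner; the log path and contents below are illustrative stand-ins, not from an actual cluster:

```shell
#!/bin/sh
# Hypothetical sketch of log review: count ERROR lines in a daemon log.
# /tmp/namenode.log stands in for a real path such as
# /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<host>.log (illustrative).
LOG=/tmp/namenode.log
printf 'INFO startup complete\nERROR java.io.IOException: disk failure\nINFO checkpoint done\n' > "$LOG"
grep -c '^ERROR' "$LOG"   # prints: 1
```

In practice the same pattern feeds alerting scripts: a non-zero ERROR count triggers a notification.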
#Project 2:
Client: Capital One
Duration: Dec 2014 to Oct 2016
Role: Websphere Administrator
Job Responsibilities:
Installed IBM WebSphere Application Server 6.0.x, IBM HTTP Server 6.0.x and Plugin 6.0.x.
Applied Refresh Pack 2 and Fix Pack 27 on all WebSphere instances running on both AIX and Windows
across all environments (Testing, UAT and Production).
Created application server clusters and standalone servers; configured various other WebSphere
resources, including data sources, virtual hosts, etc.
Performed deployment of enterprise applications across all environments.
Created and configured clusters (horizontal and vertical) in UAT and Production environments.
Implemented shell scripts and Jython scripts for deployments and application monitoring.
Provided 24x7 application/infrastructure support, working in shifts.
Troubleshot various problems at different stages of development using log files.
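The shell-script monitoring mentioned above can be sketched as a minimal process check; the function name and the process name passed to it are hypothetical, not from the original engagement:

```shell
#!/bin/sh
# Hypothetical monitoring sketch: report whether a named process is running.
# check_proc and the process name below are illustrative assumptions.
check_proc() {
  if pgrep -x "$1" > /dev/null 2>&1; then
    echo "$1 RUNNING"
  else
    echo "$1 STOPPED"
  fi
}
check_proc "no_such_daemon"   # prints: no_such_daemon STOPPED
```

A cron entry running such a check every few minutes, mailing on STOPPED, is a common way to cover off-shift hours.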