Maniganda | DevReady


Maniganda

Chennai, Tamil Nadu, India

Maniganda is an AWS-certified data engineer with 10+ years of strong experience in and understanding of big data ecosystems, cloud service offerings, and distributed file systems. He has technical expertise in the extraction, compaction, cleansing, enrichment, ingestion, and modeling of very large data into information vital to the business. His latest project focused on big data ecosystem components using modern technologies such as Spark Core, Spark SQL, PySpark, Kafka, HBase, Airflow, the ELK stack, NoSQL, MapReduce, Hadoop, Hive, Sqoop, Oozie, and more.

Skills (by years of experience)

ELK, Kafka, Airflow, Spark SQL, Spark, Hive, Big Data, NoSQL, AWS, PostgreSQL, MySQL, Azure, PySpark, Python, Sqoop, Oozie, Scala, Jenkins, Core Java, Hadoop, HBase, MongoDB, ORC, HDFS, Parquet, CSV
Developer Personality

  • Independent vs. Collaborative
  • Trailblazer vs. Conservative
  • Generalist vs. Specialist
  • Planner vs. Doer
  • Idealist vs. Pragmatist
  • Abstraction vs. Control
Feature Experience (rated Moderate / Extensive / Expert)

  • ETL Pipeline design/implementation
  • Data Modeling
  • Data Warehousing
  • Cloud Migration
Cultural Experience (rated Moderate / Extensive / Expert)

  • Finance
  • eCommerce
  • Digital Communication
  • Insurance
Portfolio

Impetus Technologies Pvt. Ltd.

Lead Data Engineer

Work Experience: 2019-Present

A use case implementing the Kafka Schema Registry to manage evolving application schemas and turn ingestion into a plug-and-play style of architecture.
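
As a rough illustration of this pattern, the sketch below registers an Avro schema with a Schema Registry and produces a record through it, assuming the confluent-kafka Python client; the topic name, schema, and endpoint URLs are hypothetical placeholders rather than details of the actual project.

```python
# Sketch: producing Avro records through a Kafka Schema Registry so the schema
# can evolve independently of downstream consumers.
# Assumes the confluent-kafka Python client; topic, schema, and URLs are illustrative.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

schema_str = """
{
  "type": "record",
  "name": "UserEvent",
  "fields": [
    {"name": "user_id", "type": "string"},
    {"name": "action",  "type": "string"}
  ]
}
"""

# The registry stores and versions the schema; compatibility rules guard evolution.
registry = SchemaRegistryClient({"url": "http://localhost:8081"})
value_serializer = AvroSerializer(registry, schema_str)

producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "value.serializer": value_serializer,
})

producer.produce(topic="user-events", value={"user_id": "u1", "action": "login"})
producer.flush()
```

Because consumers fetch the writer's schema from the registry by ID, compatible changes (such as adding optional fields) can be rolled out on the producer side without breaking existing consumers, which is what makes the setup feel plug-and-play.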

  • Design and implement ETL pipelines
  • Implement a Schema Registry to handle evolving application schemas
  • Build data warehouses and data models for the project
  • Migrate data from on-premise systems to the AWS cloud
  • Lead and mentor the team
  • Conceptualize, design, develop, and productize ETL pipelines using a big data stack of Python, Scala, Spark SQL, Hive, and HBase (see the ETL sketch after this list)
  • Develop Spark code in Scala or Python, working with different file formats: ORC, Parquet, and CSV
  • Analyze Spark job performance and tune it with appropriate optimization techniques
  • Handle data schema and migration activities as feature requirements evolve
  • Research and adopt new tools and libraries as use cases require
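
The sketch below illustrates the general shape of such a pipeline in PySpark: read raw CSV, cleanse and enrich with Spark SQL, and write columnar Parquet (ORC would be a one-line change). The paths, column names, and partition key are illustrative assumptions, not the project's actual schema.

```python
# Minimal PySpark ETL sketch: CSV in, Spark SQL transform, partitioned Parquet out.
# Paths, columns, and the partition key are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files with a header row and inferred types.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/events/")  # hypothetical input path
)

# Transform: cleanse and enrich with Spark SQL.
raw.createOrReplaceTempView("events")
enriched = spark.sql("""
    SELECT
        event_id,
        user_id,
        CAST(event_ts AS timestamp)          AS event_ts,
        to_date(CAST(event_ts AS timestamp)) AS event_date,
        upper(trim(country_code))            AS country_code
    FROM events
    WHERE event_id IS NOT NULL
""")

# Load: write columnar output partitioned by date for efficient downstream reads.
(
    enriched.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")  # hypothetical output path
)

spark.stop()
```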

Softcrylic Technologies Pvt Ltd

Lead Engineer

Work Experience: 2016-2019

A project to migrate an on-premise solution to a serverless approach in Microsoft Azure. Early on, the work was handled manually by a team of four; Maniganda proposed the serverless design and successfully productized it in Azure.

  • Solution design and implementation of ETL pipelines
  • Migration of the data project to the Azure cloud
  • Lead and mentor the team
  • Research and adopt new technologies and libraries based on use case needs
  • Designed end-to-end ETL pipelines using Spark and Python
  • Loaded, cleansed, validated, and transformed data across the different layers of the ETL process
  • Imported and exported data between HDFS and Hive using Sqoop jobs
  • Performed data enrichment activities such as filtering, pivoting, format modeling, sorting, and aggregation through Spark transformations
  • Optimized Hive queries by modeling the data with partitioning and bucketing techniques (see the sketch after this list)
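
As a rough sketch of the partitioning and bucketing optimization in the last bullet, the PySpark snippet below persists a Hive table partitioned by date and bucketed on a join key; the table names, columns, and bucket count are hypothetical, not the project's actual schema.

```python
# Sketch: persisting a Hive table with partitioning and bucketing so that queries
# filtering by date or joining on customer_id scan and shuffle less data.
# Table names, columns, and the bucket count are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-partition-bucket-sketch")
    .enableHiveSupport()            # requires a configured Hive metastore
    .getOrCreate()
)

orders = spark.table("staging.orders")  # hypothetical source table

(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")      # one directory per date -> partition pruning
    .bucketBy(32, "customer_id")    # fixed buckets -> cheaper joins/aggregations on this key
    .sortBy("customer_id")
    .format("parquet")
    .saveAsTable("curated.orders")  # hypothetical target table
)

spark.stop()
```

Partition pruning lets date-filtered queries skip whole directories, while bucketing on the join key can spare Spark and Hive a full shuffle when joining or aggregating on that column.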

Pramati Technologies Pvt. Ltd.

Senior Engineer

Work Experience: 2014-2016

Developed code and maintained continuous delivery through Jenkins pipelines; fixed functional and performance defects.


Sensiple Software Solutions Pvt. Ltd.

Associate Engineer


Work Experience: 2012-2014

Worked on new modules and features following coding and process best practices; prepared POCs and provided technical support for production issues.

