
Naveen

Suwanee, Georgia

Naveen is an ETL Architect with 17+ years of experience in data warehousing and ETL projects and as a systems analyst. He has worked across many domains and in a wide variety of team environments around the world. Naveen has strong accomplishments in the design, development, architecture, analysis, and implementation of relational database and enterprise data warehousing systems using IBM InfoSphere DataStage, Oracle, DB2 UDB, SQL Server, Teradata, Netezza, SQL, and PL/SQL. He also has three years of experience with Talend, Amazon AWS, Microsoft Azure, Snowflake, Qlik Replicate/Compose, Python, MongoDB, Kafka, and Java (basic).

Skills
AWS
Talend (ETL)
InfoSphere
IBM DataStage (ETL)
Salesforce
Azure
Control-M
Unix, Linux
Qtest
Snowflake
PeopleSoft
Linux
S3
MongoDB
Redhat
AIX
WinSQL
Python
Hadoop
Oracle
Qlik
Cloudera
DB2
Sqoop
HDFS
SQL Server
Hive
Toad
Cassandra
Windows NT
Sybase
SQL
Shell Script
Autosys
Teradata
Cognos
Maestro
SAP
XML
PowerBI
Tableau
NoSQL
Developer Personality

Independent vs. Collaborative

Trailblazer vs. Conservative

Generalist vs. Specialist

Planner vs. Doer

Idealist vs. Pragmatist

Abstraction vs. Control

Feature Experience

Data Warehousing/Integration

Data Analytics/Modeling

Data Migration

Performance Tuning/Debugging

Cultural Experience

Telecom

Transportation

Insurance

Finance

Portfolio

VSP Global

ETL Architect/Lead

Work Experience : 2020 - present

Roles & Responsibilities:

  • Work on multiple projects simultaneously: Siebel-to-Salesforce conversion/migration and Netezza-to-Snowflake migration.
  • Work directly with SMEs to design, build, convert, and transform data from Siebel to Salesforce.
  • Work on the architecture and data model design for the conversion and integration/migration projects.
  • Work in an Agile methodology with two-week sprint stories on both the conversion and migration projects.
  • Cleanse Siebel data in SQL Server using ETL and clean data from various sources.
  • Work with the Salesforce team on data issues and data-cleansing rules while converting data from Siebel to Salesforce.
  • Work with the DataStage Salesforce connectors to load data into Salesforce using bulk and real-time loads.
  • Work on performance optimization for the data conversion project while loading complex Task and Case data into Salesforce.
  • Design ETL procedures that ensure conformity, compliance with standards, and absence of redundancy; translate business rules and functional requirements into ETL procedures.
  • Involved in every phase of the project, including task prioritization, planning, estimating, and quality assurance.
  • Work individually and with the team on the whole SDLC across multiple projects.
  • Work within a team environment as team lead, architect, designer, developer, and team player.
  • Transform data while copying it from Netezza and from CSV and XML files into Snowflake.
  • Migrate Account, Contact, Agreement, Task, and Case data from Netezza and Salesforce to Snowflake.
  • Work on POCs for Matillion and other open-source options for cloud data migration.
  • Size the virtual warehouses based on workloads; work on query performance tuning, events, lake-house patterns, and search optimization.
  • Create one-time mappings for the conversion project using an insert-and-update strategy for Salesforce.
  • Create Snowpipes to ingest data staged in Amazon S3 into Snowflake (see the sketch after this list).
  • Create data pipelines and modern ways of automating them using cloud-based testing; document implementations so others can easily understand the requirements, implementation, and test conditions.
  • Work on multiple versions of DataStage using Multi-Client Manager.
  • Work on materialized views in Snowflake.
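
For context, a minimal sketch of the Snowpipe pattern described above, assuming a hypothetical external stage (@vsp_s3_stage), named file format (csv_std), and target table (crm_tasks); the actual object names used on the project are not listed in this résumé.

```sql
-- Minimal sketch: assumes the external stage @vsp_s3_stage already points at the
-- S3 bucket and a named CSV file format (csv_std) exists. Names are hypothetical.
CREATE OR REPLACE PIPE crm_task_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO crm_tasks
  FROM @vsp_s3_stage/tasks/
  FILE_FORMAT = (FORMAT_NAME = 'csv_std');

-- One-time bulk COPY for the initial Netezza extract staged to the same bucket.
COPY INTO crm_tasks
  FROM @vsp_s3_stage/initial_extract/
  FILE_FORMAT = (FORMAT_NAME = 'csv_std')
  ON_ERROR = 'CONTINUE';
```

Once the pipe exists and S3 event notifications are configured, newly arrived files trigger the COPY automatically.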

Environment: IBM InfoSphere suite 11.7/11.5, Salesforce Cloud, Salesforce connector, Netezza, SQL Server, Oracle 11g, Multi-Client Manager, WinSCP 5.13.9, PuTTY, qTest, Matillion, Snowflake Cloud Data Warehouse, AWS, AWS S3, WinSQL.


SpartanNash

ETL Architect/Developer


Work Experience : 2016-2020

Roles & Responsibilities:

  • Initially built the parallel environment from scratch: stood up new DataStage 9.1 parallel servers with the help of UNIX admins, including node configuration, DB connections, ODBC connections, and client installations.
  • Worked and coordinated directly with end users to gather requirements for the Retail, Food Distribution, and MDV divisions and convert them into technical design documents.
  • Worked on data modeling with DBAs for the critical Sales, Customer, and Item data.
  • Involved in all phases of the project, including task prioritization, planning, estimating, and quality assurance.
  • Worked individually on the whole SDLC across multiple projects for Food Distribution (FD), Spartan Retail, and MDV.
  • Worked within a team environment as a team player.
  • Initially worked on DataStage 8.5 and later upgraded to DataStage 9.1 and then to 11.7.
  • Converted all the 8.5 server jobs to 9.1 parallel jobs.
  • Transformed data while copying CSV, JSON, and XML files into Snowflake (see the sketch after this list).
  • Worked on sizing the virtual warehouses based on workloads, query performance tuning, events, lake-house patterns, and search optimization.
  • Worked with most of the existing stages; stages new to this project included the MongoDB connector on 11.7, the JDBC connector, the Snowflake connector, and the Unstructured Data stage.
  • Helped team members improve the performance of the new server-to-parallel conversion jobs using DataStage and database best practices.
  • Worked extensively on creating a new dimensional model.
  • Worked extensively on change data capture for multiple tables when loading data into DB2.
  • Worked on multiple versions of DataStage using Multi-Client Manager.
  • Worked on multiple POCs for Talend and Qlik (Replicate & Compose).
  • Converted sample jobs from DataStage to Talend and loaded them into DB2 and Snowflake for use cases.
  • Worked with various Talend components, such as Processing, File, Logs & Errors, Miscellaneous, and Connectors, to build the conversion jobs from DataStage to Talend.
  • Worked in DataStage 11.7 with new stages such as the Hierarchical stage to parse and transform XML data and load it into Snowflake using the Snowflake connector.
  • Worked closely with and supported UNIX administrators while installing DataStage 11.7.
  • Worked on deployment of jobs and coordinated with the Control-M team to schedule DataStage jobs.
  • Worked on production support for old and new jobs.
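
For illustration, a minimal sketch of loading semi-structured JSON into Snowflake and shredding it into a relational target; the stage, table, and column names are hypothetical, not SpartanNash objects.

```sql
-- Hypothetical names throughout: an internal stage @item_stage, a VARIANT
-- landing table, and a relational item dimension.
CREATE OR REPLACE TABLE item_raw (payload VARIANT);

COPY INTO item_raw
  FROM @item_stage/json/
  FILE_FORMAT = (TYPE = 'JSON');

-- Shred the semi-structured payload into typed relational columns.
INSERT INTO item_dim (item_id, item_desc, category)
SELECT payload:itemId::NUMBER,
       payload:description::STRING,
       payload:category::STRING
FROM   item_raw;
```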

Environment: IBM InfoSphere suite 11.7/11.5/9.1/8.5.0 (Server Edition), IBM DB2, Oracle 11g, SQL Server, Informix DB, Netezza, Multi-Client Manager, WinSCP 5.13.9, PuTTY, Advanced Query Tool 10.0.5j (AQT), MongoDB 4.1, Talend 7.1, Snowflake Cloud Data Warehouse, Microsoft Azure Catalog, Python, Qlik Replicate/Compose, Control-M, IBM AIX 7100-05, IBM Change Manager (Notes).


Norfolk Southern

Sr. ETL Architect

Work Experience : 2013-2016

Roles & Responsibilities:

  • Worked from scratch on Requirements gathering for multiple projects simultaneously.
  • Worked with direct clients to gather requirements for Automotive and Inter-Modal and convert them into Mapping Documents.
  • Communicated and coordinated phase wise project requirements with project management in terms of personnel and time resources for efficient project implementation.
  • Involved in each phase of the project like task prioritization, planning, estimating, & quality assurance.
  • Worked in parallel on multiple projects such as Automotive, Inter-Modal (IM), Coal, Industrial Products (IP), Crew, Damage Prevention (DP), and Service Measurements (SM).
  • Initially worked with DataStage 8.5, later upgraded to DataStage 9.1, and moved all the jobs to the new version.
  • Worked with all the usual stages; the new stage used in this project was FTP Enterprise.
  • Monitored offshore team work with onshore resources and performed day-to-day peer reviews.
  • Conducted daily scrum to gather updates from the whole team.
  • Worked on editing JCL code and ChangeMan on mainframe servers with the mainframe teammates.
  • Implemented real-time change data capture for the front-end NS onsite-scanner application.
  • Worked extensively on change data capture for data loading into DB2.
  • Extensively sourced GDG files from the mainframe server using the FTP stage to pull data into Teradata staging.
  • Worked on two versions of DataStage using Multi-Client Manager.
  • Developed new BTEQs and ETL views, changed the existing BTEQs, and called the scripts through DataStage jobs.
  • Worked on a POC project for Hadoop and with the Hadoop ecosystem in the DEV and QA environments.
  • Wrote new shell scripts and modified existing scripts to call DataStage jobs and BTEQs.
  • Worked on Teradata bulk utilities such as MultiLoad and FastLoad.
  • Worked on Teradata ETL views using MERGE statements to improve performance (see the sketch after this list).
  • Worked with the DataStage admin to upgrade to InfoSphere 11.5, with plans to run natively on Hadoop servers.
  • Worked for the first time with the Oracle Exadata database, which is much faster than conventional Oracle deployments.
  • Improved performance in DataStage jobs using the Teradata connector bulk-load option.
  • Worked extensively with the ZEKE scheduling tool to call DataStage jobs through Linux scripts for the daily run.
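
For illustration, a minimal sketch of the Teradata MERGE pattern used to refresh a target from staging; the table and column names are hypothetical, not Norfolk Southern objects.

```sql
-- Hypothetical staging and target tables; the join is on the target's primary
-- index column, as Teradata MERGE requires.
MERGE INTO im_shipment AS tgt
USING im_shipment_stg AS src
  ON tgt.shipment_id = src.shipment_id
WHEN MATCHED THEN
  UPDATE SET status        = src.status,
             last_event_ts = src.last_event_ts
WHEN NOT MATCHED THEN
  INSERT (shipment_id, status, last_event_ts)
  VALUES (src.shipment_id, src.status, src.last_event_ts);
```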

Environment: IBM InfoSphere DataStage 9.1, 8.5.0 (InfoSphere 11.5 in progress), Hadoop ecosystem 2.4.0, Cloudera 4.3/5.3, HDFS, MapReduce, Hive, Cassandra 2.1.6, InfoSphere Data Replication (CDC) 10.2.1, Teradata V14.1/V15, Teradata utilities, BTEQ, Oracle Exadata X4-8, IBM DB2, Oracle 11g, SQL Server, Mainframe ZEKE, Teradata SQL Assistant, Oracle SQL Developer, Multi-Client Manager, Red Hat Linux 6.2, WinSCP, HPSM, Serena PVCS Version Manager, z/OS, JCL, OBIEE 11.1.1.7.


University of Arizona

Sr. IBM DataStage Consultant


Work Experience : 2010-2013

Roles & Responsibilities:

  • Involved in translating the business requirements into High Level design and developed ETL logic based on these requirements.
  • Liaised with University stakeholders during high-level review sessions to derive and execute action plans, meeting deadlines and standards.
  • Involved in contributing to project task prioritization, planning, estimating, and quality assurance at every stage of a project.
  • Worked simultaneously on various projects such as Kuali Financial Systems (KFS), Human Resources systems (HR), Kuali Coeus (KC), and Student Systems.
  • Worked on cleansing data from PeopleSoft transaction systems to create the staging area and the dimensions and facts for Kuali systems.
  • Worked from scratch on implementing the University's Kuali applications such as KFS and KC.
  • Worked on a major conversion project from 7.5 server jobs to 8.5 parallel jobs.
  • Coordinated with the offshore team on the conversion project.
  • Worked with business analysts using the InfoSphere FastTrack tool for requirements and with data analysts using the InfoSphere Data Architect tool.
  • Worked with the XML stages (XML Input and XML Output) to read data from XML files and load it into tables.
  • Worked on various versions of DataStage using Multi-Client Manager.
  • Used Control-M to schedule the jobs for the daily run.

Environment: IBM InfoSphere DataStage 8.0.1, 8.1.0, 8.5.0, DataStage 7.5.2, InfoSphere FastTrack, InfoSphere Data Architect, Oracle 10g/11g, SQL Server, Control-M, PeopleSoft Pack, Toad, SQL Developer, Multi-Client Manager, Red Hat Linux 5.7, WinSCP, XML (IBM InfoSphere DataStage 9.1 demo project).


Verizon

Sr. IBM DataStage Consultant


Work Experience : 2010

Roles & Responsibilities:

  • Gathered the business requirements mainly from end users and created the detailed module specifications document and source to target mapping Document.
  • Involved in defining technical and functional specifications for ETL process.
  • Involved in contributing to project task prioritization, planning, estimating, & quality assurance at every stage of a project.
  • Worked mainly with DataStage Administrator to add new projects, environment variables, etc.
  • Extracted data from SQL Server and XML, transformed it, and loaded it into Teradata.
  • Extensively used the 7.5 Quality Stage stages (Investigate, Standardize, Match, and Survive) for aEDW-CDI data cleansing and stored the results in a staging area.
  • Migrated DataStage jobs from version 7.5.1 to 8.0.1.
  • Worked extensively with the Teradata team in all phases of the project, including requirements, coding, development, and UAT testing.
  • Coordinated with Teradata developers on BTEQ scripts, MLoads, and FastExports.
  • Worked extensively with DataStage Administrator, Designer, and Director.
  • Worked with the SAP ABAP team to fetch the legacy data files.
  • Migrated DataStage code and the UNIX directory structure from development to UAT and PROD environments.
  • Designed the DataStage ETL jobs applying the business rules provided by the SAP process.
  • Developed new and made changes to existing UNIX shell scripts to automate the Data Load processes to the target Data warehouse.
  • Involved in system testing and UAT.
  • Involved with ESPX Scheduling team for scheduling the jobs for daily and weekly run.

Environment: IBM InfoSphere DataStage 8.0.1, DataStage 7.5.2, SAP R/3, Quality Stage 7.5.2, SQL Server, Teradata 12, Teradata utilities (MLoad, BTEQ, FastExport), SQL Assistant, ESPX, HP-UX 11.11.


Travelers Insurance

IBM DataStage Developer


Work Experience : 2008-2009

Responsibilities:

  • Worked closely with Project lead/Manager, Architects, and Data Modelers to understand the business process and functional requirements
  • Designed and developed Enterprise Edition jobs based on the specification.
  • Developed and supported the extraction, transformation, and load (ETL) process for a data warehouse from various data sources using Ascential DataStage Designer.
  • Designed and developed parallel jobs to extract, cleanse, transform, and load data into the target tables using DataStage Designer.
  • Used Quality Stage for investigation, standardization, and matching of source data.
  • Performed lookups on the dimension tables to check referential integrity (RI): codes from the conversion lookups are matched for referential integrity in the dimension and the key values are pulled from the dimension table, while records that fail the RI lookup are collected in an error table (see the sketch after this list).
  • Developed logical and physical source-to-target mapping documents for the data warehouse and data marts to translate business rules into technical specifications.
  • Used local and shared containers to increase object/code reusability and to increase system throughput.
  • Imported/exported source code and executables using the DataStage Designer client.
  • Migrated DataStage code and the UNIX directory structure from development to QA and PROD environments.
  • Performed debugging of SQL scripts and developed scripts ranging from simple to complex to implement business logic.
  • Involved in unit testing, system testing, UAT and integration testing.
  • Participated in the review of Technical, Business Transformation Requirements Document.
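
For illustration, a minimal SQL sketch of the RI-lookup pattern described above (matched rows pick up the dimension key, failed rows go to an error table); the table and column names are hypothetical, not Travelers objects.

```sql
-- Hypothetical staging, dimension, fact, and error tables.
-- Rows whose policy code exists in the dimension pick up the surrogate key.
INSERT INTO claim_fact (policy_key, claim_amt)
SELECT d.policy_key, s.claim_amt
FROM   claim_stg s
JOIN   policy_dim d
  ON   d.policy_code = s.policy_code;

-- Rows that fail the RI lookup are routed to an error table instead.
INSERT INTO claim_err (policy_code, claim_amt, err_reason)
SELECT s.policy_code, s.claim_amt, 'RI lookup failed'
FROM   claim_stg s
LEFT JOIN policy_dim d
  ON   d.policy_code = s.policy_code
WHERE  d.policy_key IS NULL;
```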

Environment: Ascential DataStage 7.5.2 (Designer, Director, Manager, and Administrator), Quality Stage, Cognos, Sybase, Teradata, BTEQ, DB2 UDB, XML, SQL Assistant, Maestro, shell scripting, Windows 2000/NT, and AIX UNIX.


General Motors

Programmer Analyst / DataStage Developer


Work Experience : 2007

Project:  OnStar Vehicle Diagnostics Dashboard

Responsibilities:

  • Involved in gathering business requirements for reports and cubes, and created standard requirements-gathering documents.
  • Involved in preparing technical designs/specifications for data extraction, transformation, and loading.
  • Provided technical/user documentation and training.
  • Extensively used DataStage Designer to develop various jobs to extract, cleanse, transform, integrate, and load data into Oracle target tables.
  • Worked with DataStage Manager to import/export metadata, jobs, and routines from the repository, and created data elements.
  • Scheduled the server jobs using DataStage Director, which are controlled by the DataStage engine, and used it for monitoring and for performance statistics of each stage.
  • Involved in designing source-to-target mappings from sources to operational staging targets using a star schema; implemented logic for slowly changing dimensions.
  • Improved job performance through performance tuning.
  • Extensively wrote user-defined SQL to override the auto-generated SQL queries in DataStage.
  • Involved in unit testing, system testing and integration testing.
  • Participated in the review of Technical, Business Transformation Requirements Document.

Environment: DataStage 7.0, 7.5.1(Designer, Director, Manager and Administrator), Oracle 9i, Sybase, SQL, Shell Script, TOAD 7.3, Autosys, HP UNIX.

 


Kushal Software Ltd.

Programmer Analyst / BI Developer


Work Experience : 2005-2007

Responsibilities:

  • Responsible for the dimensional data modeling and populating the business rules into the repository for metadata management.
  • Developed jobs using Ascential DataStage 7.0 to extract and load relational data into Oracle 9i and DB2 UDB databases.
  • Implemented surrogate keys using the Key Management functionality for newly inserted rows in the data warehouse (see the sketch after this list).
  • Worked extensively on different types of stages, such as Sequential File, Hashed File, Aggregator, BASIC Transformer, Sort, and Server Containers, for developing jobs.
  • Performed data manipulation using BASIC functions in DataStage.
  • Designed complex job control processes to manage a large job network.
  • Implemented performance-tuning techniques along various stages of the ETL process.
  • Created master controlling sequencer jobs using the DataStage Job Sequencer.
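
For illustration only, a database-side sketch of the same surrogate-key idea (the project itself used DataStage's Key Management functionality rather than SQL); the sequence and table names are hypothetical.

```sql
-- Hypothetical Oracle-style sequence and tables: new customers get the next
-- surrogate key, customers already present in the dimension are skipped.
CREATE SEQUENCE customer_dim_seq START WITH 1 INCREMENT BY 1;

INSERT INTO customer_dim (customer_key, customer_nbr, customer_name)
SELECT customer_dim_seq.NEXTVAL, s.customer_nbr, s.customer_name
FROM   customer_stg s
WHERE  NOT EXISTS (SELECT 1
                   FROM   customer_dim d
                   WHERE  d.customer_nbr = s.customer_nbr);
```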

Environment: Ascential DataStage 7.0, SQL Server 2000, Oracle, Toad, Windows NT 4.0/2000, IBM AIX 4.2.


Accomplishments Summary


Work Experience : 2015-2020
  • Seventeen-plus years of experience in architecture, analysis, design, development, and implementation of relational database and enterprise data warehousing systems (using IBM InfoSphere DataStage, Oracle, DB2 UDB, SQL Server, and Teradata) and of cloud data warehousing (Snowflake, enterprise data lakes, data ingestion, data migration, and data-processing pipelines).
  • Around three years of experience with Amazon AWS, Microsoft Azure, Snowflake, Talend, Qlik Replicate/Compose, Python, MongoDB, and Kafka.
  • Hands-on work experience with Snowflake multi-cluster environments: creating Snowpipes, COPY for bulk loads, zero-copy cloning, Time Travel, RBAC controls, resource monitors, sizing the virtual warehouses based on workloads, query performance tuning, lake-house patterns, events, and search optimization.
  • Experience with Snowflake utilities, SnowSQL, Snowpipe, and migration from DB2 to Snowflake.
  • One-plus years of experience in Hadoop architecture and administration with various components such as Apache Hadoop, HDFS, MapReduce, and the Hadoop ecosystem (Pig, Hive, HBase, Sqoop), plus NoSQL Cassandra.
  • Working experience and knowledge of Amazon AWS S3 and Microsoft Azure blob containers (block, page, and append blobs) for data ingestion into the EDW on Snowflake on Azure.
  • Strong working experience with IBM InfoSphere DataStage 8.0.1/8.1.0/8.5.0/9.1/11.5/11.7, Ascential DataStage 7.5.2/7.5.1/7.0, SQL, PL/SQL, BTEQ, MultiLoad, stored procedures, and triggers; performed debugging, troubleshooting, and performance tuning.
  • Expertise in translating business requirements into data warehouse and data mart designs and developing ETL logic based on those requirements using DataStage.
  • Expert in dimensional modeling, star schema modeling, snowflake modeling, fact and dimension table design, the Kimball methodology, and physical and logical data modeling.
  • Worked on various operating systems such as UNIX, AIX, Linux, Sun Solaris, and Windows.
  • Designed and developed jobs using Parallel Extender to split bulk data into subsets and dynamically distribute the data to all available nodes for the best job performance.
  • Proficiency in data warehousing techniques such as Slowly Changing Dimension Type II handling, surrogate key assignment, and change data capture (see the sketch at the end of this list).
  • Worked extensively on the XML Pack (XML Input, XML Output, XML Transformer), SAP R/3 Pack (ABAP, IDoc Extract, IDoc Load, and BAPI stages), and SAP BW Packs for DataStage SDC, Shared Containers (Server, Parallel), and most of the connectors for various databases.
  • Very good exposure to InfoSphere real-time CDC, now known as InfoSphere Data Replication.
  • Good working knowledge of various databases such as Oracle 11g/10g/9i/8i/7.x, DB2 UDB, Netezza, SQL Server, Informix, and Teradata.
  • Good exposure to Teradata utilities (TPT, MultiLoad, FastLoad, and FastExport).
  • Expert in unit testing, system integration testing, UAT, implementation, deployment, maintenance, and performance tuning.
  • Unique ability to understand long-term project development issues at all levels, from interpersonal relationships to the details of coding scripts, with strong analytical, organizational, presentation, and problem-solving skills.
  • Experience in developing and monitoring nightly batch jobs using UNIX cron, AWK, ASG-Zeke Scheduler, Control-M, ESPX, and AutoSys.
  • Good knowledge of data analytics tools: Tableau, Cognos, Business Objects, and Power BI.
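
As a worked example of the SCD Type II technique referenced above, a minimal SQL sketch with hypothetical table, sequence, and column names: expire the current row for each changed business key, then insert a new current row.

```sql
-- Hypothetical customer dimension with effective/expiry dates and a current-row
-- flag; customer_chg holds changed records detected by CDC, and customer_dim_seq
-- is an Oracle-style sequence assumed to already exist.
UPDATE customer_dim
SET    expiry_dt   = CURRENT_DATE,
       current_flg = 'N'
WHERE  current_flg = 'Y'
AND    customer_nbr IN (SELECT customer_nbr FROM customer_chg);

INSERT INTO customer_dim
       (customer_key, customer_nbr, customer_name,
        effective_dt, expiry_dt, current_flg)
SELECT customer_dim_seq.NEXTVAL, c.customer_nbr, c.customer_name,
       CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM   customer_chg c;
```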
