Felipe is an experienced, self-driven Data Engineer with over 8 years of experience in data solutions, including 5 years with cloud technologies and 3.5 years in cloud data engineering. His technical focus is on building data platforms: developing ELT pipelines with DBT, Databricks, Airflow, and Data Factory, and delivering platform improvements in Python, such as engineering best practices, automated profiling reports, and CI/CD deployment pipelines using GitHub Actions and Azure DevOps. He is highly proficient with both AWS and Azure for building cloud data solutions. He strives to architect and build solutions with quality, manageability, and automation, using the frameworks and tools best suited to the problem, while clearly communicating the value of the proposed solution and its implementation.
Designed and implemented a new big data architecture on the Azure cloud using ADLS Gen2 Data Lake, Data Factory, Synapse, and Power BI; developed Databricks/Spark modules and Python pipelines to extract and integrate data from web APIs such as SAP and Salesforce; and rebuilt and migrated on-premises SQL Server data pipelines to deliver data products to the business areas.
Participated in the data governance initiative, working with business stakeholders to conceive the initial data governance products, such as the corporate data catalog and data quality indicators and monitoring, and to define data management processes, standards, and responsibilities.
Worked for the client UOL/PagSeguro, modeling customer, campaign, and relationship data from various channels to develop data models and pipelines on the SQL Server BI platform, delivering data products and analysis to the CRM team.