Bruno is a Senior Data Engineer/Architect with over 6 years of professional experience in data engineering and analysis. He is an advanced Spark and Scala developer and has also worked extensively with Python. He has built and supported cloud data solutions on both AWS and GCP, and is strongest with the AWS ecosystem, particularly Kubernetes and related technologies. Bruno is a passionate technologist who is always learning new technologies and mastering best practices. Recently he has been studying data algorithms, the trade-offs of different types of databases, and how to move from a batch-centric approach to an event-driven one, especially by applying the concepts of Data Products and Data Mesh.
His main role is to negotiate and define data products, features, and topology alongside business domains; specify how to ingest, process, and aggregate data; and define how each data product will be delivered to domain users, e.g., as dashboards, tables, or REST APIs. He codes frameworks, pipelines, and databases while helping define standards for the data platform, and supports and mentors junior engineers.
Data architecture design, data product deployment, translating business requirements into technical specifications.
Stack: AWS, Kubernetes, Spark, Scala, Python, ksqlDB, MongoDB
Fraud data engineer responsible for defining the fraud data flow (ingestion, processing, storage) based on business requirements, moving machine learning models into production (creating pipelines or REST APIs to deploy the models), and mentoring junior engineers.
Data architecture design, ML model deployment, automation, and creation of ML pipelines
Stack: AWS, Kubernetes, Kubeflow, Spark, Scala, Python, Clojure
Responsible for defining the architecture and implementation of data products, e.g., an end-to-end recommendation platform. Developed data pipelines, deployed ML models into production, and mentored junior engineers.
Data architecture design, ML pipeline development and deployment, data product design and deployment.
Stack: AWS, Spark, Python, Terraform, Git, Airflow, Docker, Kubernetes, MLflow
Creation of data pipelines, deployment of ML models into production, and collection of business requirements from domain experts.
Stack: Azure, Spark, Python, Airflow
ML pipeline development and deployment, stack and data architecture definition.
Business Intelligence Analyst responsible for collecting business requirements and converting them into managerial dashboards (Power BI).
Customer-facing role involving the development and implementation of business analytics and policies.
Over 8 years of experience working as an Industrial Engineer before a career change to the tech industry in 2018.