2026/01 - Present
Data Engineer & Architect
Capgemini
Málaga, Spain · Remote
Data platform engineering with PySpark, Synapse and Microsoft Fabric, including migration work, metadata-driven system evolution, Databricks platform solutions and MongoDB.
About
I am a data engineer and data architect based in Málaga, working remotely on cloud data platforms and analytics systems. My recent roles have focused on Azure, Databricks, Microsoft Fabric, PySpark, Synapse, Data Factory, Delta Lake, Unity Catalog, MongoDB and Power BI semantic models.
I have worked across platform engineering, ETL and ELT delivery, metadata-driven architecture, data virtualization, secure connectivity patterns, streaming and analytics enablement. My background also includes software engineering, API development, identity and access management, process analytics and technical delivery in international environments.
I prefer architecture that can be explained, operated and evolved. That means clear data layers, explicit ownership, automated delivery where it reduces risk, and technical decisions that remain understandable after the first release.
Experience
2026/01 - Present
Capgemini
Málaga, Spain · Remote
Data platform engineering with PySpark, Synapse and Microsoft Fabric, including migration work, metadata-driven system evolution, Databricks platform solutions and MongoDB.
2025/09 - 2026/01
Aszendit Tech / SARIA Group
Germany · Remote
Databricks ETL on Delta Lake, Unity Catalog and ADLS Gen2, using Synapse Link, Auto Loader, Azure DevOps CI/CD, DataM8/Jinja, Power BI semantic models and ADF orchestration.
2024/08 - 2025/09
Minsait / Indra
Málaga, Spain · Remote
Concurrent PySpark and Scala projects covering Synapse migration, audit data, environmental data integration and enterprise data virtualization.
2024/05 - 2024/08
Hiberus
Málaga, Spain · Remote
ETL pipeline development with PySpark and cloud data services across Azure and AWS.
2023/09 - 2024/03
Santander Digital Services
Málaga, Spain
Software and security engineering across APIs, OAuth 2.0 and identity and access management, with supporting data engineering work on Kafka, Spark and Airflow pipelines.