
Featured listing

Data Engineer

ID: 3717153
Company: AudienceView
Posted: 2025-10-23 12:11:07
Expires: 2026-01-06 (in 75 days)
Location: Santiago, CL
Category: IT / Telecommunications
Duration: Permanent
Type: Full-time
Inclusion-law listing
Views: 44 · Interested: 1

Data Engineer

AudienceView, Santiago

23 October 2025

Job description



The Company:
AudienceView is an organization of people who are passionate about the business of Live Events. We create industry-leading software solutions that fuel attendee engagement, ticket sales and advertising solutions for thousands of sports, music and theatre venues in 16 countries around the world. AudienceView employees share a vision to help entertainment organizations deliver exceptional experiences for people who love live events. We achieve this through innovative technology, popular media brands, effective distribution strategies and a dedicated team of experts that help create customer success every single day.


Why We’d Want to Work with You:
As a Data Engineer II, you will be a key technical contributor responsible for building, optimizing, and maintaining the data infrastructure that powers our analytics and business intelligence capabilities. You will bring strong expertise in cloud-based data engineering, with deep hands-on experience in the Azure ecosystem and modern data platforms.

Your problem-solving mindset and attention to detail drive the success of every pipeline you build and every optimization you implement. You bring a blend of technical depth and practical experience, understanding how data flows from raw ingestion through transformation pipelines to consumption in reports and dashboards. You thrive in collaborative environments where clear communication, thorough documentation, and adherence to version control processes are essential.

You're someone who "gets it": a hands-on engineer who understands the full data lifecycle and can trace data lineage from Power BI reports back through semantic models, Synapse, and Python codebases, all the way to raw ingested data. You're comfortable working within established processes while also identifying opportunities for improvement. You know when to dig deep to solve complex problems independently, and you know when to ask for help; both are equally important.



What You’ll Do

Design, build, and maintain scalable data pipelines using Azure Data Factory, Databricks, and Synapse to support analytics and reporting needs.
Develop and optimize data transformation logic using Python, PySpark, and SQL, ensuring performance, reliability, and data quality.
Optimize Spark jobs and Databricks workflows for performance and cost-efficiency, applying best practices for distributed data processing.
Work with Azure services including ADLS Gen2, Key Vault, Event Hubs, and other data-focused Azure services to build robust data infrastructure.
Manage and maintain Databricks Hive metastores, with opportunities to contribute to Unity Catalog implementation and leverage modern Databricks features such as Metric Views and structured streaming.
Process and transform JSON messages from Kafka and other streaming sources, ensuring reliable data ingestion.
Collaborate with the analytics team to understand data requirements and trace data lineage from Power BI reports through semantic models, Synapse, and transformation code back to raw data sources.
Maintain code and projects in git repositories using VS Code, adhering to version control best practices including branch management and working with Power BI projects in PBIP format.
Document work in progress and completed tasks using Azure DevOps (Kanban boards, wiki), ensuring clear communication and knowledge sharing across the team.
Evaluate and contribute to the adoption of new Azure services and platforms such as Fabric/OneLake as the team explores enhancements to the data architecture.
Collaborate with the Senior Data Engineer and broader team to identify and implement improvements to data pipelines, ingestion architecture, and overall data platform capabilities.
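To give a flavour of the transformation work described above, here is a minimal sketch of processing a JSON message of the kind consumed from Kafka, written in plain Python for illustration (in practice this would typically run as a PySpark job in Databricks). The event shape and field names (`event_id`, `venue`, `ticket`) are hypothetical, not taken from AudienceView's actual schemas:

```python
import json
from typing import Optional

def flatten_event(raw_message: str) -> Optional[dict]:
    """Parse one JSON message and flatten it into a tabular record.

    Returns None for malformed input, which a real pipeline might
    route to a dead-letter sink instead of silently dropping.
    """
    try:
        event = json.loads(raw_message)
    except json.JSONDecodeError:
        return None
    venue = event.get("venue", {})
    ticket = event.get("ticket", {})
    # Flatten the nested structure into columns suitable for a warehouse table
    return {
        "event_id": event.get("event_id"),
        "venue_name": venue.get("name"),
        "venue_city": venue.get("city"),
        "ticket_price": ticket.get("price"),
    }

msg = '{"event_id": 42, "venue": {"name": "Teatro X", "city": "Santiago"}, "ticket": {"price": 15000}}'
print(flatten_event(msg))
print(flatten_event("not json"))
```

The same flatten-and-validate pattern scales to a distributed setting by applying it per record inside a Spark structured streaming job.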

What You’ll Need

5+ years of experience as a Data Engineer, with strong hands-on expertise in Azure cloud services and modern data platforms.
Deep experience with Azure Databricks, including Spark optimization and working with Hive metastores.
Proficiency in Spark optimization techniques for performance tuning and cost management in distributed data processing environments.
Hands-on experience with Azure Data Factory for building and orchestrating data pipelines.
Experience with Azure Synapse for data warehousing and analytics workloads.
Strong SQL skills for data manipulation, transformation, and analysis.
Proficiency in Python and PySpark for data engineering tasks, transformations, and pipeline development.
Experience with Azure ADLS Gen2, Key Vault, and Azure DevOps (Kanban boards, wiki, branch management).
Experience working with JSON messages produced by Kafka or similar streaming platforms.
Some knowledge of Power BI, including the ability to trace data lineage from reports through semantic models back to data sources, understanding how analytics consume the data you engineer.
Comfort with VS Code and git for version control, including experience managing Power BI projects in PBIP format and adhering to collaborative development processes.
Problem-solving mindset with the ability to diagnose complex data issues, QA, troubleshoot pipeline failures, and optimize performance.
High attention to detail, ensuring data quality, accuracy, and reliability across all pipelines and transformations.
Strong collaboration and communication skills, including comfort with documentation, clear status updates, and working within processes that support team collaboration.
Self-awareness and judgment about when to work independently and when to seek help, recognizing that both are critical to success.
Ability to work independently while also thriving in a collaborative team environment.
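As a small illustration of the SQL skills the list above calls for, the sketch below runs a typical aggregation-style transformation against an in-memory SQLite database. Table and column names (`ticket_sales`, `venue`, `amount`) are illustrative only, not from any real AudienceView schema:

```python
import sqlite3

# In-memory database standing in for a warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket_sales (order_id INTEGER, venue TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO ticket_sales VALUES (?, ?, ?)",
    [(1, "Teatro X", 15000.0), (2, "Teatro X", 22000.0), (3, "Arena Y", 30000.0)],
)

# Aggregate revenue per venue: a typical transformation/analysis step
rows = conn.execute(
    "SELECT venue, COUNT(*) AS orders, SUM(amount) AS revenue "
    "FROM ticket_sales GROUP BY venue ORDER BY revenue DESC"
).fetchall()
for venue, orders, revenue in rows:
    print(venue, orders, revenue)
```

The same `GROUP BY` logic carries over directly to Synapse or Spark SQL, where the engine rather than the query changes.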

Nice to Have

Familiarity with Unity Catalog and recent Databricks features (Metric Views, structured streaming).
Familiarity with Event Hubs or similar streaming/ingestion services, as the team evaluates alternatives to the current ingestion architecture.
Awareness of Microsoft Fabric/OneLake, as these platforms are under consideration for future adoption.
Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related technical field.
Experience in the ticketing or live events industry.

Benefits

100% remote work
No need to commute to an office; you can work entirely from home.

