Job Req Number:  71870

Data Engineer – Data Foundation Team (Hybrid)


At DSV, we are renewing the Global Data and Analytics Platform used by colleagues, customers and business partners across all business domains to get the most out of the available data.


Our department is spread across locations in Europe, Asia and South Africa.

A significant part of our department is located in Lisbon, grouped into development and operations teams.

For the Data Foundation Team, we are looking for a Data Engineer to join a team that currently consists of three members plus the manager.

The position is part of the global organisation, and you will work in close collaboration with colleagues across the department's locations.


To achieve our goals, we are working to an ambitious plan and have chosen a vendor that offers us a full stack of services. All of our work is done in the cloud, utilising the full stack of Microsoft's offering: Azure Data Lake, Azure SQL Database/Azure Synapse, Azure Analysis Services, Databricks, Power BI, Azure Data Factory and Azure DevOps.

DSV is also embarking on a multi-cloud strategy and will introduce technologies that support cloud-provider agnosticism.


The Data Foundation team acts as a technology spearhead, supporting the delivery and operations teams with tools, frameworks, processes, and coaching. We deliver in an agile way, using Jira as our management tool.



As a Data Engineer in the Data Foundation Team, your main responsibilities will be:



  • Manage and optimize cross-department tools.
  • Be a key contributor in data foundation implementation projects that span teams.
  • Analyse existing applications and optimize their performance.
  • Design and implement solutions that support the development of our Operational Data Warehouse (ODW) in alignment with IT architectural principles, guidelines, and standards.
  • Rapidly build an understanding of the existing architecture and its underlying business processes.
  • Develop and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources.
  • Analyse requirements together with our different department teams.
  • Document the implementations and provide technical specifications for them.
  • Refine and promote our agile delivery framework to help ensure high-quality, flexible deliverables on the data & BI platform.
  • Actively transfer knowledge to other team members as well as to the Operations & Support team.



Skills & Competencies

To perform the job successfully, you should demonstrate the following competencies:


  • Analytical Skills: Ability to analyse problems systematically and rationally in order to identify their root cause.
  • Delivery & Project Management: Experience with Agile methodology and the ability to promote and facilitate deliveries accordingly.
  • Communication Skills: Comfortable with international business communication; able to communicate openly and confidently, listen effectively, and influence and convince others in a way that results in acceptance and agreement.
  • Personal Skills: Able to work with different cultures and to manage cross-functional interfaces; a strong team player who builds lasting relationships within and across functions and geographical boundaries, based on ethics and trust; ready to take personal accountability for achieving individual and shared goals.


Required Experience:

  • Bachelor’s or master’s degree in Computer Science, Data Science, Information Technology or a related field.
  • 3–5 years of experience in BI/data engineering projects.


Technical requirements:


  • Expert knowledge of Microsoft’s data stack: Azure Data Lake, Azure SQL Database/Azure Synapse, Azure Data Factory, Synapse integration, Azure DevOps, etc.
  • Ability to monitor, troubleshoot, and optimize data pipelines for performance and reliability.
  • Experience with profiling and tuning queries for efficiency.
  • Experience working with Git repositories.
  • Experience with Agile methodology and CI/CD practices.
  • Experience with task management tools (Jira, Confluence).
  • Experience with Databricks and PySpark/Scala/Spark SQL.
  • Expert knowledge of data governance principles to ensure data quality, lineage, and compliance.
  • Proficient level of English (spoken and written).


Our offer:

Your job location will be Poland (Warsaw, Taśmowa), and you will be part of DSV Group Finance, with peers working in teams across the globe.


  • Employment contract
  • English classes
  • Office gym
  • Multisport
  • Canteen
  • Health Insurance
  • Hybrid working model
  • Comprehensive onboarding program
  • Work-life balance
  • Comfortable office
  • Internal training catalogue
  • Culture of feedback

DSV – Global transport and logistics

DSV is a dynamic workplace that fosters inclusivity and diversity. We conduct our business with integrity, respecting different cultures and the dignity and rights of individuals. When you join DSV, you are working for one of the best-performing companies in the transport and logistics industry. You’ll join a talented team of more than 75,000 employees in over 80 countries, working passionately to deliver great customer experiences and high-quality services. DSV aspires to lead the way towards a more sustainable future for our industry and is committed to trading on nature’s terms.

We promote collaboration and transparency, and strive to attract, motivate and retain talented people in a culture of respect. If you are driven and talented and wish to be part of a progressive and versatile organisation, we’ll support you in achieving your potential and advancing your career.

Visit and follow us on LinkedIn, Facebook and Twitter.



Apply now