The Long Way to the Map – Geodata and Services Provision Via an Automated Pipeline

Date
Wednesday, September 14, 2022
Presenters
Christoph Dittmann
Co-Presenters
Jonas Weimann
Presenter Company
Deutsche Telekom IT GmbH
Event
FME User Conference 2022
Industry
Telecommunications

Presentation Details

At Deutsche Telekom (DT), we operate a spatial data infrastructure (SDI) with the objective of being a one-stop shop for spatial data and associated services across multiple applications and use cases at DT. Our users come from the mobile and fixed-line planning departments, but also include end customers such as sales portals and joint-venture partners.

So what is the journey from raw data to consistent delivery in a modern, cloudified architecture?

Data integration at the enterprise level is an interplay of multiple applications: database systems, cloud storage services, processing workflows, and Web GIS solutions for data discovery and visualization. In our case, FME plays a central role. FME Server is used for data intake and manipulation, as well as for managing several workflows that satisfy our end users' needs. Workflows triggered via FME Server process data for their respective use cases, for example anonymizing photographs, which allows consultations with the public while respecting data privacy requirements. Deriving surface types from lidar and image data is another example, one that enables data-driven decisions.
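To illustrate the intake side, the following minimal sketch shows how a workflow such as the photograph anonymization could be triggered programmatically. The host, token, repository, and workspace names are placeholders, not our actual configuration; the submit route follows the pattern of the FME Server REST API v3 for asynchronous job submission.

import requests

FME_SERVER = "https://fme.example.com"  # placeholder host
TOKEN = "my-fme-token"                  # placeholder FME Server API token

def submit_job(repository: str, workspace: str, params: dict) -> int:
    """Submit a workspace asynchronously and return the job ID."""
    url = f"{FME_SERVER}/fmerest/v3/transformations/submit/{repository}/{workspace}"
    response = requests.post(
        url,
        json={"publishedParameters": [
            {"name": k, "value": v} for k, v in params.items()
        ]},
        headers={
            "Authorization": f"fmetoken token={TOKEN}",
            "Accept": "application/json",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]

# Example: queue a hypothetical photograph-anonymization workspace.
job_id = submit_job("Privacy", "anonymize_photos.fmw",
                    {"SOURCE_DATASET": "s3://bucket/raw/"})
print(f"Submitted job {job_id}")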

Satisfying multiple applications while maintaining integrity across all four environments (DEV, INT, STAG, and PROD) is not a trivial task. To address it, we developed a solution focused on simplification and on a transparent mapping of the entire process.

Our data integration pipeline is embedded in OpenShift, where communication and orchestration between the applications and their respective tasks happen via various APIs. The advantage of this solution is that the entire process is set up centrally, is reproducible, and can easily be transferred to other environments. Logging, orchestration, and authentication are managed through a GitLab code repository. Deploying this pipeline guarantees exactly the same state of data integration in all four environments, which simplifies bug fixing, minimizes manual interaction, and thus raises the level of automation.
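To make the promotion idea concrete, here is a deliberately simplified sketch of pushing one and the same pipeline definition to all four environments from a CI job. The deployment endpoint, payload structure, and environment URLs are illustrative assumptions, not our production code; in practice such a step would run inside GitLab CI, with tokens injected from protected repository variables.

import os
import requests

ENVIRONMENTS = {
    "DEV":  "https://sdi-dev.example.com",
    "INT":  "https://sdi-int.example.com",
    "STAG": "https://sdi-stag.example.com",
    "PROD": "https://sdi-prod.example.com",
}

def deploy_pipeline(base_url: str, token: str, definition: dict) -> None:
    """Deploy a single pipeline definition to one target environment."""
    response = requests.put(
        f"{base_url}/api/pipeline",  # hypothetical deployment endpoint
        json=definition,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()

# One definition, deployed identically everywhere: this is what keeps
# the state of data integration in sync across the four environments.
definition = {"version": "1.4.2",
              "workflows": ["intake", "anonymize", "surface_types"]}
for env, url in ENVIRONMENTS.items():
    token = os.environ[f"{env}_TOKEN"]  # injected by CI, e.g. DEV_TOKEN
    deploy_pipeline(url, token, definition)
    print(f"{env}: deployed pipeline version {definition['version']}")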