Data Management

Creating a reliable and consistent data basis

Frank Kathage

Your Contact Person

Frank Kathage | Senior Manager
Hammer Straße 165
48153 Münster



Creating a standard database - Data Warehouse software module

A sound, integrated database is at the heart of bank-wide management. In practice, establishing a central data pool is a complex implementation task that places high demands on functional areas and organization/IT units. The Data Warehouse module allows you to tackle this task by relying on time-tested concepts for data consolidation. With the data warehouse, a comprehensive database with far-reaching functionalities is available to you, covering all the processing steps required for data consolidation. Specifically, the module supports you with the following functionalities:


Individually configurable reference model

The individually configurable reference model for data and configuration covers all the elements of bank-wide management. The reference model can serve as a starting point for fast implementation and is adapted and/or expanded during customization – in line with the bank's business model.

Data and data quality requirements

Data and data quality requirements are entered according to specific needs and automatically applied when the ETL process is built. The structured storage and processing of data allows you to analyze the earnings structures of your bank in great detail.

Comprehensive internal database

In addition, you can integrate data from diverse bank-specific feeder systems (cost accounting, personnel management, etc.) so that you get a comprehensive internal controlling database.

Automated and customized interfaces

Automated interfaces allow you to transfer all the required data, import them into the system and process them with the relevant methods. The broad range of automated interfaces ensures that the ETL processes needed for data supply and delivery are established rapidly. Tool-based development of ETL jobs makes it efficient to create additional customized interfaces.
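The extract-transform-load pattern behind such interfaces can be sketched as follows. This is a minimal, hypothetical example (the field names KONTO/SALDO and the in-memory target stand in for a real feeder file and warehouse table):

```python
import csv
import io

# Hypothetical ETL job sketch: extract records from a semicolon-separated
# feed, transform field names and types, load them into a target table.
def extract(feed: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(feed), delimiter=";"))

def transform(rows: list[dict]) -> list[dict]:
    return [{"account": r["KONTO"], "balance": float(r["SALDO"])} for r in rows]

def load(rows: list[dict], target: list[dict]) -> None:
    target.extend(rows)

feed = "KONTO;SALDO\n4711;100.50\n4712;-20.00\n"
warehouse: list[dict] = []
load(transform(extract(feed)), warehouse)
```

Keeping the three steps as separate functions is what makes tool-based generation of additional interfaces tractable: only the extract and transform steps change per feeder system.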

Data cleansing - Data Cleansing software module

zeb's experience shows that manual data correction is indispensable in practice. Whenever possible, corrections should be made directly in the feeder system. In many cases, however, this cannot be done in a timely fashion, so corrections need to be made in centralised databases (e.g. the data warehouse) in a subsequent step. Specifically, the Data Cleansing module supports you with the following functionalities:

Easy management via web application

The web application can be deployed in IT centres and allows numerous users, who need not have detailed technical knowledge, to correct, allocate and logically delete data within their areas of responsibility and in line with their user permissions. This permits the transparent and logged deletion, modification or re-supply of data in diverse functional contexts.

Data correction

Data are corrected both before and after loading from the feeder systems, and in parallel at the level of the results database. Ad-hoc checks prevent mistakes already during data entry and directly highlight rule violations to the user. zeb's approach holds a pioneering position in the DQM market, as it ensures integrated, active assurance of data quality instead of passively reacting to identified errors. This minimises the likelihood of technical and functional input mistakes.

Input via a web interface and Excel

The changes made are entered and logged via a web interface or Excel, making the entire correction process traceable and transparent.
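The logging that makes corrections traceable can be illustrated with a small sketch. This is a hypothetical example, not the module's actual data model; the field and user names are invented:

```python
from datetime import datetime, timezone

# Hypothetical sketch: every manual correction is recorded with user,
# timestamp, old value and new value, so the process stays auditable.
audit_log: list[dict] = []

def correct(record: dict, field: str, new_value, user: str) -> None:
    audit_log.append({
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "field": field,
        "old": record.get(field),
        "new": new_value,
    })
    record[field] = new_value

row = {"cost_centre": "C-100", "amount": 0}
correct(row, "amount", 250, user="analyst01")
```

Because the old value is captured before the record is changed, every correction can later be reviewed or reversed from the log alone.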

Meeting the requirements of an integrated data platform - Data Quality Management software module

The Data Quality Management module helps you to define and ensure compliance with DQ rules and allows you to sustainably raise data quality at any point in your databases. Specifically, the module supports you with the following functionalities:

Defining a binding DQ governance

In the first step, an enterprise-wide, binding DQ governance is gradually developed on the basis of the business objectives, taking account of the process and structural organization as well as roles and responsibilities. This clear assignment of roles and responsibilities ensures that the processes function smoothly in line with the defined methods and that DQ managers can resolve the data problems revealed by the metrics.

Rule-based data quality checks

Experience shows that data problems have highly diverse causes. Recurring problems are detected with the help of metrics, triggering an analysis and correction process. The plausibility rules used serve to identify both technical and functional data errors.
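A metric-driven trigger of this kind can be sketched in a few lines. The threshold and the per-load figures below are illustrative assumptions, not values from the module:

```python
# Hypothetical sketch: a simple DQ metric (share of failed checks per
# data load) flags loads whose error rate exceeds a threshold, which
# would then trigger the analysis and correction process.
def error_rate(total: int, failed: int) -> float:
    return failed / total if total else 0.0

THRESHOLD = 0.05  # illustrative assumption: 5% failed checks per load

# (records loaded, failed checks) per load, invented numbers
loads = [(10_000, 120), (10_000, 700)]

flagged = [i for i, (total, failed) in enumerate(loads)
           if error_rate(total, failed) > THRESHOLD]
```

Tracking the same metric across successive loads is what makes recurring problems visible, as opposed to one-off input mistakes.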

Intuitive cause and impact analyses

The plausibility rules are applied and logged when data is loaded into the database. Condensing the causes of DQ problems facilitates fast, structured analyses, so the analysis of causes quickly identifies fields of action for resolving problems. To allow for the preparation of concrete action plans, problems are weighted by means of meaningful indicators. The results of this impact analysis support the targeted prioritization of actions, also for non-technical users.
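Weighting problems by indicators and ranking them for an action plan can be illustrated as follows. The indicators, weights and problem causes here are invented for the example and are not a zeb scoring formula:

```python
# Hypothetical sketch: weight DQ problems by simple indicators
# (records affected, downstream reports impacted) and rank them
# so that corrective actions can be prioritized.
problems = [
    {"cause": "missing currency", "records": 1200, "reports": 2},
    {"cause": "negative balance", "records": 30, "reports": 5},
    {"cause": "stale cost data", "records": 400, "reports": 1},
]

def impact_score(p: dict) -> float:
    # Weights are illustrative assumptions: each affected report is
    # treated as 1000x as severe as a single affected record.
    return p["records"] * 0.01 + p["reports"] * 10

ranked = sorted(problems, key=impact_score, reverse=True)
```

The ranked list is what a prioritized action plan can be built from: the highest-impact cause is addressed first, regardless of how many raw records it touches.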


The reporting functionality of the data quality module provides web-based DQ analyses that meet all the requirements described above. In addition to the analysis of causes, it offers innovative impact analyses. Intuitive graphic elements (e.g. traffic lights and trend arrows) visualize the status of data quality and thereby enable you to manage data quality in a targeted way.