For many years now, we have been creating and deploying software for business-critical applications in banking. Our experience shows that the best (and often the only feasible) way to introduce new software is to integrate it with the existing IT infrastructure in as non-invasive a manner as possible. Such a task, difficult in practice, is workable only when the software is built with highly adaptable tools and concepts.

Selected aspects of organizing and using ESB integration tools are discussed below.


All messages exchanged internally and with ancillary systems should be documented in a way that allows a person to browse and search the documentation easily, and that also supports automatic validation of processed messages. For the latter, it is vital to use the validators consistently in both development and testing environments, which keeps the documentation in line with the actual state of the system. Implementing such solutions requires discipline and carries certain costs, but the benefits of this operating model are significant, and in large systems developed over many years, with thousands of message types, it is simply indispensable for keeping the system under control and manageable.

Text formats such as XML and JSON are convenient to use and therefore popular. It is vital to define and apply reusable structures describing particular data blocks consistently. The frequently encountered practice of documenting each message as a flat list of optional request/reply parameters is generally not sufficient to understand how a service works or to recognize the parts shared by different services. Depending on the data format used, XML Schema, JSON Schema, the OpenAPI specification, etc. are the adequate forms of documentation, as they enable automatic validation.
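The idea of schema-driven validation can be sketched as follows. This is a minimal illustration with a hand-rolled, simplified schema notation; a production system would use a full JSON Schema or XML Schema validator, and all names here (`CUSTOMER_SCHEMA`, `validate`) are illustrative.

```python
# Minimal sketch of validating a message against a reusable, documented
# structure. The schema notation is simplified for illustration; real
# systems would use a standard JSON Schema / XML Schema validator.
import json

# Hypothetical reusable structure describing one data block.
CUSTOMER_SCHEMA = {
    "required": {"customerId": str, "name": str},
    "optional": {"segment": str},
}

def validate(message: dict, schema: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for field, ftype in schema["required"].items():
        if field not in message:
            errors.append(f"missing required field: {field}")
        elif not isinstance(message[field], ftype):
            errors.append(f"wrong type for {field}: expected {ftype.__name__}")
    for field, ftype in schema.get("optional", {}).items():
        if field in message and not isinstance(message[field], ftype):
            errors.append(f"wrong type for {field}: expected {ftype.__name__}")
    return errors

msg = json.loads('{"customerId": "C-1", "name": "Acme"}')
assert validate(msg, CUSTOMER_SCHEMA) == []
```

Because the same `validate` function runs in development, testing, and production, any drift between documentation and implementation surfaces immediately.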


In medium-sized and large systems, a multi-tier architecture is the best solution.

The outer layer should consist of communication gateways, either general-purpose or dedicated (e.g. B2B). It is best to set up separate gateway instances for communication with separate systems, which enables effective monitoring of the interaction with each ancillary system and makes it easier to manage change deployments, maintenance windows, etc.

The system should contain a communication router that is convenient to configure, and a business rules engine that supports both simple orchestration and complex multi-step, workflow-style processes, including a Store-and-Forward (SAF) mechanism. When defining workflows together with the business side, it is handy to use high-level formalized notations such as BPMN, for which commercial graphical tools are available. It should be possible to implement the processing model in low-level languages such as C++ as well as in popular scripting languages such as Python and JavaScript.
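The Store-and-Forward mechanism mentioned above can be sketched roughly as below. This is an illustrative in-memory version; a real SAF implementation would persist messages durably (database, disk queue) before acknowledging them, and would schedule retries.

```python
# Illustrative sketch of a Store-and-Forward (SAF) step: store the message
# first, then try to forward it; if the target is down, keep it for retry.
# The in-memory deque stands in for a durable store.
from collections import deque

class StoreAndForward:
    def __init__(self, deliver):
        self.deliver = deliver          # callable that sends one message
        self.pending = deque()          # durable store stands in as a deque

    def accept(self, message):
        """Store first, so the message survives a delivery failure."""
        self.pending.append(message)

    def flush(self):
        """Try to forward stored messages; stop at the first failure."""
        delivered = 0
        while self.pending:
            try:
                self.deliver(self.pending[0])
            except ConnectionError:
                break                   # target down: keep message, retry later
            self.pending.popleft()
            delivered += 1
        return delivered

sent = []
saf = StoreAndForward(sent.append)
saf.accept({"id": 1})
saf.accept({"id": 2})
assert saf.flush() == 2 and sent == [{"id": 1}, {"id": 2}]
```

The key property is that `accept` never depends on the target being reachable, which is what lets the workflow engine decouple upstream and downstream availability.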

Depending on the particular application, the rules engine needs different kinds of auxiliary tools. These are often applications that use a classic or in-memory database, converters for the various data formats transferred in files, etc.

A multi-tier design is also worth using for building the components that implement particular business functions. It maximizes their flexibility in the face of a rapidly changing environment and ever-growing demands for fast, broad changes.


For data exchange between applications delivered by different providers and built in different technologies, the best solution is to use general-purpose protocols, because it is then almost certain that the other party can serve the chosen form of communication efficiently and effectively. Here it is best to use HTTP, carrying either Web Services (a well-defined API for signatures and point-to-point encryption) or REST (small documents, increasingly popular among programmers). Obviously, nothing prevents the use of binary protocols such as Tuxedo FML, Protocol Buffers, etc., although they are completely unreadable to users.


While using, for example, HTTP is desirable for communication between systems, inside a system it is much more favorable to use other forms of communication that are optimal for the particular application. In this context, it is worth being prepared to change the transport layer, or to use different transports in different deployments.
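Being prepared to swap the transport layer usually means hiding it behind an interface. A minimal sketch, with illustrative names and an in-process transport standing in for HTTP or a message queue:

```python
# Sketch of isolating components from the transport layer behind a minimal
# Transport interface; HTTP, message-queue, or in-process transports can
# then be swapped per deployment without touching business code.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, destination: str, payload: bytes) -> bytes: ...

class InProcessTransport(Transport):
    """Fast intra-system transport: dispatches directly to local handlers."""
    def __init__(self, handlers):
        self.handlers = handlers

    def send(self, destination, payload):
        return self.handlers[destination](payload)

def business_logic(transport: Transport) -> bytes:
    # Business code sees only the interface, never the concrete transport.
    return transport.send("pricing", b"quote-request")

t = InProcessTransport({"pricing": lambda p: b"quote:" + p})
assert business_logic(t) == b"quote:quote-request"
```

An HTTP-based implementation of the same interface could then be substituted in deployments where the components run on separate hosts.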


The systems must ensure scalability in the broad sense. Horizontal scalability is the most practical here, that is, the ability to start additional environments quickly. Within a single environment, it should also be possible to start additional operating-system processes, and within a process, additional threads. Starting and stopping processes and threads can and should happen automatically, based on rules defined in the system configuration. It is helpful to use pools of various kinds of resources, which allow resource usage to be increased as well as limited. The latter is vital for cooperation with ancillary systems, which are not always able to handle frequently sharp traffic spikes.
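Capping the load sent to an ancillary system can be sketched with a semaphore-guarded pool. The names and the limit here are illustrative; in practice the limit would come from the system configuration described above.

```python
# Sketch of capping concurrent calls to an ancillary system with a resource
# pool (a semaphore): worker threads scale the processing, while the gate
# shields a slow partner from sharp traffic spikes.
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 2                       # would come from system configuration
gate = threading.Semaphore(MAX_IN_FLIGHT)
observed_peak = 0
in_flight = 0
lock = threading.Lock()

def call_partner(request):
    global observed_peak, in_flight
    with gate:                          # blocks when the pool is exhausted
        with lock:
            in_flight += 1
            observed_peak = max(observed_peak, in_flight)
        try:
            return f"ok:{request}"      # stands in for the remote call
        finally:
            with lock:
                in_flight -= 1

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(call_partner, range(20)))

assert len(results) == 20
assert observed_peak <= MAX_IN_FLIGHT   # partner never sees more than the cap
```

Raising `MAX_IN_FLIGHT` scales the throughput up; lowering it protects the partner system, which is exactly the dual role of resource pools described above.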


For efficiency reasons it is often advisable to use a data-caching mechanism. Solutions such as in-memory databases, for example Redis, work best in this area. When using such solutions, it is good to provide a transparent way of selecting the right cache instance.
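One simple way to make instance selection transparent is to hash the key, so every component independently maps the same key to the same instance. A sketch with illustrative instance names, assuming a fixed instance list:

```python
# Sketch of transparently routing a cache key to the right instance by
# hashing the key: the choice is deterministic, so every component picks
# the same instance without extra coordination.
import hashlib

CACHE_INSTANCES = ["cache-a:6379", "cache-b:6379", "cache-c:6379"]  # illustrative

def instance_for(key: str) -> str:
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(CACHE_INSTANCES)
    return CACHE_INSTANCES[index]

# The same key always maps to the same instance.
assert instance_for("customer:42") == instance_for("customer:42")
assert instance_for("customer:42") in CACHE_INSTANCES
```

Note that plain modulo hashing remaps most keys when the instance list changes; if instances are added or removed at runtime, a consistent-hashing scheme is the usual refinement.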


Ensuring data confidentiality and security is necessary at the network level (mechanisms such as firewalls), in applications (through access control lists, ACLs), and at a low level, through point-to-point data encryption. In the latter area, it is advisable to use symmetric or asymmetric cryptography with message signing, for example HMAC.
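HMAC-based message signing can be sketched as below, assuming the two parties share a secret key exchanged out of band; the shared key shown is of course illustrative.

```python
# Sketch of signing a message with HMAC (a symmetric scheme): the sender
# attaches a tag, the receiver recomputes it and compares in constant time
# to detect tampering.
import hashlib
import hmac

SHARED_KEY = b"demo-secret"             # illustrative; distribute securely

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

tag = sign(b'{"amount": 100}')
assert verify(b'{"amount": 100}', tag)
assert not verify(b'{"amount": 999}', tag)   # tampered payload is rejected
```

HMAC gives integrity and authenticity but not confidentiality; for the latter, the signed payload is additionally encrypted on the point-to-point link.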


It should be possible to monitor the work of all components of the environment with tools based on the SNMP protocol, which are popular among administrators.

Information about processing should be saved in local files and, ideally, additionally stored on a day-to-day basis in tools suited to aggregating this type of data and providing a graphical interface, for example the ELK Stack (Elasticsearch, Logstash, Kibana).
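Writing the local logs as one-line JSON records makes them straightforward for shippers such as Logstash or Filebeat to parse and forward. A minimal sketch, with a `StringIO` stream standing in for the local log file:

```python
# Sketch of structured (JSON) logging: each record is one JSON line, which
# log shippers can parse and forward to an aggregation tool such as
# Elasticsearch without fragile regex parsing.
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
        })

stream = io.StringIO()                  # stands in for a local log file
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("gateway")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("message routed")
entry = json.loads(stream.getvalue())
assert entry == {"level": "INFO", "component": "gateway", "message": "message routed"}
```

In a real deployment the handler would be a rotating file handler, and the record would also carry a timestamp and a correlation identifier.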


Configuration stored in files containing XML or YAML documents is very convenient to use. While each component should load its configuration from a file at start-up, configuration management itself should be handled by a tool such as a configuration server, which provides a graphical user interface for browsing, modifying, and publishing changes. The configuration server should ensure automatic deployment of the generated configuration files to the directories served by the components, and trigger reloading of the modified configuration.
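On the component side, loading such a configuration is a small, self-contained step; reloading is simply re-parsing on a change signal. A sketch using an XML document, with illustrative element names:

```python
# Sketch of loading component configuration from an XML document at
# start-up, assuming the configuration server has already deployed the
# file to the component's directory. Element names are illustrative.
import xml.etree.ElementTree as ET

CONFIG_XML = """
<component name="router">
  <threads>4</threads>
  <queue>orders.in</queue>
</component>
"""

def load_config(text: str) -> dict:
    root = ET.fromstring(text)
    return {
        "name": root.get("name"),
        "threads": int(root.findtext("threads")),
        "queue": root.findtext("queue"),
    }

config = load_config(CONFIG_XML)
assert config == {"name": "router", "threads": 4, "queue": "orders.in"}
```

Keeping the parsed configuration in one plain dictionary makes a reload an atomic swap of that dictionary, so in-flight processing never sees a half-applied change.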


The multi-tier component architecture recommended earlier can and should be supported by a suitably designed and built framework. In fact, the plural should be used here, because the requirements for a framework for tools communicating over WLAN differ from those for internal communication components. The key point is for the framework to free programmers as much as possible from concerns such as configuration loading and producing technical, business, and tracking logs.
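One common way a framework takes logging boilerplate off the programmer is a decorator that traces entry, exit, and failure around every handler. A sketch, with illustrative names and a decorator-based design assumed:

```python
# Sketch of a framework relieving handler code of tracing boilerplate:
# the decorator logs entry, exit, and failures, so the business function
# contains only business logic.
import functools
import logging

log = logging.getLogger("framework")

def traced(handler):
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        log.info("enter %s", handler.__name__)
        try:
            result = handler(*args, **kwargs)
            log.info("exit %s", handler.__name__)
            return result
        except Exception:
            log.exception("failed %s", handler.__name__)
            raise
    return wrapper

@traced
def book_transfer(amount: int) -> str:
    # Pure business logic; the technical log is produced by the framework.
    return f"booked {amount}"

assert book_transfer(100) == "booked 100"
```

The same pattern extends to injecting loaded configuration or correlation identifiers into handlers, which is precisely the boilerplate the framework should absorb.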


The tooling used by developers usually consists of several independently created elements, both Open Source and commercial. This generally means that the APIs of the particular elements differ from one another, which is why it is good practice to produce a uniform SDK covering the tool software. The SDK should be available at least on Linux and Windows for the core systems, and on mobile platforms if those are used in the particular case.
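The uniform-SDK idea is essentially a facade over heterogeneous APIs. A sketch with two simulated backend libraries whose call conventions differ; all class and method names here are illustrative:

```python
# Sketch of a uniform SDK facade over heterogeneous tool APIs: application
# code depends only on the SDK's single interface, never on the differing
# conventions of the underlying libraries (both simulated here).
class LegacyQueueLib:                   # illustrative commercial-style API
    def put_msg(self, q, data):
        return ("legacy", q, data)

class ModernQueueLib:                   # illustrative Open Source-style API
    def publish(self, topic, body):
        return ("modern", topic, body)

class QueueSDK:
    """One interface regardless of which backend the deployment uses."""
    def __init__(self, backend):
        self.backend = backend

    def send(self, destination: str, payload: str):
        if isinstance(self.backend, LegacyQueueLib):
            return self.backend.put_msg(destination, payload)
        return self.backend.publish(destination, payload)

assert QueueSDK(LegacyQueueLib()).send("q1", "hi") == ("legacy", "q1", "hi")
assert QueueSDK(ModernQueueLib()).send("q1", "hi") == ("modern", "q1", "hi")
```

Swapping a backend library then touches only the SDK, not the many applications built on top of it, which is the main payoff of the uniform interface.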