<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Softax Blog]]></title><description><![CDATA[Advanced systems tailored to your needs]]></description><link>https://www.softax.pl/blog/</link><image><url>https://www.softax.pl/blog/favicon.png</url><title>Softax Blog</title><link>https://www.softax.pl/blog/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Thu, 22 Jan 2026 01:48:47 GMT</lastBuildDate><atom:link href="https://www.softax.pl/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[BLIK has launched deferred payments on the Advantica Softax platform]]></title><description><![CDATA[Today, BLIK and its partner Millennium made the first Buy Now Pay Later deferred transaction in the production environment. Our Advantica platform is responsible for the technical part of the process and functionalities in the form of account maintenance, customer creditworthiness and limit setting.]]></description><link>https://www.softax.pl/blog/blik-has-launched-deferred-payments-on-the-advantica-softax-platform/</link><guid isPermaLink="false">632c39c79ce7bf02e3259003</guid><category><![CDATA[advantica]]></category><category><![CDATA[banking]]></category><category><![CDATA[blik]]></category><category><![CDATA[bnpl]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Thu, 22 Sep 2022 10:41:41 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2022/09/Post-cover.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2022/09/Post-cover.png" alt="BLIK has launched deferred payments on the Advantica Softax platform"><p>Today, BLIK and its partner Millennium made the first Buy Now Pay Later deferred transaction in the production environment. Our Advantica platform is responsible for the technical part of the process and functionalities in the form of account maintenance, customer creditworthiness and limit setting. </p><p>We are happy for the direction of development and the introduction of new services at BLIK, because we have been their technological partner from the moment of its establishment. Successes achieved by our clients are also our successes, and each production deployment of our solutions makes us proud. </p><p>We wish BLIK and ourselves more successful projects.</p>]]></content:encoded></item><item><title><![CDATA[Mobile Banking Application for Children: Advantica Kids]]></title><description><![CDATA[The whole idea of banking applications for children up to 13 years of age assumes discreet supervision of the child's account by the legal guardian.
The kid’s account is closely linked to the parent's account, and it is the parent who decides on many of the financial operations carried out by the child.]]></description><link>https://www.softax.pl/blog/mobile-banking-for-children-advantica-kids/</link><guid isPermaLink="false">613752030e7a851beaa511d9</guid><category><![CDATA[mobile]]></category><category><![CDATA[banking]]></category><category><![CDATA[Kids]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Tue, 07 Sep 2021 13:30:00 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/09/Avatar-adv-kids-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://www.softax.pl/blog/content/images/2021/09/Avatar-adv-kids-1.png" alt="Mobile Banking Application for Children: Advantica Kids"><p>This article presents the concept of a banking application for kids (up to 13 years old), prepared on the basis of our analysis of the subject.</p><p>The whole idea of banking applications for children up to 13 years of age assumes discreet supervision of the child's account by their legal guardian.</p><p>The kid’s account is directly linked to the parent's account, and it is the parent who decides on many of the financial operations carried out by the child.</p><p>This article focuses on the key elements of a kid's mobile banking app which, in my opinion, should appear in any app of this kind. It will not cover the details of the business process behind it.</p><p>Moving on to specifics.</p><p>Firstly, the application must <strong>support</strong> a young person in the world of finance, specifically in financial management.</p><p>Secondly, it should contain an <strong>educational aspect</strong>. Financial education is the obvious choice, but this should also cover the threats related to the use of electronic financial tools (cards, ATMs, transfers, online and mobile banking, cyberattacks) and, rather unusually, the history of finance.</p><p>This last point is something of a novelty. Considering the young generation's awareness of finance, we assume that historical elements such as "when was the first ATM created" or "how did payment cards develop" are an attractive addition, a jump into the past that broadens the picture of the financial world.</p><p>For older kids (from the age of 13) this historical theme can be used to present medieval lending and settlement practices (mentioning, as a curiosity, the financial system of the Knights Templar) or the shares of the East India companies. Such information broadens the perspective on investing and introduces the subject of investments in stocks, bonds and other financial instruments.</p><p>Thirdly, the app should be <strong>attractive</strong>, making children eager to use it. The subjectively perceived "attractiveness" of an app can be achieved not only through well-designed user flows and gamification, but also through joyful, energizing UI design and satisfying micro-interactions.
It is worth noting that although the kid's app has its own visual perks, it still needs to be internally consistent with the parent's app within the same banking system.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/09/3-wersje-1.png" class="kg-image" alt="Mobile Banking Application for Children: Advantica Kids" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/09/3-wersje-1.png 600w, https://www.softax.pl/blog/content/images/2021/09/3-wersje-1.png 760w" sizes="(min-width: 720px) 720px"></figure><p>Above, we present three dashboard proposals for different thematic threads, addressed to both older and younger customers. In the following part we will present the functionalities from the first visualization.</p><h2 id="functionality">Functionality</h2><p>Of course, functionality is important, but because it is quite limited by law, there is little room for breadth.</p><p>The main functional areas are savings, support for payments, mechanisms that help collect money, and tasks with rewards (Pocket 2.0).</p><p>I will not describe in detail the functionalities known from the applications available at PKOBP (iPKO Junior), Pekao SA (Wallet PeoPay KIDS), mBank (mBank Junior) and Millennium (Konto 360° Junior). I will focus on a few elements that are not present in the applications available on the market, and which in my opinion should be there.</p><p>Interestingly, only four banks on the market have mobile apps for children under 13: PKOBP (since 2012), Pekao SA (since 2018), mBank and Millennium.</p><p>Two more banks have products for children: BNP Paribas and Santander.</p><p>BNP Paribas offers only a debit card (in the form of a plastic card, a wristband or a watch - the latter for 10 PLN per month).</p><p>Santander Bank has an option to open an account for a child, but the account is available only from the parent's e-banking, and unfortunately there is no card for a child under 13 years old. The bank has an interesting educational website https://finansiaki.pl/ with many materials to use with children of different age groups.</p><p>Below is a list of functionalities that, in my opinion, should be in a model application and that are present in our solution.</p><h2 id="support-in-financial-management">Support in financial management</h2><p>The app has a financial barometer that is constantly available to the child and visible in different places in the app.</p><p>Thanks to this, the child can see how his/her finances are doing. The barometer continuously analyzes the situation and displays the state of finances in real time.</p><p>The barometer reacts to each operation, indicating whether the child is exceeding his/her budget by spending too much or keeping a balance between spending and saving.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/09/Budget-2.png" class="kg-image" alt="Mobile Banking Application for Children: Advantica Kids" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/09/Budget-2.png 600w, https://www.softax.pl/blog/content/images/2021/09/Budget-2.png 760w" sizes="(min-width: 720px) 720px"></figure><p>For each thematic version, the barometer uses different icons dedicated to the theme.
The simplest version uses smiley faces.</p><p>In addition to the always-available barometer, there are also notifications generated by the assistant.</p><h2 id="financial-education-related-to-risks-and-historical-elements">Financial education related to risks and historical elements</h2><p>From the point of view of banking functionality, the mobile application itself cannot (due to legal restrictions) offer much. After all, a child needs to be able to make small card payments, and needs a convenient way of saving (a moneybox) and mechanisms for receiving money (here it is worth noting that the challenges in the iPKO Junior application are a good idea; in our application they are simply called tasks, described further in this article).</p><p>To make the child reach for the application, it has to provide other attractive elements. We introduced two: the first is financial education, the second is gamification used to reinforce it.</p><p>The extended educational scope of the application is an unprecedented element.</p><p>In our solution, we included not only basic information about what a card or an account is, but also built a historical knowledge base.</p><p>We present the history of banking (when the first bank was established, historical ways of investing, how money changed over time) and the history of products such as payment cards, deposits and accounts, but also coins, bullion and bonds. In various places we provide information about when the first card was created, when the first ATM appeared, what these devices looked like in the past, and so on.</p><p>The child receives a lot of interesting information, and the system leads him/her to new items in response to his/her current interests.</p><p>Here we have also used virtual guides, such as the educator owl, who lead the young user through the process.</p><p>Information appears in appropriate places (e.g. next to the card there is information related to payment cards in general), and there is also a separate option for gaining and consolidating knowledge: "Gain knowledge".</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/09/Story.png" class="kg-image" alt="Mobile Banking Application for Children: Advantica Kids" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/09/Story.png 600w, https://www.softax.pl/blog/content/images/2021/09/Story.png 760w" sizes="(min-width: 720px) 720px"></figure><h2 id="gamification-that-is-lessons-and-learning-as-in-memrise-or-duolingo">Gamification, that is, lessons and learning as in Memrise or Duolingo</h2><p>Educational content that appears only randomly and irregularly will not result in lasting knowledge.</p><p>So we use mechanisms for consolidating the presented knowledge, based on solutions known from applications such as Memrise or Duolingo.</p><p>A child logging in to the application has a dedicated option, "Gain knowledge", which works along the same lines as Memrise-type applications.</p><p>There are historical and general questions related to financial concepts, tied of course to the knowledge presented in different parts of the application. The child can then take a test to check his/her knowledge, consolidate it with repeated questions and move on to the next levels.</p><p>Monetary prizes are an attractive element.
After a specified time, once the knowledge has been consolidated, the child receives a monetary reward defined by the parent, which is credited to the main account, card or moneybox (optionally, the bank itself can fund the reward).</p><p>The application follows the basic principles of well-designed gamification. The child sees his/her progress, earns points (apart from money) and sees his/her position in the ranking (the parent defines whether the child participates in the knowledge ranking anonymously or not; only the child's nickname is visible, never his/her personal data).</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/09/Grywalizacja-1.png" class="kg-image" alt="Mobile Banking Application for Children: Advantica Kids" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/09/Grywalizacja-1.png 600w, https://www.softax.pl/blog/content/images/2021/09/Grywalizacja-1.png 760w" sizes="(min-width: 720px) 720px"></figure><p>The goal is to reach the level of a financial knowledge expert. The application stirs emotions and gives satisfaction with the results achieved.</p><h2 id="learning-responsibility-and-the-system-of-collecting-rewards">Learning responsibility and the system of collecting rewards</h2><p>Another option is a system of tasks and rewards for completing them (known from the iPKO Junior application, which we prepared in 2012).</p><p>Tasks are activities to be completed within a specified time. They are defined by the child's parents, and the child receives a reward (monetary or otherwise) for completing them.</p><p>The application supports the child in completing the tasks and reminds him/her when their deadlines are approaching.</p><p>In this way the application teaches how to fulfill duties, have one's work evaluated and get paid.</p><p>Such functionality will not be found in an adult app. We call it "Pocket 2.0". In the tasks section, the child is presented with a view of his/her day. There are daily tasks, which have to be completed every day to receive a reward, and irregular tasks, whose frequency is set by the parent. After the child completes a given activity, a notification pops up in the parent's application; the parent can verify the completion of the task and either ask for it to be done again or approve it. If the child completes the entire series of tasks and the parent reviews them positively, the child automatically receives the predetermined reward.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/09/Junior_zadania-1.png" class="kg-image" alt="Mobile Banking Application for Children: Advantica Kids" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/09/Junior_zadania-1.png 600w, https://www.softax.pl/blog/content/images/2021/09/Junior_zadania-1.png 760w" sizes="(min-width: 720px) 720px"></figure><h2 id="personalization">Personalization</h2><p>The app has several functional areas:</p><p>Money Boxes (savings), Transfers, Tasks (Pocket 2.0), Gain Knowledge, Payment Card, Events, Settings.
The child can personalize the look of the application by placing the elements that interest him/her most on the start screen.</p><p>In this way, he/she learns to manage his/her own application.</p><h2 id="attractiveness-means-coherent-thematic-worlds">Attractiveness means coherent thematic worlds</h2><p>We introduce several thematic threads that coherently build a world of concepts and symbols, guiding the child through the area of finance.</p><p>Animal friends, stars: we have thematic threads adapted to the age and gender of the child.</p><p>Animations add to the appeal.</p><!--kg-card-begin: html--><video width="100%" controls autoplay preload="auto" muted loop>
    <source src="https://www.softax.pl/downloads/blog/advantica_kids.mp4" type="video/mp4">
</video><!--kg-card-end: html--><h2 id="summary">Summary</h2><p>Banking apps for kids need to be approached very differently from apps for adults. The focus should be not on access to products and related functions, but on attracting attention and introducing gamification, so that the child reaches for the app often. The financial products themselves are kept to a minimum and are less important.</p><p>The emphasis should be on fun.</p><p>In our lab, we are constantly working on new concepts. We create thematic apps with sets of concepts tailored to the age and gender of the child (animals may attract a different group of children than a thread related to space).</p><p>We try to build a coherent set of artifacts and their features around a chosen thematic thread, and then use them to attractively present the relevant financial area.</p><p>Our artifacts convey habits, knowledge and services.</p><p>With appropriately selected technology, we are able to achieve a satisfactory time to market in such a dynamically changing environment.</p>]]></content:encoded></item><item><title><![CDATA[Polish mobile banking applications - reports on user experience]]></title><description><![CDATA[I have been working in the area of web and mobile applications for over 20 years, and years ago user experience was not even mentioned as a separate discipline requiring dedicated experts.]]></description><link>https://www.softax.pl/blog/polish-mobile-banking-applications-reports-on-user-experience/</link><guid isPermaLink="false">6112ce8e0e7a851beaa51184</guid><category><![CDATA[mobile]]></category><category><![CDATA[banking]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Wed, 11 Aug 2021 14:20:09 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/08/Avatar.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2021/08/Avatar.png" alt="Polish mobile banking applications - reports on user experience"><p>I have been working in the area of web and mobile applications for over 20 years, and years ago user experience was not even mentioned as a separate discipline requiring dedicated experts.</p><p>Although UX has existed as a term since the 1970s, work on the visual layer of an application was simply outsourced to graphic design companies that focused on good looks. The concept of customer experience was absent from projects.</p><p>The UX area slowly crept into front-end application projects. Today, UX specialists dominate the design of mobile applications, there is a lot of talk about UX, and the knowledge and methodology of UX design are actively used in the creation of new banking applications.</p><p>Nevertheless, when using mobile applications in everyday banking, we encounter many inconvenient or counterintuitive solutions.</p><p>We wondered what causes these imperfections, since so much energy is spent on improving the quality of the user experience.</p><p>So we decided to methodically verify the functionalities made available in mobile banking applications.</p><p>Do all applications have similar problems in similar areas? Do the complications affect only some of them? How do the banks handle the various functional processes?</p><p>All this interested us, so we looked at the 8 applications of the largest Polish banks, which together cover approx.
14 million users, i.e. about 80-90% of banking application users in Poland.</p><p>Taking a closer look at the individual applications, we decided to present good and bad practices in their main functional areas.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/08/Srodek1---banking-applications.png" class="kg-image" alt="Polish mobile banking applications - reports on user experience" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/08/Srodek1---banking-applications.png 600w, https://www.softax.pl/blog/content/images/2021/08/Srodek1---banking-applications.png 760w" sizes="(min-width: 720px) 720px"></figure><h2 id="functional-areas">Functional areas</h2><p>We plan reports on the 6 main functional areas most often used by clients:</p><ol><li><a href="https://www.softax.pl/en/reports/mobile-banking-login">Logging in and account balance before logging in</a></li><li>Payments and other actions before logging in</li><li>Main view/dashboard after logging in</li><li>Account history</li><li>Making a payment</li><li>Payment card management</li></ol><h2 id="experience-analysis">Experience analysis</h2><p>In the research, we focused on the customer experience related to the use of mobile applications.</p><p>We described this experience focusing on the following areas:</p><ul><li>Peace of mind - the client does not want problems or the risk of making a mistake; he wants to feel in control of the situation and confident that he is correctly using the possibilities offered by the application. So we tested whether a given function clearly communicates what the client can do at a given moment.</li><li>Security - the client knows that his data and money are safe and that his orders will be carried out in accordance with his will.</li><li>Ease of use / Convenience - the efficiency of the process itself, with no effort or repetitive activities required.</li></ul><p>For now, we present the first report, on the functionality related to logging in to the mobile application and the account balance before logging in.</p><p>We invite you to read it, as well as the individual reports for each application.</p><p>Subsequent reports will appear every 2-3 months.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/08/Srodek2---banking-applications.png" class="kg-image" alt="Polish mobile banking applications - reports on user experience" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/08/Srodek2---banking-applications.png 600w, https://www.softax.pl/blog/content/images/2021/08/Srodek2---banking-applications.png 760w" sizes="(min-width: 720px) 720px"></figure><h2 id="how-to-use-this-report">How to use this report?</h2><p>After this first report, we do not draw general conclusions; however, within the report for each area, we present successful solutions that are worth disseminating, as well as problematic ones, for which we recommend changes.</p><p>We can already see that the variety of approaches to the same functionality is great. There is no single good way to implement a given functionality (for now, we can say this about the login functionality); there are several good ones. Sometimes you are surprised by obvious UX errors (e.g. poorly visible basic Login buttons, or no explanation of what the effect of a given action will be) or technical errors (e.g. activating the biometric sensor too early, so that an error greets us).
We can also see that some processes depend on the adopted level of security (e.g. whether a change of the login method is to be additionally confirmed or not - and if so, by one of two methods or by the last one used), which affects the comfort of use.</p><p>We encourage you to follow our blog. The reports are divided into a descriptive, substantive part, which is the result of the analysis of all the applications, and a part describing a given process, written loosely and with humor for each application separately. Despite the large overall volume of the reports, we focus on details. The reports can certainly inspire you to take another look at the UX of the applications you deliver. So we encourage bankers to get acquainted with them!</p><!--kg-card-begin: html--><aside>Read our report today: <a href="https://www.softax.pl/en/reports/mobile-banking-login">Mobile banking report: 1. Login and account balance</a></aside><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[How to put your system in the cloud, keeping your head out of the cloud? Part2]]></title><description><![CDATA[The second part of a series of four articles, in which I describe the first three areas to consider when migrating infrastructure to the cloud: inventory, the infrastructure's potential for migration, and the schedule.]]></description><link>https://www.softax.pl/blog/migration-of-existing-it-infrastructure-to-the-cloud-environment-part-two/</link><guid isPermaLink="false">60a26f4b0e7a851beaa5110b</guid><category><![CDATA[agile]]></category><category><![CDATA[architecture]]></category><category><![CDATA[banking]]></category><category><![CDATA[cloud]]></category><category><![CDATA[digitaltransformation]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Mon, 24 May 2021 17:43:05 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/05/Avatar-How-to-put-your-system-in-the-cloud--keeping-your-head-out-of-the-cloud-.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2021/05/Avatar-How-to-put-your-system-in-the-cloud--keeping-your-head-out-of-the-cloud-.png" alt="How to put your system in the cloud, keeping your head out of the cloud? Part2"><p>In the first part, based on my project experience, I provided a subjective list of areas to consider when migrating infrastructure to the cloud. In this article, I take a closer look at the first three:</p><ul><li>Inventory</li><li>The infrastructure's potential for migration</li><li>The migration schedule</li></ul><h2 id="inventory-and-determination-of-the-infrastructure-s-potential-for-migration">Inventory and determination of the infrastructure's potential for migration</h2><p>Let's assume that we are already determined to migrate, we know why we want to do it, and we have an idea of what our infrastructure should look like.
For example, that:</p><ul><li>we need to implement a CI/CD process,</li><li>we will build the cloud environment on, say, the popular Kubernetes,</li><li>we are interested in a local cloud (or, on the contrary, only a public one),<br>and so on.</li></ul><p>So the moment has come to analyze and verify the potential and limitations of our software for the move to a new software infrastructure, and then to new hardware (a secondary issue).<br>This task is not trivial; what's more, it turns out to be very political.<br>There are two possible general conclusions: "it is possible" or "it is impossible".</p><p>Of course, "possible" covers everything from "it is possible, but the changes will be so costly that it does not pay off" to "it is possible, easy and profitable" (and everything in between).</p><p>Packing components into containers is not a complicated undertaking. Of course, the devil is in the details and sometimes problems arise. In theory, however, this is a fairly simple configuration task.</p><p>The conclusions from the inventory of the containerization potential and limitations of the existing software may open a gateway to cooperation for some suppliers, and close the door in the face of others. This stage may therefore be political rather than technical. I mention it because it is a very important element influencing decisions on how to implement the project, at least in the initial phase of collecting data and arguments.</p><p>But let's leave politics aside and focus on the design tasks.</p><p>In my view, the best solution is to perform the inventory with a mixed team, consisting of experts from the organization and from the supplier. Thanks to this, the organization can better understand the specifics of the software provided by the supplier, while the supplier gets to know the context in which its solutions function in the organization and tightens cooperation with it.</p><p>It is a good idea to order a PoC (Proof of Concept) in which the supplier containerizes a selected element of the infrastructure - with such experience, the assessment of the containerization potential will be more reliable. I know from my own experience that suppliers care about maintaining cooperation and will gladly undertake such a task. The contracting organization, in turn, can verify on working software what the supplier's approach to containerization looks like in practice, and gauge the chances of success of the entire project.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_1-.png" class="kg-image" alt="How to put your system in the cloud, keeping your head out of the cloud? Part2" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/05/obrazek-wewnatrz_1-.png 600w, https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_1-.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="modularity-as-an-opportunity-for-success">Modularity as an opportunity for success</h3><p>If the infrastructure was built from modular, service-based components, such a PoC should pose no major problem. Components can usually be easily packaged into containers, i.e.
turned into a cloud-ready version.<br>There are of course many challenges here, such as:</p><ul><li>communication with the outside world,</li><li>security,</li><li>the impact of an external framework supporting the containers on their operation,</li><li>monitoring and log analysis,</li><li>load distribution and the division of the architecture in the new environment.</li></ul><h3 id="watch-out-for-transport-between-components-real-life-example">Watch out for transport between components. Real-life example</h3><p>In the solutions provided by Softax, we use many types of transport for communication and data exchange between components. For several years our components have been built in such a way that the transport can be defined in configuration. We have our own transport libraries handling load balancing and failover, and we also use generally available solutions, such as gRPC or queuing systems.</p><p>Because the cloud infrastructure has its own component management mechanisms, some features of our transports had to be modified so as not to duplicate functionality (e.g. we disable our load balancing mechanisms, because Kubernetes has its own, which manage the load of the individual pods).</p><p>So simply packaging a component into a container may not be enough; some adjustments will almost always be necessary.</p><p>It was not difficult with our solutions, because their structure is modular. However, in the case of monoliths, you have to ask some basic questions:</p><ul><li>First: can they be broken down into business domains or microservices at all?</li><li>Second (more difficult): what kind of division should be applied?</li></ul><p>In general, any code, even a monolith, can be divided into smaller modules - of course, you always have to weigh the benefits and costs of such an undertaking. It may turn out that it is simply not worth doing.</p><p>In any case, the inventory process, which involves a lot of analytical, architectural and infrastructural work, must NEVER be omitted. Crucially, this inventory should be made in cooperation with the authors of the solutions. I have already encountered very cursory analyses by external auditors who treated digital transformation as an opportunity for political games between suppliers, but had no substantive grounding in the context of the system operating in the organization.</p><h2 id="the-first-schedule">The first schedule</h2><p>Migration should be viewed on a scale of years: for a small company a minimum of one year, for a large one up to 5 years. These values are based on experience. I am currently participating in a project that has been going on for over 1.5 years, and we are perhaps 1/3 of the way through.</p><p>In large organizations, where the infrastructure consists of hundreds or thousands of modules, migration may be conceptually simple but difficult to implement. Not for architectural and technical reasons (in my opinion, these are manageable), but precisely for organizational ones: the division of power, business criticality (i.e. the risk of interrupting the continuity of the system), and resistance to change (which may be associated with a loss of influence or position in the organization).</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_2-.png" class="kg-image" alt="How to put your system in the cloud, keeping your head out of the cloud?
Part2" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/05/obrazek-wewnatrz_2-.png 600w, https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_2-.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="technological-elements">Technological elements</h3><p>This area is the easiest to define.<br>Regarding the sequence of work, the following factors should be considered:</p><p>1. Up-to-date software.<br>If the software is out of date, the question arises: should I upgrade its version during migration? Often they have to be adapted to a higher version of the operating system, which entails an upgrade to higher versions of system libraries, software libraries, etc. This can be a big undertaking!</p><p>2. Operating system. You should answer the following questions:</p><ul><li>what operating system do we work on?</li><li>is it worth changing?</li><li>what version of the system are we working on? can we and should we upgrade the software?</li></ul><p>3. The criticality of the component from the business point of view. You should answer the following questions:</p><ul><li>Where are the risks associated with migration and the problems that may arise from it the greatest in terms of business?</li><li>Which business functionalities are the main and which are supporting? For example, user authorization and account access are the basic critical business areas of the banking system, and the offer module is business-relevant, but less critical.</li></ul><p>4. Workload. When switching to the cloud, we usually don't know its performance potential. We have to check it in practice. It is better to perform such verifications on less loaded and - of course - less critical infrastructure elements. We don't know how, for example, Kubernetes will cope with the increased load - this has to be learned, and it can be a process that takes many months.</p><p>5. The level of complexity of the functionality. This is a non-trivial issue. As mentioned before, the migration process may involve a shift from a monolith-closer architecture to a microservice. For one of the clients, we are faced with a decision whether to migrate architecture from an orchestral model to an event model.</p><p>In the first phase, I advise the migration of as-is infrastructure, and in the next phases, you can approach the change of architecture to a more microservice or event model (although if the functionality is well-tuned, known, operating for years in a given architecture model, it is worth considering whether its modification in line with current technological trends actually pays off).</p><p>Regardless of the approach - the more complicated the functionality, the more nuances we can skip, and thus the more mistakes we can make during the migration.</p><p>So let's start with simple functionalities. Let's test how they work in the cloud, how to access logs, how they deal with data, how we can monitor them, what is the issue of a security audit (so important in banking infrastructure), etc.</p><p>Fortunately, every organization can answer these questions fairly quickly and plan the order in which individual systems and applications should be migrated. Any conscious manager will start with the part of the infrastructure that is the least critical and at the same time promises to be successful. 
The most difficult things are left for the end, and some of them may never be migrated at all (this is also possible and, in my opinion, an acceptable outcome).</p><h3 id="lots-of-environments">Lots of environments</h3><p>I know from experience that the most difficult period is the transition, when you need to maintain two or more versions of the distribution processes. In any large organization (I am referring mainly to banks), there are many environments in the software development process. We can assume that there are usually four: development, testing, pre-production and production.</p><p>From the moment we launch a development process based on the new type of distribution (distribution to the cloud), we must maintain at least two distribution paths for some time.</p><p>A development path supported by the new model, and a production path supported by the current model.</p><p>Therefore (and this is one of the most important conclusions), before starting such an undertaking, we should have streamlined distribution processes in the CI/CD model, in order to relieve administrators, testers and analysts as much as possible, so that they are able to release subsequent software versions to production often, responsibly and with a clear conscience.</p><p>Why? Usually, an organization has optimized teams of specialists (teams even a little too small in relation to the needs - this is a natural state). As a rule, all team members have a lot of work to do all the time - and the migration process should not multiply the number of tasks, but rather reduce repetitive tasks and allow people to focus on optimizations.</p><h3 id="people">People</h3><p>The entire project involves a lot of human resources from the very beginning. Therefore, for the migration process to be feasible, the team should be relieved through maximum automation of processes. I write "maximum" on purpose, not "full", as I approach this topic realistically. Automation is not an end in itself; it is there to relieve people so that they can deal with large-scale migration tasks. If the organization already has automated distribution processes:</p><ul><li>the team is already adjusted to the needs and there is not much room for optimization,</li><li>so you need to hire the right people, and it takes many months for them to become familiar enough with the company's infrastructure to be really useful. My experience shows that, with large solutions, an ambitious person needs at least half a year to understand the infrastructure to be migrated. I believe it takes a year to know its details well enough to pick out the nuances and spot the lurking traps.</li></ul><p>Contrary to appearances, an organization that has not implemented CI/CD processes will not necessarily take longer to migrate to the cloud than one that has. The time may be similar, or even - paradoxically - shorter for the former.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_3.png" class="kg-image" alt="How to put your system in the cloud, keeping your head out of the cloud?
Part2" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/05/obrazek-wewnatrz_3.png 600w, https://www.softax.pl/blog/content/images/2021/05/obrazek-wewnatrz_3.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="social-hierarchy">Social hierarchy</h3><p>As for the aspects related to the social hierarchy in the organization, the division of powers and competences, the matter is more delicate. When creating the schedule, I always take into account such elements as the personality of team members and what they can gain or lose socially in the organization due to a given project. These elements are, in my opinion, more important than pure technical competence.</p><p>I will not give you a ready-made recipe here. I know from experience that well-established people have difficulty engaging in a project that may change that position. In such a situation, it may be better to involve people who have slightly lower technical competences, but are not afraid of changes, and who can raise their position in the organization thanks to new tasks. It is always easier to acquire technical competence than to convince experts to a project that, in their opinion, will lower the prestige of their work or even take the job away from them (in fact, it would not happen, but it is very difficult to overcome the great fear of change).</p><h3 id="summary">Summary</h3><p>In the second part of the digital transformation cycle, I focused mainly on run-up tasks:</p><ul><li>inventory,</li><li>potential for change,</li><li>designing the schedule,</li><li>and last but not least, about preparing people for the change that will be affected by this change.</li></ul><p>These are preliminary tasks, after which we can have a more precise vision of the change.<br>In the last part, we will talk about specific works and the way of their implementation, i.e. how and with whom.</p>]]></content:encoded></item><item><title><![CDATA[IT systems migrations - Part3: Technical perspective]]></title><description><![CDATA[… oh changes, changes - the 'favorite' part of our life. And yet the replacement of the system is nothing more than a change - but a change from the category of the largest projects that affect various aspects of IT systems management. ]]></description><link>https://www.softax.pl/blog/it-systems-migrations-part3-technical-perspective/</link><guid isPermaLink="false">600d99f437bc7451f5ebdd5d</guid><category><![CDATA[architecture]]></category><category><![CDATA[digitaltransformation]]></category><category><![CDATA[digital]]></category><dc:creator><![CDATA[Tadeusz Powichrowski]]></dc:creator><pubDate>Tue, 06 Apr 2021 10:00:06 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/02/Avatar03.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2021/02/Avatar03.png" alt="IT systems migrations - Part3: Technical perspective"><p>… oh changes, changes - the 'favorite' part of our life. And yet the replacement of the system is nothing more than a change - but a change from the category of the largest projects that affect various aspects of IT systems management. And although as we know "no risk-no fun", if we intend to replace the IT system with a new one, it is worth taking care to reduce the risk of the entire project. </p><p>How to start?</p><p>Let's start as broadly as possible and tame the change step by step, setting its boundaries and identifying potential risks. 
Let's take a look at what it means to replace a system.</p><h3 id="architecture">Architecture</h3><p>A panorama of the IT systems architecture before and after migration - the architecture review will help to identify the connections of the replaced system with its environment, inventory the interfaces to internal and external systems, and define the technologies and the nature of the exchanged data. We will use this knowledge to build the migration infrastructure, to define the scope of the tests needed for the propagation of migrated data and, very importantly, to assign the right, substantively involved stakeholders to the individual test tasks.</p><h3 id="security">Security</h3><p>Security of migrated data - during the migration, key business data will be transferred: our clients' data, account balances, financial operations data, contract data. Hence the need to strictly control access to the extracts and to protect the transferred data against unauthorized modification. On top of that, there are the requirements of external regulators - GIODO, GDPR, GIFI, KNF, and so on - so appropriate safeguards are necessary at every stage of work. Isolating the migration environments, compressing, encrypting and signing the transferred data, and limiting access for service personnel are good ways to meet these requirements.</p><h3 id="hardware-migration-infrastructure-necessary-for-the-migration">Hardware - Migration infrastructure - necessary for the migration</h3><p>Carrying out a data migration means preparing test and production infrastructure adequate to the size of the transferred data: infrastructure for running migration tests (ETL) and for propagating the migrated data to related systems.</p><h3 id="software">Software</h3><p>Migration tools - while data export from the old system and data import to the new system are most often performed using the native mechanisms of these systems, mapping data between the systems may require a dedicated ETL tool to map one data model to another.</p><h3 id="personal-resources">Human resources</h3><p>Migration team - clearly, the migration will not do itself. What is needed are high technical qualifications related to the migration itself, as well as to the maintenance, development, testing and integration of the new system. It is worth ensuring that knowledge about the operation of the new system remains in our organization.</p><h3 id="isolation-of-changes">Isolation of changes</h3><p>Soft freeze, cold freeze - the overlap of infrastructural and functional changes with the migration work can have a huge impact on the assumed dates and on the availability of the test infrastructure and personnel. It is worth setting a date from which every planned IT and business change must be agreed with the migration project (soft freeze), and shortly before the migration it is worth introducing a ban on implementing other changes (cold freeze).</p><h3 id="data-quality">Data quality</h3><p>Migrated data errors - data in the old system carry the history of various non-standard operator actions, failures and corrective actions. These irregularities are usually revealed during the mapping of data to the new system's data model, during tests of the data transfer process, and during tests of the new system after migration.
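</p><p>To make the quantitative side of such checks concrete, here is a minimal sketch of a reconciliation-style comparison (illustrative only: the database files, the "accounts" table and its columns are hypothetical stand-ins, and a real migration would run against the actual extracts of both systems):</p><pre><code># Minimal reconciliation sketch: compare record counts and balance totals
# between the old and the new system after a migration run.
# NOTE: sqlite3 stands in for the real database drivers; the "accounts"
# schema and the file names are hypothetical placeholders.
import sqlite3

def totals(conn, table):
    # One pair of quantitative measures per system: row count and balance sum.
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM(balance), 0) FROM {table}")
    return cur.fetchone()

old_conn = sqlite3.connect("old_system_extract.db")
new_conn = sqlite3.connect("new_system_extract.db")

old_count, old_sum = totals(old_conn, "accounts")
new_count, new_sum = totals(new_conn, "accounts")

if (old_count, old_sum) != (new_count, new_sum):
    print(f"RECONCILIATION FAILED: {old_count}/{old_sum} vs {new_count}/{new_sum}")
else:
    print("Record counts and balance totals match between the systems.")</code></pre><p>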
It is worth assuming that errors are repaired only in the source system, and that for special cases dedicated "manual" migration procedures are prepared.</p><h3 id="quality-of-functionality">Quality of functionality</h3><p>Tests, tests, tests - data transfer tests verify the effectiveness, and optimize the duration, of the process of extracting data from the old system, transforming it into the new data model and loading it into the new system.</p><p>There are several types of tests, for example:</p><p><strong>Migration reconciliation tests</strong> - quantitative and qualitative checks of the accountability of the individual stages of data transfer between the systems.</p><p><strong>Migrated data propagation tests</strong> - checks of how the surrounding systems operate on the data provided by the new system.</p><p><strong>Functional tests</strong> - tests of the necessary changes in the new system and in the related systems.</p><p><strong>General migration rehearsals</strong> - a final verification of the entire migration process: making sure that the planned process is under control, setting a GO/NO-GO point, and making sure that all the necessary people will be available during the migration.</p><h3 id="post-migration-tasks">Post-migration tasks</h3><p>Cleaning up and archiving - after the migration, it is necessary to decommission the retired systems, archive the old system's data and the migration products (the results of the individual stages of data transfer, reports, documentation), and recover hardware and software.</p><p>And when we have finished the migration ... it is worth remembering that the new system will not work forever, and there will come a moment when we start to wonder again whether it is time to replace the system with a new one. </p><p>And we will do it again.</p>]]></content:encoded></item><item><title><![CDATA[How to put your system in the cloud, keeping your head out of the cloud?]]></title><description><![CDATA[The first part of a series of four articles in which I briefly but specifically describe the migration of existing IT infrastructure to the cloud environment.]]></description><link>https://www.softax.pl/blog/migration-of-existing-it-infrastructure-to-the-cloud-environment/</link><guid isPermaLink="false">600db4ec37bc7451f5ebdd7e</guid><category><![CDATA[cloud]]></category><category><![CDATA[banking]]></category><category><![CDATA[digitaltransformation]]></category><category><![CDATA[digital]]></category><category><![CDATA[architecture]]></category><category><![CDATA[agile]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Mon, 08 Mar 2021 16:04:18 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/03/Avatar-cloud.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2021/03/Avatar-cloud.png" alt="How to put your system in the cloud, keeping your head out of the cloud?"><p>The text you are reading is the first part of a series of four articles in which I briefly but specifically describe the migration of existing IT infrastructure to the cloud environment.</p><p>I give examples based on the real process of migrating our software from an on-premises environment to a containerized Kubernetes / OpenShift environment, using tools such as Jenkins (for automation), Jinja (for templating) and the ELK Stack (for monitoring).</p><p>Although I am writing here about the migration of infrastructure, i.e.
systems and applications, to the cloud environment, and not about the migration of data from one system to another, it is worth remembering that these issues have a lot in common.</p><ul><li>Both processes hold many pitfalls related to the need to fully understand both the current solution and the target solution.</li><li>Both require developing the migration process on assumptions that take into account the characteristic features of the current environment and of the target environment.</li><li>Paradoxically, the most important thing is that in both cases old technological solutions lose their importance in favor of new ones, which may be associated with a change in the hierarchy of power, i.e. of influence, competences and dependencies in the organization.</li></ul><p>In this article, I present the stages and characteristics of migration that I consider important, and discuss the purpose of the entire project.</p><p>The following sections provide a more detailed analysis of these steps and aspects.</p><h2 id="why-such-a-topic-and-why-am-i-writing-about-it">Why such a topic and why am I writing about it?</h2><p>We have recently successfully migrated several million payment cards to our Advantica system in one of the largest Polish banks, in a way that was imperceptible to customers.</p><p>This success made me reflect that, as an organization, we have been dealing with various types of migrations for over 20 years, and that this experience is worth sharing.<br>On the one hand, I want to show our competences in this series; on the other, to draw on real project experience in migrating the infrastructure we provide to the cloud.</p><p>As the business supervisor of the project, I had the opportunity to observe which elements affect the success or failure of a migration, and this is what I am going to share with you today.</p><h2 id="a-world-of-constantly-new-ideas">A world of constantly new ideas</h2><p>Every few years there are new ideas for changes to optimize the software development process:</p><ul><li>omni-channel was fashionable for several years,</li><li>then there was digital transformation and the digitization of processes,</li><li>then agile,</li><li>and now there is the cloud.</li></ul><p>Organizations are constantly changing and we adapt to current trends.<br>Are these trends always meaningful? Are they always worth adopting, even at huge cost?</p><p>Each idea has its light and dark sides.</p><p>Here I will highlight a few aspects that should be considered in cloud migration projects to avoid failure and not waste time and money.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.softax.pl/blog/content/images/2021/03/cload-latarnie-.png" class="kg-image" alt="How to put your system in the cloud, keeping your head out of the cloud?" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/03/cload-latarnie-.png 600w, https://www.softax.pl/blog/content/images/2021/03/cload-latarnie-.png 760w" sizes="(min-width: 720px) 720px"><figcaption>Lanterns</figcaption></figure><h2 id="lanterns-that-will-let-you-swing-in-the-clouds">Lanterns that will let you swing in the clouds</h2><p>I am not writing here about which cloud is best, or which container application management platform is worth recommending.
Instead, I point out the reference points that I consider important - lanterns that will light up the bumpy road to migration.<br>For the project to be successful, it must be divided into the following stages, regardless of the adopted project methodology:</p><ol><li>Inventory, that is, determining what infrastructure elements we have, how they are built, what their deployment process looks like, what features characterize them, and what functionalities they are responsible for.</li><li>In the next step, we assess the real potential of the infrastructure for migration to the cloud; before the migration, the production and delivery of applications should be embedded in a CI/CD process. We try to predict which applications will cause the most problems and which will be relatively easy to migrate.</li><li>Assessment of the team's capabilities. What competencies and mental qualities does a team need to succeed in a migration project? Do we have such people? Where can we get them? Will outsiders help or hinder?</li><li>Guided by the knowledge of the system and the forecasts developed in the previous stage, as well as by what we know about the team implementing the migration and the conditions in the organization, we can prepare the first schedule: a plan of which items can be migrated, how, and in what order. You can assume from the start that not all elements are worth migrating, as for some the cost of migration will exceed the benefits.</li><li>A good starting point is to perform a PoC (Proof of Concept), that is, to migrate a selected element of the system "on trial". The PoC will help determine the probable duration of the project and reveal potential problems that were not taken into account in the theoretical analysis phase.<br>Implementing a PoC will also allow you to verify the team's capabilities: whether its competences are sufficient, whether its composition is optimal, whether communication within the team is flawed. This is valuable information that we cannot obtain until we start migrating specific applications. Which brings us to the next essential element:<br>after people, the process, that is:</li><li>The project management methodology - and why agile. While for years I was skeptical about this approach (and I have been participating in agile projects since 2013 - really!), in this case I can finally see its real application: it has found its place in such a project and it works! My skepticism was due to the fact that such projects consumed far more resources and time than those run with the PMI approach; with a clearly defined scope and method of implementation, there was no need for such an extensive project methodology. In the case of a project whose outcome and scope are unknown at the beginning, implementing and evaluating small steps makes it possible to verify the effect of the work within a relatively short time and to adapt the route to current conclusions and experience.</li></ol><p>In my opinion, the implementation of the above activities is a condition of success. However, in order not to lose sight of the direction in which we should be going, we should also remember to answer the following questions:</p><ol><li>What is the purpose of our project? Why are we carrying out such a heavy and difficult undertaking?</li><li>How do we want to achieve this goal? Why do we choose a specific project methodology and specific technologies? Why do we adopt a given sequence of actions?</li><li>How will we then evaluate our actions?
In other words, how will we know that the project was successful?</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.softax.pl/blog/content/images/2021/03/Target.png" class="kg-image" alt="How to put your system in the cloud, keeping your head out of the cloud?" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/03/Target.png 600w, https://www.softax.pl/blog/content/images/2021/03/Target.png 760w" sizes="(min-width: 720px) 720px"><figcaption>Goal</figcaption></figure><h2 id="definition-of-the-goal">Definition of the goal</h2><p>Contrary to appearances, this element is not easy to define.<br>The goals of the company's management are one thing, the goals of middle managers (each of whom is responsible for a part of the entire system) are another, and the goals of the people who directly maintain a given infrastructure are different again.</p><p>Business goals do not have to coincide with operational goals, but it is always about improving work efficiency and responding better to market needs in a way that translates into profits. PR considerations also matter.</p><p>Migration to the cloud itself - i.e. switching to a hardware platform outside the organization and getting rid of the "scrap metal" - although often treated as the main reason for migration, is not a key factor in my opinion.</p><p>Of course, it all depends on the scale of the organization. It may be more convenient and cost-effective for small businesses to use external infrastructure. Large organizations, however, do not have such limitations, especially since the cost of maintaining infrastructure administrators is much greater than the cost of equipment, and regardless of the chosen platform, administrators will be needed.<br>So hardware is one dimension; the other, more important one, is the maintenance and management of infrastructure and software.<br>In this respect, both the on-premises solution and migration to the cloud have their advantages and disadvantages, and the purpose of the change is difficult to define clearly.<br>A more realistic goal for a migration is process automation, i.e. introducing solutions based on the principles of Continuous Integration (CI) and Continuous Delivery (CD) in the organization.</p><p>Streamlining the software distribution process from the developer to the production environment, where the functionality is made available to customers, is the primary goal of such a large undertaking.</p><p>From a business point of view, this means faster time to market.</p><p>From the operational point of view, it means saving resources that can be used, for example, to eliminate technical debt and to improve the quality of the delivered software through more testing and frequent deployment of continuously updated software versions, thanks to which the quality of the infrastructure is higher.</p><p>The question of why to do it at all, if the current solutions work, I leave for a separate, longer discussion. You certainly already have an opinion on the matter.</p><p>In the next part of this article, I will go into more detail about the steps that I believe are necessary for a successful migration.</p><p>Stay tuned!</p>]]></content:encoded></item><item><title><![CDATA[IT systems migrations - Part2: Business perspective]]></title><description><![CDATA[So we decided. We have a new system selected.
But before we start using it, there are a few key business considerations that will affect how we transition to the new system. ]]></description><link>https://www.softax.pl/blog/it-systems-migrations-part-two-business-perspective/</link><guid isPermaLink="false">600d994637bc7451f5ebdd38</guid><category><![CDATA[architecture]]></category><category><![CDATA[digitaltransformation]]></category><category><![CDATA[digital]]></category><dc:creator><![CDATA[Tadeusz Powichrowski]]></dc:creator><pubDate>Mon, 15 Feb 2021 11:05:47 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/02/Avatar02.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2021/02/Avatar02.png" alt="IT systems migrations - Part2: Business perspective"><p>So we decided. </p><p>We have a new system selected. </p><p>But before we start using it, there are a few key business considerations that will affect how we transition to the new system. The basis is knowledge about how the old system functions and what the new system can do, as well as knowledge about the conditions in which the business operates. </p><p>Using this knowledge gives us a chance to plan the migration process rationally and protects us against the chaos of a spontaneous migration.</p><p>Below are some basic areas of analysis that will allow you to simplify the migration as much as possible and reduce the risk of failure:</p><h3 id="cut-the-scope">Cut the scope</h3><p>Not all business products need to be migrated to the new system. Not all business products of the old system are available "in stock" in the new one; moreover, not all of them are needed to run the business after migration. Low-yield, rarely used products are easier to close in the old system and move to an archive than to migrate. We focus on the "business core" and simplify data migration.</p><h3 id="fill-the-gap">Fill the gap</h3><p>How will we fill the functional differences between the old and the new system?<br>The new system does not always offer all the functionalities necessary to run our business. There is always a functional gap to fill in the new system, and new products may be needed in place of the excluded ones. Migration to the new system cannot be a step backwards - it is meant to open up new business development opportunities - but not all ideas need to be implemented in time for the launch of the new system. We choose those most needed and easiest to implement.</p><h3 id="static-data-migration">Static data migration</h3><p>Will historical data be needed in the new system? Moving the history of operations and mapping the history of business events of individual migrated products is no piece of cake, especially when they occupy non-trivial gigabytes or terabytes. If possible, leave the old system in a "read only" state or use a data warehouse. And if we do need to migrate static data, remember that it is a time-consuming process (data size!), so it is worth doing it before or after the dynamic, operational data migration. We focus on ensuring operational business continuity and on simplifying the migration process.</p><h3 id="migration-model-how-do-we-migrate-to-the-new-system">Migration model - how do we migrate to the new system?</h3><p>The choice of migration method and time are further issues that need to be resolved.<br>The characteristics and scale of the migrated business data determine the choice of migration method. 
When deciding on a business migration, we choose to open new business products and transactions only in the new system and to gradually phase out operations in the old one - in accordance with the current rhythm of the business. A technical migration, in turn, is a one-time transfer of product and transaction state as of a specific point in time. This operation can be performed manually or semi-manually with little support from IT tools, or automatically (when the scope of the data to be transferred is complicated and too large), based on ETL tools dedicated to migration.</p><h3 id="migration-time-when-do-we-migrate-to-the-new-system">Migration time - when do we migrate to the new system?</h3><p>The selection of an appropriate migration date results from the need to synchronize with our business partners, the rhythm of internal operational activities and the activity of our clients. We prefer days with low business and operational activity, and we plan the migration itself to be the shortest possible process.</p><p>When making subsequent decisions, we must remember that business continuity is the key issue, and the quality of the migration is measured by the success of the business.</p><p>Of course, replacing the system with a new one is also a task for IT teams - but more on that in the next post.</p><p>Stay tuned!</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes -passing config to container - from Environment to Vault]]></title><description><![CDATA[After a break we are back with another post about Kubernetes. This time we will focus on how to pass configuration to an application running in a K8s cluster. ]]></description><link>https://www.softax.pl/blog/how-to-install-vault-on-kubernetes-passing-config-to-container-from-environment-to-vault/</link><guid isPermaLink="false">601423ab37bc7451f5ebddb0</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Grzegorz Giziński]]></dc:creator><pubDate>Mon, 01 Feb 2021 12:48:19 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/02/hashicorpvault.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://www.softax.pl/blog/content/images/2021/02/hashicorpvault.png" alt="Kubernetes -passing config to container - from Environment to Vault"><p>After a break we are back with another post about Kubernetes. This time we will focus on how to pass configuration to an application running in a K8s cluster. But first, to have a playground, we are going to build a docker image with a test application and deploy it to the K8s cluster.</p>
<h2 id="imagerepository">Image repository</h2>
<p>To make images available to all cluster nodes we create a private docker registry running on the master node:</p>
<pre><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2
</code></pre>
<p>On each of the worker nodes we need to update <code>/etc/docker/daemon.json</code> with the new registry definition:</p>
<pre><code>&quot;insecure-registries&quot; : [ &quot;k8admin:5000&quot; ]
</code></pre>
<p>and then restart docker.</p>
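<p>For clarity: the fragment above lives inside the top-level JSON object of <code>daemon.json</code>. A minimal sketch, assuming a systemd-based distribution:</p>
<pre><code>$ cat /etc/docker/daemon.json
{
    &quot;insecure-registries&quot; : [ &quot;k8admin:5000&quot; ]
}
$ sudo systemctl restart docker
</code></pre>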
<h2 id="dockerimage">Docker image</h2>
<p>We will use a simple <a href="https://nodejs.org/"><code>node.js</code></a> script acting as a cloud service. Let's create <code>base.Dockerfile</code>:</p>
<pre><code>FROM node:15-alpine
WORKDIR /app
RUN npm install --production
RUN npm install express
RUN apk --no-cache add curl
</code></pre>
<p>This defines our base image: <code>node.js</code> with the <code>express</code> framework on top of <code>Alpine Linux</code>, with an extra <code>curl</code> for connectivity diagnostics. To build the image we execute the command:</p>
<pre><code>docker build --build-arg https_proxy=http://proxy.yoursite.local:8080 -f base.Dockerfile -t k8admin:5000/blog-base .
</code></pre>
<p>This builds the image and tags it with the private registry address. That way the image will go to our registry instead of the central Docker registry:</p>
<pre><code>docker push k8admin:5000/blog-base
</code></pre>
<p>Let's define a simple application image on top of the base image:</p>
<pre><code>FROM k8admin:5000/blog-base:latest
ARG SRCDIR=.
COPY $SRCDIR/index.js .
</code></pre>
<p>and then execute the commands:<br>
<a name="build-cmd"></a></p>
<pre><code>docker build --build-arg https_proxy=http://proxy.yoursite.local:8080 --build-arg SRCDIR=$1 -t k8admin:5000/blog-app.
docker push k8admin:5000/blog-app
</code></pre>
<p>The <code>blog-app</code> image gets <code>index.js</code> from a given directory. The simplest version of <code>index.js</code> listens on port 3000 and returns the process environment in response to an HTTP GET request:</p>
<p><a name="get-env"></a></p>
<pre><code>const express = require('express')
const os = require('os')

function printObject(o) {
  let out = '';
  for (let p in o) {
    out += p + ': ' + o[p] + '\n';
  }
  
  return out;
}

const app = express()
app.get('/', (req, res) =&gt; {
		let r = 'Environment of ' + os.hostname() + ':\n';
		r += printObject(process.env)
		res.send(r)
})

const port = process.env.BLOG_APP_SVC_SERVICE_PORT
app.listen(port, () =&gt; console.log(`listening on port ${port}`))
</code></pre>
<h2 id="deployment">Deployment</h2>
<p>Next - to get our app into Kubernetes - we define a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">deployment</a> (<code>dep.yml</code>):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
    namespace: blog
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               env: 
               -   name: SOME_ENV
                   value: some-value
               -   name: OTHER_ENV
                   value: OTHER-value
---               
apiVersion: v1
kind: Service
metadata:
    name: blog-app-svc
    namespace: blog
spec:
    type: NodePort
    selector:
        name: blog-app               
    ports:
    -   port: 3000
        targetPort: 3000 
        nodePort: 30001
</code></pre>
<p>Some explanation of the definition above:</p>
<ul>
<li><code>kind: Deployment</code> - the kind of object to be defined</li>
<li><code>metadata.name</code> - the name of deployment</li>
<li><code>metadata.namespace</code> - the namespace where the deployment and all of its subobjects are to be placed. To create the <code>blog</code> namespace execute <code>kubectl create namespace blog</code> before you create the deployment. Using namespaces makes life easier: in the case of this blog I delete the <code>blog</code> namespace to remove all its objects before moving to the next scenario.</li>
<li><code>replicas</code> - sets the number of our app instances</li>
<li><code>labels</code> and <code>selectors</code> are mechanisms to search objects in the cluster</li>
<li><code>containers</code> - definition of application containers
<ul>
<li><code>name</code> - distinguishing name of container</li>
<li><code>image</code> - tag of the image</li>
<li><code>command</code> - command to run on the container in array manner</li>
<li><code>ports.containerPort</code> - port number to be exposed from the container; in our case it is the same port that was set in <code>index.js</code>.</li>
<li><code>env</code> - some environment variables to be passed to the container</li>
</ul>
</li>
<li><code>---</code> - object separator in multiobject yaml definition file</li>
<li><code>kind: Service</code> - definition of service object; <a href="https://kubernetes.io/docs/concepts/services-networking/service/">Service</a> is K8s mechanism to enable communication to application</li>
<li><code>type: NodePort</code> - the kind of <a href="https://kubernetes.io/docs/concepts/services-networking/service/">service</a> that exposes a static port on each cluster node
<ul>
<li><code>port</code> - internal service port</li>
<li><code>targetPort</code> - the port of the container; usually the <code>port</code> and <code>targetPort</code> have the same value</li>
<li><code>nodePort</code> - the port to be exposed on cluster node</li>
</ul>
</li>
</ul>
<p>Now we are ready to deploy the app:<br>
<a name="depl-cmd"></a></p>
<pre><code>$ kubectl apply -f dep.yml
deployment.apps/blog-depl created
service/blog-app-svc created
</code></pre>
<p>Success! To 'taste' it let's invoke our app. First we need to determine which cluster node it has been deployed to (remember, we've chosen to run only 1 copy of the app):</p>
<pre><code>$ kubectl get pods  -n blog -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP          NODE      NOMINATED NODE   READINESS GATES
blog-depl-6bd7865df7-t5pmh   1/1     Running   0          9m56s   10.40.0.2   k8work1   &lt;none&gt;           &lt;none&gt;
</code></pre>
<p>The app is running on the k8work1 node. Let's send it an <code>http GET</code> request:</p>
<pre><code>$ curl -X GET k8work1:30001
Environment of blog-depl-6bd7865df7-t5pmh:
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME: blog-depl-6bd7865df7-t5pmh
SOME_ENV: some-value
OTHER_ENV: OTHER-value
KUBERNETES_PORT_443_TCP_ADDR: 10.96.0.1
BLOG_APP_SVC_SERVICE_HOST: 10.106.234.15
KUBERNETES_SERVICE_HOST: 10.96.0.1
KUBERNETES_SERVICE_PORT_HTTPS: 443
KUBERNETES_PORT: tcp://10.96.0.1:443
BLOG_APP_SVC_PORT_3000_TCP_PORT: 3000
KUBERNETES_PORT_443_TCP: tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO: tcp
BLOG_APP_SVC_PORT_3000_TCP_PROTO: tcp
BLOG_APP_SVC_PORT_3000_TCP_ADDR: 10.106.234.15
BLOG_APP_SVC_SERVICE_PORT: 3000
BLOG_APP_SVC_PORT: tcp://10.106.234.15:3000
BLOG_APP_SVC_PORT_3000_TCP: tcp://10.106.234.15:3000
KUBERNETES_SERVICE_PORT: 443
KUBERNETES_PORT_443_TCP_PORT: 443
NODE_VERSION: 15.4.0
YARN_VERSION: 1.22.5
HOME: /root
</code></pre>
<p>In return - as expected - we got the environment of the container. It is worth noticing that - besides the env variables we defined in the deployment (<code>SOME_ENV</code>, <code>OTHER_ENV</code>) - there are several variables injected by Kubernetes. An application can use them for its own configuration. Because we defined the same port value for the service and the container, we can use the <code>BLOG_APP_SVC_SERVICE_PORT</code> value in <code>index.js</code>. So instead of</p>
<pre><code>const port = 3000
</code></pre>
<p>we can set</p>
<pre><code>const port = process.env.BLOG_APP_SVC_SERVICE_PORT
</code></pre>
<p>and this way move port number from application code to K8s service definition.</p>
<p>To make calling the cloud app easier we assemble the command:</p>
<pre><code>curl -X GET &quot;`kubectl get pods -n blog --selector=name=blog-app --field-selector status.phase=Running --template '{{range .items}}{{ if not .metadata.deletionTimestamp }}{{.spec.nodeName}}{{end}}{{end}}'`:30001&quot;
</code></pre>
<p>As you can see, it finds the pod running in the <code>blog</code> namespace and extracts the name of its node to build the <code>curl</code> parameter (thanks to <a href="https://github.com/kubernetes/kubectl/issues/450#issuecomment-706677565">https://github.com/kubernetes/kubectl/issues/450#issuecomment-706677565</a>). Let's make it a <code>bash</code> function:</p>
<p><a name="cg"></a></p>
<pre><code>cg () { 
	curl -X GET &quot;`kubectl get pods -n blog --selector=name=blog-app --field-selector status.phase=Running --template '{{range .items}}{{ if not .metadata.deletionTimestamp }}{{.spec.nodeName}}{{end}}{{end}}'`:30001&quot;/&quot;$1&quot;; 
}
</code></pre>
<p>Now we can call our cloud app with a simple <code>cg</code> command.</p>
<h2 id="configmaps">ConfigMaps</h2>
<p>Now that we have our test cloud running (and have verified passing environment variables) we can move on to <a href="https://kubernetes.io/docs/concepts/configuration/configmap/">ConfigMaps</a>. ConfigMaps are the common, native way to pass non-sensitive information to a container.</p>
<p><em>To cleanup test environment delete and recreate <code>blog</code> namespace.</em></p>
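<p><em>With namespaces this boils down to two commands:</em></p>
<pre><code>$ kubectl delete namespace blog
$ kubectl create namespace blog
</code></pre>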
<h3 id="fromtextfile">From text file</h3>
<p>Let's take a text file, <code>file.cfg</code>:</p>
<pre><code>This is theoretical configuration file
being injected to container.
</code></pre>
<p>To convert it to a ConfigMap we use the command:</p>
<pre><code>$ kubectl create cm -n blog file-cm --from-file=./file.cfg
configmap/file-cm created
</code></pre>
<p>The <code>--from-file</code> parameter indicates that the whole file content is to be copied into the ConfigMap, which we named <code>file-cm</code>. The created ConfigMap looks like this:</p>
<pre><code>$ kubectl get cm -n blog file-cm -o yaml
apiVersion: v1
data:
  file.cfg: |
    This is theoretical configuration file
    being injected to container.
kind: ConfigMap
metadata:
  creationTimestamp: &quot;2021-01-20T13:42:02Z&quot;
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:file.cfg: {}
    manager: kubectl
    operation: Update
    time: &quot;2021-01-20T13:42:02Z&quot;
  name: file-cm
  namespace: blog
  resourceVersion: &quot;43679504&quot;
  selfLink: /api/v1/namespaces/blog/configmaps/file-cm
  uid: 0aea8662-b765-4113-ba69-be821f9f83f7
</code></pre>
<p>Next, prepare the deployment to use the ConfigMap:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
    namespace: blog
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               volumeMounts:
               -   name: blog-vol
                   mountPath: /etc/blog.cfg
           volumes:
           -   name: blog-vol
               configMap:
                   name: file-cm
---               
apiVersion: v1
kind: Service
metadata:
    name: blog-app-svc
    namespace: blog
spec:
    type: NodePort
    selector:
        name: blog-app               
    ports:
    -   port: 3000
        targetPort: 3000 
        nodePort: 30001
</code></pre>
<p>What changed:</p>
<ul>
<li>the new element <code>volumes</code> holds information about the resources used by the deployment
<ul>
<li><code>configMap.name</code> - indicates the resource is ConfigMap with given name</li>
</ul>
</li>
<li><code>volumeMounts</code> describes how resources are to be mapped to <code>blog-app</code> container
<ul>
<li><code>name</code> indicates <code>volume</code> - the source of data</li>
<li><code>mountPath</code> tells Kubernetes where to mount the resource</li>
</ul>
</li>
</ul>
<p>We want our app to read the ConfigMap so we can see what it gets, so we need to modify its source (<code>index.js</code>):</p>
<pre><code>const os = require('os')
const express = require('express')
const fs = require('fs')

const app = express()

app.get('/', (req, res) =&gt; {
		let r = 'Config map content from ' + os.hostname() + ':\n'
		r += fs.readFileSync('/etc/blog.cfg/file.cfg', 'utf8')
		res.send(r)
})

const port = process.env.BLOG_APP_SVC_SERVICE_PORT
app.listen(port, () =&gt; console.log(`listening on port ${port}`))
</code></pre>
<p>Now we <a href="#build-cmd">rebuild</a> the app, <a href="#depl-cmd">deploy</a> the new version and execute<br>
<a href="#cg"><code>cg</code></a> to see how it works:</p>
<pre><code>$ cg
Config map content from blog-depl-659bfdd9c5-2pnbv:
This is theoretical configuration file
being injected to container.
</code></pre>
<p>Works fine. But what's all this for? Couldn't we just copy the config into the image and forget about ConfigMaps? Sure we could, but the whole point is that with a ConfigMap we can update the data without rebuilding the image or even restarting the pod. After an update of the ConfigMap definition, the corresponding value in the container is updated automatically. Let's update the ConfigMap:</p>
<pre><code>$ echo &quot;And now it's updated.&quot; &gt;&gt; file.cfg
$ kubectl create cm -n blog file-cm --from-file=./file.cfg --dry-run=client -o yaml | kubectl apply -f -
</code></pre>
<p>Thanks to the <code>--dry-run=client</code> parameter, the first part of the latter command only generates yaml with the updated ConfigMap definition, which <code>kubectl apply</code> then applies. This way the existing ConfigMap is actually updated.<br><br>
After some time (needed to refresh cached values) query the app:</p>
<pre><code>$ cg
Config map content from blog-depl-659bfdd9c5-2pnbv:
This is theoretical configuration file
being injected to container.
And now it's updated.
</code></pre>
<p>The app sees the updated ConfigMap without needing to be restarted.</p>
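<p>Another way to peek at the mounted value, without going through the app, is to read the file directly in the container (the pod name is the one from the output above; yours will differ):</p>
<pre><code>$ kubectl exec -n blog blog-depl-659bfdd9c5-2pnbv -- cat /etc/blog.cfg/file.cfg
This is theoretical configuration file
being injected to container.
And now it's updated.
</code></pre>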
<h3 id="fromkeyvalue">From key-value</h3>
<p>Let <code>some.env</code> have simple <em>env</em>-like key-value content:</p>
<pre><code>blog_env=some_value
blog_env2=another_value
</code></pre>
<p>We can convert it into a ConfigMap with the command:</p>
<pre><code>$ kubectl create cm -n blog env-cm --from-env-file=./some.env -o yaml
apiVersion: v1
data:
  blog_env: some_value
  blog_env2: another_value
kind: ConfigMap
metadata:
  creationTimestamp: &quot;2021-01-27T12:31:15Z&quot;
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:blog_env: {}
        f:blog_env2: {}
    manager: kubectl
    operation: Update
    time: &quot;2021-01-27T12:31:15Z&quot;
  name: env-cm
  namespace: blog
  resourceVersion: &quot;45121359&quot;
  selfLink: /api/v1/namespaces/blog/configmaps/env-cm
  uid: 3dbae9de-787d-4f5d-b564-8207840957c9
</code></pre>
<p>This type of ConfigMap can be mapped to the container's environment with the following definition:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               env:
               -   name: blog_env
                   valueFrom:
                       configMapKeyRef:
                           name: env-cm
                           key: blog_env
               -   name: changed_env
                   valueFrom:
                       configMapKeyRef:
                           name: env-cm
                           key: blog_env2
</code></pre>
<p>The <code>env</code> section defines the environment variable <code>blog_env</code> with the value from ConfigMap <code>env-cm</code> key <code>blog_env</code>, and the variable <code>changed_env</code> analogously.<br>
To see if it works we switch back to the <a href="#get-env">environment-returning</a> version of <code>index.js</code>, <a href="#build-cmd">rebuild</a> the app, <a href="#depl-cmd">deploy</a> the new version and execute <a href="#cg"><code>cg</code></a>:</p>
<pre><code>$ cg | grep env
blog_env: some_value
changed_env: another_value
</code></pre>
<p>There is a weakness to this type of ConfigMap mapping: the container's values are not updated automatically when the map changes. To <em>see</em> a modified ConfigMap value, the deployment has to be restarted:</p>
<pre><code>$ kubectl rollout restart deployment -n blog blog-depl
</code></pre>
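<p>As a side note: instead of mapping keys one by one, all entries of a ConfigMap can be imported into the environment at once with <code>envFrom</code>. A minimal fragment of the container spec, sketched for the same <code>env-cm</code> map:</p>
<pre><code>               envFrom:
               -   configMapRef:
                       name: env-cm
</code></pre>
<p>Each key then becomes an environment variable under its own name (here <code>blog_env</code> and <code>blog_env2</code>).</p>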
<h2 id="secrets">Secrets</h2>
<p>When it comes to sensitive data such as passwords, ConfigMaps may not be secure enough.<br>
Instead, K8s offers <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Secrets</a>. Just like ConfigMaps, Secrets can be created from a file or from a literal. We have not tried creating a ConfigMap from a literal, so let's do that with Secrets:</p>
<pre><code>$ kubectl create secret generic lit-sec --from-literal=pass=pass-value -o yaml
apiVersion: v1
data:
  pass: cGFzcy12YWx1ZQ==
kind: Secret
metadata:
  creationTimestamp: &quot;2021-01-27T15:11:01Z&quot;
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:pass: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: &quot;2021-01-27T15:11:01Z&quot;
  name: lit-sec
  namespace: blog
  resourceVersion: &quot;45144690&quot;
  selfLink: /api/v1/namespaces/blog/secrets/lit-sec
  uid: fd4c7427-2e47-43c6-8bf6-4d12366bd10f
type: Opaque
</code></pre>
<p>As you can see, the value of the created Secret is not stored openly, but it is merely base64-encoded - so still not secure.</p>
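<p>Anyone with access to the object can trivially decode it:</p>
<pre><code>$ echo cGFzcy12YWx1ZQ== | base64 -d
pass-value
</code></pre>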
<p>To access the Secret from a container we need to mount it:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
    namespace: blog
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               volumeMounts:
               -   name: sec-vol
                   mountPath: /etc/secrets
           volumes:
           -   name: sec-vol
               secret:
                   secretName: lit-sec
</code></pre>
<p>The value of the Secret is going to be available in <code>/etc/secrets/pass</code>, so we need to modify the <code>get</code> handler in our <code>index.js</code>:</p>
<pre><code>app.get('/', (req, res) =&gt; {
		let r = 'Secret value from ' + os.hostname() + ':\n'
		r += fs.readFileSync('/etc/secrets/pass', 'utf8')
		res.send(r)
})
</code></pre>
<p><a href="#build-cmd">rebuild</a>, <a href="#depl-cmd">restart</a> and try with <a href="#cg"><code>cg</code></a>:</p>
<pre><code>$ cg
Secret value from blog-depl-84d59fd8b8-5sr7r:
pass-value
</code></pre>
<p>Because the Secret is mounted as a volume, an updated value is propagated to the container automatically:</p>
<pre><code>$ kubectl create secret generic lit-sec -n blog --dry-run=client \
&gt;  --from-literal=pass=new-pass-value -o yaml | kubectl apply -f -
secret/lit-sec configured
</code></pre>
<p>After a short time:</p>
<pre><code>$ cg
Secret value from blog-depl-84d59fd8b8-5sr7r:
new-pass-value
</code></pre>
<h3 id="limitingaccesstosecret">Limiting access to secret</h3>
<p>One way to protect sensitive data is to limit access to the Secret. It can be done by setting an access <code>mode</code> for a certain item and configuring a securityContext for the container. First let's define a Secret with two keys:</p>
<pre><code>kubectl create secret generic -n blog lit-sec --from-literal=pass=pass-value --from-literal=user=user-value
</code></pre>
<p>Next set up the deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
    namespace: blog
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               volumeMounts:
               -   name: sec-vol
                   mountPath: /etc/secrets
               securityContext:
                   runAsUser: 1000
                   allowPrivilegeEscalation: false                   
           volumes:
           -   name: sec-vol
               secret:
                   secretName: lit-sec
                   items:
                   -   key: pass
                       mode: 0400
                       path: pass
                   -   key: user
                       mode: 0444
                       path: user
</code></pre>
<p>Finally modify <code>index.js</code> so we can get <code>user</code> and <code>pass</code> separately:</p>
<pre><code>app.get('/user', (req, res) =&gt; {
		var r = 'User value from ' + os.hostname() + ':\n'
		r += fs.readFileSync('/etc/secrets/user', 'utf8')
		res.send(r)
})

app.get('/pass', (req, res) =&gt; {
		var r = 'Pass value from ' + os.hostname() + ':\n'
		r += fs.readFileSync('/etc/secrets/pass', 'utf8')
		res.send(r)
})
</code></pre>
<p>When we issue <code>cg user</code> we get <code>user-value</code>, but when we try <code>cg pass</code> an error <code>EACCES: permission denied, open '/etc/secrets/pass'</code> occurs. This is because we defined <code>mode: 0400</code>, so only <code>root</code> can read <code>/etc/secrets/pass</code>, and set the container's user id to <code>1000</code>, indicating it should run as a regular user.</p>
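<p>A quick way to double-check the permissions from outside the app - the pod reference is resolved inline with <code>kubectl get</code>; expect <code>user-value</code> from the first call and a permission error from the second:</p>
<pre><code>$ kubectl exec -n blog &quot;`kubectl get pods -n blog -o name --selector=name=blog-app`&quot; -- cat /etc/secrets/user
$ kubectl exec -n blog &quot;`kubectl get pods -n blog -o name --selector=name=blog-app`&quot; -- cat /etc/secrets/pass
</code></pre>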
<h3 id="disadvantagesofsecrets">Disadvantages of Secrets</h3>
<p>Despite their name, Secrets are not secured by default. They are held unencrypted in Kubernetes <code>etcd</code> (which is the storage mechanism of a K8s cluster), so anyone who has access to <code>etcd</code> can read the Secrets. Fortunately, encryption of Secrets <em>at rest</em> can be enabled using <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#configuration-and-determining-whether-encryption-at-rest-is-already-enabled"><code>--encryption-provider-config</code> of <code>kube-apiserver</code></a> and a <a href="https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/">Key Management Service (KMS)</a>.</p>
<p>Still, anyone who can create a pod that uses a Secret can learn its value. Disturbing, isn't it? This and other disadvantages of Secrets are described in <a href="https://kubernetes.io/docs/concepts/configuration/secret/#risks">their documentation</a>.</p>
<h2 id="hashicorpvault">HashiCorp Vault</h2>
<p>One of the solutions for the safe storage and management of sensitive data is <a href="https://www.vaultproject.io/">HashiCorp Vault</a>. Vault can be set up standalone or deployed to a K8s cluster. In this article we will use a standalone Vault and access it from a K8s container using the REST API.</p>
<p>To install Vault on CentOS we simply follow the <a href="https://learn.hashicorp.com/tutorials/vault/getting-started-install#install-vault">documentation</a>.<br>
Then we start the Vault server in a development configuration by issuing:</p>
<pre><code>$ vault server -dev -dev-root-token-id root -dev-listen-address '10.92.29.12:8200'
</code></pre>
<p>Parameters:</p>
<ul>
<li>'dev' - tells Vault to run in development mode - it is unsealed and uses volatile in-memory storage. Normally the Vault server starts in a <em>sealed</em> state, where it knows its storage but does not know how to decrypt it. To <em>unseal</em> the Vault one must know the <em>master key</em>.</li>
<li>'dev-root-token-id' - sets the token used to authenticate to Vault to the value <code>root</code>. Otherwise Vault would generate a random root token that we would have to note down.</li>
<li>'dev-listen-address' - tells Vault to listen on a certain interface and address. By default Vault in dev mode runs on '127.0.0.1' and may not be accessible from outside the host.</li>
</ul>
<p>Export <code>VAULT_ADDR='http://10.92.29.12:8200'</code> and <code>VAULT_TOKEN=root</code> to configure your terminal session.</p>
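<p>That is:</p>
<pre><code>$ export VAULT_ADDR='http://10.92.29.12:8200'
$ export VAULT_TOKEN=root
</code></pre>
<p>The <code>vault</code> CLI picks both variables up automatically.</p>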
<p>Let's add some secrets to our brand new Vault server. First enable a <a href="https://www.vaultproject.io/docs/secrets/kv/kv-v2">secrets engine</a>:</p>
<pre><code>$ vault secrets enable -path=blog -version=2 kv 
</code></pre>
<p>This turns on the key-value secrets engine version 2 (with history) at the <code>blog</code> path. Then add a secret:</p>
<pre><code>$ vault kv put blog/entry some=thing
Key              Value
---              -----
created_time     2021-01-28T15:44:10.978011886Z
deletion_time    n/a
destroyed        false
version          3
</code></pre>
<p>The secret revealed its own secret: it's the third time I've set its value 😉</p>
<p>Now we can retrieve the secret from Vault, using both the <code>vault</code> command:</p>
<pre><code>$ vault kv get blog/entry
====== Metadata ======
Key              Value
---              -----
created_time     2021-01-28T15:44:10.978011886Z
deletion_time    n/a
destroyed        false
version          3

==== Data ====
Key     Value
---     -----
some    thing
</code></pre>
<p>and the REST API:</p>
<pre><code>$ curl -s -X GET --header &quot;X-Vault-Token: $VAULT_TOKEN&quot; $VAULT_ADDR/v1/blog/data/entry | jq -r '.data.data.some'
thing
</code></pre>
<h3 id="accessingvaultfromk8scluster">Accessing Vault from K8s cluster</h3>
<p>To make our Vault available from the cluster we define a service without a pod selector:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
    namespace: blog
    name: external-vault
    labels:
        name: blog-app
spec:
    ports:
    - protocol: TCP
      port: 80
</code></pre>
<p>To define how <code>external-vault</code> maps to a network address we add an <em>Endpoints</em> object:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
    name: external-vault
    namespace: blog
    labels:
        name: blog-app
subsets:
    - addresses:
          - ip: 10.92.29.12
      ports:
          - port: 8200
</code></pre>
<p>These two combined tell Kubernetes that traffic to the service <code>external-vault</code> on port <code>80</code> is to be redirected to IP <code>10.92.29.12</code>, port <code>8200</code>.</p>
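<p>Whether this plumbing is in place can be quickly verified with a standard query (the endpoint address should match our Vault host):</p>
<pre><code>$ kubectl get endpoints -n blog external-vault
</code></pre>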
<p>Now let's assume that for some abstract reason we would like to forbid reading our secret<br>
more than once. Vault offers <em>Policies</em> to limit access to its objects (file <code>ro.hcl</code>):</p>
<pre><code>path &quot;blog/*&quot; {
  capabilities = [&quot;read&quot;,&quot;list&quot;]
}
</code></pre>
<p>We write the policy to Vault with the command:</p>
<pre><code>$ vault policy write blog-ro ro.hcl
Success! Uploaded policy: blog-ro
</code></pre>
<p>and we create an access token:</p>
<pre><code>$ vault token create -use-limit 1 -policy blog-ro
Key                  Value
---                  -----
token                s.sa1RaGpfFPeCTYEkc5q6YsOY
token_accessor       sxGyEDegi1DGNUzmp9Ev0jZT
token_duration       768h
token_renewable      true
token_policies       [&quot;blog-ro&quot; &quot;default&quot;]
identity_policies    []
policies             [&quot;blog-ro&quot; &quot;default&quot;]
</code></pre>
<p>A token defined this way only allows reading our secret, and only once.</p>
<p>Let's try it with our app. New deployment (service <code>blog-app-svc</code> remains unchanged):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: blog-app
    name: blog-depl
    namespace: blog
spec:
    replicas: 1
    selector:
        matchLabels:
            name: blog-app
    template:
        metadata:
            labels:
                name: blog-app
        spec:
           containers:
           -   name: blog-app
               image: k8admin:5000/blog-app:latest
               command: [&quot;node&quot;, &quot;index.js&quot;]
               ports:
               - containerPort: 3000
               env: 
               -   name: VAULT_PATH
                   value: &quot;/v1/blog/data/entry&quot;
               -   name: VAULT_TOKEN
                   value: &quot;s.sa1RaGpfFPeCTYEkc5q6YsOY&quot;
</code></pre>
<p>This passes the path to the secret and the one-time token authorizing the request to the <code>blog-app</code> container. <em>I know passing a token via the environment is not the best way, but this is just an example.</em><br>
<code>index.js</code> has to be modified to access the secret via the external service:</p>
<code>index.js</code> has to be modified to access secret via external service:</p>
<pre><code>const express = require('express')
const os = require('os')

function printObject(o) {
  let out = '';
  for (let p in o) {
    out += p + ': ' + o[p] + '\n';
  }
  
  return out;
}

const http = require('http')
const options = {
	hostname: 'external-vault',
    path: process.env.VAULT_PATH,
    headers: {
	  'X-Vault-Token': process.env.VAULT_TOKEN
  }
}

function requestCall() {
	return new Promise((resolve, reject) =&gt; {
		http.get(options, (response) =&gt; {
			let chunks = [];
	
			response.on('data', (fragments) =&gt; {
				chunks.push(fragments);
			});
	
			response.on('end', () =&gt; {
				let body = Buffer.concat(chunks);
				resolve(body.toString());
			});
	
			response.on('error', (error) =&gt; {
				reject(error);
			});
		});
	});
}

async function exGet(req, res) {
	let r = 'Vault secret from ' + os.hostname + ':\n';
	let prom = requestCall();
	try {
		let aw = await prom;
		let j = JSON.parse(aw);
		r += printObject(j.data.data);
		res.send(r);
	}
	catch(e){
		res.send(e);
	}
}

const app = express();
app.get('/', exGet);


const port = process.env.BLOG_APP_SVC_SERVICE_PORT
app.listen(port, () =&gt; console.log(`listening on port ${port}`))
</code></pre>
<p>By the way, the code above became more complicated because JavaScript is asynchronous by design, while we want to handle the response in a synchronous manner.</p>
<p>Again <a href="#build-cmd">build</a> the app, <a href="#depl-cmd">deploy</a> it and execute<br>
<a href="#cg"><code>cg</code></a>:</p>
<pre><code>$ cg
Vault secret from blog-depl-dfc994588-r8c4v:
some: thing
</code></pre>
<p>We've got our secret. But when we try to get it once more:</p>
<pre><code>$ cg
{}
</code></pre>
<p>an empty object is returned. The token was one-time only.</p>
<p>There is so much more you can achieve with Vault and Kubernetes, starting with <a href="https://www.vaultproject.io/docs/platform/k8s/helm/run">deploying Vault into the cluster</a>, through <a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar?in=vault/kubernetes">injecting secrets into pods using a sidecar</a>, to advanced scenarios like those <a href="https://www.youtube.com/watch?v=0dSv3DFWNY0">presented here</a>.</p>
<p><a name="end"></a><br>
That's it for now. This article has already gone on too long 😉</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[IT systems migrations - Part1]]></title><description><![CDATA[Before we say we replace the IT system with a new one, there are a few things to do. Here's what you should pay attention to.
The past year has affected all of us and has changed a lot in all aspects of our lives. Much of what has changed will stay with us for a long time...]]></description><link>https://www.softax.pl/blog/it-systems-migration-part-one/</link><guid isPermaLink="false">600a89d937bc7451f5ebdcf3</guid><category><![CDATA[architecture]]></category><category><![CDATA[digitaltransformation]]></category><category><![CDATA[digital]]></category><dc:creator><![CDATA[Tadeusz Powichrowski]]></dc:creator><pubDate>Fri, 22 Jan 2021 10:04:03 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/01/Avatar-IT-systems-migrations.png" medium="image"/><content:encoded><![CDATA[<h3 id="before-we-say-we-replace-the-it-system-with-a-new-one">Before we say… we replace the IT system with a new one</h3><img src="https://www.softax.pl/blog/content/images/2021/01/Avatar-IT-systems-migrations.png" alt="IT systems migrations - Part1"><p>The past year has affected all of us and has changed a lot in all aspects of our lives. Much of what has changed will stay with us for a long time - after all, our habits and behavior have changed. We have adapted and look to the future with hope.</p><p>Business conditions have changed, the economy has changed, and the requirements for IT systems have changed. Abruptly and unexpectedly, everything shifted to supporting the frantic changes in the real world.</p><p>Nobody expected such requirements; nobody could have foreseen them.</p><p>The IT world has survived... and now, wiser for the experience of the past year, the time has come to verify and set new safety margins in the maintenance of IT systems. Regardless of whether we use ITIL, ITSM, SDLC or a "common sense" methodology, we must determine whether our system is approaching - or is perhaps dangerously close to - the end of its effective life cycle.</p><h2 id="key-performance-indicators">Key performance indicators</h2><p>Each time we want to verify the condition of an IT system, we should define key performance indicators in the basic areas of the analysis:</p><ul><li>system capacity and performance - measured by the maximum number of customers served by the system, the maximum number of transactions, and the time of performing basic business and maintenance processes. Scaling the system vertically (increasing the computing power of a single server) and horizontally (parallelizing the processing across additional servers) both have their limitations. Flexible computing power management - virtualization, application containers - is our system ready for such a solution?</li><li>maintenance costs and infrastructure expansion possibilities - an essential part of system lifecycle management. The growth of infrastructure computing power (the number and type of processors) often drives the cost of database licenses and 3rd party software. And what if the possibilities of adding more processors also run out? What if a 3rd party vendor's policy necessitates a costly system rebuild? Or maybe the technology our system is built on is so exotic that assembling a team to operate it is almost impossible?</li><li>possibility of integration with new systems - new technologies emerge with the growth of the business and of the internal and external systems it relies on. 
The use of integration platforms solves most of these types of issues, but what if the efficiency of combining new and old communication technologies does not ensure appropriate running times for business processes?</li><li>time and cost of introducing changes - the world and our immediate surroundings have recently been changing at a surprising pace, and no one is surprised that business support systems have to keep up with business expectations and ensure a competitive advantage. Appropriately scaled development and change implementation teams ensure optimization in this area. But what if technical and staffing conditions affect time and cost so significantly that they push the delivery of changes beyond acceptable limits?</li><li>Cyber Security - the security of our clients, the security of business processes, authentication and accountability of users, transaction authorization, resistance to external attacks and internal fraud... this issue cannot be ignored either.</li><li>availability of alternative solutions - an important issue that allows us to estimate the cost of implementing a new system. The greater the variety of available solutions, the easier it will be to choose the target solution and to assemble the migration and maintenance teams.</li></ul><p>There are many aspects to consider and many questions to answer, so don't wait, because time is passing. Business is changing and IT systems must be ready for new challenges.</p><p>What is certain is that we must be ready for changes, all the more so as the implementation of a new system also opens the possibility of changing the business model, introducing new business products and optimizing technologies and processes.</p><p>And if we decide on the new system - what next?</p><p>Well… yes, migration to the new system is a big undertaking, but it's not impossible. And we like challenges, we like changes, we like entering new areas.</p><p>How to approach migration? More about it in the next post. </p><p>Stay tuned!</p>]]></content:encoded></item><item><title><![CDATA[How to connect microservices: Part 1 Types of communication]]></title><description><![CDATA[Microservices can be combined in various ways. What are the advantages and disadvantages of individual approaches and what techniques should be used to make the constructed solution work efficiently, be resistant to failures and not cause difficulties in development and maintenance?]]></description><link>https://www.softax.pl/blog/how-to-connect-microservices-part-1-types-of-communication/</link><guid isPermaLink="false">5fe224db37bc7451f5ebdbe9</guid><category><![CDATA[microservices]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[Piotr Martyniuk]]></dc:creator><pubDate>Thu, 07 Jan 2021 17:12:04 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2021/01/microservices_avatar.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><div id="enableToc"></div><!--kg-card-end: html--><img src="https://www.softax.pl/blog/content/images/2021/01/microservices_avatar.png" alt="How to connect microservices: Part 1 Types of communication"><p>Microservices can be combined in various ways. 
What are the advantages and disadvantages of individual approaches and what techniques should be used to make the constructed solution work efficiently, be resistant to failures and not cause difficulties in development and maintenance?</p><!--kg-card-begin: html--><aside><!--kg-card-end: html--><p><em>The article consists of three parts:</em></p><ul><li><em>Part 1 Types of communication (present)</em></li><li><em>Part 2 Patterns of problem handling (published soon)</em></li><li><em>Part 3 Logical Architecture (published next)</em></li></ul><p><em>The first part (this one) describes the technical methods of communication between individual modules in the solution, the second part presents various design patterns for handling connection problems, and the third part presents logical ways of connecting components within the system architecture.</em></p><!--kg-card-begin: html--></aside><!--kg-card-end: html--><h2 id="why-separate-components-at-all">Why separate components at all?</h2><p>The collaboration of small, independent and isolated modules within a solution is the core of the microservice architecture. As the system grows, simple monolithic architectures, where the presentation layer is closely linked to the database, are no longer sufficient. There is a need to divide the system into dedicated modules that can be separately developed, delivered, run, managed and scaled.</p><blockquote><a href="https://www.softax.pl/blog/microservices-strengths-and-weaknesses-part1-small-modules/">Microservices - strengths and weaknesses: Part1 Small modules</a> - more about microservices as a set of separate modules.</blockquote><p>In any case, separate components - in practice separate processes - have to be connected somehow, i.e. allowed to communicate with each other and exchange data. The ways of such communication and the related techniques are described later in the article.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2020/12/microservices.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/12/microservices.png 600w, https://www.softax.pl/blog/content/images/2020/12/microservices.png 760w" sizes="(min-width: 720px) 720px"></figure><h2 id="how-can-microservices-communicate">How can microservices communicate</h2><p>Let us assume here that by modules we understand components working in separate processes, and let us not consider the special case when they work on the same machine and can use the inter-process communication provided by the operating system.</p><p>The simplest and oldest ways of communication between components were based on the exchange of files shared at a given disk location. This type of approach, although still present, has a number of disadvantages: first of all, it introduces considerable delays in data processing, and it also raises maintenance problems.</p><p>In monolithic architectures, integration through a database is also often encountered - that is, various modules write and read internal data "belonging to" different components. In the era of microservices, such an approach is considered a design error, due to the overly tight coupling of components and because it makes the integration dependent on how a particular module stores its data.</p><p>In this article, we'll look at network communication related to message passing. 
The area of various communication protocols such as RPC (Remote Procedure Call), Corba, HTTP REST, SOAP, gRPC or GraphQL, or of data representation within messages - XML, JSON, ProtoBuf - is a topic that deserves a separate article and will not be elaborated here. Instead, we will look at the performance and reliability of various approaches commonly used in connecting microservices:</p><ul><li>the most popular synchronous approach, where the caller forwards the request and actively waits for a response,</li><li>a modified variant of the above, where waiting for a response is not blocking,</li><li>forwarding messages to recipients via an asynchronous broker (message queues),</li><li>publishing to the so-called event stream, even without knowing the specific recipients; the recipients independently decide which events are of interest to them and when they will handle them.</li></ul><h2 id="synchronous-communication">Synchronous communication</h2><p>In this type of communication, the client (an initiating module) prepares the message and transmits it over the network to the server, i.e. the request processing module. The client waits for a response - during that time, it does not perform any other processing in its computing thread.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/communication_sync.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/communication_sync.png 600w, https://www.softax.pl/blog/content/images/2021/01/communication_sync.png 760w" sizes="(min-width: 720px) 720px"></figure><p>This type of approach is natural for a programmer, as it follows the standard procedural model of software development. The difference is that since the call is made over the network, a timeout is usually used, i.e. a time limit for receiving a response. If there is no response within the given time, an error is generated and the result of the remote processing remains unknown.</p><p>In the case of modules that play the role of various types of middlemen - i.e. their services are called by external clients, but they also call the services of a remote server - individual requests are usually handled in separate processes (historically) or threads. Until a response is received from the server, the given processing thread is suspended, holds operating system resources (on both the client and server side) and waits for the response. The server call is also usually covered by a timeout. The result of the server call, after appropriate processing, is forwarded to the client.</p><h3 id="user-perspective">User perspective</h3><p>For the user, synchronous communication seems to be the simplest model. For interactive applications used by humans, using a given function means passing a message to the server and waiting for a response (usually accompanied by a waiting animation - the so-called loader). During this time, the application is inactive. After receiving the answer, the user is presented with an appropriate screen; in the event of an error, there is a message about the problem, but as we wrote earlier, there are situations (e.g. a timeout) when the result of processing is not known.</p><p>What's more, a human becomes impatient after waiting just a few seconds, and perceives as quick only responses taking significantly less than 1s. 
This imposes high performance requirements on the transmission and on the actual remote processing.</p><h3 id="advantages">Advantages</h3><p>Despite the various problems that we will discuss in more detail in a moment, the synchronous approach is still the most widely used in computer systems. It has its advantages. In addition to being the simplest, it allows the use of standard, widely available protocols - for example HTTP. It usually adds little latency, because no middleware is needed and all resources are ready for operation. In addition, many external systems expose their API in this form; this applies to services available on the Internet, but also to many internal modules with which our solution must integrate.</p><h3 id="problems">Problems</h3><p>Below we will address various problems typical for the synchronous model.</p><p><strong>C10K problem</strong></p><p>The model with resources reserved for each processed request was one of the reasons for the so-called C10K problem, i.e. the difficulty of handling 10,000 parallel client connections on one machine. Fortunately, the topic is already somewhat historical, but allocating threads and other resources to each client connection is still very inefficient and can lead to resource saturation, especially if our server handles Internet traffic. It also makes scaling in the synchronous model difficult.</p><p><strong>The delay problem</strong></p><p>In a situation where individual subordinate services are called in turn, one has to wait for one service to finish before calling the next. Delays in such a situation add up, and the total processing time will be relatively long. The situation worsens when the system as a whole is under more load and the individual functions have longer execution times. Unfortunately, all delays in this case are cumulative.</p><p><strong>The dependency problem</strong></p><p>With a more complex architecture, often present in the world of microservices, a given functionality requires the cooperation of many components. In that situation, the individual calls form a tree. In the synchronous call model, where each service at a given level is called in turn, a failure on any branch makes the entire service unavailable. The problem gets worse when failure means not only a service error, but also a failure to return a response within a given time. In addition, the services themselves become clearly dependent on each other, which is contrary to the microservices postulate of only loose coupling between modules.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/error_propagation.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/error_propagation.png 600w, https://www.softax.pl/blog/content/images/2021/01/error_propagation.png 760w" sizes="(min-width: 720px) 720px"></figure><p><strong>Cascade of timeouts</strong></p><p>In the event of system overload, the response times of individual services increase. The original call (the root of the call tree) was made with a timeout, and so were all the subordinate ones, but under high load it may turn out that some subservices are still being executed even though the time for the execution of the parent service has already run out. 
This can put additional strain on an already overloaded system.</p><p><strong>Cascade of resource consumption</strong></p><p>As we wrote earlier, the synchronous approach assumes the reservation of resources for the purposes of communication with an external service - e.g. a dedicated operating system thread or a network connection. In a situation where the subservice responds with a delay (even within the timeout limits), resources in the parent service may be held for too long.</p><p>These resources could be dedicated to supporting other, properly running downstream services, but their use is blocked. The problem may also lead to overloading the entire system - for example, when the thread pool can grow without limit and some submodules block the resources of the parent service for long enough. Therefore, it is extremely important to introduce a limit (e.g. on the size of the thread pool that a given module can use) so as not to saturate the capabilities of the operating system of a given machine.</p><h2 id="non-blocking-request-response-communication">Non-blocking request-response communication</h2><p>In order to reduce the problems encountered in synchronous communication, an online communication model that is asynchronous from the caller's perspective is often used. The process or thread servicing a given function, when calling another service, does not wait idly for a response, but can handle other tasks during this time. When a response arrives, it will deal with it, but it does not block resources (e.g. threads or database connections) until that response arrives.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/communication_async.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/communication_async.png 600w, https://www.softax.pl/blog/content/images/2021/01/communication_async.png 760w" sizes="(min-width: 720px) 720px"></figure><p>Handling of the response depends on the capabilities of the given communication transport. To receive an answer, it may be necessary to perform polling, i.e. to periodically check whether the answer is already available. A slightly better solution may be to register a so-called callback to handle the response, which will be invoked by the transport framework when a response arrives. A good basis for this type of model can be the operating system's asynchronous API - specifically the select / epoll function set.</p><p>Building on the asynchronous capabilities of many programming languages (e.g. async / await in Python or JavaScript), it is possible to write complex solutions where all IO operations, especially communication over the network, are handled asynchronously, without blocking resources in dedicated threads.</p><h3 id="advantages-1">Advantages</h3><p>The non-blocking approach allows more efficient use of the available resources, especially in the case of a large number of connections entering the system from the Internet. Thanks to this, we also circumvent the C10K problem.</p><p>Because we do not block waiting for a response, we can dispatch processing to other microservices in parallel rather than sequentially. 
<p>In this model, it is also easier to avoid cascading resource consumption when certain downstream services slow down or fail during complex processing.</p><p>Another advantage nowadays is good support from programming languages and from popular communication frameworks and runtimes such as Node.js.</p><h3 id="problems-1">Problems</h3><p>One disadvantage of this model is a somewhat more complex implementation than in the synchronous model. It requires additional programming techniques (functional, asynchronous) that demand a certain amount of knowledge and experience.</p><p>It should also be remembered that this model usually still assumes waiting for a response bounded by a timeout, so the unavailability of the target service will result in the unavailability of the higher-level service.</p><p>The classic use of this approach still assumes communication in the form of request-response. Support for one-way transmission – emitting events – is limited in this case, and in some applications that is a very useful way of communicating.</p><p><strong>Overload propagation</strong></p><p>In the non-blocking model, unfortunately, overloading the target system with a large number of parallel requests is a greater risk than in the synchronous model. This follows from the non-blocking model's main advantage – no resources are reserved for external module calls. It is especially dangerous when the target system is already experiencing difficulties and begins to respond more slowly. In this case, additional communication patterns are necessary, such as backpressure or limiting the number of parallel calls.</p><p><strong>Concurrency limitation for CPU-intensive tasks</strong></p><p>A certain problem in the asynchronous model is processing that holds the CPU for a long time within non-blocking frameworks, where so-called cooperative concurrency is used: a task must finish by itself or hand the processor back to another job. Typically, I/O calls automatically provide an opportunity to switch between tasks, but in their absence a CPU-intensive task prevents the quick execution of many other small pending tasks. Some frameworks have mechanisms to prevent this type of problem, but it is worth remembering to periodically release the CPU within every long-running task.</p><h2 id="communication-through-the-asynchronous-message-broker">Communication through the asynchronous message broker</h2><p>Another model used to connect microservices is communication through the transmission of messages via a broker. In this model, the individual modules are, from a communication perspective, genuinely independent of each other – they do not connect directly, but through an asynchronous Message Broker module (e.g. RabbitMQ or ActiveMQ). The consumer does not have to be available at the time the producer publishes the message; what is necessary is the availability of the broker.</p><p>With a Message Broker, different models of communication between components are possible. Below we look at the most popular ones.</p><h3 id="message-queue">Message queue</h3><p>The message queue is the original and still the most popular asynchronous communication pattern. Producers generate messages and put them into a queue located within the broker. On the other side, one or more consumers receive the messages and process them according to their own logic.</p>
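<p>As an illustration – a hedged sketch rather than a reference implementation – this is roughly what the pattern looks like with RabbitMQ and the pika Python client. The queue name, message body and the process function are assumptions made up for the example:</p><pre><code class="language-python">import pika


def process(body):
    # Application logic goes here; it should be idempotent,
    # because a redelivered message may arrive more than once.
    print("processing", body)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="payments", durable=True)  # hypothetical queue name

# Producer side: publish a persistent message and return immediately.
channel.basic_publish(
    exchange="",
    routing_key="payments",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)

# Consumer side: acknowledge (ACK) only after successful processing,
# so the broker can redeliver the message if this consumer crashes midway.
def on_message(ch, method, properties, body):
    process(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="payments", on_message_callback=on_message)
channel.start_consuming()
</code></pre>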
<p>A given message goes from the queue to one consumer only; consumers compete with each other for messages from the queue. Each message should usually be processed exactly once, so consumers should confirm correct processing of a message with a special acknowledgement (ACK). If they do not, the broker should hand the message again to the same or to a different consumer. It is therefore important that message processing in the consumer is idempotent, in order to detect and avoid handling possible duplicates.</p><p>The queuing approach allows easy scaling of consumers and distribution of the load over multiple machines, and it isolates the load entering the system from the data processing itself – messages that we lack the resources to process at the moment can be safely stored in the queue for a while.</p><p>It is also important that the broker usually lets you control whether messages in a given queue are to be saved permanently (on disk), or – faster, but less resistant to failures – kept only in the computer's operating memory.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/message_queue_many_consumers.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/message_queue_many_consumers.png 600w, https://www.softax.pl/blog/content/images/2021/01/message_queue_many_consumers.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="request-response-mode">Request-response mode</h3><p>The use of additional queues makes it possible to simulate request-response communication. In this model, the producer first generates a message and sends it to the broker's queue. The consumer reads and processes the message. As a result of processing, the consumer (now acting as a producer) generates a message with the response and usually inserts it into a separate queue (or another asynchronous communication channel). The producer of the initial message reads the reply message and handles it according to its own logic.</p><p>In such a model, we have possibilities and problems similar to those of synchronous communication (calls with timeouts), with slightly higher overhead resulting from the intermediate layer. Likewise, if the producer is waiting for a reply and the message consumer is unavailable, no reply will be generated, i.e. the service will be out of order. The difference is that the broker isolates load spikes and provides a platform for scaling the solution.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/message_queue_request_response.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/message_queue_request_response.png 600w, https://www.softax.pl/blog/content/images/2021/01/message_queue_request_response.png 760w" sizes="(min-width: 720px) 720px"></figure>
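<p>A hedged sketch of this reply-queue idea, again with pika: the request queue name is invented, the responder side is omitted, and a real implementation would also add a timeout on the waiting side. The correlation id is what lets the requester pair a reply with its request:</p><pre><code class="language-python">import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A private, exclusive queue on which this requester awaits replies.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
corr_id = str(uuid.uuid4())

channel.basic_publish(
    exchange="",
    routing_key="rpc.requests",  # hypothetical request queue
    body=b'{"customer_id": 42}',
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
)

# Consume the reply queue; the correlation id pairs the reply with the request.
def on_reply(ch, method, properties, body):
    if properties.correlation_id == corr_id:
        print("got reply:", body)
        ch.stop_consuming()


channel.basic_consume(queue=reply_queue, on_message_callback=on_reply, auto_ack=True)
channel.start_consuming()
</code></pre>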
<h3 id="broadcast-mode">Broadcast mode</h3><p>In addition to the usual one-to-one messaging, the message broker also offers mechanisms that forward a message to multiple recipients – i.e. broadcast. Such solutions are sometimes called one-to-many, fan-out or publish-subscribe.</p><p>In broadcast mode, the producer hands over the message once, and a copy of it is sent to any number of recipients, each of whom consumes the message separately. Message recipients can be designated within the system logic, or they can dynamically subscribe to receive messages of a given type (in the publish-subscribe mode).</p><p>These types of solutions are used to transmit messages to several systems at once, or to easily extend a running system with new recipients of the information.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/message_queue_broadcast.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/message_queue_broadcast.png 600w, https://www.softax.pl/blog/content/images/2021/01/message_queue_broadcast.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="message-collection-mode">Message collection mode</h3><p>On the opposite side of broadcast mode there is a model for collecting messages from multiple sources in one place. This approach is also known as fan-in or many-to-one mode.</p><p>This model can be useful where a central component is used – one that controls the processing of a complex business process and manages the execution of each individual step. It can also be used to collect responses from multiple systems, allowing broadcast mode to be combined with the request-response mechanism. However, it should be remembered that the fan-in approach may make it harder to scale the solution, by concentrating logic in one place.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/message_queue_collection.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/message_queue_collection.png 600w, https://www.softax.pl/blog/content/images/2021/01/message_queue_collection.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="advantages-of-communication-through-a-broker">Advantages of communication through a broker</h3><p>The broker-based approach is ideal for one-way calls. The producer sends the message and does not have to wait for a response; the broker accepts the message and is responsible for delivering it to all intended consumers. From the producer's perspective, this is very convenient. It helps balance the load on individual layers of the system and thus absorb temporary peaks in inbound traffic.</p><p>In this model, there is also no risk of overloading the message consumers – they retrieve data from the queue and process it at their own pace. The system as a whole can be overloaded while the layers behind the queue continue to work properly and efficiently.</p><p>The Message Broker also usually has a number of useful features – it can persist messages for greater reliability, or process them only in memory for greater efficiency. It offers load-balancing and flow-control mechanisms, including backpressure, and provides support in the event of queue overflow and other problems (e.g. the DLQ mechanism – Dead Letter Queue – and many others).</p><h3 id="problems-2">Problems</h3><p><strong>Required broker</strong></p><p>The broker module introduces isolation between the layers.
However, as an additional element in the architecture, it becomes another potential point of failure. Of course, individual broker instances can be replicated in a clustered configuration, but then problems appear around quorum behavior, at-most-once and at-least-once delivery guarantees, idempotency verification and other subjects that complicate the solution and increase the risk of errors.</p><p><strong>Possibility of overloading the queues</strong></p><p>In the queue model, there is no risk of overloading message consumers, but there is a risk of overloading the queues themselves or the message broker. Of course, the broker can scale horizontally, but if a message consumer is unavailable for a longer time, the constant flow of messages from producers may exhaust the resources dedicated to the queue.</p><p><strong>Introduced delay</strong></p><p>The broker usually runs in memory mode. Delays are then minimal – usually less than 1 ms. Of course, in this mode we lose some reliability: the sender has no guarantee that the message will reach the addressee. In the mode with permanent writes to disk, the delays can be significant, but in some solutions that durability is necessary. Naturally, a solution without a broker will be faster still.</p><h2 id="communication-through-the-event-stream">Communication through the event stream</h2><p>This model is similar to communication through publish-subscribe queues, but the message is not directed to a specific recipient (or recipients). Instead, the individual modules concern themselves with producing and reacting to events. In principle, they do not need to know anything about each other; the event producers do not even need to know when, or whether at all, anyone will consume the events they generate.</p><p>The business process itself is also subject to a certain abstraction – it does not have to be controlled by any dedicated component; it can emerge only indirectly from the combined generation and consumption of events by the various microservices.</p><p>From the point of view of the independence of individual components, the event model is the most loosely coupled approach. We do not have the problem of one component's failure affecting the operation of the others. It is true that an event stream handler (e.g. Kafka) is necessary, but the individual microservices decide for themselves about the pace and logic of their work.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2021/01/event_stream.png" class="kg-image" alt="How to connect microservices: Part 1 Types of communication" srcset="https://www.softax.pl/blog/content/images/size/w600/2021/01/event_stream.png 600w, https://www.softax.pl/blog/content/images/2021/01/event_stream.png 760w" sizes="(min-width: 720px) 720px"></figure><p>When using Kafka (currently the most popular event stream framework), writes to the event stream simply go to a file – by default with write caching enabled and only periodic synchronization to the physical disk. This ensures very high efficiency. All data is kept in the file continuously for a configurable time – usually several days.</p>
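<p>A minimal, hedged sketch of this style with the kafka-python client – the topic name, consumer group and handle_event logic are assumptions for the example. Committing the offset only after processing is what yields the at-least-once behavior described next:</p><pre><code class="language-python">from kafka import KafkaConsumer, KafkaProducer


def handle_event(value):
    # Application logic; assumed idempotent, since at-least-once
    # delivery means the same event may be seen more than once.
    print("handling", value)


# Producer: emit an event and move on - nobody has to be listening right now.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("account-events", b'{"type": "AccountOpened", "account_id": 42}')
producer.flush()

# Consumer (typically a different microservice): reads the stream at its own pace.
consumer = KafkaConsumer(
    "account-events",
    bootstrap_servers="localhost:9092",
    group_id="notification-service",  # hypothetical consumer group
    enable_auto_commit=False,
)
for record in consumer:
    handle_event(record.value)
    consumer.commit()  # commit the offset only after successful processing
</code></pre>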
<p>From a reliability perspective, Kafka provides at-least-once semantics: any write that the broker has confirmed (ACK) can be considered saved, and Kafka ensures that every subscriber receives the message at least once.</p><h3 id="advantages-2">Advantages</h3><p>The event stream is a solution derived from message queuing, and all the strengths of that approach also apply here.</p><p>The main advantage of the event stream approach is the very high independence of the components that communicate this way – both on the physical level, where the availability of consumers does not affect the producers in any way, and on the logical level (which goes beyond the standard capabilities of the queue model), where processes should be built so that they do not depend on the consumption of events by specific modules.</p><p>A solution built on an event stream can also have very high, essentially unlimited capacity. It is easy to scale by adding new shards, and relatively easy to add replication.</p><h3 id="problems-3">Problems</h3><p><strong>Limited transparency of the business process</strong></p><p>The modern event stream approach, where the business process is a set of separate event-processing steps in different modules without central logic, gives flexibility, but at the cost of losing transparency of the business process. It is difficult to assess the correctness of the processing as a whole and, in the event of a failure, to pinpoint exactly where the problem appeared.</p><p><strong>Difficulty in ensuring data consistency</strong></p><p>Unfortunately, using an event stream also brings risks resulting from data duplication and its parallel handling – we have no good way to ensure continuous data consistency in such a model. Usually the eventual consistency approach is used, i.e. we accept that data in different modules may temporarily be inconsistent. To deal with complex business processes, where individual events may generate errors during processing, compensating transactions and the SAGA pattern are used.</p><p><strong>Specific use in the user interface</strong></p><p>In the event model, the user interface may have to be defined so that a positive message means only that the order has been accepted for processing, not that it has been executed. The effect of the order does not have to be visible immediately and may appear with some delay. This, however, is usually not intuitive for the user.</p><p><strong>Complicated integration with external systems</strong></p><p>Event-based communication changes the way of interfacing with external systems, which mostly provide APIs in the request-response model. Dedicated gateways are needed to translate the event semantics into the API required by the target system.</p><p>It should also be remembered that event stream brokers typically offer at-least-once delivery semantics, which means that duplicates can occur and must be handled properly – usually by using idempotent events and, for example, checking message identifiers. This, however, complicates the solution.</p><h2 id="summary-what-approach-in-a-given-situation">Summary - what approach in a given situation</h2><p>Each of the presented models has its place in modern solution architectures, in microservices in particular. Synchronous communication is difficult to avoid – it is the most popular connection model, commonly used in integration with external systems. Where we have the possibility, it is worth using non-blocking request-response communication.
It is particularly important in handling traffic entering from the Internet (i.e. wherever a large number of parallel network connections can occur).</p><p>The queuing model is worth using when we want to make different parts of the system independent with respect to the load generated by the traffic passing through them. All situations where the message producer does not have to, or does not want to, wait for a response can also be handled easily this way.</p><p>The event stream can be an alternative to the queuing model. It likewise decouples different modules and the load they generate. Its proper use, however, is in integrating systems that operate in the choreographic model, where individual modules track events occurring in the system, react to them, carry out additional operations according to their own logic and possibly generate further events that others in the system can react to. This approach is now particularly recommended for integrating microservices.</p><p><em>End of part one.</em></p><p><em>Part two, concerning patterns for solving communication problems, is coming soon.</em></p>]]></content:encoded></item><item><title><![CDATA[Successful migration to IPS without any downtime]]></title><description><![CDATA[In November, one of the largest banks in Poland successfully migrated debit cards to our IPS system. The IPS system supports over 10 million debit cards. The system was replaced without any interruption in access to card services.]]></description><link>https://www.softax.pl/blog/successful-migration-to-ips-without-any-downtime/</link><guid isPermaLink="false">5fde367d37bc7451f5ebdbcc</guid><category><![CDATA[banking]]></category><category><![CDATA[architecture]]></category><category><![CDATA[cloud]]></category><category><![CDATA[digitaltransformation]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Sat, 19 Dec 2020 17:27:25 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2020/12/Avatar-ips.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2020/12/Avatar-ips.png" alt="Successful migration to IPS without any downtime"><p>In November, one of the largest banks in Poland successfully migrated its debit cards to our Interactive Payment Service system. The IPS system supports over 10 million debit cards. The system was replaced without any interruption in access to card services. As a result, the bank's customers did not notice the system change or any downtime. The new card system is adapted to work in the cloud. <a href="https://www.softax.pl/en/products/ips-payments-and-cards-management">IPS</a> is part of the <a href="https://www.softax.pl/en/products/advantica-cloud-core-banking">Advantica Cloud Core Banking.</a></p>]]></content:encoded></item><item><title><![CDATA[Advantica - core banking system as experience - future is already here]]></title><description><![CDATA[We live in such a complicated world that we are not able to fully predict the consequences of our behavior, and often we simply have too little knowledge to assess those consequences well. 
Wouldn't we like to use products that could inform us about the consequences of our actions...]]></description><link>https://www.softax.pl/blog/advantica-core-banking-system-as-experiance-future-is-already-here/</link><guid isPermaLink="false">5fae7e6137bc7451f5ebdb23</guid><category><![CDATA[banking]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Tue, 17 Nov 2020 12:35:28 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2020/11/Avatar-3.png" medium="image"/><content:encoded><![CDATA[<h2 id="the-dream-of-intelligent-products-">The dream of intelligent products.</h2><img src="https://www.softax.pl/blog/content/images/2020/11/Avatar-3.png" alt="Advantica - core banking system as experience - future is already here"><p>We live in such a complicated world that we are not able to fully predict the consequences of our behavior, and often we simply have too little knowledge to assess those consequences well. Wouldn't we like to use products that could inform us about the consequences of our actions and adapt to our real, current needs?</p><p>A car is an example of a thing that is still a long way from being so.</p><p>Let's imagine that we are driving 30 km/h above the speed limit in a given place, and the car says to us "Slow down, because if you cause an accident, your insurance policy will be 1000 PLN more expensive next year", or "Slow down, because the fine in this place will cost you 400 PLN and 5 penalty points", or "Slow down, because accidents happen very often here".</p><p>Such a message can appeal to the imagination and influence our decision.</p><p>Another example: can the car inform us about its condition – for example, whether it will pass the technical inspection?</p><p>Every time I take my older car for such an inspection, I don't know what the result will be. Sometimes I have failed an inspection because of some minor detail. In such a situation, under time pressure, I had to make a quick repair and return for another inspection. I really wish such situations did not happen. I would like to be informed in advance that a repair needs to be performed. It doesn't even take extra sensors in the suspension (although that would be nice); it is enough to be informed about the points that will be checked.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2020/11/blog-adv1.png" class="kg-image" alt="Advantica - core banking system as experience - future is already here" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/11/blog-adv1.png 600w, https://www.softax.pl/blog/content/images/2020/11/blog-adv1.png 760w" sizes="(min-width: 720px) 720px"></figure><p>My expectations for a banking product are similar.</p><p>The account number is not only a unique number assigned to me and my money, used to transfer that money to other entities.<br>The account number represents the possibilities offered to me by the various services of my bank.</p><p>It is a product in the world of finance, like a car in the world of travel.</p><p>So I want this product to support me in all kinds of financial activities: 
to make them easier for me, to point out new opportunities, and on the other hand to anticipate and inform me about the consequences of my financial behavior.</p><p>Additionally, it is supposed to be pleasant and convenient to use. It should respond to my current needs and skillfully recognize them thanks to access to my financial history and my past activity with the interface and the financial products available to me. And it should be remembered that a client often stays with a bank for many years – some have had accounts for 20-30 years. Thanks to the experience and knowledge accumulated about me as a long-term customer, it can adapt to my needs very well.</p><p>A positive experience of using the product is easy to overlook, but overlooking it is a bad strategy. A bank account can be more than just a simple billing tool.</p><p>Let me come back to the example of the car. Theoretically, we buy it mainly to transport us from point A to point B. If that were really all, we would buy the simplest car that fulfills this function.</p><p>However, we don't buy the cheapest cars. We buy cars that give us a certain experience, not a set of basic functions. Often we pay 50-100% of the car's base price for gadgets and additional options, which actually makes no sense from the point of view of the main goal.</p><p>Why? There are many reasons, of course, and this is not an article about the basics of marketing. I will focus on one aspect – the experience of owning and using a given product.</p><p>By analogy, for me banking products must provide an experience that gives me a sense of support in the area of finance, not just be a tool with a lot of functions.</p><p>It has to give me something more, and that something is, for example, better information about the consequences of my actions, or proposed actions that support my financial needs.</p><p>These needs differ.<br>Each client has different ones.<br>One focuses on convenient payment of bills, another on a convenient tool for paying for purchases, yet another is looking for a good investment product.</p><p>I do not expect the bank to support me in investing; there are no good solutions for investing in financial products, especially risky ones.</p><p>I expect support in my daily settlements and in keeping track of the state of my finances. That's all.</p><h2 id="a-new-look-at-the-financial-product-functionality-vs-experience-">A new look at the financial product (functionality vs experience)</h2><p>To meet these needs, a good UX interface, mobile or web, is not enough. A high-quality core banking solution is needed – a solution completely different from the previous standards.<br>Advantica approaches the product not as a set of features and functions, but as a financial experience that allows you to take full advantage of its possibilities, where each decision is accompanied by the set of benefits and consequences associated with it.</p><p>Consider an example like this:<br>Mr. Kowalski, wanting to reach the annual credit card spending threshold that exempts him from the fee (50 PLN), made a transfer from the card to pay some of his obligations. It turned out that a 3% commission, i.e. 18 PLN, was charged for this transfer (worth 600 PLN). He got upset; he felt cheated. 
Of course, the bank is formally in the right – the table of fees and commissions clearly states that the bank charges a 3% fee for a transfer from the card (unfortunately, Mr. Kowalski read it only after the operation). By the way, 3% is a lot, especially since the interest rate on the card is currently 2% (in accordance with the law).<br></p><p>What is the effect of such a situation?</p><ol><li>Mr. Kowalski gives up this card – he does not need it. Why should he pay 50 PLN a year for it, plus interest on unpaid debt from time to time?</li><li>He trusts his bank less.</li><li>He perceives such a fee as too high in relation to the value of the operation.<br>The bank earned 18 PLN but made Mr. Kowalski angry.</li></ol><p>And anger is an enormous energy – so great that it can motivate the client to make the effort of moving his account to another bank. So is it worth the risk? Certainly not. But is the banking solution flexible enough to inform the client, during a given process, about the possibilities and consequences of the operations he is currently performing?</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2020/11/obrazek-wewnatrz-.png" class="kg-image" alt="Advantica - core banking system as experience - future is already here" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/11/obrazek-wewnatrz-.png 600w, https://www.softax.pl/blog/content/images/2020/11/obrazek-wewnatrz-.png 760w" sizes="(min-width: 720px) 720px"></figure><p>Advantica breaks with the behavior described above.</p><p>Each operation we perform comes with full information about its consequences. The customer is informed about the costs. Moreover, he or she can be informed about another possibility (better, cheaper, more convenient) of fulfilling a given need, activity or intention.</p><p>The product is wrapped in a set of possibilities. For example, an account allows you to define various types of transfers, but also standing orders, baskets, serial transfers, etc. Over time, various kinds of preferential services become associated with a given customer's account. The more an account is used, the more the functionality associated with it evolves. The product in Advantica is not the same for all customers. It is individual. Personalized.<br>And, most uniquely, it evolves over time, adapting to current needs.</p><p>It is worth noting how unusual a feature this is.<br>It is not possible for, say, a car. A car does not become more economical if it determines that you drive mainly in the city, nor does it turn into an off-roader if it determines that you drive off-road.</p><p><strong>Our products, however, evolve.</strong></p><p>Based on the history of the user's activity, or on a profile built by comparison with other similar profiles, our system can prioritize the services most expected for a given product. For example, if the customer makes regular payments, the system will prompt him to define a basket of transfers, serial transfers or standing orders. If he usually pays by card in different places (shops, the Internet), a convenient option to change limits or freeze the card will be easily available. If he wants to make a transfer that will potentially cost him too much, the system will suggest a better option.</p><p><strong>It's a revolution. 
Just like Advantica.</strong></p><p>Each activity related to a given product comes with a list of additional actions that, depending on the calculated scoring, may be suggested.</p><p>Owning a product is an experience that is constantly changing. A car will not surprise you with new functionality. After 5 years it will not say: "I'm reducing the engine capacity, because you have not used its power for years." Or the other way round: "I'm removing the power limit, because you drive according to the rules of the road and you are responsible." Car companies do not have programs of the type: "We have new off-road springs for you, because monitoring has shown that you use your car a lot off-road."</p><p>Advantica can adapt the functionality to the customer profile. On logging into the interface, the customer first sees the elements that Advantica thinks he needs most. Then, with subsequent logins and use, the system itself determines what may be important for a given user.</p><p>If payment deadlines are approaching, the system focuses primarily on verifying whether there is enough money to meet them. It shows the upcoming payments and the proportion of available funds to commitments.</p><p>If many transactions have been made recently, the history is highlighted.</p><p>If there were card transactions, or a payment failed because of the limits, then immediately after logging in the option to raise the appropriate limits appears (you have probably tried to raise your limits at the store checkout and, with so many options and under time pressure, increased them all just to be sure to complete the transaction – which is neither a good nor an entirely safe solution).</p><p>So the concept of a classic dashboard disappears...</p><p>The intelligent system surfaces those functions which, in its opinion, are the most important for the customer at a given moment.</p><p>Another important property of Advantica relates to the priority of experience. The client's products work independently, as self-organizing entities. If the main account "decides" that the customer's payments are coming up, it will generate an appropriate notification to the interface, and the interface, through its mechanisms, will know what to do with this fact. Independently, the card product may report that its repayment date is approaching and there are no funds in the relevant account. That product will also issue a corresponding notification.</p><h2 id="summary">Summary</h2><p>How should the situation from the middle of this article be handled?</p><p>If you are trying to make a transfer from your card, the system should inform you of two things: first, of course, of the commission, and second, that the operation you are trying to perform makes no sense, because you have enough funds in your current account to perform it there.</p><p>My market experience shows that bankers are not ready for such a change. It must be said clearly – in this approach they do not earn from hidden commissions and the customer's mistakes (e.g. not paying off the card on time, or a quick, expensive loan that the customer is forced to take).</p><p><strong>So you have to change the way you think. Banks can make money on products that the customer really needs and is willing to pay for. And the more products that hit real needs, the more satisfied the customer – and the bank that earns on them. 
The wolf is full and the sheep are whole, as the Polish saying goes. It's a fair game – a win-win for both sides. <a href="https://www.softax.pl/en/products/advantica-cloud-core-banking">Advantica</a> offers such products.</strong></p><p>Secondly, I believe that a good interface and a well-functioning product can (and even have the right to) cost money. Currently, banks tend not to charge fees for account maintenance or for holding and operating other banking products. I have no problem paying for an account if it brings a good user experience and additional options that support my financial needs.</p><p>This value can and should cost money, and that's OK.<br>I leave the question of what to charge for to the bankers – they are specialists in earning.</p><p>We provide banking that changes the perception of the client-bank relationship, and this is the area we can focus on – how to use the value of this new relationship.</p><p>Stay tuned.</p>]]></content:encoded></item><item><title><![CDATA[The qualified electronic signature is not so black as he is painted]]></title><description><![CDATA[Not so long ago, a qualified electronic signature, which according to eIDAS is equivalent to a handwritten signature, was used only sporadically in B2C or B2B contracts. The main barrier was the fact that only a few people (including company representatives) had a qualified electronic signature.]]></description><link>https://www.softax.pl/blog/the-qualified-electronic-signature-is-not-so-black-as-he-is-painted/</link><guid isPermaLink="false">5f719ccb37bc7451f5ebdaa5</guid><category><![CDATA[banking]]></category><category><![CDATA[sales]]></category><category><![CDATA[qualified electronic signature]]></category><dc:creator><![CDATA[Iwona Jedyńska]]></dc:creator><pubDate>Mon, 28 Sep 2020 10:36:39 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2020/09/Avatar-podpis.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2020/09/Avatar-podpis.png" alt="The qualified electronic signature is not so black as he is painted"><p>Not so long ago, a qualified electronic signature, which according to eIDAS (<a href="https://eur-lex.europa.eu/legal-content/PL/TXT/?uri=CELEX%3A32014R0910">Regulation of the European Parliament and Council on electronic identification and trust services for electronic transactions in the internal market</a>) is equivalent to a handwritten signature, was used only sporadically in B2C or B2B contracts.</p><p>The main barrier was the fact that only a few people (including company representatives) had a qualified electronic signature, and the reason for that was prosaic. To obtain a qualified electronic signature, you had to contact an institution issuing such signatures and meet a representative of that company in person to confirm your identity. Additionally, until recently you had to pay a lot to obtain the signature.</p><h3 id="what-has-changed">What has changed?</h3><p>First of all, you can now get such a signature within a few minutes, without leaving your home! 
So even if it suddenly turns out, in the middle of the night, that you need a qualified electronic signature right now, you can get one.</p><p>Moreover, its price no longer scares you off, and for one-time use you can even get it for free.</p><h3 id="how-is-it-possible">How is it possible?</h3><p>Firstly, there are solutions that allow the use of a qualified electronic signature without an additional reader and card, which not long ago were necessary in this process. Now you do not have to receive any physical devices from the supplier of the qualified signature.</p><p>Secondly, you can confirm your identity electronically – in Poland, using <a href="https://www.kir.pl/en/administration/mojeid/">MojeID</a>. Thus, there is no need to meet a representative of the qualified signature supplier in person.</p><h3 id="what-does-it-mean">What does it mean?</h3><p>Finally, we can conclude full-fledged contracts remotely, without personal contact between the parties involved in the contract. The COVID-19 pandemic has shown this form of contracting to be essential if companies want to keep selling products actively.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-podpis2-.png" class="kg-image" alt="The qualified electronic signature is not so black as he is painted" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/09/obrazek-wewnatrz-podpis2-.png 600w, https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-podpis2-.png 760w" sizes="(min-width: 720px) 720px"></figure><p>Imagine that while sitting in your armchair in the evening, sipping aromatic tea, you choose a car online and buy it for cash or on credit via the Internet. The next day someone delivers the car to your door and hands you the keys to your brand new car. Convenient? I like this vision.</p><p>Such a solution is available as part of the <a href="https://www.softax.pl/en/products/sfo-sales-front-office">Softax Sales Front Office platform</a>.</p><p><a href="https://www.ausbanking.org.au/electronic-transactions-and-mortgages-should-be-here-to-stay/">The Australian Banking Association, impressed by the effectiveness of the remote emergency processes used in Australia during the lockdown, appealed to the Australian government to make them permanent solutions for mortgage loans – especially remote contracts – to facilitate transactions, minimize costs and reduce the problems of "personal" signatures and paper documents. </a></p><h3 id="how-to-get-a-qualified-electronic-signature">How to get a qualified electronic signature?</h3><p>A qualified electronic signature can only be purchased from a qualified trust service provider, that is, a company holding the relevant certificate. There are over 200 such entities on the European market. <a href="https://webgate.ec.europa.eu/tl-browser/#/">The list is here</a>.</p><p>In Poland, we have 27 such suppliers. <a href="https://www.nccert.pl/uslugi.htm">A full list of them is available on the website of the National Certification Center (NCCert)</a>.</p><p>A qualified electronic signature backed by a qualified certificate issued in one European Union (EU) country is valid in all EU countries.</p><p>A fully remote qualified electronic signature service in Poland is provided only by KIR. <a href="https://www.mszafir.pl/">More information about this solution can be found here</a>.</p><p>To obtain the Polish qualified mSzafir electronic signature remotely, it is necessary to confirm your identity. 
In Poland, you can do it through MojeID, i.e. by confirming your identity via electronic banking. The process takes only a few minutes.</p><p>At the end, unfortunately, you still have to pay. If you pay online, you will receive the qualified electronic signature right away. In Poland, the cost of a qualified electronic signature issued for 2 years is approximately PLN 300 gross. Occasionally, some banks offer promotions – e.g. PKO BP, until June 2020, gave a 30% discount on mSzafir.</p><h3 id="how-to-sign-a-contract-using-a-qualified-electronic-signature">How to sign a contract using a qualified electronic signature?</h3><p>Signing a contract with a qualified electronic signature is very easy. A document received electronically (e.g. by e-mail or in a mobile application) should be saved to the computer's disk; then you log in to the application that supports your qualified electronic signature and attach the document that you want to sign.</p><p>Finally, you accept the attached document, usually confirming with an additional code generated in the application of the qualified electronic signature supplier and provided to you through another communication channel (e.g. in a mobile application). It may sound a bit complicated, but in practice the solution is intuitive and easy to follow.</p><p>After signing the document with a qualified electronic signature, you save it to your hard drive and deliver it via an electronic channel to the addressee. And that's all. Simple, right?</p><p>The above process can be streamlined by automatically opening the qualified signature application and bypassing the need to save the document to the computer's disk.</p><figure class="kg-card kg-image-card"><img src="https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-podpis.png" class="kg-image" alt="The qualified electronic signature is not so black as he is painted" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/09/obrazek-wewnatrz-podpis.png 600w, https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-podpis.png 760w" sizes="(min-width: 720px) 720px"></figure><h3 id="how-to-check-if-the-contract-was-signed-with-a-qualified-electronic-signature-and-by-whom">How to check if the contract was signed with a qualified electronic signature and by whom?</h3><p>Who signed the contract, and whether it was indeed signed with a qualified signature, can be verified by several methods:</p><ul><li>using Adobe Acrobat Reader DC,</li><li><a href="https://ec.europa.eu/cefdigital/DSS/webapp-demo/home">in the DSS application available on the website of the European Commission</a>,</li><li>in applications provided by qualified trust service providers.</li></ul><p>Verification of a qualified electronic signature confirms the validity of that signature. 
As part of the verification process, the following are primarily checked:</p><ul><li>whether the qualified certificate was issued by a qualified trust service provider and was valid at the time of signing;</li><li>whether the signature validation data corresponds to the data provided to the relying party;</li><li>whether the unique set of data representing the signatory included in the certificate was properly delivered to the relying party;</li><li>whether the electronic signature was created by a qualified electronic signature creation device;</li><li>whether the integrity of the signed data has been preserved.</li></ul><h3 id="so-where-does-the-resistance-lie-that-remote-contracting-is-not-widely-available">So why is remote contracting still not widely available?</h3><p>It seems that the "dark sides" of remote contracting are hard to find at present. A qualified electronic signature is equivalent to a handwritten signature, it can be obtained in a few minutes without leaving home, and the cost of the service seems acceptable to many people.</p><p>Perhaps the main limitation is that the qualified electronic signature has not yet become a popular way of signing contracts among financial institutions and entities selling products and services?</p><p>Still, relatively few banks, insurance companies, investment houses or leasing companies use the remote form of concluding contracts with clients. Maybe it's time to change that?</p><p>If you need an already working solution – feel free to <a href="https://www.softax.pl/en/#contact-us">contact us</a>, we will be happy to help you!</p>]]></content:encoded></item><item><title><![CDATA[Our story into core banking and the birth of Advantica Cloud Core Banking]]></title><description><![CDATA[What was our beginning? How did we gain our knowledge of banking systems? This is our story into core banking and the birth of the Advantica Cloud Core Banking.]]></description><link>https://www.softax.pl/blog/our-story-into-core-banking-and-the-birth-of-advantica-cloud-core-banking/</link><guid isPermaLink="false">5f6deab537bc7451f5ebda10</guid><category><![CDATA[banking]]></category><category><![CDATA[cloud]]></category><category><![CDATA[core]]></category><dc:creator><![CDATA[Krzysztof Krzos]]></dc:creator><pubDate>Fri, 25 Sep 2020 15:25:16 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2020/09/Avatar-adv.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2020/09/Avatar-adv.png" alt="Our story into core banking and the birth of Advantica Cloud Core Banking"><p>I have been associated with Softax since 1997, and we have been implementing banking projects since the mid-1990s. It was a time of transition to new systems, a time of migrations and of building the solutions to support them.</p><p>I remember the times of cooperation with the American company <a href="https://en.wikipedia.org/wiki/Digital_Equipment_Corporation">Digital Equipment</a>, which entered Poland in 1991, mainly with an offer of computer hardware and ready-made software running on it.</p><p>In addition, the company offered the DECbank FBS banking system, which served more as a branch support platform capable of interfacing with the central banking system. 
DECbank broke with the idea of running everything on a single mainframe system.</p><p>Digital was acquired by COMPAQ and then, in 2002, by Hewlett Packard (HP).</p><p>We were strongly associated with them at the beginning of our company's operations, as subcontractors on banking IT projects. Initially, our knowledge and competences in the field of banking systems grew out of this cooperation.</p><p>In addition, we worked alongside companies such as Sanchez, Temenos, FIS-FNS and Accenture – more specifically, Alnova Technologies. These are companies with hundreds of implementations of their banking systems, so we are talking about world giants in this area, and we had direct contact with their banking systems.</p><p>We learned the strengths and weaknesses of these systems, and a small company like ours was effective at providing solutions that covered precisely those shortcomings and weaknesses. This is also how our competences were built. The real value lay in the projects we implemented close to core banking systems.</p><p>The knowledge gained in dozens of projects implemented over 25 years in the largest Polish banks, always taking into account the specifics and functionality of core systems, shaped the architecture of the solutions we designed.</p><p>Why so many core system projects?</p><p>This is due to the specific situation in Polish banking. The last 25 years have been a period of dynamic change in banks' IT infrastructure. The change of architecture was primarily a transition to a service model based on an ESB bus (or multiple ESBs), with a server part containing core banking systems and a channel part providing dedicated customer interfaces through channels such as WWW, WAP, IVR, Call Center by phone, SMS and then the new mobile channel – all as an alternative to branches (which also received new interfaces). Replacing old systems with more modern ones providing appropriate APIs resulted in migration and integration projects.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-.png" class="kg-image" alt="Our story into core banking and the birth of Advantica Cloud Core Banking" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/09/obrazek-wewnatrz-.png 600w, https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-.png 760w" sizes="(min-width: 720px) 720px"><figcaption>Banking</figcaption></figure><h3 id="projects-related-to-core-systems">Projects related to core systems</h3><p>At the beginning, we implemented projects for BRE Bank, which operated the Globus system by Temenos. Another was the electronic banking project for Handlobank (operating on the Profile system and later migrated to CitiBank systems); then we implemented the launch of one of the first virtual banks (next to mBank), Inteligo (also on the Profile system).</p><p>In 2004, we participated in the project of creating electronic banking for Lukas Bank (now Credit Agricole). There, the Polish product def2000 was working on the core system side.</p><p>After PKO BP took over Inteligo in 2002, we started cooperating with the largest Polish bank. We implemented the electronic banking projects iPKO, iPKO Biznes (for business clients) and iPKO Junior (for children up to 13 years of age), integrating their processes with the core systems.</p><p>In the meantime, we were involved in the project of migrating Inteligo customers from Profile to Alnova.</p><p>And then from the CESAR2 branch system to Alnova. 
In this venture, both systems ran simultaneously for some time and customers were migrated in stages. Our middleware knew which system to work with at any given moment.</p><p>At Bank BGŻ (now BNP Paribas), we implemented an integration bus, which required integration not only with the core system but with all of the bank's systems.</p><p>During the cooperation with Pekao SA, we gained experience in integration with the Rocket (Systematics) system.</p><p>In the project with Alior Bank, we returned to working with the Profile system.</p><p>All these projects involved cooperation with (and gaining knowledge and experience of) core systems such as Fidelity Profile (from the Sanchez company until 2004), Temenos Globus (later called T24 and TCB, and now Temenos Transact), Alnova Financial Solutions (Accenture), Cesar, Flexcube, Systematics and the Polish core system defBank (defBank 2000 and later defBank 3000) by Asseco.</p><p>Bank consolidations, and thus data migrations between systems at PKO BP, Pekao SA, Lukas Bank (Credit Agricole), BGŻ (BNP Paribas) and BRE Bank (now mBank), also built up the competences we acquired, which translated into our solutions. Everyone who has participated in migration or integration projects knows how many details related to the specifics or limitations of core systems must be taken into account, and how to use or work around them to deliver the expected functional value to the organization.</p><p>The expectation of high availability of channels, and thus of banking services, meant that we had to create solutions that maintained the availability of these services in the event of problems with the large core systems. These were the beginnings of our proprietary Advantica system. It was not a planned project. The system grew out of covering the functionality of core systems when:</p><ul><li>they were overloaded,</li><li>they were running EOD (End of Day) processing,</li><li>there was a failure,</li><li>it was simply cheaper to have us work around a functional limitation than to order and implement the change in the core system itself.</li></ul><p>We duplicated the maintenance and servicing of bank accounts, settlements and billing functionality, often in order to relieve the main systems – but thanks to this we also gained very valuable and hard-to-obtain knowledge.</p><h3 id="birth-of-the-advantica-cloud-core-banking">Birth of the Advantica Cloud Core Banking</h3><p>IT companies followed different paths – some specialized in the large foreign solutions mentioned above, others created their own solutions in response to market needs. We simply implemented projects, building specific, modular products made of many elements.</p><p>And this is how Advantica came into being at Softax. 
A system consisting of many modules, it allows the functionality to be adjusted to the real needs of banks, fintechs and other financial institutions – either as a comprehensive version with the full functionality of a banking system or, if needed, simply supporting the core systems already operating in the organization.</p><p>Advantica's main modules provide key solutions for:</p><ul><li>analytical and general ledger,</li><li>customer files,</li><li>handling deposit and credit accounts of individuals and companies,</li><li>domestic, foreign and currency settlements (Elixir, Express Elixir, Sorbnet, Sepa, Swift, Euro Elixir, Target),</li><li>customer creditworthiness assessment,</li><li>servicing credit products as well as broadly understood issuing and servicing of payment cards (multi-currency, debit, credit, charge and pre-paid).</li></ul><p>Today, therefore, our Advantica Core Banking can easily compete with systems such as the <a href="https://en.wikipedia.org/wiki/FIS_(company)">Profile system by Fidelity National Information Services Inc.</a>, <a href="https://www.accenture.com/pl-en/~/media/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Industries_1/Accenture-Alnova-Financial-Solutions.pdf">Alnova Financial Solutions by Accenture</a> or <a href="https://www.temenos.com/products/transact/">Temenos Transact</a>, and on our home turf with the <a href="https://pl.asseco.com/en/sectors/commercial-banks/12/asseco-core-banking-27/">defBank system by Asseco</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-adv-.png" class="kg-image" alt="Our story into core banking and the birth of Advantica Cloud Core Banking" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/09/obrazek-wewnatrz-adv-.png 600w, https://www.softax.pl/blog/content/images/2020/09/obrazek-wewnatrz-adv-.png 760w" sizes="(min-width: 720px) 720px"><figcaption>Core banking systems</figcaption></figure><p>The advantage of our solution is that Advantica was built modularly from the beginning, which significantly streamlines the implementation of changes and shortens the TTM (Time To Market) period.</p><p>In addition, we have recently completed an internal project of launching Advantica in the cloud on Kubernetes (K8s), which further increases the efficiency of the system and enables management and administration to be transferred to the cloud provider.</p><p>Times change.</p><p>PSD2 opens the world of financial services to startups, and startups need proven, safe and, above all, already implemented solutions. We know this, which is why we decided to create an ecosystem in which users can test an environment not dedicated to a specific bank, but general to the handling of financial services.</p><p>Advantica Cloud Core Banking will soon be made available to a wide group of users. You will be able to set up a test account, make transfers, manage financial products and set up a virtual card, all on the cloud version of the system.</p><p>Stay tuned.</p>]]></content:encoded></item><item><title><![CDATA[The path of digital transformation to invisible banks?]]></title><description><![CDATA[Advantica as a solution for the banking industry of the future. 
According to Business Insider research, in 2020 75% of large financial institutions are implementing solutions based on Artificial Intelligence (AI), with the benefits yet to be seen in the upcoming year.]]></description><link>https://www.softax.pl/blog/the-path-of-digital-transformation-to-invisible-banks/</link><guid isPermaLink="false">5f64a0e737bc7451f5ebd9bc</guid><category><![CDATA[banking]]></category><category><![CDATA[digitaltransformation]]></category><dc:creator><![CDATA[Iwona Jedyńska]]></dc:creator><pubDate>Fri, 18 Sep 2020 14:46:23 GMT</pubDate><media:content url="https://www.softax.pl/blog/content/images/2020/09/advantica.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.softax.pl/blog/content/images/2020/09/advantica.png" alt="The path of digital transformation to invisible banks?"><p>2020 is the year of accelerated digital transformation around the world. The main impulse was the COVID-19 pandemic and the several-month lockdown, which caused a greater-than-ever demand for e-verification of customer identity, remote contracting and fully electronic customer service.</p><h3 id="step-1-artificial-intelligence">Step 1: Artificial Intelligence</h3><p>According to Business Insider research, in 2020 75% of large financial institutions are implementing solutions based on Artificial Intelligence (AI), with the benefits yet to be seen in the upcoming year. Artificial Intelligence helps financial institutions collect detailed customer information, which is then used to personalise products and services as closely as possible to the customer's needs, or for purposes related to minimising credit risk. </p><p>AI is therefore also used on a large scale in assessing creditworthiness. Increasingly, banks and other credit institutions, when verifying customers, take into account very inconspicuous factors, such as how long it takes to fill in the "date of birth" field or how often a mobile phone is charged.</p><p>An unusually long time spent filling in the "date of birth" field may indicate that applicants do not know "their" date of birth and must copy it from some document – perhaps they are not who they claim to be on the web. A constantly discharged phone, on the other hand, may indicate irresponsibility and forgetfulness that could in the future translate into late repayment of liabilities.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.softax.pl/blog/content/images/2020/09/ai.png" class="kg-image" alt="The path of digital transformation to invisible banks?" srcset="https://www.softax.pl/blog/content/images/size/w600/2020/09/ai.png 600w, https://www.softax.pl/blog/content/images/2020/09/ai.png 760w" sizes="(min-width: 720px) 720px"><figcaption>AI Benefits</figcaption></figure><h3 id="step-2-e-verification-of-identity-and-remote-contracting">Step 2: E-verification of identity and remote contracting</h3><p>The lockdown accelerated the implementation of solutions enabling remote customer identity verification. Banks, insurance companies and investment companies alike discovered the need to introduce electronic identification of clients (e.g. with a selfie), without personal contact between the client and an employee of the institution.</p><p>Moreover, it is becoming more and more popular to conclude contracts remotely using a qualified electronic signature, which can now be obtained in Poland, e.g. 
<h3 id="step-2-e-verification-of-identity-and-remote-contracting">Step 2: E-verification of identity and remote contracting</h3><p>The lockdown accelerated the market roll-out of solutions enabling remote customer identity verification. Banks, insurance companies and investment companies alike discovered the need for electronic identification of clients (e.g. with a selfie), without any personal contact between the client and an employee of the institution.</p><p>Moreover, it is becoming more and more popular to conclude contracts remotely using a qualified electronic signature, which can now be obtained in Poland, e.g. via KIR, within a few minutes without leaving home.</p><p>A qualified electronic signature is equivalent to a handwritten one, so the contract is concluded in a way that is unquestionably binding. It can therefore be used even for large-value transactions such as housing loans.</p><p><a href="https://www.ausbanking.org.au/electronic-transactions-and-mortgages-should-be-here-to-stay/">The Australian Banking Association, delighted with the effectiveness of the remote emergency processes used in Australia during the lockdown, appealed to the Australian government to make e-verification of identity and documents for remote contracts permanent, in order to facilitate transactions, minimise costs and reduce the problems of "personal" signatures and paper documents.</a></p><h3 id="step-3-open-banking-and-invisible-payments">Step 3: Open banking and invisible payments</h3><p>Cooperation between banks and FinTechs, which enjoy relatively light legal regulation, will enable the rapid evolution of payment methods and the creation of completely new products and services.</p><p>Consumers expect purchases online and in traditional stores to be quick and convenient, and payment methods need to follow this path. Customers are reluctant to fill in payment instructions or click through subsequent windows. They expect the line between purchasing a product or service and paying for it to disappear. Payment processes should therefore be automated and take place completely in the background, i.e. become invisible.</p><p>An example of this type of initiative is NS, the Dutch railway operator, with its "invisible ticket" technology. Imagine: you get on the train. The system knows where you got on, which train you took, which seat you occupied and where you got off. The fee is charged automatically. Isn’t it convenient?</p><p>Another example: <a href="https://www.bbva.com/en/bbva-launches-its-invisible-payments-strategy/">BBVA is testing a consumer facial recognition system in its network of internal cafeterias</a>. Orders are paid for without using a card: you can walk into the restaurant, order a meal, eat and leave without even asking for the bill. Your account is charged anyway.</p>
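<p>Below is a minimal Python sketch of how such background fare collection could work. The stations, tariff and charging logic are invented for this illustration; the real NS system is certainly far more involved.</p><pre><code># A minimal sketch of "invisible ticket" fare collection, loosely inspired
# by the NS example above; distances, the tariff and the account logic are
# all invented for illustration.
from dataclasses import dataclass

KM = {("Amsterdam", "Utrecht"): 35, ("Utrecht", "Eindhoven"): 88}
RATE_PER_KM = 0.18  # invented tariff, EUR

@dataclass
class Traveller:
    account_balance: float
    checked_in_at: str = ""

    def board(self, station):
        # A device detects boarding; the traveller does nothing.
        self.checked_in_at = station

    def alight(self, station):
        # The fare is computed and charged automatically in the background.
        distance = KM.get((self.checked_in_at, station), 0)
        fare = round(distance * RATE_PER_KM, 2)
        self.account_balance -= fare
        self.checked_in_at = ""
        return fare

t = Traveller(account_balance=50.0)
t.board("Amsterdam")
print("charged:", t.alight("Utrecht"), "EUR; balance:", t.account_balance)
</code></pre><p>The interesting part is what is missing: no ticket machine, no confirmation screen, no payment step visible to the traveller at all.</p>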
<h3 id="step-4-voice-assistants-voice-and-chat-bots-and-video-consultations-with-virtual-advisers">Step 4: Voice assistants, voice and chat bots, and video consultations with virtual advisers</h3><p>Bank of America's voice assistant lets consumers manage their accounts by voice, Barclays hands the voice over to Siri, who guides the customer through the transfer process, and Capital One, using Amazon's Alexa, gives voice access to all functionalities of its mobile application. Our domestic PKO BP has also introduced a voice assistant into its IKO application.</p><p>Voice bots, chat bots and video consultations with a virtual adviser are also becoming more and more popular.</p><p>These services are intended as an alternative for people who prefer to talk rather than write and who, until now, had to contact the call centre or visit a branch to place instructions or obtain information.</p><h3 id="step-5-the-age-of-invisible-banking">Step 5: The age of invisible banking?</h3><p>Financial services are closely integrated into our daily life.</p><p>Artificial intelligence (that is, advanced analysis of our habits, decisions and conduct), voice assistants, voice and chat bots, invisible payments and the numerous FinTechs popping up like mushrooms are gradually preparing us for the world of "invisible banking".</p><p>It seems that customers will have fewer direct interactions with banks and their employees, and will increasingly meet their needs through universal (fintech?) applications that let them manage finances held in various banks at the same time and select products and services from a catalogue of offers across different banks.</p><p>Banks may soon become an "invisible rock" for clients: institutions that ensure the security of their funds and guarantee fair credit conditions, protecting them against usury and the dishonest practices of quasi-loan and quasi-financial institutions.</p><h3 id="so-is-digital-transformation-the-key-to-the-future-of-the-banking-industry">So is digital transformation the key to the future of the banking industry?</h3><p>Increased use of technology seems to be the only way forward for financial institutions. To be most effective, banks and financial institutions should redefine themselves as agile technology companies.</p><p>This will be a particular challenge for technologically neglected or small institutions, often struggling with outdated infrastructure and archaic core system software, which significantly limit the possibility of integrating modern services. Migration of the "old" systems to new solutions would be very time-consuming and costly. Therefore, it is worth looking for solutions that deliver new services free from the limitations of ineffective core systems, by implementing independent supplementary modules that can even take over the functionality of the basic system.</p>
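<p>One way to picture such a supplementary module is sketched below in Python: a service that accepts instant payments around the clock and defers posting to a legacy batch core. The class and method names are invented for this illustration and do not describe Advantica's internals.</p><pre><code># A sketch of the "independent supplementary module" idea: a modern service
# accepts instant payments 24x7 and defers posting to a legacy batch core.
# Class and method names are invented for illustration.
from datetime import datetime
from queue import Queue

class LegacyBatchCore:
    """Stand-in for an archaic core that only posts during a batch window."""
    def post(self, entry):
        print("core posted:", entry)

class InstantPaymentModule:
    """Accepts payments 24x7, queues postings for the core's batch window."""
    def __init__(self, core):
        self.core = core
        self.pending = Queue()

    def pay(self, debtor, creditor, amount):
        # The payment is confirmed to the customer immediately...
        entry = {"ts": datetime.now().isoformat(),
                 "debtor": debtor, "creditor": creditor, "amount": amount}
        self.pending.put(entry)
        return "accepted"

    def run_batch(self):
        # ...and only settled in the legacy core when its window opens.
        while not self.pending.empty():
            self.core.post(self.pending.get())

module = InstantPaymentModule(LegacyBatchCore())
print(module.pay("Alice", "Bob", "100.00"))  # works at 3 a.m. too
module.run_batch()                           # later, in the batch window
</code></pre><p>The point of the pattern is that the customer-facing service never waits for the core: availability comes from the new module, consistency from the old one.</p>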
<p><a href="https://www.softax.pl/products/advantica-cloud-core-banking">Such solutions are offered by Advantica, the Cloud Core Banking System by Softax</a>.</p><p>Advantica is a modular system that, depending on the needs of the institution, can provide comprehensive functionality in the field of remote identity verification and remote contracting, maintaining deposit accounts, servicing loans and assessing creditworthiness, domestic and foreign settlements, mobile payments, currency exchange and payment cards (debit, credit, charge, pre-paid).</p><p>Implementing selected Advantica modules tailored to the needs of a particular financial institution allows the core system to remain unchanged while giving customers access to modern products and services, for instance immediate payments available 24x7, which banks often offer only at certain times due to the limitations of back-end systems.</p><p>Advantica is a great proposition for small financial institutions that need a proven solution but at the same time do not want to invest in an expensive system or embark on a long-term project of implementing a Temenos, Alnova or Profile solution.</p>]]></content:encoded></item></channel></rss>