How can you maintain a crucial but complex IT application with monolithic architecture? 

A good solution would be to rebuild it using microservices – an architectural style in which a large IT system is built as a set of small, independent services. Microservices communicate with each other, and they are more resistant to various types of errors: if an error occurs in any part of the application, only that one path/microservice stops working.

That is why microservices have two substantial advantages:  

  • less effort in the scalability of an application  
  • faster deployment and testing time  

Developers can use many languages and deploy modules independently: the results of small changes are visible immediately, which means value is delivered quickly.

What is monolithic architecture?  

Imagine a huge and complicated application – e.g. one responsible for a company's sophisticated licensing policy that affects the work of thousands of employees – where developers spend a ridiculous amount of time building a simple feature while knowing full well it is far from being released to users. An application in which almost every area – security, upgrading old packages and libraries to newer versions, the release process, testing, and maintenance – requires improvement but is simply hard to change. Where the quality of a single class or method isn't bad, but the overall picture of the architecture is messy at best, or just plain ugly.

If this sounds familiar, perhaps you also use an application that has monolithic architecture. This architecture is useful for simple applications that you will not want to change or scale in complicated ways in the future. However, it is a poor fit for extensive IT systems.

In our case, the legacy system based on monolithic architecture can’t be turned off or replaced. So, what can you do when you want to improve the existing monolithic system? 

How to convert monolithic architecture to microservices architecture

Microservices are a software architecture approach in which a large application is built as a suite of small, independent services. Each service is designed to perform a specific task and communicate with other services through well-defined interfaces to achieve a larger goal.  

Check what the difference is between microservices and containers.

In the case described here, our team decided to decompose the legacy system (the monolithic beast) by splitting it based on functionality (a kind of bounded context). They formed the following groups of future services:

  1. Micro-apps (frontend + API, with persistent storage) 
  2. Micro-services (API or worker or both, with persistent storage) 
  3. Proxies (frontend + API, no persistent storage) 

"This system was supposed to be fed by an API, and it partially was. However, over 20 tables appeared in its database during the most active development phase, in most cases significantly 'loosely related,' if at all. That happened to speed up delivering short-term functionality, but it illustrates the scale of the problem" – said Jakub Wrona, Software Developer at Fingo.

By doing this, they immediately discovered a new challenge that needed to be addressed first. In a monolithic system, the application lets the user sign in and usually sets a session. After that, every next call to the server is authorized for as long as the session is active. But what if you have more apps that function on their own?  

The developer team decided to move the authorization mechanism from the web server to the network level. That meant the web servers were now "unconscious" of authorization, and any traffic reaching them would first be passed to the authorization service bound to the load balancer. The authorization service is responsible for checking whether a request is permitted. It takes the original request as a "subrequest" and acts as a logic gate, responding with 200 or 403 to the load balancer.
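The gatekeeper logic can be sketched in a few lines. This is a hedged Python stand-in (the actual service is PHP-based, as described later), with an invented rule table mapping method and path prefix to permitted roles:

```python
# Minimal sketch of the authorization gate; rule contents are invented for
# illustration. Rules map (method, path prefix) -> roles allowed to use it.
RULES = {
    ("GET", "/api/licenses"): {"admin", "auditor"},
    ("POST", "/api/licenses"): {"admin"},
}

def authorize(method, path, user_roles):
    """Return 200 if any rule permits the request, otherwise 403."""
    for (rule_method, prefix), allowed in RULES.items():
        if method == rule_method and path.startswith(prefix):
            return 200 if set(user_roles) & allowed else 403
    return 403  # deny by default

print(authorize("GET", "/api/licenses/42", ["auditor"]))  # 200
print(authorize("POST", "/api/licenses", ["auditor"]))    # 403
```

In a real deployment, the load balancer would call such an endpoint as a subrequest for every incoming request and allow or reject the original request based on the returned status code.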

What are the pros and cons of this approach? The main pro is obvious: there is no need to duplicate any logic related to authorizing requests on the web servers.

The cons (though debatable) include:

  • The authorization service requires some understanding of the system (paths, methods, etc.) to "decide" if access is permitted. 
  • The authorization service must be highly reliable and fast. Remember, every request, including those pertaining to assets, is passed to it, and the time it requires to authorize must be added to the total time to process any single request. 

To address the first point, they created another microservice called "Roles and permissions." It has dedicated storage and is responsible for all the data about who can do what.

Addressing the second point was more challenging, but in the end their authorization service is highly reliable and fast – and still PHP-based. To be precise, it is a Swoole server that reads an optimized set of rules and permission statements (from "Roles and permissions") at startup and keeps them in memory.
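The startup-time rule loading could look roughly like this – a Python sketch standing in for the PHP/Swoole server, where `fetch_rules` is a hypothetical placeholder for a call to the "Roles and permissions" service:

```python
class AuthRuleCache:
    """Load permission rules once at startup and keep them in memory,
    so no storage round-trip is needed while authorizing a request."""

    def __init__(self, fetch_rules):
        # fetch_rules stands in for a call to "Roles and permissions".
        self._rules = {(r["method"], r["path"]): set(r["roles"])
                       for r in fetch_rules()}

    def is_permitted(self, method, path, roles):
        allowed = self._rules.get((method, path), set())
        return bool(allowed & set(roles))

def fetch_rules():
    # Invented example data; the real rule set lives in dedicated storage.
    return [{"method": "GET", "path": "/events", "roles": ["admin"]}]

cache = AuthRuleCache(fetch_rules)
print(cache.is_permitted("GET", "/events", ["admin"]))  # True
print(cache.is_permitted("GET", "/events", ["guest"]))  # False
```

The design trade-off is the one the article names: lookups are fast because they hit memory only, but the cache must be rebuilt or reloaded when permissions change.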

A simplified diagram of what they tried to achieve with the authorization layer could look as follows: 

Microservices – work in progress 

The developer team wanted to implement as much functionality as possible in an asynchronous fashion. The first decomposed functionality that records all events related to the user, products, and licenses was a great opportunity to do it that way.  

The monolithic system switched from saving event data into a database to pushing a message to the message bus. A worker then picks that message up, and the processed message is saved in storage. Moreover, the new microservice also exposes an API to search for events or retrieve information about a single event. They now have a few more microservices that follow this scenario – which explains the why and how of what "the workers" do.
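The publish-and-process flow can be illustrated with an in-memory queue standing in for the message bus – a hedged Python sketch (the real system uses RabbitMQ and dedicated storage), with invented event fields:

```python
from queue import Queue

bus = Queue()   # stand-in for the message bus (RabbitMQ in the real system)
storage = []    # stand-in for the microservice's persistent storage

def publish_event(event):
    """The monolith pushes a message instead of writing to its own database."""
    bus.put(event)

def worker_step():
    """A worker picks the message up, processes it, and saves the result."""
    event = bus.get()
    storage.append({**event, "processed": True})

publish_event({"type": "license.assigned", "user": "u123"})
worker_step()
print(storage)  # [{'type': 'license.assigned', 'user': 'u123', 'processed': True}]
```

The point of the indirection is that the monolith's request finishes as soon as the message is published; the slow work happens asynchronously in the worker.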

The message bus is another part of the system that must be reliable. Even when they internally discuss failure scenarios or try to identify single failure points, they need to consider the unavailability of the message bus.  

On the other hand, it's more likely that a worker could fail while processing a message. This may happen primarily when the result is written to the database or when a request to an internal or external API is sent. To avoid missing data, they introduced a way to handle failures during message processing.

They chose RabbitMQ, an industry-standard technology, to power their queues, and used a retry strategy that sets an incremental "x-delay" value, in minutes, taken from an exponential sequence. If a message is not processed correctly within a certain number of attempts, it is moved to a failure queue and waits for a decision.
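The delay schedule itself is simple to express. A sketch, assuming a base of one minute that doubles per attempt and a hypothetical cap of five attempts before the message goes to the failure queue (the article does not state the exact sequence):

```python
def retry_delay_minutes(attempt, base=1, max_attempts=5):
    """Return the "x-delay" value (in minutes) for a retry attempt, or
    None when the message should be moved to the failure queue instead."""
    if attempt >= max_attempts:
        return None
    return base * 2 ** attempt  # 1, 2, 4, 8, 16 minutes

print([retry_delay_minutes(a) for a in range(6)])  # [1, 2, 4, 8, 16, None]
```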

RabbitMQ is a compelling technology. It supports many useful features, such as different exchange types, and it also helps you handle message versioning if your message structure changes during the project's life.

Did you know...?
Microservices and serverless computing are two concepts that can be used together to build and deploy applications flexibly, scalably, and cost-effectively.

But what if they can’t extract a bit of functionality and delegate it to a separate server? What if they have a complex page where it wouldn't make sense to extract part of its functionality?  

They did decompose a few problematic pages from the monolithic system, and they became either micro-apps or proxies, depending on storage. Once they addressed the authorization issue, the only remaining problem was matching specific URI paths and forwarding them to another web server.

In their case, the first page delegated was "Search." Because all the data the user can search for is stored behind an API, this micro-app is a proxy. The React frontend sends a request to the backend service, which validates it and passes it on to the API. They couldn't call this API directly from React, because its secrets must not be revealed to the end user.
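The proxy's job – validate, attach credentials, forward – can be sketched as follows. This is an illustrative Python stand-in; the URL, header, and validation rules are invented:

```python
import os

# The API token lives only on the server; it is never shipped to the browser.
API_TOKEN = os.environ.get("API_TOKEN", "dummy-secret")

def proxy_search(query):
    """Validate the frontend's request, then build the upstream request
    that carries credentials the end user must never see."""
    if not query or len(query) > 200:
        raise ValueError("invalid query")
    return {
        "url": "https://internal-api.example/search",  # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {API_TOKEN}"},
        "params": {"q": query},
    }

req = proxy_search("license")
print(req["params"])  # {'q': 'license'}
```

A real proxy would perform the HTTP call to the upstream API and relay the response; the key property is that the token stays server-side.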

The KISS principle – Keep It Simple, Stupid  

Every few sprints, they pull another piece out of the monolithic system and turn it into a separate service – with its own codebase, tests, release process, and a dedicated team focused on a chunk of reduced complexity and limited responsibility. They rewrite the environment and dependencies and add functionality the business has long been waiting for. Each such project is far easier and far more efficient thanks to its reduced complexity and scope.

Microservices pros and cons 

Of course, microservices come with their own set of tradeoffs and challenges. For instance, managing many microservices can be complex and requires more advanced infrastructure. Even so, microservices are widely considered the future of applications developed for the cloud. That is why it's important to engage experienced DevOps engineers in such projects from the very beginning.

The main advantages of using microservices are that they are modular and scalable. Because each service is self-contained and performs a specific task, it is easier to develop, maintain, and deploy services independently. This means you can change one service without affecting the others in the application, which is why microservice architecture is more resistant to bugs.

Additionally, because each service can be deployed and scaled independently, it is easier to scale the application as needed by increasing the resources available to certain services.   
