MYTOYS GROUP is a great company because it gives developers and architects – through the use of scrum, guilds and chapters – time and flexibility for their own ideas, so anyone can start a guild to develop an idea. Through this system it was possible to replace the old monolithic architecture over the last two years and to implement a completely new service-oriented architecture (SOA), which better satisfies both developer and customer needs. We call it the “Multishop Platform”.
Our new platform now hosts multiple shops on the same source code. For each page type, all shops share the same page template and connected assets. New shops inherit the template and resources, which speeds up the process of setting up a new shop. This approach works well when the requirements for shop individuality are kept to a minimum. Implementing some individuality through “components” is possible, but we observed higher costs and began to investigate why.
We’ve observed the following aspects and problems:
- Backend teams, each working on their own service, work independently from other teams. Their need for communication with other teams is low; we say their communication overhead is low. As a result, we observe that their error rate is low and their development speed is high.
- In contrast, our frontend colleagues, working together on our monolithic frontend system, have a very high communication overhead. We observe that more communication overhead measurably increases the error rate and dramatically slows down coding performance.
- During a scrum sprint, the code commits and change requests for various features were collected together until a system deployment took place. We observe that this methodology results in a highly complex change. Such a deployment consumes much time, because all artefact versions must be aligned correctly. Because of the higher change complexity, the error rate increases significantly.
- The resulting errors cannot be identified quickly. Identifying the erroneous commits within a larger number of deployed commits consumes much time and is therefore of higher complexity.
- Errors that occur while we maintain the SOA architecture cannot be fixed as fast as we need them to be. We recognized that it will be very hard to prevent these errors unless we change our methodology so that a code change can go into production without functional dependencies on other code changes.
- Backend and frontend deployments were often required simultaneously so that a new feature could go completely live, making deployments complex and therefore high risk.
- We installed a virtual release team to deal with releases, but this reduced development time and feature throughput.
Microservices as components
Our TechVision transforms this service-oriented architecture into a microservices architecture using “components”. The idea is to develop “components” very similarly to the way apps are developed for an app store: they can be implemented and deployed independently.
The question therefore is: can we build the shop page templates completely on the basis of components?
Teams gain the ability to separate the business logic for different features. This reduces code complexity, improves maintainability, reduces risk, lowers costs, speeds up the development and deployment process, and increases feature and change-request throughput. If we put such components into a component app store, stakeholders can use them without a change request or an additional deployment process.
In short, we plan to make components plug and play and to decouple the stakeholder process from the development process. This will scale as we create more shops!
But how do we bring the components together?
A page assembler (PA) assembles the results from each required component into a valid web page. A well-defined protocol and interface between the page assembler and our new components are required. The assembler must take the user's request and deliver all necessary information to the components. Each component's HTML can then be assembled into a valid web page, which is sent to the browser.
In summary, a component is an embedded container delivering all its resources to the Page Assembler – and the Page Assembler assembles the components into the web page.
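To make this contract concrete, here is a minimal sketch in Python of what a component response and the assembly step could look like. The field names (`html`, `assets`) and the `assemble_page` helper are our illustrative assumptions, not a finished protocol.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentResponse:
    """What a component returns to the Page Assembler (assumed fields)."""
    name: str
    html: str                                    # the component's rendered fragment
    assets: list = field(default_factory=list)   # CSS/JS the fragment needs

def assemble_page(title: str, components: list) -> str:
    """Naively concatenate component fragments into one valid HTML page."""
    head = "\n".join(f'<link rel="stylesheet" href="{a}">'
                     for c in components for a in c.assets)
    body = "\n".join(c.html for c in components)
    return (f"<!DOCTYPE html><html><head><title>{title}</title>\n{head}\n"
            f"</head><body>\n{body}\n</body></html>")

# Example: two independently developed components assembled into one page.
teaser = ComponentResponse("teaser", "<div>Teaser</div>", ["teaser.css"])
cart = ComponentResponse("cart", "<div>Cart</div>", ["cart.css"])
page = assemble_page("Shop", [teaser, cart])
```

The real protocol would of course carry more metadata, but the shape is the same: each component stays self-contained, and only the assembler knows about the page as a whole.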
The main requirements for a Page Assembler (PA) in our scenario are:
- The PA acts as a kind of logic driven façade.
- The PA must be very stable, since otherwise the requested web page cannot be delivered to the browser. The idea is to change the PA microservice's code only rarely. We achieve part of this goal by not implementing business logic in the PA, because, as everyone knows, business logic changes very often. Rare changes help us reduce risk, and the PA reaches a higher level of stability.
- The PA must be fast to satisfy our users and customers with a high-performance website. When we request the microservices (MSVs) from the PA, we want to do it in parallel; reactive programming and explicit threading are the techniques we focus on. This means the time to first byte (TTFB) equals the longest MSV runtime (e.g. 50 ms) + the PA runtime (e.g. 50 ms) = total runtime (100 ms).
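The parallel fan-out can be sketched with Python's asyncio as a stand-in for reactive programming; the component names and latencies below are made up purely to illustrate that the wait time tracks the slowest MSV rather than the sum of all MSVs.

```python
import asyncio
import time

async def call_msv(name: str, latency: float) -> str:
    """Simulate one microservice (MSV) call; sleep stands in for network time."""
    await asyncio.sleep(latency)
    return f"<div>{name}</div>"

async def assemble() -> tuple:
    start = time.perf_counter()
    # Fan out to all MSVs concurrently: total wait is roughly the maximum
    # latency (here 0.05 s), not the sum of all latencies (0.10 s).
    fragments = await asyncio.gather(
        call_msv("header", 0.05),
        call_msv("teaser", 0.03),
        call_msv("footer", 0.02),
    )
    elapsed = time.perf_counter() - start
    return "\n".join(fragments), elapsed

html, elapsed = asyncio.run(assemble())
```

With sequential calls `elapsed` would approach the sum of the latencies; with the concurrent fan-out it stays close to the slowest single MSV, which is exactly the TTFB argument above.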
Asset handling is one of the main challenges in a distributed environment.
One approach is to write all the asset links into the web page. The request for an asset then goes through the MSV itself, which delivers the asset to the browser. The assets can then be implemented independently, so our goals are reached, right?
This approach has limitations:
- We would have to allow the MSVs to be accessed from the outside world, which would otherwise not be necessary.
Ok, the first approach does not satisfy our needs.
A second approach is to deliver all the assets via the protocol from the MSV to the PA. The PA can then handle the assets. During development the assets can be linked (faster development overview); in production they can be bundled (faster delivery). Additionally, it is possible to embed the header assets and to bundle only all the others (very fast delivery of some aspects). The header is then displayed to the user very quickly, while the bundled assets load in parallel. For asset updates, a PA-integrated asset manager must add hash values to the asset names. When an asset's code base changes, the manager can compare the new hash value with the existing one and decide whether to replace the current version with the newest one.
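A sketch of such content-hash versioning: the naming scheme (`style.<hash8>.css`) is our assumption, but the mechanism is generic – the hash changes exactly when the asset's bytes change, so the asset manager can detect updates by comparing names.

```python
import hashlib
from pathlib import PurePosixPath

def hashed_asset_name(path: str, content: bytes) -> str:
    """Embed a short content hash in the asset file name, e.g. style.3a2b1c0d.css."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    p = PurePosixPath(path)
    return f"{p.stem}.{digest}{p.suffix}"

old = hashed_asset_name("css/style.css", b"body { color: red }")
new = hashed_asset_name("css/style.css", b"body { color: blue }")

# Different content => different name, so the manager knows to replace the asset.
assert old != new
# Same content => same name, so unchanged assets stay cached by the browser.
assert old == hashed_asset_name("css/style.css", b"body { color: red }")
```

A pleasant side effect of hashed names is that assets become immutable and can be cached aggressively, since an update always ships under a new name.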
When a production error occurs, you can pass a special GET parameter to the PA to override bundling for a given request with the embedded-link mode, enabling better bug analysis. It is also possible to mix the two approaches, e.g. header assets are embedded directly per link in the page while all other assets are bundled and loaded later, thereby speeding up page rendering. Alternatively, the components can decide how the Page Assembler should handle their assets, because they or their teams know best!
Configuration is also a big challenge for the PA. We therefore propose an additional microservice, a Configuration Service, with the following tasks:
- Determine which page should be assembled for each request (identifying the configuration).
- Determine which components are on that page (identifying components).
The PA requests a Page Configuration by sending the request URI as a parameter to the Configuration Service. The service responds with a Page Configuration object containing information about the page’s layout container structure and embedded components. The PA uses the layout structure as a guide to set up the HTML page structure and to contact the necessary MSVs to embed their resources. The result is sent to the user as a web page.
The Page Configuration is owned, managed and changed by the marketing department of each shop, so the developers only develop the components. Through extensions to the Page Configuration it is also possible to handle topics such as A/B testing and personalisation.
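What a Page Configuration and the PA's walk over it might look like can be sketched as follows. The field names (`page`, `layout`, `container`, `components`) and the example URI mapping are hypothetical, chosen only to show the layout-guided assembly described above.

```python
# Hypothetical Page Configuration, as the Configuration Service might return it
# for a given request URI. All field names are illustrative assumptions.
page_config = {
    "page": "product-list",
    "layout": [
        {"container": "header", "components": ["navigation", "search"]},
        {"container": "main",   "components": ["product-grid"]},
        {"container": "footer", "components": ["legal-links"]},
    ],
}

def render(config: dict, fetch) -> str:
    """Walk the layout containers and embed each component's fetched fragment."""
    parts = []
    for box in config["layout"]:
        inner = "\n".join(fetch(name) for name in box["components"])
        parts.append(f'<div id="{box["container"]}">\n{inner}\n</div>')
    return "\n".join(parts)

# Stand-in for the real MSV calls: return a placeholder fragment per component.
html = render(page_config, lambda name: f"<div>{name}</div>")
```

Because the configuration is plain data, marketing can rearrange containers or swap components without touching any code, which is exactly the decoupling the platform aims for.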
Some questions we will spend research time on next:
- How do we migrate to the new architecture?
- How do we deploy the MSVs in zero downtime?
- How do we do continuous deployment in this architecture? And which stages do we need for this continuous deployment pipeline?
- How do we orchestrate microservices in the cloud with autoscaling?
- How can we migrate into the cloud to dynamically scale MSVs which are under heavy load?
In further articles we will explain our research results on these questions in detail. Currently we are exploring cloud orchestration using Docker and Kubernetes. We are trialling OpenShift for this purpose to see how it helps us deal with orchestration models, hosting, scalability, security, and other tasks not directly related to development.