Build a microservice-based ecommerce web application with Kubernetes
Learn how to build a distributed, scalable ecommerce web app using microservices on Kubernetes.
Overview: Building a microservice ecommerce web app
Learn about architecting an online retail store web application that is scalable and well-equipped to handle spikes in traffic at a global scale. Using a microservice architecture, the application is deployed as multiple small services, each handling a specific aspect and task. Learn about data consistency and scaling in distributed web applications using a Redis-based multi-cluster database.
Backend for a microservice content-driven app
This application uses a microservice backend consisting of 11 services, each focusing on a specific part of the retail app and built independently with the framework or language best suited to its task, including:
- Shopping cart service (C# and .NET)
- Payments service (Node.js)
- Product recommendations service (Python)
- Transaction and data storage (Redis database, discussed below)
Each service is tested independently using the testing features of its own language, framework, and libraries.
Microservice architectures depend on reliable communication between services; here, that communication is handled through gRPC bindings implemented in each service.
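The snippet below is a minimal sketch of what such a service-to-service gRPC call can look like from a Go service. The proto package, service name, message fields, and the cart service address and port are hypothetical placeholders, not the demo application's actual bindings.

```go
// Minimal sketch of a gRPC call between two services.
// The imported bindings (pb) and the cartservice address are assumptions.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/demo/genproto" // hypothetical generated gRPC bindings
)

func main() {
	// Inside the cluster, services address each other by Kubernetes DNS name.
	conn, err := grpc.Dial("cartservice:7070",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("could not connect to cart service: %v", err)
	}
	defer conn.Close()

	client := pb.NewCartServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Add one item to the user's cart; the payload mirrors the proto message.
	_, err = client.AddItem(ctx, &pb.AddItemRequest{
		UserId:    "user-123",
		ProductId: "sku-42",
		Quantity:  1,
	})
	if err != nil {
		log.Fatalf("AddItem failed: %v", err)
	}
}
```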
It is built on top of Kubernetes, which handles the services' deployment, scaling, and orchestration.
The application consists of three Kubernetes clusters: two for the regional distribution of microservices, and one config cluster for configuration and load balancing between regions.
Learn about designing the backend for a content-driven web application built around a microservice architecture.
Hosting a microservice content-driven app
This application is built with Kubernetes, which automates each containerized, independent service's deployment, scaling, and management.
It is deployed to Google Kubernetes Engine (GKE) with Autopilot enabled, which manages deployment, configuration, scaling, and security. Autopilot automatically scales up the deployment when a spike in traffic requires additional resources (such as CPU, memory, or other resources) to meet performance demands.
Incoming requests are handled by a Google Cloud load balancer that distributes them between the two regional Kubernetes clusters through the Multi-Cluster Ingress service. Each cluster serves its own frontend and a complete stack of services, so the clusters operate independently of each other; this regional distribution minimizes latency for nearby customers and improves resilience in case of issues with a cluster or location. The setup can be scaled further by adding regions and clusters to move even closer to customers and provide more resilience and reliability.
The frontend of this application is served by a dedicated, containerized Go-based service that uses server-side rendering and templating to take advantage of this fast, scalable backend and minimize the number of requests required to render content.
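As a rough illustration of this approach (not the demo application's actual frontend code), a Go service can render pages on the server with the standard net/http and html/template packages; the template path and product fields below are assumptions for the sketch.

```go
// Minimal sketch of server-side rendering in a Go frontend service.
package main

import (
	"html/template"
	"log"
	"net/http"
)

type Product struct {
	Name  string
	Price string
}

// Template file name is illustrative.
var homeTmpl = template.Must(template.ParseFiles("templates/home.html"))

func homeHandler(w http.ResponseWriter, r *http.Request) {
	// In the real app, product data would come from backend services over gRPC;
	// it is hard-coded here for brevity.
	products := []Product{{Name: "Mug", Price: "$8.99"}}
	if err := homeTmpl.Execute(w, products); err != nil {
		http.Error(w, "failed to render page", http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/", homeHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```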
Learn more about hosting and serving options for a microservice, content-driven web application.
Data Storage for a microservice content-driven app
This application uses a Redis in-memory database to store shopping cart data while a customer is browsing the store. The Redis instance is deployed in a single Kubernetes cluster for easy deployment and is accessed directly by a microservice in the same cluster. In a production environment, it is advisable to expand this deployment to improve resilience, security, and scalability. This could be accomplished with a fully managed database service external to the Kubernetes deployment, such as Memorystore (Redis), which handles scaling, availability, and management. (Learn more about updating the demo application to use an external in-memory store service.)
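The following minimal sketch shows how a Go microservice might read and write cart data in Redis using the github.com/redis/go-redis/v9 client. The service address, key layout, and expiry time are assumptions; switching to a managed Memorystore (Redis) instance is largely a matter of pointing the client at a different address.

```go
// Minimal sketch of cart storage in Redis from a Go service.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{
		Addr: "redis-cart:6379", // in-cluster service name, or a Memorystore endpoint
	})

	// Store the cart as a hash keyed by user: product ID -> quantity.
	key := "cart:user-123"
	if err := rdb.HSet(ctx, key, "sku-42", 1).Err(); err != nil {
		log.Fatalf("failed to write cart item: %v", err)
	}
	// Expire abandoned carts so memory is eventually reclaimed.
	rdb.Expire(ctx, key, 72*time.Hour)

	qty, err := rdb.HGet(ctx, key, "sku-42").Int()
	if err != nil {
		log.Fatalf("failed to read cart item: %v", err)
	}
	log.Printf("cart %s has %d of sku-42", key, qty)
}
```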
Although not included in this demo application, consider how your application stores and handles transactional data. Think about the type of data involved and how the application will access it. For example, online store purchases and order history data are best stored in a relational database. As with the in-memory data store, this could be deployed externally to the Kubernetes cluster so you do not have to manage it yourself. Cloud SQL is a managed relational database service that can be connected to Kubernetes clusters and securely accessed from microservices.
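As a sketch of this pattern, a Go service could persist orders to a PostgreSQL-compatible database such as Cloud SQL using the standard database/sql package with the pgx driver. The connection string, table, and columns below are illustrative assumptions; in a real cluster the connection would typically go through the Cloud SQL Auth Proxy or a connector library.

```go
// Minimal sketch of writing order data to a relational database.
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
)

func main() {
	// DATABASE_URL is an assumed environment variable holding the connection string.
	db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("failed to open database: %v", err)
	}
	defer db.Close()

	// Record a completed purchase; order history queries read from this table.
	_, err = db.Exec(
		`INSERT INTO orders (user_id, product_id, quantity, total_cents)
		 VALUES ($1, $2, $3, $4)`,
		"user-123", "sku-42", 1, 899,
	)
	if err != nil {
		log.Fatalf("failed to insert order: %v", err)
	}
}
```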
Alternatively, consider a system like Cloud Spanner, a fully managed, highly scalable, high-performance database. It provides strong, ACID-compliant transactions, supports schema changes without downtime, and includes a PostgreSQL interface to make migration and development easier.
Learn more about data storage options for a content-driven application, particularly about handling global distribution, consistency, and scaling.
Frontend for a modern ecommerce application with Bootstrap
The application's frontend is built using the Bootstrap framework for HTML, CSS, and JS, which provides components and templates for responsive web designs.
The framework includes well-tested components that, when used correctly, follow web accessibility standards. While frontend frameworks like this support customization through scripting, styling, and markup, it is important to confirm that any customizations and changes still meet accessibility standards and requirements.
Components from the Bootstrap library support best practices for web forms, which is crucial for an online retail store.
When using a frontend library or framework, follow its best practices for testing and optimizing performance to ensure it meets your expectations and requirements.
Learn more about designing and building the frontend for a modern ecommerce application.
Ecommerce web app deployed on Kubernetes
Explore the sample ecommerce application through a jump start solution - "Ecommerce web app deployed on Kubernetes."
As you explore the application, consider the key parts discussed here, including:
- Frontend built with Bootstrap and server-side rendering.
- Different microservices that make up the backend.
- Communication between different microservices via gRPC.
- Redis database that stores transactions and data.
- Three-cluster Kubernetes configuration, including two regions and a config cluster that manages the Kubernetes deployment.
- Deployment of clusters on Google Kubernetes Engine with Autopilot, which automatically scales and manages the deployment.
- Incoming request handling and load balancing through Multi-Cluster Ingress.
To get started, open the article and deploy the application. Alternatively, you can explore the code on GitHub or try it directly in a Google Cloud project.
Check out the Terraform and Kubernetes configuration on GitHub, the microservices application code, or try it out directly in your Google Cloud project.