Any successful owner of a digital product will tell you that building a web application is only the first part of the journey. The bigger challenge is ensuring that it has the capacity to scale: being ready for a far larger group of users than your test group, and for a system that can handle them. How do you achieve this? By thinking about application scalability even before development begins. In the following blog post, I'll talk you through what a scalable web application really means and how to overcome the major scaling challenges.
First things first - what does ‘scalability’ really mean?
The simplest definition of 'scalability' is the potential of your application to cope with an increasing number of users simultaneously interacting with it. Ultimately, you want it to grow and be able to handle more and more requests per minute (RPM). A number of factors play a part in ensuring scalability, and it's worth taking each of them into consideration.
The top challenges when it comes to scalability and how to overcome them:
Overloading the web server
With an increase in the number of clients simultaneously connecting to one web server, the server will eventually run out of CPU and RAM and cease performing. So what can be done to remedy this? Quite simply, server resources need to be increased in order to accommodate more clients. This can be done in a number of ways.
Vertical scaling: this is either the addition of resources to the existing server, or its replacement with a more powerful one. The architecture remains the same, but it's important to note that this isn't a permanent fix. Why? Because even the upgraded resources will eventually run out, and vertical scaling would have to continue indefinitely. This is why you need a better long-term fix. Enter horizontal scaling.
Horizontal scaling: this involves the addition of more servers which serve the same purpose as the first one. As an application's popularity grows and the current servers run out of resources, we add more servers to serve the incoming clients.
A combination of both horizontal and vertical scaling: The above-mentioned scaling approaches are not mutually exclusive and can certainly be used in combination. Any application is capable of vertically scaling up, horizontally scaling out, neither, or both. You may well have a scenario in which parts of your application only vertically scale up, while at the same time other parts might horizontally scale out.
Failure to effectively distribute traffic
When you have multiple servers in place, you now need to make sure that the load is evenly balanced between them. The best way of doing this is through the use of a load balancer, which acts as an intermediary between the clients and the servers. It registers server IP addresses and can route traffic from the clients to the servers.
A load balancer can route traffic between the servers using a number of different methods, a popular one being round robin, which sends requests to the servers in a repeating, cyclical order. But beware: having only one load balancer means that if it fails, the whole structure breaks down. To avoid this single point of failure, it's worth running two or three load balancers at a time. With so many cloud services currently on offer, it's relatively easy to set up a load balancer that works.
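The round-robin policy mentioned above is simple enough to sketch in a few lines. The backend addresses below are placeholders, and a real load balancer would also track server health; this only illustrates the cyclical rotation itself:

```python
from itertools import cycle

# Hypothetical backend pool; the addresses are placeholders.
SERVERS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

# cycle() yields the servers in a repeating, cyclical order,
# which is exactly the round-robin policy described above.
_rotation = cycle(SERVERS)

def next_server() -> str:
    """Return the backend that should receive the next request."""
    return next(_rotation)

if __name__ == "__main__":
    # Six requests are spread evenly: each server receives two.
    for _ in range(6):
        print(next_server())
```

Production load balancers such as NGINX or HAProxy implement this (and smarter variants, like least-connections) out of the box, so in practice you configure the policy rather than write it.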
Manual scaling vs Autoscaling
In situations where you find that the above methods of manual scaling are insufficient, you may want to resort to autoscaling.
Auto-scaling is a way to automatically increase or decrease the number of compute resources that are being allocated to your application based on its needs at given moments in time. In a traditional (non-Cloud) hosting environment, there are limitations around hardware resources, but with cloud computing, all of this can be automated.
Many people associate the term 'autoscaling' with handling sudden spikes in traffic, but in reality, the process is equally beneficial over the entire lifetime of your setup, which can be years. The main thing to remember is that it is now possible to design a scalable architecture that will automatically scale up or scale down to meet your needs over the lifetime of your setup, regardless of how quickly your application changes and grows.
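At its core, an autoscaler repeatedly compares a load metric against thresholds and adjusts the instance count. The sketch below shows that decision logic only; the thresholds and bounds are hypothetical, and in practice a cloud provider's autoscaling service (e.g. AWS Auto Scaling) runs this loop for you:

```python
# A minimal sketch of an autoscaling decision, assuming average CPU
# utilisation as the metric. All thresholds here are illustrative.

def desired_instances(current: int, cpu_percent: float,
                      scale_up_at: float = 70.0,
                      scale_down_at: float = 30.0,
                      min_instances: int = 2,
                      max_instances: int = 10) -> int:
    """Decide how many instances to run, based on average CPU load."""
    if cpu_percent > scale_up_at:
        current += 1          # add capacity under heavy load
    elif cpu_percent < scale_down_at:
        current -= 1          # release capacity when traffic drops
    # Never drop below the safety floor or exceed the budget ceiling.
    return max(min_instances, min(current, max_instances))

print(desired_instances(current=3, cpu_percent=85.0))  # 4: scales up
print(desired_instances(current=3, cpu_percent=10.0))  # 2: scales down
```

Keeping a minimum of two instances also preserves redundancy during quiet periods, which ties back to the single-point-of-failure concern discussed earlier.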
Scaling the front-end using cache tiers
So we've spoken about scaling the backend in detail. Now it's time to consider how to scale the front-end. The purpose of Content Delivery Network (CDN) caching is to provide availability and performance for content served online. There are a few different ways in which this can be done, including the provision of Global Traffic Management (GTM) services to route requests to the closest or fastest datacenter, or the use of edge serving.
How does this work? When a user from the other side of the world tries to reach your site in your datacenter, the request travels across several different hops. Each hop is a router connected to the Internet, and it therefore adds a certain amount of latency - and this latency can be minimised through the use of Edge caching.
This is where a CDN will provide a network of geo-distributed servers that will reduce time to load by moving the serving of the content closer to the end user.
Inefficient database management
Database management is key to effective scaling, and this begins with the selection of a strong database engine. Next, you will want to design the best possible schema for your app in order to increase the number of transactions per second it can handle. It's worth exploring different options here, including DaaS (database as a service).
We previously discussed risk management through the use of several servers instead of one. The same rule applies to database management, and the best ways of introducing multiple database servers are replication and sharding.
Replication concerns itself with making copies of data. The typical setup is a master database from which data is copied to subordinate databases connected to it over the network. Every write executed on the master database is also executed on the subordinate databases, minimizing the risk of data loss and spreading read traffic.
Sharding is essentially the splitting of your data into smaller subsegments, spread across distinct and separate ‘buckets’. This results in increased performance because it reduces the impact on each of the individual resources, allowing them to share the burden rather than having it all in one place.
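The key mechanism behind sharding is a stable mapping from a record's key to one of the 'buckets'. A common approach is hash-based routing, sketched below; the shard names are placeholders, and in practice each would be a separate database server:

```python
import hashlib

# Hypothetical shard names; in a real system each maps to its own
# database server holding one subsegment of the data.
SHARDS = ["users_shard_0", "users_shard_1", "users_shard_2", "users_shard_3"]

def shard_for(key: str) -> str:
    """Map a key (e.g. a user ID) to one of the shards.

    hashlib is used instead of Python's built-in hash() because it
    produces the same digest across processes and restarts, so a
    given key always routes to the same shard.
    """
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

One design caveat worth knowing: with simple modulo routing, changing the number of shards remaps most keys, which is why production systems often use consistent hashing instead.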
Replication and sharding used in combination - Replication and sharding can be used together to good effect, as they can lead to both high availability and improved performance. Using sharding, there will be a number of instances with particular data based on keys. These instances can then be individually replicated to produce a database which is both replicated and sharded.
Poorly performing queries
Web applications built on a relational engine and schema rely on vast numbers of database queries. For a small-scale app handling a few hundred users, this querying is fairly unproblematic, but with millions of users the load problem once again kicks in. What's the answer here, then? Using a cache.
Take a logged-in user as an example: without a cache, a database query runs on every request to fetch that user. With a cache, we first try to get the user from the cache. If the user is there, there is no need to query the database; only if the user is not do we query the database and store the result for next time.
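This lookup order is known as the cache-aside pattern, and the flow above can be sketched directly. The in-memory dict stands in for a real cache such as Redis, and `fetch_user_from_db()` is a hypothetical placeholder for the actual database query:

```python
# A minimal cache-aside sketch. A plain dict plays the role of the
# cache; a production system would use Redis or Memcached with TTLs.
cache = {}

def fetch_user_from_db(user_id):
    # Placeholder for the expensive database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    # 1. Try the cache first.
    user = cache.get(user_id)
    if user is not None:
        return user
    # 2. On a miss, fall back to the database...
    user = fetch_user_from_db(user_id)
    # 3. ...and populate the cache so the next request is cheap.
    cache[user_id] = user
    return user
```

One detail this sketch omits is invalidation: when the user record changes in the database, the cached copy must be updated or evicted, otherwise stale data is served.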
Code faults and lack of updates
Sometimes the most basic tasks might slip our mind, which is a shame, as they play a big part in ensuring application scalability. It’s therefore worth incorporating the below points into the day-to-day work of your developers and performing regular checks to ensure that they are happening.
Code cleanliness - Make sure that you separate frontend and backend layers in your application and detach background jobs from the main system. Also, select design patterns wisely.
Faults in app architecture - Any faults in the architecture of your application will of course have a negative impact when it comes to scaling, so it’s always worth doing regular reviews of this.
Keep updating using the latest tools - keeping each tool up to date helps to avoid problems resulting from outdated parts of your system.
Conclusion
There are many factors that play a part in effective application scalability, which result in our applications being better able to cope with increased demand. Importantly, we can also make better use of our resources by downscaling during periods of lower traffic. I hope that the above tips will help you when you're looking to take your application to the next level.
Need further support with your app? Want to find out more about our work across a range of different frameworks? Why not drop our experts an email on hello@10clouds.com or visit our website: www.10clouds.com