Kubernetes – taming the cloud

When you use Linux to deliver services to a business, those services need to be secure, resilient and scalable. Fine words, but what do we mean by them?

“Secure” means that users can access only the data they are entitled to, whether that is read-only or write access, and that no data is ever exposed to anyone who is not authorised to view it. Security is deceptive: you can think you have everything locked down, only to discover holes later. Designing security in from the start of a project is far easier than trying to retrofit it afterwards.

“Resilient” means that your services tolerate failures within the infrastructure. A failure might be a server disk controller that can no longer access any disks, making the data unreachable. Or it might be a network switch failing so that two or more systems can no longer communicate. In this context, a “single point of failure”, or SPOF, is any single component whose failure brings the service down. A resilient infrastructure has no SPOFs.

“Scalable” describes a system’s ability to handle spikes in demand gracefully. It also describes how easy it is to make changes to the system: adding a new user, increasing the storage, or moving the infrastructure from Amazon Web Services to Google Cloud – or even bringing it in-house.

Once your infrastructure grows beyond a single server, there are many ways to increase security, resilience and scalability. We’ll look at how these problems have traditionally been solved, and at the newer technology that is changing the face of big application computing.

To understand what’s possible today, it helps to look at how technology projects were traditionally implemented. Back in the old days – that is, more than ten years ago – businesses would buy or lease hardware to run all the components of their applications.

Even relatively simple applications, such as WordPress, have several components. WordPress needs a MySQL database, plus a web server such as Apache running PHP code. So you’d build a server, set up Apache, PHP and MySQL, install WordPress, and away you’d go.

By and large, that worked. It worked well enough that a great many servers are still configured exactly this way. But it wasn’t perfect, and two of the bigger problems were resilience and scalability.

The lack of resilience meant that any significant problem on the server would cause a loss of service. Obviously a catastrophic failure would mean no website, but there was also no room for scheduled maintenance without taking the site down. Even installing and activating a routine security update for Apache would require a few seconds’ outage.

The resilience problem was largely solved by building “high-availability clusters”. The idea was to have two servers running the website, configured so that the failure of either one didn’t take the site down. The service being provided was resilient, even though the individual servers were not.
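The principle can be illustrated with a toy sketch in Python (all the names here are hypothetical, and real clusters use dedicated software such as a load balancer or heartbeat daemon): traffic simply goes to the first server that passes a health check.

```python
# Toy illustration of high-availability failover (not production code).
# Two servers back one website; requests go to the first healthy one.

def pick_server(servers, is_healthy):
    """Return the first healthy server, or None if all are down."""
    for server in servers:
        if is_healthy(server):
            return server
    return None

servers = ["web1", "web2"]

# Normal operation: both servers up, traffic goes to web1.
assert pick_server(servers, lambda s: True) == "web1"

# web1 fails: the service stays up because web2 takes over.
assert pick_server(servers, lambda s: s != "web1") == "web2"
```

The service survives the loss of either server, which is exactly what makes the pair resilient even though neither machine is.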

Slashdot effect
The scalability problem is trickier. Suppose your WordPress site gets 1,000 visitors a month. Then one day your business gets a mention on Radio 4 or breakfast TV.

Suddenly, you get more than a month’s worth of visitors in 20 minutes. We’ve all heard stories of websites “crashing”, and this is usually why: a lack of scalability.

The two servers that helped with resilience could manage a bigger workload than one server alone, but that is still limited. You’d be paying for two servers 100 per cent of the time, while most of the time one of them could handle the load on its own.

It may well be that a single server could run your site most of the time. Then John Humphrys mentions your business on Today and you need ten servers to handle the load – but only for a few hours.
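Some back-of-the-envelope arithmetic shows why sizing for the peak is so wasteful. The prices and figures below are made up for illustration:

```python
# Hypothetical pricing: $0.10 per server-hour, 30-day month (720 hours).
rate = 0.10
hours_in_month = 720

# Sized for the peak: ten servers running all month, spike or no spike.
peak_sized = 10 * hours_in_month * rate

# Pay-as-you-go: one server all month, plus nine extras for a 4-hour spike.
pay_as_you_go = (1 * hours_in_month + 9 * 4) * rate

print(f"Peak-sized:    ${peak_sized:.2f}")    # $720.00
print(f"Pay-as-you-go: ${pay_as_you_go:.2f}") # $75.60
```

At these illustrative rates, permanently provisioning for the spike costs nearly ten times as much as paying only for the hours you actually need.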

The better solution to both the resilience and scalability problems is cloud computing. Set up one or two server instances – the small virtual servers that run your application – on Amazon Web Services (AWS) or Google Cloud, and if an instance fails for any reason it will be restarted automatically.

Set up auto-scaling correctly and, when Mr Humphrys causes the workload on your web server instances to soar, additional instances are automatically started to share the load. Later, as interest dies down, those extra instances are shut down, and you pay only for what you use. Perfect… or is it?

While the cloud solution is far more flexible than a traditional standalone server, there are still problems. Updating all the running instances in the cloud isn’t trivial.

Developing for the cloud brings challenges too: the laptops your developers use may be similar to the cloud instances, but they are not the same. If you commit to AWS, migrating to Google Cloud is a complex undertaking. And suppose, for whatever reason, you simply don’t want to hand your computing over to Amazon, Google or Microsoft?
