In this blog post, I describe an experiment that demonstrates one of the many advantages the Cloud Foundry environment can offer you: it shows how to use existing resources more efficiently and, therefore, how to lower your infrastructure costs.
I recently read a well-written blog post by ivanfemia, “Why SAP Cloud Platform is a real Cloud Foundry Platform”, in which he showed how easy it is to deploy a hello-world application (written in Golang) on SAP Cloud Platform. This is only one showcase of the power of the Cloud Foundry application runtimes.
Now you might wonder: “Why should I care about the different application runtimes in Cloud Foundry? My apps are written in <insert your favorite programming language> and they work fine.”
Well, each programming language (or at least most of them) was designed for specific use cases and performs extremely well on those tasks. That doesn’t mean it is suited for all tasks. So yes, your apps might run fine, but they might also waste computational resources, which eventually costs you money.
This blog post is about a small experiment I conducted to test this hypothesis (I also drew some inspiration from Tim Nolet).
I Experiment
Apps
I’ve implemented the same program in three very popular languages: Node.js, Java, and Go. The app is a simple web server with two HTTP endpoints: one endpoint simply returns a random number (not much computation needed, right?), while the other returns the first 75 Fibonacci numbers (computationally more intense). The code is implemented in the same manner in all languages (as far as that is possible). I chose these simple tasks, same as Tim, for one specific reason: I didn’t want to compare the performance of web frameworks X and Y. I only wanted to compare the application runtimes and discover how they behave under load. A rough sketch of the app follows below.
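To give you an idea of the apps' shape, here is a minimal sketch of the Go variant, assuming the standard net/http package. The endpoint paths and the iterative Fibonacci helper are illustrative; the real code lives in the repository linked below.

```go
package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"os"
)

// fibonacci returns the first n Fibonacci numbers.
// Fib(75) is about 2.1e15, so it still fits comfortably into an int64.
func fibonacci(n int) []int64 {
	nums := make([]int64, n)
	for i := range nums {
		if i < 2 {
			nums[i] = int64(i)
		} else {
			nums[i] = nums[i-1] + nums[i-2]
		}
	}
	return nums
}

func main() {
	// Cheap endpoint: a single random number, hardly any computation.
	http.HandleFunc("/random", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%d", rand.Int63())
	})

	// Expensive endpoint: computes the first 75 Fibonacci numbers per request.
	http.HandleFunc("/fibonacci", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%v", fibonacci(75))
	})

	// Cloud Foundry injects the port to listen on via the PORT env variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

The Node.js and Java variants expose the same two endpoints, so the load generator can hit all three apps in exactly the same way.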
Code
You can find the code of all three applications (including instructions on how to deploy them) on GitHub.
Test
I deployed the apps with a simple 'cf push' to my trial landscape of the SAP Cloud Platform and performed a load test with the 'loadtest' npm module. Each run triggered 30,000 requests, while the number of concurrent requests was varied. These load tests were performed separately for both endpoints (random number and first 75 Fibonacci numbers). I used 'cf app <appname>' to observe how the CPU utilization and memory consumption of the applications changed during the experiment.
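For reference, one run looked roughly like this ('-n' sets the total number of requests and '-c' the concurrency level of the loadtest module; the app name and URL are placeholders):

```sh
# Deploy the app (run from the app's directory, once per implementation)
cf push

# Fire 30,000 requests at one endpoint, here with 50 concurrent requests
loadtest -n 30000 -c 50 https://<appname>.cfapps.<region>.hana.ondemand.com/fibonacci

# Watch CPU utilization and memory consumption while the test runs
cf app <appname>
```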
II Results
Memory / Disk consumption
To me, the most surprising finding was the memory / disk consumption. I knew that the Java Virtual Machine (JVM) needs to reserve a lot of RAM just to start up (even though the actual program might not need it), while Go is very economical with resources. Still, it was astonishing to see how little RAM the Go app actually needs to run (and bear the load of 30k requests):
These are the maximum values that occurred during the test.
CPU Utilization
The following chart shows the CPU utilization over the course of the experiment. We can see that the requests to the ‘Fibonacci’ endpoint eventually drive the CPU utilization of all apps to 100%. It’s worth mentioning that the Java app reaches this limit only at higher stress levels, whereas the Node implementation maxes out quite early.
The mean latency shows a similar behavior. The mean latency of the Node app (Fibonacci endpoint) grows almost linearly after its CPU utilization reaches 100%, whereas the Go and Java implementations keep an almost constant mean latency.
As a consequence of the previous charts, we can also see that the throughput (served requests per second) initially grows almost linearly with the number of concurrent requests. Once the CPU utilization hits 100%, the throughput stops growing and might actually drop (as we can see in the case of the Fibonacci endpoint of the Node implementation).
III Conclusion
We have seen that the sample implementations in Java, Node.js, and Go show very different behaviors during the load test. While the Go app is very economical with memory and disk, the Java app had by far the largest memory and disk consumption. The Node app reached the maximum CPU utilization very early in the experiment, whereas the Java app handled the load with the lowest CPU utilization.
All three apps reacted with longer response times and a higher average latency once the CPU utilization climbed over 90%.
It’s crucial to keep in mind that a CPU utilization of 100% “only” results in slower response times. Exceeding the memory limit, however, crashes the application!
If we wanted all implementations to bear the same load, we would need a faster CPU or more memory for certain implementations. This upscaling results in a higher total cost of ownership! It shows that the question of the optimal programming language is not only a technical decision, but also a business decision.
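To make this tangible: on Cloud Foundry, the memory and disk quotas are assigned per application, for example in its manifest.yml, and those quotas are what drive the bill. The values below are purely illustrative, mirroring the tendency observed above:

```yaml
applications:
- name: go-app
  memory: 64M       # the Go app got by with very little RAM
  disk_quota: 256M
- name: java-app
  memory: 1G        # the JVM reserves far more just to start up
  disk_quota: 512M
```

A runtime that fits into a smaller quota therefore translates directly into lower infrastructure cost.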
Luckily, it doesn’t matter what the answer to this question is: SAP Cloud Platform is able to run the most efficient implementation of your application or (micro-)service.
Disclaimer: The purpose of this experiment is NOT to bash any programming language / runtime or to tell you which one is the best. The main conclusion is:
Different runtimes result in very different resource consumption behaviors. This post should raise awareness that there is no single best programming language. The choice of runtime should be evaluated carefully before one starts a new project.
I hope you enjoyed reading this blog post and I would appreciate it if you could leave me your thoughts on this in the comments below.
Update: Compared to the previous version of this blog post (in which I used express.js instead of the low-level http module), most results remain unchanged. The only big difference is that the CPU utilization of the 'node/random' endpoint no longer exploded with a growing load, which means the application was able to serve more requests faster.
I also noticed that the memory / disk consumption of the Node app decreased, since the external dependencies have been removed and no longer consume memory.