What do you expect from the Google Cloud Platform? Speed? Simplicity? Good documentation?

At least on the last point there is, unfortunately, still room for improvement in many areas.
In the following I would like to show you how to get the most out of Google's Backint implementation and achieve very good data rates.

The starting point is a 2 TB SAP HANA 2.0 SPS 04 system running on SLES 12 SP4.

Technical Setup


As a first step, we need to take care of the following things:

  • create a Google Cloud Storage bucket

  • create a technical service account that is allowed to write into the bucket

  • install the Backint agent on the HANA VM and configure it

  • optimize the SAP HANA Backint performance


The Cloud Storage Bucket


Google offers three main storage classes; information about the individual classes and their pricing can be found in Google's Cloud Storage documentation. In contrast to the "local" hard disks, there is no information about the speed of each class.
If you look at the names, you might think that Regional Storage should be faster than Coldline Storage. But that is not the case, as we learned after checking with our colleagues at Google! All storage classes are designed for at least 600 MB/s and thus achieve completely sufficient speeds, even for larger systems. [This paragraph contains an error, please note the update below!]

In our example, we have created a Coldline Bucket because it has the lowest cost.
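As a small sketch, such a bucket can be created with gsutil; the project name, region and bucket name used here are placeholders and need to be adjusted to your environment:

    # Create a regional Coldline bucket for the HANA backups
    # (project, location and bucket name are examples)
    gsutil mb -p my-hana-project -c coldline -l europe-west3 gs://my-hana-backup-bucket

The chosen storage class only affects the cost side (see the update at the end of this post), not the achievable throughput.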


Service Account


Google's setup recommendation says that a service account is only needed if Backint is supposed to write into a bucket created in another project.
In my experience, creating a service account is advisable in any case. Without this account we ran into problems with the database backups again and again: almost half of the errors could be traced back to missing permissions, although according to the documentation the project admin should have sufficient rights. Only after the service account was created did all backups run without problems.



We have named our service account after the target system and assigned it the "Storage Object Admin" role (roles/storage.objectAdmin).
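As a sketch, the service account, the role assignment and the key for the Backint agent can be created like this; the project name, account name and key file are placeholders:

    # Create the service account (all names are examples)
    gcloud iam service-accounts create hdb-backint --display-name "Backint for HANA" --project my-hana-project

    # Grant it the "Storage Object Admin" role (roles/storage.objectAdmin);
    # alternatively, grant the role only on the backup bucket instead of the whole project
    gcloud projects add-iam-policy-binding my-hana-project \
        --member "serviceAccount:hdb-backint@my-hana-project.iam.gserviceaccount.com" \
        --role "roles/storage.objectAdmin"

    # Create a JSON key and copy it to the HANA VM for the Backint agent
    gcloud iam service-accounts keys create backint-key.json \
        --iam-account hdb-backint@my-hana-project.iam.gserviceaccount.com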

Install the backint Agent


I will not waste many words on the installation, because Google has documented the whole process really well.

Since our findings, the documentation has been revised at this point and now also contains a note on optimization.
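For reference, at this point the agent's configuration file (parameters.txt) in our setup essentially only needs the bucket and, if you work with a key file, the service account key. A minimal sketch with placeholder names and paths; the exact parameter names and the file location depend on the agent version, so please check them against Google's documentation:

    #BUCKET my-hana-backup-bucket
    #SERVICE_ACCOUNT /usr/sap/HDB/home/backint-key.json

Note that, somewhat unusually, the options in parameters.txt are written as lines starting with '#'.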

But what performance do we get if we just set up backint?
In short, a very modest one: the first backup of the 2 TB system took 23 h 59 min. This is, of course, completely unacceptable.

What do we do now?

Optimize SAP HANA backint performance


The SAP HANA Operations Guide from Google says the following about the possibility of multistreaming backups:
"Multistreaming is useful for increasing throughput and for backing up databases that are larger than 5 TB, which is the maximum size for a single object in Cloud Storage."

I would use this feature not only for databases of 5 TB and more, but from the first GB on. The reason? The backup time of our hosted Solution Manager 7.2 (72 GB) could be reduced from one hour to 14 min.

We have achieved the best performance with the following settings.



  • Set DISABLE_COMPRESSION in the Backint configuration


This increased the size of the backup from 1.9 TB to exactly 2 TB, but this step alone already improved the performance a little (about 1 h).

  • Increase the SAP HANA backup parameters according to SAP Note 2657261


In contrast to the rather conservative values in the note, we have set the parameters as follows (see the sketch after this list):

  • data_backup_buffer_size = 8192

  • parallel_data_backup_backint_channels = 12
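A minimal sketch of how these settings can be applied; the hdbuserstore key BACKUP_KEY is a placeholder, and the location of the agent's parameters.txt depends on your installation:

    # 1) Backint agent: add the following option to parameters.txt
    #DISABLE_COMPRESSION

    # 2) SAP HANA: raise the backup parameters in global.ini (SYSTEM layer)
    hdbsql -U BACKUP_KEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('backup','data_backup_buffer_size') = '8192' WITH RECONFIGURE"
    hdbsql -U BACKUP_KEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('backup','parallel_data_backup_backint_channels') = '12' WITH RECONFIGURE"

Both parameters can of course also be changed via SAP HANA Studio or the HANA cockpit instead of hdbsql.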


And how has the performance improved now due to the adjustments?

The backup time decreased from 24 h to 1 h 22 min, and the data throughput increased to 428.29 MB/s. For comparison, the backup to a directly attached hard disk takes 1 h 51 min and delivers a data throughput of only ~304 MB/s. In my opinion, the throughput of the backup and restore process can certainly be improved further; after all, we are still about 170 MB/s away from the maximum possible values.
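If you want to check the effect on your own system, a backup can be triggered and measured via hdbsql, for example like this (the user store key and the backup prefix are placeholders; for a tenant database, run the statement in the tenant itself or use BACKUP DATA FOR <tenant> from the system database):

    # Trigger a full data backup through Backint
    hdbsql -U BACKUP_KEY "BACKUP DATA USING BACKINT ('FULL_BACKINT_TEST')"

    # Determine runtime and size of the last backups from the backup catalog
    hdbsql -U BACKUP_KEY "SELECT TOP 5 C.ENTRY_TYPE_NAME, C.UTC_START_TIME, C.UTC_END_TIME, ROUND(SUM(F.BACKUP_SIZE)/1024/1024, 0) AS SIZE_MB FROM M_BACKUP_CATALOG C JOIN M_BACKUP_CATALOG_FILES F ON C.BACKUP_ID = F.BACKUP_ID GROUP BY C.ENTRY_TYPE_NAME, C.UTC_START_TIME, C.UTC_END_TIME ORDER BY C.UTC_START_TIME DESC"

The throughput is then simply the backup size divided by the runtime.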
If I get further insights I will update this post.

I hope this short blog post helps some of you to improve your backup performance on the Google Cloud Platform.

Update regarding the selected storage class - 07.07.2020


In the paragraph about the Cloud Storage bucket I wrote that I chose a Coldline bucket because the performance does not differ between the storage classes. That is absolutely correct, but it is only one side of the coin.

The other side has a lot to do with the retention time of the backups. The different storage classes have minimum storage durations (Nearline: 30 days, Coldline: 90 days, Archive: 365 days), and deleting data before that period has passed costs money, because you are still charged as if the data had been stored for the full minimum duration.
For example, if your backup retention time is 30 days, Coldline is suddenly more expensive than Standard Storage, simply because you delete data from the bucket before the minimum duration and pay an early-deletion charge each time.

Below I have calculated this for 52 weeks with a weekly full backup of 1.5 TB, weekly log backups of 60 GB (very optimistic) and a retention time of 30 days. Here are the annual costs in US-$, without taking geo-redundancy into account:

Storage class        Yearly costs (US-$)
Standard Storage     2,095.55
Nearline Storage       805.98
Coldline Storage     2,096.35
Archive Storage      4,152.79


Cumulated costs for GCP Cloud Storage buckets in the HANA environment


Of course, if you have a retention time of 90 days, then Coldline will suddenly become interesting again, as the pure storage costs are much lower than with Nearline.

 