
How to tune ASE IO performance in a docker container?

former_member232292
Participant

Dear All,

I want to create a docker image for Sybase ASE/IQ, and I have run into a problem: while the DB is running, the kworker threads generate far more extra IO on the host than the DB itself, and this impacts IO performance heavily. I can't find a solution for it. Please kindly advise. Here are the details --

I'm using an SLES11 image from Docker Hub -- https://hub.docker.com/r/darksheer/sles11sp4 -- and installed Sybase ASE 15.7 SP141 in a container based on it. While creating the DB server, I found the following --
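(Roughly how the container was set up -- the container name and options below are placeholders for illustration, not the exact commands I ran:)

docker pull darksheer/sles11sp4
docker run -it --name ase157 --hostname ase157 darksheer/sles11sp4 /bin/bash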

1. The srvbuildres command runs very slowly -- it usually takes only 3-5 minutes to finish on the host, but it takes 1.5 hours to complete in the docker container.

2. I used "top -d 1" and "iostat -x -k 1" to check how busy the IO was -- iowait stays low, but svctm is high, which means each IO is very slow.

3. I used pidstat on the host to trace the IO requests and found that most of the IO was consumed by kworker threads (a sketch of the monitoring commands follows below) --
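For reference, the monitoring was done with invocations roughly like these (the redirection into d2_pid.log is an assumption on my part; it is the log analyzed further down in this thread):

top -d 1                              # overall CPU and wait states, 1 s interval
iostat -x -k 1                        # per-device stats; watch svctm and %util
pidstat -d 1 | tee d2_pid.log         # per-process disk IO, saved for later analysis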

Here's a sample taken while I was running a small test -- a "create database" -- in the docker container --

08:06:05   UID    PID   kB_rd/s   kB_wr/s  kB_ccwr/s  iodelay  Command
08:06:06  1105  66252      0.00     16.00       0.00        0  dataserver
08:06:06     0  66528      0.00      0.00       0.00        1  kworker/2:2
08:06:06     0  66574      0.00   2400.00       0.00        0  kworker/1:0
08:06:06     0  66584      0.00     96.00       0.00        0  kworker/u256:1

08:06:06   UID    PID   kB_rd/s   kB_wr/s  kB_ccwr/s  iodelay  Command
08:06:07     0  65159      0.00    720.00       0.00        0  kworker/u256:7
08:06:07  1105  66252      0.00    112.00       0.00        0  dataserver
08:06:07     0  66528      0.00  11696.00       0.00        2  kworker/2:2
08:06:07     0  66530      0.00  14368.00       0.00        0  kworker/3:1
08:06:07     0  66573      0.00   4768.00       0.00        0  kworker/0:2
08:06:07     0  66574      0.00   4960.00       0.00        1  kworker/1:0
08:06:07     0  66584      0.00    848.00       0.00        0  kworker/u256:1

08:06:07   UID    PID   kB_rd/s   kB_wr/s  kB_ccwr/s  iodelay  Command
08:06:08     0  65159      0.00   2304.00       0.00        0  kworker/u256:7
08:06:08  1105  66252      0.00    208.00       0.00        0  dataserver
08:06:08     0  66528      0.00  18464.00       0.00        0  kworker/2:2
08:06:08     0  66530      0.00  20608.00       0.00        1  kworker/3:1
08:06:08     0  66573      0.00   2256.00       0.00        0  kworker/0:2
08:06:08     0  66574      0.00  18256.00       0.00        0  kworker/1:0
08:06:08     0  66584      0.00    192.00       0.00        0  kworker/u256:1

The IO from the kworker threads is much higher than from the DB process ("dataserver") itself, and the "create database" took about 5 minutes to complete. I ran the same test directly on the host, where pidstat shows --

eisen-suse11:~ # pidstat -d 1
Linux 3.0.101-63-default (eisen-suse11)  01/19/22  _x86_64_

13:30:07   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command
13:30:08   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command
13:30:09   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command
13:30:10  4860     0.00     4.00       0.00  isql
13:30:10   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command
13:30:11  4845   404.00   404.00       0.00  dataserver
13:30:11   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command
13:30:12   PID  kB_rd/s  kB_wr/s  kB_ccwr/s  Command

So without that kworker activity, the same "create database" command completes in just 1 second... I can't find any documentation on this; I only found how to limit a container's CPU/memory/GPU resources -- https://docs.docker.com/config/containers/resource_constraints/ -- but nothing about IO tuning. Please kindly help. Thanks in advance for any ideas.
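For what it's worth, docker run does have per-device block IO throttling flags (the device path and limits below are placeholders), but they only cap the container's own IO; they would not reduce the host-side kworker writeback, so they do not look like the answer here:

docker run -d --name ase157 \
    --device-write-bps /dev/sda:50mb \
    --device-write-iops /dev/sda:1000 \
    darksheer/sles11sp4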

Regards

Eisen

former_member232292
Participant

I analyzed the pidstat output and found --

docker:/tmp # cat d2_pid.log |grep dataserver|awk 'BEGIN{io=0} {io=io+$5} END{print io}'

897640

docker:/tmp # cat d2_pid.log |grep kworker|awk 'BEGIN{io=0} {io=io+$5} END{print io}'

5.21821e+07
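The same totals can also be computed in a single pass, e.g. with something like this (assuming the same log layout, i.e. kB_wr/s in column 5):

awk '$NF ~ /^kworker/ {k+=$5} $NF=="dataserver" {d+=$5} END {printf "dataserver: %.0f kB  kworker: %.0f kB  ratio: %.1f\n", d, k, k/d}' d2_pid.log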

The IO from kworker is about 50 times the IO from dataserver... I also tested with another Sybase ASE docker image from Docker Hub -- ASE 16.0 on CentOS -- and it behaves just the same... No idea whether SAP IQ would be any better...
former_member232292
Participant

I found the key --

Because the docker host is SLES12, all filesystems on it default to BTRFS, and BTRFS generates a lot of journaling activity while the DB is running in the docker container.
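A quick way to confirm what is backing the Docker storage (the paths below are the defaults and may differ on your system):

df -T /var/lib/docker                 # filesystem type behind Docker's data directory
docker info --format '{{.Driver}}'    # storage driver in use (btrfs, overlay2, ...)
mount | grep -i btrfs                 # any btrfs mounts on the host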

So now, with the database device files placed on an ext3/ext4 filesystem that is mounted into the docker container, the issue is fixed.
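A rough sketch of this kind of setup (the partition, mount point, and container name below are placeholders for illustration):

mkfs.ext4 /dev/sdb1                      # format a spare partition with ext4
mkdir -p /sybase_data
mount /dev/sdb1 /sybase_data             # mount it on the host
docker run -d --name ase157 \
    -v /sybase_data:/sybase_data \
    darksheer/sles11sp4                  # bind-mount it into the container
# then create the ASE database devices under /sybase_data inside the container,
# so their IO goes to ext4 instead of the host's btrfs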