on 2020 May 31 7:40 PM
Hi,
I'm testing the idea of running an in-memory instance with a memory limit.
The command line is:
dbsrv17 -im nw -c 1m -ch 1m -n dm dm.db
I expected to get an error message like "out of space/memory" but instead got a crash while filling the database with test data.
==================
...
Press 'q' to shut down the database server
Cache size adjusted to 10424K
....
Cache size adjusted to 63916K
Cache size adjusted to 64176K
Cache size adjusted to 64440K
Cache size adjusted to 64704K
Cache size adjusted to 64840K
Dumping cache info to 'SA_cache_6098.0.dmp' ...
Completed dumping cache info
Dumping cache info to 'SA_cache_6098.1.dmp' ...
Completed dumping cache info
Fatal error: cache page allocation
terminate called after throwing an instance of 'DBException'
what(): Fatal error: cache page allocation
SQL Anywhere Support Utility Version 17.0.10.6127
Aborted (core dumped)
========================
It looks like some kind of bug in cache page allocation.
Since I can't test my idea on the server and I can't find the answers in the documentation, it seemed like a good idea to ask those questions here.
That's pretty much behaving as expected. In-memory mode works by keeping copies of what would be written to disk in memory. When you run out of memory, the server will stop processing any further requests (i.e., a fatal error). See the "Caution" section at the bottom of this page: http://dcx.sap.com/index.html#sqla170/en/html/3bc822196c5f1014a8aeb73da61cc3d8.html
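For reference, a sketch of how one might give the in-memory server more headroom than the 1 MB cap in the original post. The cache figures here are illustrative, not recommendations; the database and server name are taken from the question.

```shell
# Start SQL Anywhere 17 in never-write in-memory mode (-im nw),
# with a larger initial cache (-c) and a higher cache ceiling (-ch),
# so the server has room for dirty pages that can never be flushed to disk.
dbsrv17 -im nw -c 64m -ch 512m -n dm dm.db
```

With -im nw, all modified pages must stay in the cache for the life of the server, so the -ch ceiling effectively bounds how much modification the database can absorb.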
I read that "Caution" before the test. But if "When this happens, the database server issues an error and stops processing requests." means a fatal error and a crash with a core dump, then that is definitely bad architecture.
Something is wrong with the implementation of this model, which puts it in the "do not use" category.
Personally, I expected the server to stop processing INSERT and UPDATE requests when there is not enough memory, while still processing DELETE and SELECT, and to resume all requests once memory is released by DELETE. I can't see why that couldn't be implemented.
So you would expect the server to just ignore/reject some requests while allowing others? Note that even read-only requests like SELECT may need more cache, because of access to data not yet cached, complex queries that require work tables, and the like. In my understanding, exhausted memory is a fatal error.
And obviously the solution is simple: Just provide enough memory, as the caution paragraph tells.
Well, if it is a demo/kiosk, then SQLA doesn't fit? Only for people who can afford it? I was investigating SQLA in-memory mode as a cache keeper.
It is only your understanding that a database full of data should die from lack of memory. And "just provide enough memory" is absolutely not an obvious solution. What is "enough"? Even if the data size can be estimated approximately, what about processes running in parallel that also allocate memory? Of course, you can teach users the SQLA dao of "enough": 10 TB of memory and a single process in system or kernel mode.
The solution is very simple; it is called memory management. Reserve, for instance, 10% of the total process memory so the server can keep running without a fatal error and a crash.
I can't see any contradiction in allowing one kind of request and rejecting another. What about read-only mode? Are you against it? It allows SELECT but rejects UPDATE and INSERT.
Maybe, then, you are looking for read-only mode? http://dcx.sap.com/index.html#sqla170/en/html/3bcd7eb06c5f101480aeda759d0119f2.html*loio3bcd7eb06c5f...
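For what it's worth, a minimal sketch of starting the same database in read-only mode instead (the database and server name are taken from the original post):

```shell
# -r opens the database read-only: SELECT is allowed,
# while INSERT/UPDATE/DELETE requests are rejected with an error.
dbsrv17 -r -n dm dm.db
```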
In-memory mode has the specific goal of allowing as many modifications as can fit in the memory you configure the server to use, and it also allows the database to be on read-only media. Read-only mode also allows the database to be on read-only media (IIRC) but cannot deal with, for example, a database that requires recovery to start up.
Even if we reserved a certain percentage of memory to hold updated pages, your server would still hit a wall, and then there'd be no way to allow further updates without restarting the server. Problems would simply happen sooner, since not all memory would be available for holding updates. Similar to your question "how do I know how much memory is enough?", one could ask of your proposed solution: "how much memory do I need so that 10% is enough?".
As Volker said, if we knew more about what problem you were trying to solve we could guide you to the appropriate solution.
I think read-only mode with writable temp files could be a workaround, but with the in-memory model it would use the common cache.
The server is running normally and on some transaction it hits the wall. OK, the last transaction is rolled back. I can't see any obstacle to continuing to work; the data is still valid. Thus SELECT is still possible, correct? Next, if some data is deleted while the server is running, the pages should be returned to a "free pool" and the available cache will grow, so DELETE is possible as well, correct? That is normal operation. Why can't this scheme remain valid when the server hits the memory limit?
It is so natural that I would rather assume there is a bug in the server than that SQLA simply didn't implement it.
The case is closed 🙂
In my very humble understanding, there is no interest in further hints from us forum members, as the question has basically been ROLLBACKed... But of course, that could be another misunderstanding or misjudgement on my part.
Otherwise, one could also mention that SELECTing might require loading more pages into the cache, and that will fail when the server is not allowed to evict already-loaded (dirty) pages and there are no more clean pages left to evict...
If dbsrv17 -im nw was a Superhero it would be Batman 🙂
> Volker, relax.
If you want to get the last word on this forum, don't try attacking Volker Barth, he's one of the good guys 🙂