on 2025 May 08 8:14 AM
Hello SAP Community,
I am facing an issue with my CAP project (Java-based) where the database gets initialized with unexpected data on every deployment, specifically when the structure of a table is modified (e.g., adding new fields). Here is the context:
Question:
How can I prevent the database from being initialized with this unexpected data on every deployment when modifying the table structure? Is there a configuration or process in CAP (Java) or HANA that could be causing this behavior?
Thank you for your help!
Hi @stefaniaZ,
A distinction must be made between test data (mocks) and initial real data:
The .csv files created in the test/data folder are only deployed when the application is not in production mode. This folder can therefore be used to store example data for tests and demonstrations.
The .csv files created in the db/data folder, on the other hand, are deployed in all environments, including production. This folder is therefore suitable for initial configuration data.
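As a sketch of that convention (project layout and entity names are illustrative, not taken from your project):

```
my-cap-project/
├─ db/
│  ├─ data/      # deployed in ALL environments, including production
│  │  └─ my.namespace-MyEntity.csv
│  └─ schema.cds
└─ test/
   └─ data/      # deployed only outside production (sample/demo data)
      └─ my.namespace-MyEntity.csv
```

The CSV file name follows the `<namespace>-<Entity>.csv` pattern so the deployer can match it to the target table.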
Have you tried cleaning the build artifacts (e.g., the gen/ folder) before running cds deploy? There is also the --auto-undeploy parameter of cds deploy, which tells the HDI deployer to "automatically undeploy deleted resources".
Thank you,
I tried running the command:
cds deploy --auto-undeploy
and in the prompt, I saw the following message:
successfully deployed to db.sqlite
After performing the deploy, the tables were cleared as expected.
Here is my deployment procedure for all environments:
cf login --sso
mbt build -t gen --mtar mta.mtar (this generates a folder within the project containing the mtar file, which is the one I deploy)
cf deploy gen/mta.mtar
I opened the generated mtar file, but I do not see any .csv files inside it. However, the database still gets cleared during the deployment. Could you help me understand why this is happening?
Thank you again for your support!
cds deploy --to hana --auto-undeploy: https://cap.cloud.sap/docs/guides/databases-hana#cds-deploy-hana
You can also clean the generated files before building the project, as described in this documentation: "Delete generated files and directories of the previous build from the CAP Java project" => https://cap.cloud.sap/docs/java/assets/cds-maven-plugin-site/clean-mojo.html
In the mtar file you opened, did you see anything under the different /resources paths?
Another idea: try cloning the project and building/deploying it from another space...
Hope this helps you move forward with the issue.
Thank you for your suggestions @valentincadart ,
I can’t run your commands directly from the command line, because they act on my local environment and not on Cloud Foundry.
Thanks to your advice, I was able to partially resolve the issue by enriching the undeploy.json file with the following two entries:
"src/gen/**/*.hdbtabledata",
"src/gen/**/*.csv"
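For reference, a complete undeploy.json containing these entries is just a JSON array of glob patterns at the top level (a minimal sketch):

```json
[
  "src/gen/**/*.hdbtabledata",
  "src/gen/**/*.csv"
]
```

With these patterns in place, the HDI deployer treats the generated table-data artifacts as removable instead of re-applying them on every deployment.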
However, I wanted to return to the initial setup to replicate the problem from the beginning. My goal is to get back to the state where I can delete the data, repopulate the file, and verify that the data is no longer being cleared.
After removing the two entries from the undeploy.json file and redeploying, I now encounter the following error:
Inserted 0 records of the data source file "myFile.csv" into the target table "MYTABLE"
However, the data does not already exist in the table, nor is it duplicated. Do you have any idea why this might be happening?
Thank you,
Stefania
Hi Stefania,
I came across some documentation that confirms using undeploy.json is indeed the correct approach to avoid deleting data during a redeploy.
Regarding your attempt to revert to the original setup where the CSV deployment repopulates your table, I found this documentation on .hdbtabledata which might help clarify how the import is behaving. In particular, please check the parameters like no_data_import and delete_existing_foreign_data, which could be affecting whether data is imported or skipped.
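As a sketch of what such a file can contain (the table and file names are taken from your earlier error message; treat the exact keys as assumptions to verify against the linked .hdbtabledata documentation):

```json
{
  "format_version": 1,
  "imports": [
    {
      "target_table": "MYTABLE",
      "source_data": {
        "data_type": "CSV",
        "file_name": "myFile.csv",
        "has_header": true
      },
      "import_settings": {
        "no_data_import": false,
        "delete_existing_foreign_data": false
      }
    }
  ]
}
```

If no_data_import were true, the deployer would skip the CSV entirely, which would match the "Inserted 0 records" symptom you saw.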
To debug further, you can unzip the generated MTAR archive and inspect the contents under: x-db-deployer/src/gen/data/your_csv.hdbtabledata. This file should show how the CSV is being treated during deployment.
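Since an .mtar is a regular zip archive, you can also list the relevant entries programmatically instead of unzipping by hand; a minimal sketch in Java (class and method names are mine, not part of any SAP tooling):

```java
import java.io.IOException;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class MtarInspector {

    /** Lists all .csv and .hdbtabledata entries inside an .mtar (zip) archive. */
    public static List<String> listSeedDataFiles(String mtarPath) throws IOException {
        try (ZipFile archive = new ZipFile(mtarPath)) {
            return archive.stream()
                    .map(ZipEntry::getName)
                    .filter(name -> name.endsWith(".csv") || name.endsWith(".hdbtabledata"))
                    .collect(Collectors.toList());
        }
    }
}
```

Running this against your generated mtar should tell you quickly whether any table-data artifacts made it into the archive at all, and under which module path.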
Let me know what you find in those parameters—especially if no_data_import is set to true or if there's anything unexpected.
Hi.
I tried creating the .hdbtabledata file, but two files keep getting generated, and I believe the wrong one is being used. When I open the mtar file, I see an .hdbtabledata file under src/data, which is the correct one, and another under src/gen/data, which is not; the system seems to pick up the wrong one. This is very strange behavior.
Hi @valentincadart ,
The undeploy solution works, but the issue with data not being inserted anymore still remains. We are not sure how to properly use the .hdbtabledata file; if we manage to get it working, I will update the thread. Thank you for your support.
Hi @stefaniaZ,
I just saw this recent blog about data loss when deploying from CAP to HANA.
The problem looks similar, I hope this helps.