In his great Liquibase introductory blog, michael.wenz described how to get Liquibase and schema evolution up and running. Today I want to tackle a very specific DB upgrade issue that developers will nearly always run into when using JPA as the underlying framework. Let's look at a sample scenario: JPA (EclipseLink with "eclipselink.ddl-generation" set to "create-tables") creates the tables for new entities automatically at deployment, and you write Liquibase changesets only for changes to tables that already exist.
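To make the scenario concrete, here is a minimal sketch of what the relevant part of the persistence.xml could look like in such a setup (the persistence unit name is made up for illustration):

<persistence-unit name="example-pu">
    <properties>
        <!-- EclipseLink creates tables for entities that do not exist yet -->
        <property name="eclipselink.ddl-generation" value="create-tables" />
    </properties>
</persistence-unit>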
Working this way will get you very far (and there might even be use cases where it is totally sufficient): you incrementally record the small changes in Liquibase and let JPA handle the big new table definitions. So where is the issue?
Redeployment
What will happen if you move your application to a different account, or if you are lucky enough to no longer work on your project alone and another developer wants to run it locally? Yes, at start-up Liquibase will try to run the scripts against tables that do not exist yet and will fail. It looks like a chicken-and-egg problem. Let's look at some ways to solve this.
An obvious solution is to simply deactivate Liquibase locally by adding "-Dliquibase.should.run=false" to your launch configuration. This way, JPA will always create the database for you, and if you change some entities and your setup allows it, you simply delete the local database and have JPA recreate it. Changes to existing entities are still tracked in Liquibase scripts.
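Just to illustrate: this is a plain JVM system property, so it could, for example, be passed on the command line like this (my-app.jar is only a placeholder; how exactly you set VM arguments depends on your server and launch setup):

java -Dliquibase.should.run=false -jar my-app.jar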
Disadvantages:
Another solution is to do everything through Liquibase and change the EclipseLink property "eclipselink.ddl-generation" in your persistence.xml from "create-tables" to "none". Now, for every entity you create, you also write a Liquibase changeset with the table definition. Everybody can then update their database, and updates will go through easily. Liquibase even offers commands (generateChangeLog, diffChangeLog) to create such a script and subsequent diffs automatically.
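To give an impression of what that means, a table that JPA would otherwise create for you now has to be spelled out as a changeset, roughly like this (table and column names are made up for illustration):

<changeSet id="1" author="rw">
    <createTable tableName="USERS">
        <column name="ID" type="BIGINT">
            <constraints primaryKey="true" nullable="false" />
        </column>
        <column name="NAME" type="VARCHAR(255)" />
    </createTable>
</changeSet>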
Disadvantages:
To me, the second way seems OK, but one loses a lot of simplicity and beauty. Instead of easy-to-understand JPA entities, we duplicate the definition in a totally different and really lengthy script format. Interestingly enough, there actually is a way to unite these technologies and get the best of both worlds:
PreConditions
If we analyze the problem scenario, we realize that in most cases all we actually need is a check whether the app is started for the first time or not. If it is, we want JPA to create everything; if it is not, we want to execute the Liquibase scripts. Easy enough. Liquibase offers conditional checks called preconditions, which can be put into changesets and define when a changeset should be executed.
Example:
<changeSet id="3" author="rw">
<preConditions onFail="MARK_RAN">
<tableExists tableName="USERS" />
</preConditions>
<sql>ALTER TABLE FIXTURES ADD (EXTID VARCHAR(255))</sql>
</changeSet>
In the example above I check for a well-known table (USERS). If this table does not exist, Liquibase will skip the other commands inside the changeset and mark it as "ran", which is the intended outcome, since JPA will create everything for us. If I now execute this locally for the first time, the following output appears in the console:
INFO 04.12.12 09:35:liquibase: Marking ChangeSet: db/db.changelog-20120924-Fixtures.xml::3::rw::
(Checksum: 3:b94a1d5a11fae972bd39032ea93dd396) ran despite precondition failure due to
onFail='MARK_RAN':
db/db.changelog.xml : Table USERS does not exist
Voila!