Showing posts with label Hibernate. Show all posts

Tuesday, June 19, 2012

Read-only transaction

As discussed earlier, BE processing uses pessimistic locking. The lock on MySQL (select ... for update) is only released after the transaction commits. This poses a challenge when multiple BE threads are running, since they can easily get into a deadlock situation.

To reduce lock contention, I use the following approach: create a new nested transaction to do the DB update, so the lock is acquired and released within a very short period instead of being held for the whole long-running transaction (in my case, auditing). Note that the outer transaction needs to be set to read-only to suppress the flush completely; the flush is not just unnecessary but would actually fail, because the object holds an older occVersion (Optimistic Concurrency Version). Although the outer and inner transactions have two different sessions, they are bound to the same thread, and Hibernate will try to flush the objects in the outer transaction right after the inner transaction commits.

// Or similarly, txTemplate.setReadOnly(true);
sessionFactory.getCurrentSession().setFlushMode(FlushMode.NEVER); 

// Simplified for testing; in the real world, we get the objects via a big query and
// run a bunch of checks to determine whether each object needs to be updated or not
final CoSTemplate ct = (CoSTemplate)templateDao.read(4);

TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
txTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRES_NEW);

txTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // the object is reloaded from DB by getSession().refresh(tpl, LockMode.UPGRADE);
        CoSTemplate ct2 = (CoSTemplate) templateDao.lockRead(ct);

        History history = new History("something");
        ct2.addHistory(history);
        templateDao.update(ct2);
    }
});

Wednesday, June 29, 2011

Query with large result set

When caching is not required, it may be better to use StatelessSession.

ScrollableResults results = session.createQuery("SELECT person FROM Person person")
        .setReadOnly(true)
        .setFetchSize(1000)
        .setCacheable(false)
        .scroll(ScrollMode.FORWARD_ONLY);

- The fetch size is the number of rows physically retrieved from the database at one time by the JDBC driver as you scroll through a query ResultSet with next(). As a general rule, setting the query fetch size is only effective for large result sets; I typically set this value to 100 for large result sets (the default is 10).

Friday, September 24, 2010

I wondered for a while


Criteria crit = getSession().createCriteria(ApSummary.class)
.add(Restrictions.eq("controller.id", controllerId));

This "controller.id" surprises me a bit, since when we do a join in a Criteria query, we generally have to use createAlias(), such as:

Criteria crit = getSession().createCriteria(ApSummary.class)
.createAlias("controller", "hwc").add(Restrictions.eq("hwc.id", controllerId));

The trick here is that "controller.id" is SMARTLY interpreted as the column "controllerId" in ApSummary, without doing the extra join!

Monday, August 30, 2010

A lot of optimism with a little pessimism

org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [xyz.WlanServiceTemplate] with identifier [17]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect):

This StaleObjectStateException is quite common when the optimistic locking strategy (FIRST WIN) is applied in persistence. Generally it indicates the caller side has an out-of-date copy of what is in the DB. It prevents the lost-update issue, which is fine for the UI since the user can reload and retry. But for the backend, this StaleObjectStateException may be annoying because LAST WIN may be acceptable. One straightforward but cumbersome approach is to catch the exception, roll back, and retry until it succeeds.
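As a plain-Java sketch of that cumbersome catch-and-retry pattern (OptimisticRetry and StaleStateException here are illustrative stand-ins, not Hibernate or Spring classes):

```java
import java.util.concurrent.Callable;

public class OptimisticRetry {

    /** Stand-in for the optimistic-locking failure thrown by the unit of work. */
    public static class StaleStateException extends RuntimeException {}

    /**
     * Runs the unit of work, retrying on StaleStateException up to maxAttempts.
     * In real code each attempt would open a fresh transaction and reload the
     * entity so the retry sees the latest version.
     */
    public static <T> T withRetry(int maxAttempts, Callable<T> unitOfWork) throws Exception {
        StaleStateException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return unitOfWork.call();   // commit happens inside on success
            } catch (StaleStateException e) {
                last = e;                   // rollback would happen here
            }
        }
        throw last;                         // give up after maxAttempts
    }
}
```

The drawback is visible even in the sketch: every caller has to be wrapped, and the retry loop hides how many times the work actually ran.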

A much better approach is to apply a pessimistic lock (session.get(id, LockMode.UPGRADE)) in such scenarios, the so-called "a lot of optimism with a little pessimism", which neatly avoids the need to EVER catch or recover from StaleObjectStateExceptions!

Thursday, August 12, 2010

Custom BIT functions in HQL


public class MySQLDialect extends MySQLInnoDBDialect {

    public MySQLDialect() {
        super();
        registerFunction("bit_or", new StandardSQLFunction("BIT_OR", Hibernate.INTEGER));
        registerFunction("bitwise_and", new VarArgsSQLFunction(Hibernate.INTEGER, "", "&", ""));
        registerFunction("bitwise_or", new VarArgsSQLFunction(Hibernate.INTEGER, "", "|", ""));
    }
}

String queryString = "select bit_or(cc.protocol) as protocolBitmask "
+ "from CtlCountry cc, ApHardwareTypeProfile atp, ApProfileRadio radio "
+ "where cc.platformName=atp.ctlPlatformName "
+ "and atp.profileType=:profileType "
+ "and atp.profileType=radio.profile "
+ "and cc.countryCode=:country "
+ "and radio.radioIndex=:radioIndex "
+ "and bitwise_and(cc.protocol, radio.radioCap) = cc.protocol";

MySQL's BIT_OR() and BIT_AND() are aggregate functions like COUNT() and AVG(): they apply bit operations across all matched rows. The bitwise operators & and | apply to expressions.

A strange "java.lang.IllegalStateException: No data type for node: org.hibernate.hql.ast.tree.MethodNode" error occurred when I used the upper-case function name "BIT_OR" in HQL. After I changed it to the lower-case "bit_or", it worked.

Monday, March 15, 2010

Hibernate Native SQL

Hibernate HQL/Criteria doesn't support querying a Map collection with a dereferenced key object. The addEntity() method converts the result into a domain object; otherwise, it would be an object array.


String sqlStr = "select t.* from ce_template t, ce_apProfileTemplate_wlanServiceTemplate aw "
        + "where t.id=aw.apProfileTemplateId and t.templateType='APPROFILE' and t.oldId=0 "
        + "and aw.wlanServiceTemplateId=:id";
Query query = getSession().createSQLQuery(sqlStr)
        .addEntity("t", Template.class)
        .setInteger("id", t.getId());

Saturday, March 6, 2010

Formula in Map



<!-- enclosing elements reconstructed; collection name/table are illustrative -->
<map name="images" table="ITEM_IMAGE">
    <key column="ITEM_ID"/>
    <map-key type="string" column="IMAGENAME"/>
    <composite-element class="Image">
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</map>



This has the drawback that the embedded component class (Image) doesn't have the property imageName, which is the map key. That may be undesirable.


<!-- enclosing elements reconstructed; collection name/table are illustrative -->
<map name="images" table="ITEM_IMAGE">
    <key column="ITEM_ID"/>
    <map-key type="string" column="IMAGENAME"/>
    <composite-element class="Image">
        <property name="name" type="string" formula="IMAGENAME"/>
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</map>



In Hibernate 3.x, formula comes to the rescue here.

[Reference] Hibernate: how to map a collection of embedded components keyed by one of the component's properties?

Tuesday, February 9, 2010

n+1 select & fetch strategy

The n+1 select problem arises when returning a *LIST* of objects where each object contains a collection. We can use the 'subselect' fetch strategy to reduce the number of queries. For example, Template (1-->*) DeployedController: with the default fetch strategy 'select', returning a list of templates requires 1 + n selects.


select * from template;
select * from deployedController where templateId=?
select * from deployedController where templateId=?
...

With 'subselect', it becomes only 2 queries -

select * from template;
select * from deployedController where templateId in (select id from template)

This applies to lazy loading: the 2nd query runs the first time any of the deployedController collections is accessed. For eager loading, change the fetch strategy to 'join'; then only one query (a left outer join) is used, but the result set contains duplicate or null values.

The subselect option is currently only available for collections, not for *-->1 associations.

Eager loading:
1. lazy="false", which stands for fetch="select", followed by an immediate second select
2. fetch="join"
Lazy loading:
1. fetch="subselect"
2. batch-size="N"
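Assuming the Template/DeployedController example above, the collection mapping might look like this sketch (collection and column names are illustrative):

```xml
<set name="deployedControllers" lazy="true" fetch="subselect">
    <key column="templateId"/>
    <one-to-many class="DeployedController"/>
</set>
```

Switching fetch="subselect" to fetch="join" here would give the single outer-join query described above.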

Thursday, February 4, 2010

Common Hibernate Exceptions

1. a different object with the same identifier value was already associated with the session
In the session cache, a persistent object must exist uniquely. To avoid this exception, use merge() instead of saveOrUpdate().

One example: with a many-to-one between authRadius -> radius, I have authRadiusSet[authRadius1(radius1), authRadius2(radius2)], where radius1 and radius2 have the same database identifier but are different objects. When I tried to save authRadiusSet, I got this exception.

2. Duplicate entry '1' for key 2
The object tree for a cascade save/update contains two objects with the same identifier.

3. A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance
What is happening here is that Hibernate requires complete ownership of the preferences collection in the User object. If you simply replace it with a new collection, Hibernate is unable to track changes to that collection and thus has no idea how to apply the cascading persistence to your objects! So either user.setPreferences(new HashSet()) or user.setPreferences(null) will cause this error. This is another reason to provide helper methods such as addPreference() and removePreference(), and to make setPreferences() private so it cannot be called from outside. See ...

4. Found shared references to a collection org.hibernate.HibernateException
It is thrown when the session flushes. It is caused by one collection being referenced by two entities. To resolve it, use merge().

5. TransientObjectException
The object graph to be persisted cannot contain any transient objects (id == 0) without a cascade defined. Also note that if it contains a detached object (id > 0) and no cascade is defined, the detached object will be deleted upon persisting.

6. ObjectDeletedException
A deleted object would be re-saved by cascade. It occurs when you move an object from one collection to another and you have 'delete-orphan'. There is no good solution.
https://forum.hibernate.org/viewtopic.php?t=981282
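A minimal sketch of the helper-method advice in item 3, using the User/preferences names from that explanation (the String element type is an assumption):

```java
import java.util.HashSet;
import java.util.Set;

public class User {
    // Hibernate swaps this in for its own persistent collection on load;
    // reassigning the field is what triggers the "no longer referenced" error,
    // so the collection is only ever mutated in place.
    private Set<String> preferences = new HashSet<String>();

    public void addPreference(String p) {
        preferences.add(p);
    }

    public void removePreference(String p) {
        preferences.remove(p);
    }

    public Set<String> getPreferences() {
        return preferences;
    }

    // Kept private: callers must not swap in a new collection instance.
    private void setPreferences(Set<String> prefs) {
        this.preferences = prefs;
    }
}
```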

Wednesday, January 20, 2010

Bag vs. List


public class FilterGroup {
    private List<Filter> filters = new ArrayList<Filter>();
    ...
}

<!-- reconstructed list mapping; the key column name is illustrative -->
<list name="filters" cascade="all,delete-orphan">
    <key column="filterGroupId" not-null="true"/>
    <list-index column="orderPos"/>
    <one-to-many class="Filter"/>
</list>
I mapped the List as above but got a bizarre result. Although there are only 8 items in the list, it returned a list with over 1000 items, most of them null. It turns out this is because the column "orderPos" used for the list index contains a big number, 1025+.

I do want a sorted list, so I changed to a bag mapping with order-by, which works well. The Hibernate documentation says bag is the mapping for Collection. Internally it is implemented as an ArrayList but ignores the index (a bag differs from a set in that it allows duplicates). So I can map it to a List as well and add an order-by to sort it at the database level. (set, map, and bag can all have order-by.)

<!-- reconstructed bag mapping; the key column name is illustrative -->
<bag name="filters" order-by="orderPos" cascade="all,delete-orphan">
    <key column="filterGroupId" not-null="true"/>
    <one-to-many class="Filter"/>
</bag>
Also, note that we need the "delete-orphan" cascade on the collection, so that when we do an update on FilterGroup, the filters removed from the collection are deleted from the database automatically. Otherwise, they will remain.

Monday, January 11, 2010

Session.get() return proxy?

Surprisingly, the object returned by session.get() can be a proxy rather than the real object. What is going on?

The Hibernate API says Session.load(id) returns a proxy and Session.get(id) returns the persistent object. But actually get() may return a proxy if the object was previously lazily loaded in the same session, e.g., as a property of another object. A proxy remains a proxy even after you have called its methods.

Typecasting and instanceof work well on a proxy object. However, there is a pitfall: for a super-class or interface, the proxy will not be an instance of the implementation class. E.g., it is instanceof Template, but not instanceof VnsTemplate.
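The pitfall can be reproduced without Hibernate; in this sketch an anonymous subclass stands in for the generated proxy, which extends only the declared type (Template/VnsTemplate as above):

```java
class Template {}
class VnsTemplate extends Template {}

class ProxyDemo {
    // Stand-in for Hibernate's generated proxy: it subclasses Template only,
    // so it can never pass an instanceof check against VnsTemplate, even
    // when the underlying row is really a VnsTemplate.
    static Template lazyLoad() {
        return new Template() {};
    }
}
```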

To ensure we always get the real persistent object, I implement read() in Dao as follows:


public T read(int id) {
    T obj = (T) getSession().get(entityClass, id);

    if (obj instanceof HibernateProxy) {
        return (T) ((HibernateProxy) obj).getHibernateLazyInitializer().getImplementation();
    }

    return obj;
}

Alternatively, HibernateProxyHelper.getClassWithoutInitializingProxy(o) could be used to get class info.

Note that we couldn't use session.evict() to detach the old proxy object and then call get() to retrieve the real object. The old proxy object would still be referenced by other objects, and access to its members may throw LazyInitializationException even inside the transaction, since it has been detached.

Thursday, November 19, 2009

Join (association) Table

one-to-many can be mapped with a join table, as many-to-many (unique="true"). This gives some flexibility: the many side doesn't need an FK to the one side, so they are more loosely coupled, and the relation is kept in a separate table. E.g., both Template and Task have a one-to-many relationship with DeployWlc. We can keep the DeployWlc table intact and have separate join tables ce_template_deployWlc and ce_task_deployWlc.


<class name="Person">
    <set name="addresses">
        <key column="personId" not-null="true"/>
        <one-to-many class="Address"/>
    </set>
</class>

<class name="Person">
    <set name="addresses" table="PersonAddress">
        <key column="personId"/>
        <many-to-many column="addressId"
                      unique="true"
                      class="Address"/>
    </set>
</class>

For many-to-one, we can also use a join table, so that in the database the many-side table has no FK column, while the Java object still holds the reference. If both sides define the join table, it creates a bidirectional one-to-many/many-to-one association using a join table.

<many-to-one name="person" class="Person" column="personId"/>


<join table="PersonAddress" optional="true">
    <key column="addressId" unique="true"/>
    <many-to-one name="person" class="Person" column="personId"/>
</join>


A cascade myth: the CASCADE on a collection is not for the join (association) table, but for the base table on the other side. The entries in the association table are updated even without any cascade setting.

The association can be strong or weak. For a strong association, defining the Hibernate cascade "all, delete-orphan" will do. For a weak association, we may run into issues. For example, with a bidirectional association table, the *inverse* side may hit an FK constraint violation. E.g., ApGroup (inverse) <--> Ap: deleting an ApGroup will fail since it is referenced by the association table, while deleting an Ap is fine because Hibernate takes care of that. So the solution is to add a database-level "on delete cascade" to the FK constraint tbl_apGroup_ap -> tbl_apGroup.

alter table tbl_apGroup_ap add index FK4F9906CCF1F23ED3 (apGroupId), add constraint FK4F9906CCF1F23ED3 foreign key (apGroupId) references tbl_apGroup (id) on delete cascade;

It means a deletion on the parent (referenced) table tbl_apGroup cascades to a deletion on the child table tbl_apGroup_ap.

one-to-one can be mapped as many-to-one (unique="true"), which is a foreign key association. Using one-to-one directly gives a primary key association.
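A hedged sketch of both styles, borrowing the Person/Address names from the snippets above:

```xml
<!-- foreign key association: a many-to-one with a unique constraint -->
<many-to-one name="address" class="Address" column="addressId" unique="true"/>

<!-- primary key association: the two rows share the same primary key -->
<one-to-one name="address" class="Address" constrained="true"/>
```

In the primary key variant, constrained="true" marks the side whose primary key is also a foreign key to the other entity.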

Tuesday, April 28, 2009

object references an unsaved transient instance

    Match m = new Match();
    m.setPlayDate(playDate);

    for (int playerId : playerIds) {
        Player p = playerDao.read(playerId);
     
        // update player's stat
        PlayerStats ps = playerStatsDao.findByPlayerAndYear(playerId, year);
        ps.increment();
        playerStatsDao.saveOrUpdate(ps);
   
        m.addParticipant(p);
    }
  
    matchDao.persist(m);

org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: poker.db.model.Match.

On the 2nd iteration of the loop, while querying PlayerStats, it threw the above exception. It was doing a *flush* before the query, since PlayerStats was dirty at that moment.

I really made an effort to dig into it. It turns out it was caused by the *inverse* setting of a many-to-many relationship. I used to think it didn't matter which side you set it on.

In Class Match:
public void addParticipant(Player p) {
    getParticipants().add(p);
    p.getMatches().add(this);
}

PlayerStats * <---> 1 Player * <---> * Match
All relationships are bidirectional. PlayerStats is the owner side by default for the many-to-one, and I set inverse=true on the Player side by rolling a dice. I thought that when I called match.addParticipant(), both collections would be updated and the relation built, so setting inverse shouldn't make any difference.

I imagine that with the inverse setting, the reference link is uni-directional for the underlying persistence. When Player is the owner of this many-to-many, the link is PlayerStats -> Player -> Match; since Match is a transient object (not saved yet) when we try to save/update PlayerStats, it throws the above exception. However, if we change the owner side to Match, the reference chain is broken into PlayerStats -> Player and Match -> Player, so saving/updating PlayerStats has nothing to do with the state of the Match object anymore. Thus it runs perfectly!

* inverse=true marks that side as NOT the relationship owner; the other side maintains the relationship, which avoids an unnecessary update. Basically, on the many side a new record is inserted with the FK value already set, so no extra update statement is needed.
http://www.mkyong.com/hibernate/inverse-true-example-and-explanation/

Alternatively, the second solution is to set the FlushMode of the query in playerStatsDao.findByPlayerAndYear().

public PlayerStats findByPlayerAndYear(int playerId, int year) {
    String queryString = "from PlayerStats ps where ps.player.id = :playerId "
                          + "and ps.year = :year";
    Query query = getSession().createQuery(queryString)
                .setInteger("playerId", playerId)
                .setInteger("year", year)
                .setFlushMode(FlushMode.COMMIT);
    return (PlayerStats) query.uniqueResult();
}

By default, FlushMode is AUTO. A flush synchronizes the in-memory persistent objects with the underlying database by running the SQL statements, though the changes are not visible to others until tx.commit().

Note that a flush occurs in three cases: transaction commit, an explicit session.flush() call, or before a query. By deferring the flush to commit time, we avoid the "object references an unsaved transient instance" issue. It is also good for performance.

Cascade

We may get an FK constraint exception when we try to delete a parent record (e.g., from the Player table). We can define a database-level cascade such as "on delete cascade|set null".


create table player_stats (
    `id` integer unsigned not null auto_increment,
    `player_id` integer unsigned not null,
    primary key (`id`),
    foreign key (`player_id`) references player (`id`) on delete cascade
)
engine = InnoDB;

Note that it is uni-directional: it only works for the parent -> child cascade.

With Hibernate cascade, we don't need to define this in the DDL, and the cascade is more flexible: bi-directional. We can cascade changes from either side following the association link (one-to-many, many-to-one, many-to-many, etc.). It handles cascade at a higher level, i.e., entity to entity. One example is tbl_apGroup, tbl_apGroup_ap, tbl_ap: the database cascade would be defined on the FK from tbl_apGroup_ap to tbl_apGroup, but the Hibernate cascade goes from domain object ApGroup to Ap.
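A sketch of that entity-level cascade declared on ApGroup's side (collection name and cascade value are illustrative):

```xml
<!-- declared on ApGroup's collection of Aps; the cascade travels
     entity-to-entity, while the join table rows are maintained regardless -->
<set name="aps" table="tbl_apGroup_ap" cascade="save-update">
    <key column="apGroupId"/>
    <many-to-many column="apId" class="Ap"/>
</set>
```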