Tuesday, April 28, 2009

object references an unsaved transient instance

    Match m = new Match();
    m.setPlayDate(playDate);

    for (int playerId : playerIds) {
        Player p = playerDao.read(playerId);
     
        // update player's stat
        PlayerStats ps = playerStatsDao.findByPlayerAndYear(playerId, year);
        ps.increment();
        playerStatsDao.saveOrUpdate(ps);
   
        m.addParticipant(p);
    }
  
    matchDao.persist(m);

org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: poker.db.model.Match.

On the 2nd iteration of the loop, the exception above was thrown while querying PlayerStats via playerStatsDao.findByPlayerAndYear(). Hibernate performed a *flush* before running the query because PlayerStats was dirty at that moment.

I really made an effort to dig into it. It turns out it was caused by the *inverse* setting of a many-to-many relationship. I used to think it didn't matter which side you set it on.

In Class Match:
public void addParticipant(Player p) {
    getParticipants().add(p);
    p.getMatches().add(this);
}

PlayerStats * <---> 1 Player * <---> * Match

All relationships are bi-directional. PlayerStats is the owner side by default for the many-to-one, and I set inverse="true" on the Player side by rolling a die. I thought that when I called match.addParticipant(), both collections would be updated and the relation built, so setting inverse shouldn't make any difference.

I imagine that the reference link is uni-directional in the underlying persistence once the inverse setting is applied. When Player is the owner of this many-to-many, the link is PlayerStats -> Player -> Match; since the Match object is still transient (not saved yet) when we try to save/update PlayerStats, the flush throws the above exception. However, if we change the owner side to Match, the reference chain is broken into PlayerStats -> Player and Match -> Player, so saving/updating PlayerStats no longer depends on the state of the Match object. Thus it runs perfectly!

* inverse="true" marks that side as the non-owning (inverse) side; the other side is the relationship owner, and only the owner's collection is consulted when Hibernate writes the association. It also avoids unnecessary updates: on the many side, a new record is inserted with its FK value already set, so no extra update statement is needed.
http://www.mkyong.com/hibernate/inverse-true-example-and-explanation/
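As a sketch of what the two sides look like in hbm.xml mappings (collection and column names here are hypothetical, modeled on the classes above): the owning side defines the join table normally, while the non-owning side adds inverse="true" and is ignored when Hibernate writes join-table rows.

```xml
<!-- Match.hbm.xml: owning side of the many-to-many (hypothetical names) -->
<set name="participants" table="match_player">
    <key column="match_id"/>
    <many-to-many column="player_id" class="poker.db.model.Player"/>
</set>

<!-- Player.hbm.xml: inverse side; changes to this collection alone
     do not cause join-table inserts or updates -->
<set name="matches" table="match_player" inverse="true">
    <key column="player_id"/>
    <many-to-many column="match_id" class="poker.db.model.Match"/>
</set>
```

With this arrangement, flushing a dirty Player does not force the join table (and hence the transient Match) to be written.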

Alternatively, a second solution is to set the FlushMode of the query in playerStatsDao.findByPlayerAndYear():

public PlayerStats findByPlayerAndYear(int playerId, int year) {
    String queryString = "from PlayerStats ps where ps.player.id = :playerId "
                          + "and ps.year = :year";
    Query query = getSession().createQuery(queryString)
                .setInteger("playerId", playerId)
                .setInteger("year", year)
                .setFlushMode(FlushMode.COMMIT);
    return (PlayerStats) query.uniqueResult();
}

By default, FlushMode is AUTO: a flush synchronizes the in-memory persistent objects with the underlying database by executing the pending SQL, though the changes are not visible to other transactions until tx.commit().

Note that a flush occurs in three cases: transaction commit, an explicit session.flush() call, or before a query. By deferring the flush to commit time, we avoid the "object references an unsaved transient instance" issue. It is also good for performance.

Cascade

We may get an FK constraint exception when we try to delete a parent record (e.g., from the Player table). We can define a database-level cascade such as "on delete cascade|set null".


create table player_stats (
    `id` integer unsigned not null auto_increment,
    `player_id` integer unsigned not null,
    primary key (`id`),
    foreign key (`player_id`) references player (`id`) on delete cascade
) engine = InnoDB;

Note that it is uni-directional: it only works for parent -> child cascades.

With Hibernate cascade, we don't need to define this in the DDL. The cascade is also more flexible: it is bi-directional, so we can cascade a change from either side along the association link (one-to-many, many-to-one, many-to-many, etc.). It handles the cascade at a higher level, i.e., entity to entity. One example is tbl_apGroup, tbl_apGroup_ap, tbl_ap: the database cascade would be defined on the FK of tbl_apGroup_ap referencing tbl_apGroup, but the Hibernate cascade goes from the domain object ApGroup to Ap.
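A sketch of the Hibernate-level equivalent in hbm.xml (collection name and cascade choice are hypothetical): deleting a Player entity cascades to its PlayerStats children without any "on delete cascade" in the DDL.

```xml
<!-- Player.hbm.xml: deleting a Player also deletes its PlayerStats entities -->
<set name="stats" inverse="true" cascade="delete">
    <key column="player_id"/>
    <one-to-many class="poker.db.model.PlayerStats"/>
</set>
```

Because the cascade is defined on the entity mapping, Hibernate issues the child deletes itself, entity by entity, rather than relying on the database engine.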

Friday, April 24, 2009

ConcurrentModificationException

If you try to modify a list while iterating through it, you will encounter a ConcurrentModificationException.

The foreach loop is syntactic sugar for iterating with an Iterator. However, you need to call remove() on the iterator itself, and foreach doesn't give you access to it.


for (SgeApAdapter apAdapter : apsToUpdate) {
    if (...) {
        apsToUpdate.remove(apAdapter);
    }
}


This throws ConcurrentModificationException. We have to use the iterator explicitly to remove items from the collection.
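A minimal self-contained sketch of the iterator-based fix (class and element names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {

    // Remove every element equal to 'target' through the iterator,
    // which is the only safe way to remove during iteration.
    static List<String> removeMatching(List<String> aps, String target) {
        Iterator<String> it = aps.iterator();
        while (it.hasNext()) {
            if (it.next().equals(target)) {
                it.remove(); // aps.remove(...) here would throw ConcurrentModificationException
            }
        }
        return aps;
    }

    public static void main(String[] args) {
        List<String> apsToUpdate = new ArrayList<>(Arrays.asList("ap1", "ap2", "ap3"));
        System.out.println(removeMatching(apsToUpdate, "ap2")); // prints [ap1, ap3]
    }
}
```

The iterator tracks the structural modification itself, so its remove() keeps the iteration state consistent instead of tripping the fail-fast check.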

Wednesday, April 8, 2009

Process.getInputStream()

The concept of an InputStream always refers to data coming IN to the thread which is invoking the InputStream method. Similarly, an OutputStream always refers to data pushed OUT by the thread which is invoking the OutputStream method.

The only place where this is really tricky is when you are working with Process objects. The getInputStream() method of Process returns a Java InputStream that reads from the standard output stream of the running Process.

The key thing to remember when using Runtime.exec() is that you must consume everything from the child process's stdout and stderr. Otherwise, the child process may hang once its output buffer fills up.


Process proc = Runtime.getRuntime().exec(cmd);

StreamGobbler errorGobbler = new StreamGobbler(proc.getErrorStream());
StreamGobbler outputGobbler = new StreamGobbler(proc.getInputStream());
errorGobbler.start();
outputGobbler.start();

if (proc.waitFor() != 0) {
    System.out.println("RestoreMain: Migration failed.");
    result = false;
}


StreamGobbler

public class StreamGobbler extends Thread {

    private final InputStream is;

    public StreamGobbler(InputStream is) {
        this.is = is;
    }

    public void run() {
        try {
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            String line;
            while ((line = br.readLine()) != null) {
                // Exhaust the stream so the child process doesn't block
                log.debug("StreamGobbler: " + line);
            }
        } catch (Exception e) {
            log.debug("Failed to exhaust stream.");
        }
    }
}