Comments on "Technology blog: That pesky Hibernate"

Koen Serneels (2011-02-13 11:49):

@javarevisited: For this example I was using <a href="http://hsqldb.org" rel="nofollow">HSQLDB</a>.

Koen Serneels (2011-02-13 11:47):

@Joonas: You are right. In typical scenarios you would be using some kind of locking, and if you were using optimistic locking, then performing a simple "delete from child where parent_id = ?" (as I described) would break optimistic locking, since it does not take the version column into account.

I overlooked this because I'm actually not using any form of locking in the scenario I described. So again: with optimistic locking, the "cascade delete DDL" approach is not an option.

In the optimistic locking case it does make sense that Hibernate deletes the rows one by one, to make sure the version column is considered. But as you pointed out, even then there are probably better ways to do it (using an in clause that combines id and version, for example), so I'm not sure that even in that case we should be happy that Hibernate deletes the children one by one without giving us another option.

However, I still believe (and that was the point of writing this) that if you don't use optimistic locking (you use pessimistic locking or no locking at all), there is no reason for Hibernate to default to this non-performant way of deleting relationships. In that case there is no technical reason why the rows are deleted one by one.

alwaysAWhiteBelt (2011-02-10 13:40):

Why are you not using Hibernate annotations? Are you forced to use Java < version 5?

Anonymous (2011-02-09 14:39):

Which in-memory database are you using, buddy? Is that KDB?

Thanks,
Javin
<a href="http://javarevisited.blogspot.com/search/label/thread" title="deadlock in java and how to fix it" rel="nofollow">How to detect deadlock in java</a>

joonas (2011-02-09 12:23):

Your deletes would have made at least a bit more sense had you been using optimistic locking; those would have been "delete from tbl where id = ? and version = ?".

I'm pretty sure Hibernate batches the deletes well, so that one statement per table is prepared and the others are fired using it; of course it can never beat "delete from tbl where other_id = ?", but then again there's no optimistic locking there either.

I'm not sure if it would be faster if Hibernate issued one large delete per table like:

delete from tbl where (id, version) in ( (?, ?), (?, ?), ... )

(at least Postgres supports this.)

That would be both efficient (a single scan for the RDBMS, or optimizable) and would still support optimistic locking.
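The single-statement, version-aware delete discussed in the thread can be sketched by generating SQL with one (id, version) row placeholder per entity to delete. This is a minimal illustration, not Hibernate's actual behavior: the table and column names (`child`, `id`, `version`) are hypothetical, and row-constructor in-lists like this are supported by PostgreSQL but not by every database.

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch of a bulk delete that still checks the version column:
// one statement per table instead of one statement per row.
public class BulkVersionedDelete {

    // Builds e.g. "delete from child where (id, version) in ((?, ?), (?, ?))"
    // for the given number of rows. Table/column names are illustrative.
    static String buildSql(String table, int rowCount) {
        String placeholders = IntStream.range(0, rowCount)
                .mapToObj(i -> "(?, ?)")
                .collect(Collectors.joining(", "));
        return "delete from " + table + " where (id, version) in (" + placeholders + ")";
    }

    public static void main(String[] args) {
        // -> delete from child where (id, version) in ((?, ?), (?, ?))
        System.out.println(buildSql("child", 2));
    }
}
```

Binding would then walk the rows, setting the id and version for each pair, so a single round trip replaces N per-row deletes while still detecting stale versions (zero rows deleted for a stale pair).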
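For the no-locking case discussed above, the "cascade delete DDL" option can be expressed in a Hibernate XML mapping by marking the collection's foreign key with on-delete="cascade", so the exported schema carries ON DELETE CASCADE and Hibernate can skip issuing per-child delete statements. A sketch only, assuming a hypothetical Parent/Child mapping (the entity and column names are illustrative):

```xml
<!-- Sketch: "children", "parent_id" and "Child" are assumed names. -->
<!-- on-delete="cascade" makes the exported DDL declare ON DELETE CASCADE -->
<!-- on the foreign key; it requires an inverse collection. -->
<set name="children" inverse="true" cascade="all">
    <key column="parent_id" on-delete="cascade"/>
    <one-to-many class="Child"/>
</set>
```

With this mapping, deleting the parent row lets the database remove the children, which is the fast path the thread contrasts with Hibernate's default row-by-row deletes; as noted above, it is not compatible with optimistic locking on the children.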