14. Aug 2004, 14:11 CET, by Gavin King

Just had an interesting discussion on ejb3-feedback@sun.com, started by David Cherryhomes, in which I stupidly insisted that something can't be done when in fact, now that I think about it, /I've actually done it before/, and even the Hibernate AdminApp example uses this pattern!

So, just so I don't forget this pattern again, I'm going to write it down, and also write a reusable class implementing it.

The basic problem is pagination. I want to display next and previous buttons to the user, but disable them when there are no further, or no previous, query results. And I don't want to retrieve all the query results in each request, or execute a separate query to count them. So, here's the correct approach:

public class Page {
   
   private List results;
   private int pageSize;
   private int page;
   
   public Page(Query query, int page, int pageSize) {
       
       this.page = page;
       this.pageSize = pageSize;
       // fetch one extra row so isNextPage() can detect whether
       // another page exists, without a second count query
       results = query.setFirstResult(page * pageSize)
           .setMaxResults(pageSize + 1)
           .list();
   
   }
   
   public boolean isNextPage() {
       return results.size() > pageSize;
   }
   
   public boolean isPreviousPage() {
       return page > 0;
   }
   
   public List getList() {
       // trim off the extra row fetched for next-page detection;
       // subList's second argument is exclusive, so the bound is pageSize
       return isNextPage() ?
           results.subList(0, pageSize) :
           results;
   }

}

You can return this object to your JSP, and use it in Struts, WebWork or JSTL tags. Getting a page in your persistence logic is as simple as:

public Page getPosts(int page) {
    return new Page(
        session.createQuery("from Posts p order by p.date desc"),
        page,
        40
    );
}

The Page class works in both Hibernate and EJB 3.0.
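The trick above (fetching pageSize + 1 rows and trimming the extra one) can be sketched in plain Java, with a List standing in for the query results. All names here (PageSketch, fetchPage, hasNextPage) are illustrative, not part of Hibernate or EJB:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the "fetch one extra row" pagination trick.
public class PageSketch {

    // Simulates query.setFirstResult(page * pageSize).setMaxResults(pageSize + 1).
    static List<Integer> fetchPage(List<Integer> all, int page, int pageSize) {
        int from = Math.min(page * pageSize, all.size());
        int to = Math.min(from + pageSize + 1, all.size());
        return new ArrayList<>(all.subList(from, to));
    }

    // A next page exists exactly when the extra row came back.
    static boolean hasNextPage(int totalRows, int page, int pageSize) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < totalRows; i++) all.add(i);
        return fetchPage(all, page, pageSize).size() > pageSize;
    }

    public static void main(String[] args) {
        System.out.println(hasNextPage(95, 1, 40)); // 41 rows come back: prints true
        System.out.println(hasNextPage(95, 2, 40)); // only 15 rows left: prints false
    }
}
```

Note that only one extra row is ever transferred from the database, which is the whole point: no count query, no full result set.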

13. Aug 2004, 06:34 CET, by Steve Ebersole

Another major change in Hibernate3 is its evolution to an event and listener paradigm as its core processing model. This allows very fine-grained hooks into Hibernate's internal processing in response to external, application-initiated requests. It even allows customization or complete overriding of how Hibernate reacts to these requests. It really serves as an expansion of what Hibernate tried to achieve through the earlier Interceptor, Lifecycle, and Validatable interfaces.

Note: The Lifecycle and Validatable interfaces have been moved to the new classic package in Hibernate3. Their use is not encouraged, as it introduces dependencies on the Hibernate library into the user's domain model; the same needs can be handled by a custom Interceptor, or through the new event model, external to the domain classes. This is nothing new, as the same recommendation was made for Hibernate2 usage.

So what types of events does the new Hibernate event model define? Essentially all of the methods of the org.hibernate.Session interface correlate to an event. So you have a LoadEvent, a FlushEvent, etc. (consult the configuration DTD or the org.hibernate.event package for the full list of defined event types). When a request is made of one of these methods, the Hibernate session generates an appropriate event and passes it to the configured event listener for that type. Out of the box, these listeners implement the same processing in which those methods have always resulted. However, the user is free to implement a customization of one of the listener interfaces (i.e., the LoadEvent is processed by the registered implementation of the LoadEventListener interface), in which case their implementation is responsible for processing any load() requests made of the Session.

These listeners should be considered effectively singletons; that is, they are shared between requests, and thus should not save any state as instance variables. The event objects themselves, however, do hold much of the context needed for processing, as they are unique to each request. Custom event listeners may also make use of the event's context to store any processing variables they need. The context is a simple map, and the default listeners don't use it at all, so there is no worry about overwriting internally required context variables.

A custom listener should implement the appropriate interface for the event it wants to process and/or extend one of the convenience base classes (or even the default event listeners used by Hibernate out of the box, as these are declared non-final for this purpose). Custom listeners can either be registered programmatically through the Configuration object, or specified in the Hibernate configuration XML (declarative configuration through the properties file is not supported). Here's an example of a custom load event listener:

public class MyLoadListener extends DefaultLoadEventListener {
    // this is the single method defined by the LoadEventListener interface
    public Object onLoad(LoadEvent event, LoadEventListener.LoadType loadType) 
            throws HibernateException {
        if ( !MySecurity.isAuthorized( event.getEntityName(), event.getEntityId() ) ) {
            throw new MySecurityException("Unauthorized access");
        }
        return super.onLoad(event, loadType);
    }
}

Then we need a configuration entry telling Hibernate to use our listener instead of the default listener:

<hibernate-configuration>
    <session-factory>
        ...
        <listener type="load" class="MyLoadListener"/>
    </session-factory>
</hibernate-configuration>

Or we could register it programmatically:

Configuration cfg = new Configuration();
cfg.getSessionEventListenerConfig().setLoadEventListener( new MyLoadListener() );
....

Listeners registered declaratively cannot share instances. If the same class name is used in multiple <listener/> elements, each reference will result in a separate instance of that class. If you need the ability to share listener instances between listener types, you must use the programmatic registration approach.

Why implement an interface and define the specific type during configuration? Well, a listener implementation could implement multiple event listener interfaces. Having the type additionally defined during registration makes it easier to turn custom listeners on or off during configuration.
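As a rough illustration of that last point, here is a self-contained sketch, using tiny stand-in interfaces rather than Hibernate's real listener interfaces, of one class serving several event types, with the type key deciding what is actually registered:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in sketch (these small interfaces are NOT Hibernate's real ones) of
// why the listener *type* is declared separately from the listener *class*:
// one class can implement several listener interfaces, and each type can be
// switched on or off independently at registration time.
public class ListenerRegistrySketch {

    interface LoadListener   { String onLoad(String entity); }
    interface DeleteListener { String onDelete(String entity); }

    // One audit listener that can handle two event types.
    static class AuditListener implements LoadListener, DeleteListener {
        public String onLoad(String entity)   { return "load:" + entity; }
        public String onDelete(String entity) { return "delete:" + entity; }
    }

    public static void main(String[] args) {
        AuditListener audit = new AuditListener();
        Map<String, Object> registry = new HashMap<>();
        // Register the same instance under the "load" type only; the
        // "delete" type keeps its default. The type key decides what runs.
        registry.put("load", audit);

        LoadListener load = (LoadListener) registry.get("load");
        System.out.println(load.onLoad("Employee"));        // prints load:Employee
        System.out.println(registry.containsKey("delete")); // prints false
    }
}
```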

10. Aug 2004, 14:18 CET, by Steve Ebersole

Hibernate3 adds the ability to pre-define filter criteria and attach those filters at both a class and a collection level. What's a pre-defined filter criterion? Well, it's the ability to define a limit clause very similar to the existing where attribute available on the class and various collection elements, except that these filter conditions can be parameterized! The application can then decide at runtime whether given filters should be enabled and what their parameter values should be.

Configuration

In order to use filters, they must first be defined and then attached to the appropriate mapping elements. To define a filter, use the new <filter-def/> element within a <hibernate-mapping/> element:

<filter-def name="myFilter">
    <filter-param name="myFilterParam" type="string"/>
</filter-def>

Then, this filter can be attached to a class:

<class name="myClass" ...>
    ...
    <filter name="myFilter" condition=":myFilterParam = my_filtered_column"/>
</class>

or, to a collection:

<set ...>
    <filter name="myFilter" condition=":myFilterParam = my_filtered_column"/>
</set>

or, even to both (or multiples of each) at the same time!

Usage

In support of this, a new interface was added to Hibernate3, org.hibernate.Filter, and some new methods were added to org.hibernate.Session. The new methods on Session are: enableFilter(String filterName), getEnabledFilter(String filterName), and disableFilter(String filterName). By default, filters are not enabled for a given session; they must be explicitly enabled through a call to the Session.enableFilter() method, which returns an instance of the new Filter interface. Using the simple filter defined above, this would look something like:

session.enableFilter("myFilter").setParameter("myFilterParam", "some-value");

Note that methods on the org.hibernate.Filter interface do allow the method-chaining common to much of Hibernate.
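For illustration, the mechanics behind such chaining are simple: each setter returns the receiver. The following is a generic fluent sketch, not Hibernate's actual Filter implementation, and the names in it are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Generic illustration of method chaining: every setter returns `this`, so
// several parameters can be set in one expression, mirroring
// session.enableFilter("f").setParameter("a", x).setParameter("b", y).
public class FluentFilterSketch {
    private final String name;
    private final Map<String, Object> params = new HashMap<>();

    FluentFilterSketch(String name) { this.name = name; }

    FluentFilterSketch setParameter(String key, Object value) {
        params.put(key, value);
        return this; // returning the receiver is what enables chaining
    }

    int parameterCount() { return params.size(); }

    public static void main(String[] args) {
        FluentFilterSketch f = new FluentFilterSketch("effectiveDate")
                .setParameter("asOfDate", "2004-08-10")
                .setParameter("region", "EMEA");
        System.out.println(f.parameterCount()); // prints 2
    }
}
```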

Big Deal

This is all functionality that was available in Hibernate before version 3, right? Of course. But before version 3, it was all a manual process performed by application code. To filter a collection, you'd need to load the entity containing the collection and then pass the collection to the Session.filter() method. And for entity filtration, you'd have to modify the HQL string by hand, or write a custom Interceptor.

This new feature provides a clean and consistent way to apply these types of constraints. The Hibernate team envisions the usefulness of this feature in everything from internationalization to temporal data to security considerations (and even combinations of these at the same time) and much more. Of course it's hard to envision the potential power of this feature given the simple example used so far, so let's look at some slightly more in depth usages.

Temporal Data Example

Say you have an entity that follows the effective record database pattern. This entity has multiple rows, each varying based on the date range during which that record was effective (possibly even maintained via a Hibernate Interceptor). An employment record might be a good example of such data, since employees might come and go and come back again. Further, say you are developing a UI which always needs to deal in current records of employment data. To use the new filter feature to achieve these goals, we would first need to define the filter and then attach it to our Employee class:

<filter-def name="effectiveDate">
    <filter-param name="asOfDate" type="date"/>
</filter-def>

<class name="Employee" ...>
    ...
    <many-to-one name="department" column="dept_id" class="Department"/>
    <property name="effectiveStartDate" type="date" column="eff_start_dt"/>
    <property name="effectiveEndDate" type="date" column="eff_end_dt"/>
    ...
    <!--
        Note that this assumes, for simplicity's sake, that non-terminal records
        have an eff_end_dt set to a maximum database date
    -->
    <filter name="effectiveDate" condition=":asOfDate BETWEEN eff_start_dt and eff_end_dt"/>
</class>

<class name="Department" ...>
    ...
    <set name="employees" lazy="true">
        <key column="dept_id"/>
        <one-to-many class="Employee"/>
        <filter name="effectiveDate" condition=":asOfDate BETWEEN eff_start_dt and eff_end_dt"/>
    </set>
</class>

Then, in order to ensure that you always get back currently effective records, simply enable the filter on the session prior to retrieving employee data:

Session session = ...;
session.enableFilter("effectiveDate").setParameter("asOfDate", new Date());
List results = session.createQuery("from Employee as e where e.salary > :targetSalary")
        .setLong("targetSalary", 1000000)
        .list();

In the HQL above, even though we only explicitly mentioned a salary constraint on the results, because of the enabled filter the query will return only currently active employees who have a salary greater than a million dollars (lucky stiffs).

Even further, if a given department is loaded from a session with the effectiveDate filter enabled, its employee collection will only contain active employees.
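To make those semantics concrete, here is a plain-Java simulation (not the SQL Hibernate actually generates) of the predicate the effectiveDate filter appends, applied to an in-memory list of employment rows; the class and method names are illustrative only:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Simulates the filter condition ":asOfDate BETWEEN eff_start_dt AND eff_end_dt"
// against an in-memory list of effective-dated rows.
public class EffectiveDateSketch {

    record Employment(String name, LocalDate start, LocalDate end) {}

    static List<Employment> currentlyEffective(List<Employment> rows, LocalDate asOf) {
        List<Employment> out = new ArrayList<>();
        for (Employment e : rows) {
            // BETWEEN is inclusive on both ends
            if (!asOf.isBefore(e.start()) && !asOf.isAfter(e.end())) out.add(e);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Employment> rows = List.of(
            // employee left and came back: two effective-dated records
            new Employment("steve", LocalDate.of(2001, 1, 1), LocalDate.of(2003, 6, 30)),
            new Employment("steve", LocalDate.of(2004, 2, 1), LocalDate.of(9999, 12, 31)));
        // only the open-ended record is effective as of Aug 2004
        System.out.println(currentlyEffective(rows, LocalDate.of(2004, 8, 10)).size()); // prints 1
    }
}
```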

Security Example

Imagine we have an application that assigns each user an access level, and that some sensitive entities in the system are assigned access levels (way simplistic, I understand, but this is just illustration). So a user should be able to see anything where their assigned access level is greater than or equal to that assigned to the entity they are trying to see. Again, first we need to define the filter and apply it:

<filter-def name="accessLevel">
    <filter-param name="userLevel" type="int"/>
</filter-def>

<class name="Opportunity" ...>
    ...
    <many-to-one name="region" column="region_id" class="Region"/>
    <property name="amount" type="Money">
        <column name="amt"/>
        <column name="currency"/>
    </property>
    <property name="accessLevel" type="int" column="access_lvl"/>
    ...
    <filter name="accessLevel"><![CDATA[:userLevel >= access_lvl]]></filter>
</class>

<class name="Region" ...>
    ...
    <set name="opportunities" lazy="true">
        <key column="region_id"/>
        <one-to-many class="Opportunity"/>
        <filter name="accessLevel"><![CDATA[:userLevel >= access_lvl]]></filter>
    </set>
    ...
</class>

Next, our application code would need to enable the filter:

User user = ...;
Session session = ...;
session.enableFilter("accessLevel").setParameter("userLevel", user.getAccessLevel());

At this point, loading a Region would filter its opportunities collection based on the current user's access level:

Region region = (Region) session.get(Region.class, "EMEA");
region.getOpportunities().size(); // <- limited to those accessible by the user's level

Conclusion

These were some pretty simple examples. But hopefully they give you a glimpse of how powerful these filters can be, and maybe spark some thoughts as to where you might be able to apply such constraints within your own application. This becomes even more powerful in combination with various interception methods, like web filters, etc. One note: if you plan on using filters with outer joining (either through HQL or load fetching), be careful of the direction of the condition expression. It's safest to set this up for left outer joining; in general, place the parameter first, followed by the column name(s) after the operator.

07. Aug 2004, 00:50 CET, by Christian Bauer

I just cleaned up some of my files and found this snapshot in one of my folders. I made it two weeks ago, as I finally had time to install Oracle 10g. I remember how I was shaking my head in desperation as I realized what they had done. Let me explain.

Back in the days of Oracle8, they decided to write their system installer tools in Java. They apparently had no idea what Java was about, so what we got was the most hated and horrible application installer you can imagine. It required a patched/hacked JDK, which was en vogue at the time, especially in big corporations with mediocre programmers. Usually, some poor guys were forced to use Java, and their CTOs/VPs made them suffer. So, naturally, they passed that on to their users. But at least it worked, if you knew the workarounds. At some point, you had a working Oracle installation (the first Linux versions were also quite interesting...).

But this wasn't good enough for Oracle. They had a hacked JRE, their own window and widget toolkit, their own application framework, just for the installer. Hey, why not use this for /all/ Oracle applications? Administration tools, wizards, SQL console, all of them! Great idea! I first came in contact with the new Oracle Enterprise Manager in Oracle 8.1.x (I don't even want to know when it first reared its ugly head in public). It was horrible. Well, actually you could do some useful things on Linux and Solaris which you previously could do only with third-party tools on Windows, like browsing your database catalog graphically. But it wasn't that useful in practice. Want to have a quick look at a table's content? Sure, fire up the SQL console; there is no quick select. You need more power in the SQL console than a trivial history of your queries? Autocompletion? PL/SQL editors? Use one of our full-blown IDEs or buy a third-party tool. But it was OK, in the end. Together with PL/SQL Developer, I coded away on a stored procedure app for more than a year. Painful, but interesting.

It even got better with Oracle 9; finally the Oracle Enterprise Manager was a nice application (albeit still on a hacked JRE and a proprietary toolkit) that you could actually use in your daily work. At this point I had given up on stored procedure business logic and moved to ORM. Long story short, Oracle finally had good GUI management tools. Not perfect, but quite OK. Even the SQL consoles were sometimes usable: there were three different ones, each with different features, and each missing features the others had. Laughable, compared to other vendors, but in the world of Oracle, acceptable.

Fast forward: Oracle 10g. Installed it two weeks ago on a Linux box. It was pain (and I really know how to install Oracle...) and suffering. I didn't realize that they had dropped the classic Oracle Enterprise Manager until I had to debug the startup code for the replacement: a web-based database control that runs in a bastardized Apache/OAS/BC4J environment. Yes, you now need a full application server just to have some basic management tools for your database (that is, if you don't have the old OEM 9.x around somewhere). Hundreds of megabytes of memory on the server are needed. You might also ask if this new web-based interface is better than the old desktop (client/server, admittedly) enterprise management tools. A web-based interface better for daily work than a GUI frontend? Surely not.

But wait, it gets worse. Remember that screenshot I found? It shows isqlplus, the newest incarnation of the never-dying SQLplus console. Yes, its primary use is to execute SQL. The old CLI-based sqlplus tool had some real value in the early days, like flexible formatting control of the result printout for reporting, etc. The two graphical tools we got with Oracle 8 and 9 were really bad, but at least they were quick to start and use. Now look at this new isqlplus. A textarea, a submit button, some primitive history functionality. This is a standalone product now, with all the bells and whistles. It looks like it even has its own instance of the application server (at least it has its own TCP port). It probably even has its own development group and VPs.

This is called progress. Thanks Oracle.

(I didn't have time to check if the Oracle 10g Grid Manager Foo is actually the rebranded old Enterprise Manager. There is still hope...)

30. Jul 2004, 00:04 CET, by Gavin King

Simple is a seductive notion. We all want to make things simple. But when we talk about software, simple could mean at least two different things.

There are always two ways of looking at nontrivial software: the implementation view, and the user view. I have always gotten the impression (I may be wrong) that many people who say that J2EE is too complex confuse the second view for the first. So, just in case I'm right about this, I think we should rephrase. Perhaps we should say J2EE is too /inconvenient/.

A simple framework is very often /not/ the best solution to a difficult problem. Making life easy for developers does not mean giving us simple tools. The problem with J2EE development is not that the J2EE stack provides too much complex functionality; it is that the functionality is not exposed to the user in a convenient way.

For example, Hibernate is /much/ more complex (implementation-wise) than a CMP 2.1 entity bean container. Arguably, some of Hibernate's user-visible functionality is also more complex. For example, HQL has many more features than EJBQL and could therefore be said to be more complex. Likewise, Hibernate's support for multiple inheritance mapping strategies is a complex feature, not visible in CMP 2.1.

But it is that very complexity that makes Hibernate powerful, and, perhaps paradoxically, simplifies life for people using Hibernate to write applications. By contrast, CMP 2.1 is simultaneously too simple (featurewise) and too complex (from the point of view of the user).

The best frameworks are /powerful/, not simple. But they do try to get their internal complexity out of the face of the user, as far as possible. Sometimes it is possible, other times not. If you try to oversimplify a difficult problem, you end up making life more difficult for your users, who are then forced to work around your framework's limitations, instead of working with it.

There are very, very few simple solutions to difficult problems. Be very sceptical when someone claims to have found one.
