Over the past few months we have been adding some simplifications to the way you can use and specify native sql queries in Hibernate. Gavin even blogged about some of them earlier, but I thought it was about time we brought some more news about it on this blog.

Auto detect return types and aliases

The newest feature related to native sql is that Hibernate will now auto-detect the return types and even the aliases of any scalar value in a sql query.

Before you had to do something like:

List l = s.createSQLQuery("SELECT emp.regionCode as region, emp.empid as id, {employee.*} 
                           FROM EMPLOYMENT emp, EMPLOYEE employee ...")
         .addScalar("region", Hibernate.STRING)
         .addScalar("id", Hibernate.LONG)
         .addEntity("employee", Employee.class)
         .list();
Object[] data = l.get(0);

Today you do not need to specify the type of the scalar values; you simply give the alias names so Hibernate knows which data you actually want to extract from the result set.

List l = s.createSQLQuery("SELECT emp.regionCode as region, emp.empid as id, {employee.*} 
                           FROM EMPLOYMENT emp, EMPLOYEE employee ...")
         .addScalar("region")
         .addScalar("id")
         .addEntity("employee", Employee.class)
         .list();
Object[] data = l.get(0); 

If the query is only returning scalar values then addScalar is not needed at all; you just call list() on the query.

List l = s.createSQLQuery("SELECT * FROM EMPLOYMENT emp").list();
Object[] data = l.get(0); 

This of course requires some extra processing by Hibernate, but it makes experimentation and certain data-processing tasks easier.

No redundant column mappings

Previously, when you specified native sql in named queries, you had to use the return-property element to (redundantly) specify which column aliases you wanted Hibernate to use for your native sql query. It was redundant because in most cases you would simply be specifying the exact same columns you had just specified in the class mapping.

Thus it could get pretty ugly and verbose as soon as you had even mildly complex mappings, such as the following, which is from our unit tests for a native sql stored procedure call.

<sql-query name="selectAllEmployees" callable="true">
 <return alias="employement" class="Employment">
 <return-property name="employee" column="EMPLOYEE"/>
 <return-property name="employer" column="EMPLOYER"/>                     
 <return-property name="startDate" column="STARTDATE"/>
 <return-property name="endDate" column="ENDDATE"/>                       
   <return-property name="regionCode" column="REGIONCODE"/>                       
   <return-property name="id" column="EMPID"/>                                            
   <return-property name="salary"> 
    <return-column name="VALUE"/>
    <return-column name="CURRENCY"/>                      
   </return-property>
 </return>
 { call selectAllEmployments() }
</sql-query>

In the upcoming Hibernate 3.1 you can do the exact same thing with much less code:

<sql-query name="selectAllEmployees" callable="true">
 <return class="Employment"/>
 { call selectAllEmployments() }
</sql-query>

or in code (for normal sql):

List l = s.createSQLQuery("SELECT * FROM EMPLOYMENT emp")
          .addEntity(Employee.class)
          .list();
Object[] data = l.get(0); 

This also removes the need to always use the curly-bracket syntax (e.g. {emp.name}) to handle the aliasing, as long as you are not returning the same entity type more than once per row.
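
For completeness, here is a hedged sketch of the one case where the curly-bracket syntax is still required: the same entity type returned more than once per row. The self-join on a MANAGER column is hypothetical and not part of the mappings above.

List l = s.createSQLQuery("SELECT {emp.*}, {mgr.*} " +
                          "FROM EMPLOYEE emp, EMPLOYEE mgr WHERE emp.MANAGER = mgr.EMPID")
          .addEntity("emp", Employee.class)
          .addEntity("mgr", Employee.class)
          .list();
Object[] row = (Object[]) l.get(0);
Employee employee = (Employee) row[0]; // columns aliased via {emp.*}
Employee manager  = (Employee) row[1]; // columns aliased via {mgr.*}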

Hope you like it, Enjoy :-)

25. Nov 2005, 00:58 CET, by Christian Bauer

It would be great if I could use the TestNG plugin for IntelliJ, but I'm still on version 4.5 and it's only available for IDEA 5. I've tried to switch a few times but the XML editor just doesn't work anymore and throws exceptions faster than I can click. I want the XML editor of IDEA 3.x back; it just worked and didn't have the goofy indentation routines of 4.x...

Still, after using TestNG for only a day or two, I think it's so much better than JUnit that I wonder why I didn't try it earlier. I guess I should have listened to Gavin, who was raving about how easy it was to get the Seam tests up and running. It actually has all the things I've been missing in JUnit.

So I got my first TestNG setup running nicely for CaveatEmptor. By the way, I've uploaded a new alpha release that includes all the stuff I'm going to talk about here (and quite a few other new things). For those of you interested in the progress of the /Hibernate in Action/ second edition: I've completed 95% of all mapping examples, which I think (and hope) was the most time consuming part. I'm now updating the chapters about sessions, transactions, caching, etc., so finishing the manuscript this year is possible, I guess.

Back to TestNG and how I used it to test EJBs. One of the nice things about TestNG is how easy it is to configure the runtime infrastructure you need for your tests. First I wrote a class that boots the JBoss EJB 3.0 container. TestNG will run the startup() and shutdown() methods for a suite of tests:

public class EJB3Container {

    private static InitialContext initialContext;
    private EJB3StandaloneDeployer deployer;

    @Configuration(beforeSuite = true)
    public void startup() {
        try {

            // Boot the JBoss Microcontainer with EJB3 settings, loads ejb3-interceptors-aop.xml
            EJB3StandaloneBootstrap.boot(null);

            // Deploy CaveatEmptor beans (datasource, mostly)
            EJB3StandaloneBootstrap.deployXmlResource("caveatemptor-beans.xml");

            // Add all EJBs found in the archive that has this file
            deployer = new EJB3StandaloneDeployer();
            deployer.getArchivesByResource().add("META-INF/persistence.xml");

            // Deploy everything we got
            deployer.create();
            deployer.start();

            // Create InitialContext from jndi.properties
            initialContext = new InitialContext();

        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    @Configuration(afterSuite = true)
    public void shutdown() {
        try {
            deployer.stop();
            deployer.destroy();
            EJB3StandaloneBootstrap.shutdown();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static Object lookup(String beanName) {
        try {
            return initialContext.lookup(beanName);
        } catch (NamingException ex) {
            throw new RuntimeException("Couldn't lookup: " + beanName, ex);
        }
    }

}

First thing on startup, I boot the JBoss Microcontainer, which is actually the kernel of the future JBoss AS 5.x. Because I'm using the EJB3StandaloneBootstrap, it will automatically search for the configuration files for the EJB 3.0 container. You need those and the required libraries on your classpath. If you want to try this, download CaveatEmptor and copy the libraries and configuration.

Next I'm deploying some beans from caveatemptor-beans.xml; right now this includes just a single datasource for integration testing. It is configured to use JTA and is bound to JNDI, services provided by the microcontainer. You can set up and wire other stateless POJO services there if you have to.

Finally I'm deploying all EJBs for my tests. The EJB3StandaloneDeployer supports many different deployment strategies, and I decided to search the classpath and deploy the JAR that contains this file. For EJB 3.0 persistence, which is what I'm primarily testing in CaveatEmptor, I have to have a META-INF/persistence.xml configuration anyway for an EntityManagerFactory. I still have to figure out how to deploy an exploded archive with a single command and auto-discovery of EJBs...

Now if I write a test class I need a starting point: I need to get a handle on one of my EJBs. I can look them up in JNDI - that's what the static lookup() method is good for - or I could wire them into my test classes with the microcontainer. I decided to use a lookup:

public class CategoryItem {

    @Test(groups = {"integration.database"})
    public void saveCategory() {

        CategoryDAO catDAO = (CategoryDAO)EJB3Container.lookup(CategoryDAO.class.getName());
        Category newCat = new Category("Foo");
        catDAO.makePersistent(newCat);

    }
}

Here I'm testing a data access object. Well, I'm not actually testing much, as I don't assert any state after saving. I still have to port all my JUnit tests over, but my first goal was to have the infrastructure ready.
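
To turn it into a real test, the method could at least assert that an identifier was assigned after saving. A minimal sketch, assuming Category exposes a getId() getter (hypothetical here) and that JDK assertions are enabled with -ea:

    @Test(groups = {"integration.database"})
    public void saveCategoryAndCheckId() {

        CategoryDAO catDAO = (CategoryDAO)EJB3Container.lookup(CategoryDAO.class.getName());
        Category newCat = new Category("Foo");
        catDAO.makePersistent(newCat);

        // Hypothetical check: the identifier should be set after persisting
        assert newCat.getId() != null : "Category id was not generated";
    }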

The CategoryDAO is a stateless EJB that I wrote with the Generic DAO pattern. To make it an EJB I had to add a @Stateless annotation and get the EntityManager injected into the bean. What about transactions?

For a stateless EJB the default transaction attribute is REQUIRED, so any method I call on the DAO is executed in a system transaction. If the test method itself runs in a transaction, the DAO calls join it; otherwise each call runs in its own transaction.
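
For illustration, here is a hedged sketch of what such a stateless DAO bean might look like; the class and method bodies are simplified stand-ins, not the actual CaveatEmptor source:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// No explicit @TransactionAttribute: business methods use the REQUIRED default,
// so each call either joins the caller's transaction or starts its own.
@Stateless
public class CategoryDAOBean implements CategoryDAO {

    // Injected by the container, configured via META-INF/persistence.xml
    @PersistenceContext
    private EntityManager em;

    public Category makePersistent(Category category) {
        em.persist(category);
        return category;
    }
}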

I've also started using TestNG groups, a great feature. By marking the test method as belonging to the group integration.database I can target it in my TestNG suite assembly in testng.xml:

<suite name="CaveatEmptor-EJB3" verbose="1">

    <test name="Runtime">
        <packages>
            <package name="org.hibernate.ce.auction.test.testng.runtime"/>
        </packages>
    </test>

    <test name="Integration">
        <groups>
            <run><include name="integration.*"/></run>
        </groups>
        <packages>
             <package name="org.hibernate.ce.auction.test.testng.persistence"/>
         </packages>
    </test>

</suite>

I still have to figure out how to create the assembly in a way that says /use this runtime for this group of tests/. Right now I'd need several suites for that.

Since I'm stuck with the old IntelliJ I've got to run the tests with Ant (I'd really like to have that green bar ;):

<target name="testng.run" depends="testng.package"
    description="TestNG tests with the JBoss EJB3 Microcontainer">
    <mkdir dir="${testng.out.dir}"/>

    <testng outputDir="${testng.out.dir}">
        <classpath>
            <pathelement path="${testng.build.dir}/${proj.shortname}.jar"/>
            <path refid="project.classpath"/>
            <path>
                <fileset dir="${container.lib}">
                    <include name="**/*.jar"/>
                </fileset>
            </path>
        </classpath>
        <xmlfileset dir="${testng.classes.dir}" includes="testng.xml"/>
    </testng>
</target>

So I'm hooked on TestNG now and will use it for all EJB tests in the future. I've got to check out Gavin's TestNG configuration for JSF + EJB testing in Seam and how he simulates the HTTP environment...

Oh, and if any Jetbrains guys read this: fix the XML editor. There are people out there who use it to write docs. Thanks :)

21. Nov 2005, 14:34 CET, by Gavin King

Annotations are undoubtedly the coolest new thing in Java SE 5 and will deeply change the way we write Java code. In the process of designing EJB 3.0, Hibernate Validator and Seam, we've had a chance to really start to stretch the use of annotations to the limit. It's striking just how many kinds of things may be expressed more elegantly and efficiently in declarative mode when you have a facility for mixing declaration and logic into the same source file. We've seen that in practice, whatever initial misgivings people may have about Java annotations, once they actually start using something like EJB 3.0 in a real project, they experience such a productivity increase that they quickly become comfortable with the approach.

Inevitably, there are a number of places where the annotations spec disappoints me. Two problems are already well-documented on this blog and elsewhere: first, annotation member values may not be null; second, annotation definitions do not support inheritance. We regularly need a null default value for an annotation, and have to work around the problem by using some magic value such as the empty string to represent null. This is a truly ugly solution and I really don't understand why JSR-175 could not have allowed null values. Lack of inheritance is inconvenient when defining annotations that share members with the same semantics. I won't belabor these issues since they are now well-understood.
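
To make the first problem concrete, here is a hedged sketch of the usual workaround; the @ParentNode annotation and its member are hypothetical:

// Since null is not a legal member value, the empty string has to stand in
// for "not specified" - and the framework must check for it explicitly.
public @interface ParentNode {
    String targetEntity() default "";   // "" really means null here
}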

Now, in practice, neither of these problems is that big a deal for the application code that is actually using the annotations. These problems are mainly an inconvenience to framework developers who define annotations. So we can live with this.

The /worst/ limitation of JSR-175, as it stands today, is the incredible paucity of facilities for validating annotated classes. Surprisingly, I have not seen this discussed elsewhere. The only facility provided for constraining annotations is the @Target meta-annotation, which specifies what kind of program element (class, method, field, etc) may be annotated. Compared with the functionality provided by DTDs or XML schemas, this is amazingly primitive. And, unlike the previous limitations we mentioned, this problem burdens the end user of an annotation-based framework rather than the framework designer.
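
For comparison, here is a hedged sketch of the full extent of what @Target lets you say today; the @Audited annotation is hypothetical:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The only constraint we can express is the kind of program element that may be
// annotated - nothing about field types, method signatures, cardinality, or
// which other annotations must (or must not) be present.
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
public @interface Audited {
}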

We /at least/ need to be able to write down the following constraints:

  • This annotation annotates classes that implement or extend a particular class or interface
  • This annotation annotates methods with a particular signature
  • This annotation annotates fields of a particular type
  • This annotation occurs at most once in a class
  • This annotation annotates methods/fields of classes with a particular annotation

Probably, we should also be able to write more sophisticated things such as:

  • The following annotations are mutually exclusive
  • If this annotation appears, another annotation must also appear

Let's show some examples:

  • The @PersistenceContext annotation only makes sense on a field of type EntityManager, or a method with the signature void set<Name>(EntityManager). If the user puts it somewhere else, it is a mistake!
  • The @PostConstruct annotation may only appear on one method in a class. If the user has two @PostConstruct methods, it is an error.
  • The @Basic annotation only makes sense for @Entity classes and @Embeddable classes. If it appears on a session bean, it is an error.
  • The @Stateful annotation may only appear on classes which implement Serializable.
  • The @Lob annotation may only appear on fields or getter methods of type String or byte[]
  • @Stateless and @Stateful can't appear together on the same class
  • Etcetera...

In each of these examples, a programming error could be caught at compile time instead of runtime.

Actually, the lack of a proper constraint language in the current release of Java is already starting to lead some people down the wrong path. The Java EE 5 draft uses an annotation called @Resource for dependency injection of all kinds of diverse things, many of which are not resources in the usual sense of the word. Some resources require extra information such as authenticationType or mappedName, information which is not even meaningful for other types of resources. So the @Resource annotation is turning into a bag of unrelated stuff, most of which is irrelevant to any given type of resource. This is a construct with extremely weak semantics, and extremely low cohesion. It gets more complex, and less cohesive, each time we discover a new kind of resource. It's the annotation equivalent of a class called Resource with methods like sendJmsMessage(), executeSqlQuery() and listInbox().

If we had a proper constraint facility for Java annotations, the Java EE 5 group would have realized that the @Resource annotation needed to be split into several annotations, each of which was constrained to apply only where it was relevant.

Instead of this:

@Resource(authenticationType=APPLICATION) Connection bookingDatabase;

We would have ended up with this:

@Inject @AuthenticationType(APPLICATION) Connection bookingDatabase;

This uses finer-grained, more semantic, more cohesive annotations. @AuthenticationType would be constrained to apply only to resource types for which it makes sense. Notice that the @Inject annotation, being less specific to the anticipated kinds of resources, is actually more reusable by future expert groups who discover new uses for dependency injection.

Let's hope that truly validatable annotations are a feature of Java SE 6.

A second problem that has regularly bothered me is that JSR-175 defines no standard way to override annotations specified in the Java code via some well-defined external metadata format, and have the overridden values accessible in a uniform way via the reflection API. This forces framework developers to define their own languages for overriding annotations, and their own facilities for parsing the metadata, merging values and exposing the merged values to the application. Why is this important?

Well, it is very often useful for other frameworks, or even the application itself, to consume annotations provided by a framework such as EJB3 or Seam. Well-designed annotations express information about the semantics of a component and its role in the system that is useful to other generic code that is built with an awareness of the component model. For example, the @Entity annotation is certainly of interest to aspects other than persistence! (Seam uses it, for one.) But the EJB3 specification provides no straightforward, foolproof way for the application to be able to tell if a class is an entity bean, since the EJB3 container considers both classes annotated @Entity and classes mentioned in the deployment descriptor. Growing an entire API for exposing the merged EJB3 metadata would keep the expert group busy for months.
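
As a hedged sketch of what consuming such an annotation looks like, generic code can check for @Entity via reflection - which, as just noted, misses classes that are declared as entities only in the deployment descriptor:

import javax.persistence.Entity;

public class EntityChecker {

    // Sees only classes annotated with @Entity; entities declared solely in
    // XML metadata are invisible to this check.
    public static boolean isEntityClass(Class<?> candidate) {
        return candidate.isAnnotationPresent(Entity.class);
    }
}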

However, in my view, the importance of annotation overriding has been overestimated. JSR-175 was badly named. The word metadata is highly overloaded and so some people have taken Java annotations to be a facility for expressing system /configuration/. In fact, annotations are clearly a terrible place to express configuration! Annotations, used well, enable /declarative programming/, which is a quite different thing. My prediction is that we will discover that in practice people will use annotation overriding much less often than they expect.

For example, contrary to certain critiques of EJB3 that have been published in blogs and elsewhere, /nobody is suggesting that you should configure datasources or JMS queues using annotations!/ Rather, an annotation would provide a /logical name/ of a datasource which would be configured elsewhere using XML. There is no reason for a logical name to change between different deployments of the system. A second example comes from the world of ORM. There is almost /never/ a good reason for table and column names to change between different deployments of a system. (If this were really necessary, it would be virtually impossible to build systems using handwritten SQL!) So there is no reason to make your code harder to understand by splitting mapping information out of the definition of the entity bean. On the other hand, schema and catalog names /do/ change, and hence do not belong in annotations.
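
A hedged sketch of the second example, with hypothetical entity and column names: the physical mapping stays with the code, while deployment-specific details such as schema and catalog stay out of it.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Table and column names rarely change between deployments, so they can live
// in the source. No schema or catalog is specified here; that kind of
// deployment-specific information belongs in external configuration.
@Entity
@Table(name = "ITEM")
public class Item {

    @Id
    @Column(name = "ITEM_ID")
    private Long id;

    @Column(name = "ITEM_NAME", nullable = false)
    private String name;
}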

So, while I do hope that a future revision of Java SE will provide a standard annotation overriding facility, I can probably just live without it. My intuition is that, by nature, most of the information that does change between different deployments of the system is of much less interest to the application or additional aspects.

21. Nov 2005, 09:01 CET, by Gavin King

Don't miss the Seam webinar on December 7 at 1PM EST. Enrol here.

For people based in Melbourne or Atlanta, I'll be speaking at EJV on December 15, and at AJUG on January 16.

Earlier today I saw a transaction question targeted at a completely different audience pop up as the first headline news item on a well-known Java news site. Besides giving me and my colleagues a good laugh about bugs and transactions, it also touched upon one of the questions that has won me a couple of free beers in bar bets and been mind-boggling for students during trainings. The question relates to the following (simplified) code:

Connection con = DriverManager.getConnection(url, name, pwd);
con.setAutoCommit(false);
PreparedStatement st = con.prepareStatement("delete VITAL_DATA");
st.executeUpdate();
con.close();

Assuming that VITAL_DATA contains data before we execute this code, will it still contain that data after the call to con.close()?

Yes or No ?

The answer is: It depends!

If this code is executed against an Oracle database, VITAL_DATA will no longer contain any data, since Oracle implicitly calls commit() if your connection has leftover changes when it is closed.

It is about here that people start arguing with me and saying I'm crazy! No way that is possible, because all developers who believe in the goodness of transactions and their ACID properties would state that nothing should be committed to a database without an explicit call to commit() when not running in auto-commit mode - anything else would be a sin.

Well, I guess Oracle likes to be sinful, and it is even documented.

/Page 3-14 of the Oracle9i JDBC Developer's Guide and Reference/ contains the following text:

If auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes,
then an implicit COMMIT operation is executed.

I heard from an Oracle tech guy that this behavior is a leftover from how the old OCI library worked - whether that is true or not I don't know, but it sure is a surprise for most developers I have shown this to (including myself the first time I bumped into this issue).

After discovering this a couple of years back I went to look in the JDBC spec to see who is to blame for this behavior.

The fun part is that I have not been able to find anything about the behavior of close() in JDBC besides the following text from the /JDBC 4.0/ spec:

When auto-commit is disabled, each transaction must be explicitly committed by
calling the Connection method commit or explicitly rolled back by calling the
Connection method rollback, respectively.

The javadoc for close() states:

Releases this Connection object's database and JDBC resources
immediately instead of waiting for them to be automatically released

As a naive, ACID-believing person I would say Oracle is wrong on this, but notice how the specification only mentions how the transaction behaves? It does not explicitly state that close() is not allowed to commit the data, only that it should release resources (which it does!).

Thus from my perspective Oracle is walking on the edge here, but apparently without breaking the spec. Note that this might also occur with other databases, but it has never happened to me on the other major databases I have worked with.

Lessons learned? Always explicitly (or declaratively) commit or roll back your transactions!
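
To make that concrete, a minimal sketch of the same code with the decision made explicitly before close():

Connection con = DriverManager.getConnection(url, name, pwd);
try {
    con.setAutoCommit(false);
    PreparedStatement st = con.prepareStatement("delete VITAL_DATA");
    st.executeUpdate();
    // Decide explicitly - don't leave it to the driver's close() behavior
    con.rollback();   // or con.commit(), if losing VITAL_DATA really is the intent
} finally {
    con.close();
}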
