Published: February 15th, 2008

HiveMind Utilities
General utility modules for HiveMind

Quick Start

This little guide will help you find your way through HiveMind Utilities setup and usage.

If you have problems, refer to the FAQ first, then you can ask on HiveMind Utilities forums.

  1. Installation
  2. Examples
  3. HiveMind Utilities Modules
  4. Configuration for JDBC DataSource
  5. Configuration for Hibernate 3 Session
  6. Configuration for iBATIS SqlMapClient
  7. Configuration of transactions handling
  8. AdapterBuilderFactory usage
  9. Mapping SQLExceptions to more specific runtime exceptions
  10. Externalizing properties outside the war file
  11. Dependency Injection in POJOs
  12. Exporting a service as a Web Service
  13. Defining and using Event Channels
  14. Where to go next?


Build HiveMind Utilities

First of all, you have to install the el4ant build system (I suggest you install it on a directory named "hivemind-dev", as I did).

Add a new entry to your PATH environment variable to point to hivemind-dev/ant/bin. Make sure that you have JDK 5 in your path too.

Then you have to extract the HiveMind Utilities source distribution under this directory (it will automatically be named hivemind-utilities).

Make sure you have all the required libraries in hivemind-dev/hivemind-utilities/lib (check out the jarslist.txt file in this directory).

From a shell (Unix) or a Command prompt (Windows), change to hivemind-dev/hivemind-utilities directory and type:

  • ant -f bootstrap.xml [only at first time install]
  • ant compile jars create.wars

This will build all HiveMind Utilities modules and application samples. Jars can be found in hivemind-dev/hivemind-utilities/dist/lib, and wars in hivemind-dev/hivemind-utilities/dist/j2ee.

If you want to have a list of all available ant targets, just type ant -p. Many targets are provided by the el4ant build system, some are provided by HiveMind Utilities plugins for el4ant. For more info on "basic" el4ant targets, please refer to el4ant documentation.

Build HiveMind Utilities using Eclipse

Thanks to the el4ant build system, you now can generate Eclipse setup for all HiveMind Utilities modules out of the box!

As soon as you have built HiveMind Utilities once (as described previously), you will be able to use Eclipse to work with HiveMind Utilities samples or even modules. For this, you will need to follow the simple steps below:

  1. Launch Eclipse ;-)
  2. Set the workspace directory (menu File->Switch Workspace...) to hivemind-dev/workspace
  3. Import all HiveMind Utilities modules: menu File->Import..., choose "Existing Projects into Workspace", then "Select root directory", "Browse...", and select the hivemind-dev/hivemind-utilities directory. All HiveMind Utilities modules and samples will be listed and selected by default; deselect the hivetranse.itest.db module (which is not a real source module), then click "Finish"
  4. That's all!

If you want to use CheckClipse to run CheckStyle controls on the HiveMind Utilities source code, you'll additionally have to:

  1. Install the CheckClipse 2.x plugin in your Eclipse plugins directory
  2. Setup CheckClipse as described in the el4ant documentation

Note: due to some bugs in el4ant Eclipse support, you'll have to manually disable CheckClipse controls for sample, utest and itest modules (CheckStyle control is disabled for these modules under el4ant build system).

Disclaimer: I use Eclipse 3.2 with CheckClipse 2.1. I did not test the steps above with other versions.

Build your own projects

The only thing to do is to have all necessary libraries (as defined in hivemind-dev/hivemind-utilities/lib/readme.txt) accessible to your build system during compilation and war creation.


Examples

The HiveMind Utilities packages include a few simple examples (web applications) showing how to use the various HiveMind Utilities modules.

These examples need a database to work with. They have been tested with MySQL 4 but should work with almost any SQL DBMS. SQL instructions for creating the database can be found in hivemind-dev/hivemind-utilities/hivetranse/itest.db/sql; this directory contains subdirectories for each DBMS (currently only mysql and postgresql).

To create schemas for another DBMS, you may perform the following steps:

  1. Create a new subdirectory with your DBMS name (eg oracle, sqlserver...) in hivemind-dev/hivemind-utilities/hivetranse/itest.db/sql
  2. Create 4 sql files named as follows (you can use those existing for mysql or postgresql as examples):
    • create-db.sql
    • create-schema.sql
    • drop-db.sql
    • insert-data.sql
  3. In hivemind-dev/hivemind-utilities/etc, create a new file named after your DBMS (where "dbms" is the name you used for the directory in step 1); you can use one of the existing files as an example
  4. From a shell (Unix) or a Command prompt (Windows), change to hivemind-dev/hivemind-utilities directory and type (note: "dbms" is the same as in previous step):

    ant sql.execall -Dsql.db=dbms

All web examples are based on Struts (controller) and Velocity (view) and have been tested on Jakarta Tomcat 5.0.

If you don't like Struts or Velocity, you can freely adapt these examples to your preferred environment. This should be a simple task, because most of the interesting parts of using HiveMind Utilities are not in the web layer but in the modules themselves; in particular, the various hivemodule.xml files are of interest.

HiveMind Utilities Modules

The HiveMind Utilities project provides several HiveMind modules in addition to sample application modules.

  • hiveutils: provides several utility classes used by other modules, a few utility classes for web applications, plus some services for end-developers (AdapterBuilderFactory, PropertyFileSymbolSource, ObjectBuilder...) This module is independent of any other module, but almost all other modules depend on it.
  • hivetranse.exceptions: simple jar containing exceptions used by hivetranse.core. This allows developers writing rich clients to include these exceptions in the packaged client executable (without having to include hivetranse.core if it is not required).
  • hivetranse.core: core basis for HiveTranse. It defines the TransactionService and the TransactionInterceptor. It depends on hiveutils and hivetranse.exceptions.
  • hivetranse.jdbc: provides the TransactionService implementation specific to JDBC DataSources. It depends on hivetranse.core and requires a pool of DataSources (such as Jakarta commons-dbcp).
  • hivetranse.hibernate3: provides the TransactionService implementation specific to Hibernate3 Sessions. It depends on hivetranse.core and requires all Hibernate 3.1 libraries.
  • hivetranse.ibatis: provides a factory service for iBATIS SqlMaps V2 support. It depends on hivetranse.jdbc and requires iBATIS SqlMaps libraries (NB: it does not use iBATIS DAO framework).
  • hiveevents: provides a generic and complete framework for managing event notification inside the JVM (i.e., it does not compete with JMS). It is based on the central concept of event Channels, which allow external components to push events to a Channel, or to subscribe to it (in either push or pull mode) to be notified whenever events occur. It also provides event filtering, including a Filter based on an easy expression language.
  • hivelock.core, hivelock.shared, hivelock.default: provide a simple framework for managing security in HiveMind-based applications. Authentication and authorization are supported. A new HiveMind ServiceModel, named "user", is provided to handle services with state related to the current user.
  • hiveremoting.caucho: simple framework to "export" any service to the outside world through a remoting protocol (currently, Caucho's hessian and burlap over http are supported). The framework also includes a special factory to access such a remote service from the client side (enables 2 Hivemind-based systems to communicate with each other).
  • hivegui: provides several utilities to help you create Swing-based rich client applications that support docking. Among utilities are: tables and table models handling, menus creation, a command framework, dialog handling, message boxes management...
  • jdbc.example: sample module implementing a simple web application demonstrating the use of hivetranse.jdbc module (on which it depends of course). It is based on struts.
  • lock.example: same web application sample as before, but additionally demonstrating the use of hivelock modules.
  • hibernate3.example: sample module implementing a simple web application demonstrating the use of hivetranse.hibernate3 module (on which it depends of course). It is based on struts. It also demonstrates usage of the AdapterBuilderFactory and the "Open Session in View" pattern.
  • ibatis.example: sample module implementing a simple web application demonstrating the use of hivetranse.ibatis module (on which it depends of course). It is based on struts.
  • caucho.example: sample module implementing a simple web service application demonstrating the use of hiveremoting.caucho module. This module also includes a very simple client application.

Configuration for JDBC DataSource

Let's suppose you need to create a DAO service (MyDAO in the example) that needs access to your database (named test). We will suppose that you use MySQL and its JDBC driver.

To do so, you first need to create a DataSource service as follows (we suppose you use jakarta commons-dbcp to create a pool of DataSources):

<service-point id="MyDataSource" interface="javax.sql.DataSource">
    <invoke-factory model="singleton">
        <construct class="org.apache.commons.dbcp.BasicDataSource">
            <set property="driverClassName" value="com.mysql.jdbc.Driver"/>
            <set property="url" value="jdbc:mysql://localhost/test"/>
            <set property="username" value="root"/>
            <set property="password" value="root"/>
            <set property="defaultAutoCommit" value="false"/>
            <set property="maxActive" value="10"/>
            <set property="initialSize" value="5"/>
        </construct>
    </invoke-factory>
</service-point>

This configuration is completely independent of HiveTranse; you could replace commons-dbcp with another library providing pooled DataSources, but of course the arguments passed to the construct tag would probably differ a lot. Here, it is up to you to pass the right parameters to the DataSource service (JDBC driver, DB URL, username, password...)

Then you need to declare a Connection service that will later be injected into your DAO service:

<service-point id="MyConnection" interface="java.sql.Connection">
    <invoke-factory service-id="hivetranse.jdbc.ConnectionFactory" model="singleton">
        <datasource id="MyDataSource"/>
    </invoke-factory>
</service-point>

Finally you can declare your DAO service to be injected with the Connection. You would add this to your hivemodule.xml module descriptor:

<service-point id="MyDAO" interface="com.acme.MyDAO">
    <invoke-factory model="singleton">
        <construct class="com.acme.MyDAOImpl">
            <service>MyConnection</service>
        </construct>
    </invoke-factory>
</service-point>

Any method of your DAO service can now use the JDBC Connection that was injected into it at construction time. HiveTranse takes care of making sure the Connection is correctly initialized and that your methods are executed in a valid transaction context.
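To make this concrete, here is a hedged sketch of what such a DAO implementation could look like; MyDAO, MyDAOImpl, selectAccountNames and the accounts table are hypothetical names for illustration, not part of HiveMind Utilities:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Hypothetical DAO: interface and implementation names are illustrative.
interface MyDAO {
    List<String> selectAccountNames() throws SQLException;
}

class MyDAOImpl implements MyDAO {
    private final Connection connection; // injected by HiveMind at construction time

    MyDAOImpl(Connection connection) {
        this.connection = connection;
    }

    public List<String> selectAccountNames() throws SQLException {
        // No transaction demarcation here: HiveTranse manages it around the call.
        List<String> names = new ArrayList<String>();
        Statement stmt = connection.createStatement();
        try {
            ResultSet rs = stmt.executeQuery("SELECT name FROM accounts");
            while (rs.next()) {
                names.add(rs.getString(1));
            }
        } finally {
            stmt.close();
        }
        return names;
    }
}
```

The service implementation simply holds the injected Connection; it never opens, commits, or closes it itself.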

Examples of configuration for transaction contexts are shown in the "Configuration of transactions handling" section below.

Configuration for Hibernate 3 Session

Let's suppose you need to create a DAO service (MyDAO in the example) that needs access to your database (named test). You want your DAO to work with a Hibernate Session. We take it for granted that you have already a configuration file ready for Hibernate (hibernate-config.xml in the example).

First of all, you need to declare a Session service that will later be injected into your DAO service:

<service-point id="MySession" interface="org.hibernate.Session">
    <invoke-factory service-id="hivetranse.hibernate3.SessionFactory" model="singleton">
        <config file="hibernate-config.xml">
            <property name="hibernate3.connection.password" value="${password}"/>
        </config>
    </invoke-factory>
</service-point>

Some properties can be "externalized" out of hibernate-config.xml and declared in <property> tags instead; this lets you use a HiveMind SymbolSource to substitute their values, as in the example above.

Then you have to declare your DAO service to be injected with the Session. You would add this to your hivemodule.xml module descriptor:

<service-point id="MyDAO" interface="com.acme.MyDAO">
    <invoke-factory model="singleton">
        <construct class="com.acme.MyDAOImpl">
            <service>MySession</service>
        </construct>
    </invoke-factory>
</service-point>

Any method of your DAO service can now use the Hibernate Session that was injected into it at construction time. HiveTranse takes care of making sure the Session is correctly initialized and that your methods are executed in a valid transaction context.

Examples of configuration for transaction contexts are shown in the "Configuration of transactions handling" section below.

If you want to use the "Open Session in View" pattern with your Hibernate Session(s), hivetranse.hibernate3 supports it. However, this feature is disabled by default. To enable it, you just need to declare the following in your hivemodule.xml configuration:

<contribution configuration-id="hivemind.ApplicationDefaults">
    <default symbol="hivetranse.hibernate3.DeferSessionClose" value="true"/>
</contribution>

With this setting, you will be able to use Hibernate lazy-loading capabilities and safely dereference lazy-loaded collections in your JSP pages without catching a LazyInitializationException at that time.

If needed, you can define a Hibernate Interceptor (org.hibernate.Interceptor) to be used by all the Sessions of one given SessionFactory. Your interceptor may be any object (instance, service...). Typically, this is done as in the following snippet:

<service-point id="MySession" interface="org.hibernate.Session">
    <invoke-factory service-id="hivetranse.hibernate3.SessionFactory" model="singleton">
        <config file="hibernate-config.xml" ... />
    </invoke-factory>
</service-point>

The hibernate3.example module is one simple example where the interceptor is used only for logging all calls to it by Hibernate.

If you want to use Hibernate Annotations, then put the corresponding jar in the classpath and that's all! HiveTranse offers transparent support for Hibernate Annotations.

Configuration for iBATIS SqlMapClient

Let's suppose you need to create a DAO service (MyDAO in the example) that needs access to your database, and you want to use iBATIS SqlMaps to access it.

First of all, you need to create a Connection using hivetranse.jdbc, as described in the "Configuration for JDBC DataSource" section above. We will suppose you have already set up a Connection service named "MyConnection".

Then you need to declare a SqlMapClient service that will later be injected into your DAO service:

<service-point id="MySqlMap" interface="com.ibatis.sqlmap.client.SqlMapClient">
    <invoke-factory service-id="hivetranse.ibatis.SqlMapClientFactory" model="singleton">
        <sqlmap config="sqlmap-config.xml" connection="MyConnection"/>
    </invoke-factory>
</service-point>

The "sqlmap-config.xml" contains your iBATIS SqlMap configuration. Please note that it should not contain a <transactionManager> tag declaration.

Finally you can declare your DAO service to be injected with the SqlMapClient. You would add this to your hivemodule.xml module descriptor:

<service-point id="MyDAO" interface="com.acme.MyDAO">
    <invoke-factory model="singleton">
        <construct class="com.acme.MyDAOImpl">
            <service>MySqlMap</service>
        </construct>
    </invoke-factory>
</service-point>

Any method of your DAO service can now use the SqlMapClient that was injected into it at construction time. HiveTranse takes care of making sure the SqlMapClient is correctly initialized and that your methods are executed in a valid transaction context.

Please note that you should not use any SqlMapClient method that deals with transaction demarcation (eg startTransaction...): transaction handling is taken care of by HiveTranse.

Examples of configuration for transaction contexts are shown in the "Configuration of transactions handling" section below.

Configuration of transactions handling

For HiveTranse system to work correctly and transparently, you need to make sure that the HiveTranse TransactionInterceptor has been executed before using a JDBC Connection or a Hibernate Session.

The TransactionInterceptor, applied to one of your services, lets you configure a "Transaction Demarcation" for the duration of each method call on that service (more information can be found in the javadoc). Please note that you do not have to declare a TransactionInterceptor on every service in the call chain.

In addition to the Transaction Demarcation, a TransactionInterceptor also lets you declare how a transaction should be terminated (committed or rolled back) when an exception is thrown and propagates past the transaction demarcation boundary.

One configuration example follows; it applies in the same way whether you use a JDBC Connection or a Hibernate Session.

<service-point id="MyService" interface="com.acme.MyService">
    <invoke-factory model="singleton">
        <construct class="com.acme.MyServiceImpl"/>
    </invoke-factory>
    <interceptor service-id="hivetranse.core.TransactionInterceptor">
        <method pattern="doSomething" demarcation="RequiresNew"/>
        <method pattern="*" demarcation="Required"/>
        <exception name="java.lang.RuntimeException" rollback="true"/>
        <exception name="java.lang.Exception" rollback="false"/>
    </interceptor>
</service-point>

In this example, all methods in MyService always have a transaction ready to use when they are called. The doSomething method always starts a new transaction when it is called, whereas other methods reuse any existing transaction (or create one if none exists yet). In addition, if a method throws a RuntimeException (or any subclass), the current transaction is rolled back (or, more exactly, marked for later rollback, depending on the subsequent transaction demarcations in the call context). If a method throws any checked exception, the transaction is not rolled back. This behavior is compliant with standard EJB behavior.
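The exception-handling rules above are resolved in declaration order: the first <exception> declaration whose class matches the thrown exception wins. This can be sketched in plain Java (RollbackRules is a hypothetical helper for illustration, not HiveTranse code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper, not HiveTranse source code: illustrates how rollback
// rules are resolved -- declarations are tried in order, first match wins.
class RollbackRules {
    private final Map<Class<?>, Boolean> rules = new LinkedHashMap<Class<?>, Boolean>();

    RollbackRules rule(Class<?> exceptionClass, boolean rollback) {
        rules.put(exceptionClass, rollback);
        return this;
    }

    boolean shouldRollback(Throwable thrown) {
        for (Map.Entry<Class<?>, Boolean> entry : rules.entrySet()) {
            if (entry.getKey().isInstance(thrown)) {
                return entry.getValue();
            }
        }
        return false; // no matching rule: leave the transaction as is
    }

    public static void main(String[] args) {
        RollbackRules rules = new RollbackRules()
                .rule(RuntimeException.class, true)  // rollback="true"
                .rule(Exception.class, false);       // rollback="false"
        System.out.println(rules.shouldRollback(new IllegalStateException())); // true
        System.out.println(rules.shouldRollback(new java.io.IOException()));   // false
    }
}
```

An IllegalStateException matches the RuntimeException rule first and triggers rollback; an IOException only matches the broader Exception rule, so the transaction is committed.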

But HiveTranse is not limited to EJB behavior!

You can define different behavior as you wish. You can even define a default behavior (to avoid always defining the same configuration in each TransactionInterceptor declaration). The following xml excerpt shows an example:

<contribution configuration-id="hivetranse.core.TransactionDefaults">
    <exception name="java.lang.Throwable" rollback="true"/>
    <method pattern="*" demarcation="Never"/>
</contribution>

In this example, we prefer to declare that any Throwable should rollback the current transaction. This configuration also declares the default transaction demarcation to be "Never".

Note that concepts of transaction demarcations in HiveTranse are taken from the Sun EJB specifications.

Exceptions Wrapping

Prior to hivetranse 0.6.1, any exception occurring during any call to the TransactionService (including calls performed by the TransactionInterceptor outside of your services) was systematically wrapped inside a TransactionException. Although this makes sense when a checked Exception is thrown (e.g. when committing a JDBC transaction, an SQLException may be thrown), it does not necessarily look natural if a RuntimeException was thrown (this can happen when using Hibernate 3 for instance).

So starting with hivetranse 0.6.1, you can set an option to disable systematic wrapping of RuntimeException by TransactionService (by default, wrapping is enabled) as follows:

<contribution configuration-id="hivemind.ApplicationDefaults">
    <default symbol="hivetranse.core.WrapRuntimeExceptions" value="false"/>
</contribution>

AdapterBuilderFactory usage

The AdapterBuilderFactory is a special factory that lets you use a service implementation class that does not actually implement the service interface. It is particularly useful in the following situations:

  • you have a legacy class that you would like to use as a HiveMind service, but this class does not implement any interface. HiveMind BuilderFactory prevents you from using this class as a service implementation. Now you can declare an interface with all public methods you want to use from the legacy class (possibly all of them) and then define your service implementation in HiveMind through the AdapterBuilderFactory.
  • you have a legacy class that implements an interface, but whose methods all throw checked exceptions that you do not consider recoverable and that would thus be better unchecked. You can declare an equivalent interface without the offending throws clauses and then define your service implementation in HiveMind through the AdapterBuilderFactory.

AdapterBuilderFactory takes exactly the same arguments as HiveMind BuilderFactory plus additional arguments to declare how thrown exceptions should be translated into different exceptions (if necessary).

The following xml snippet shows an example:

<service-point id="MyLegacyService" interface="example.AnotherInterface">
    <invoke-factory service-id="hiveutils.AdapterBuilderFactory">
        <construct class="example.MyLegacyClass"/>
        <exception-mapping from="example.LegacyCheckedException"
                           to="java.lang.RuntimeException"/>
    </invoke-factory>
</service-point>

In the above example, example.AnotherInterface is a new interface, independent from example.MyLegacyClass. The example above is used to "convert" all LegacyCheckedExceptions into less cumbersome Java RuntimeExceptions. It is possible to define several <exception-mapping> tags for different mappings based on the original exception type; mappings are tried in the order they are defined.

Another convenient usage is foreseeable for DAO implementations. Indeed, most libraries you can use in your DAO implementations to manage persistence of your business objects (JDBC, iBATIS...) have methods that are declared to throw checked exceptions (SQLException...). Obviously you do not want your DAO interfaces to declare methods that throw such exceptions. On the other hand, it is tedious to copy/paste boilerplate try/catch/rethrow code in every DAO implementation just to turn the checked exceptions into unchecked ones. Instead, you can declare your DAO interface with no checked exceptions, write DAO implementations whose methods directly throw the checked exceptions of the persistence library you use, and then simply use AdapterBuilderFactory to build your DAO implementation.
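Conceptually, what such an adapter does can be sketched with a JDK dynamic proxy. This is only an illustration of the adapter/exception-translation idea, under the assumption that delegation is done by matching method signatures; it is not the actual AdapterBuilderFactory implementation, and all names are made up:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Conceptual illustration only -- NOT the AdapterBuilderFactory source code.
class AdapterSketch {
    // The new interface: no throws clause.
    interface Greeter {
        String greet(String name);
    }

    // The legacy class: does not implement Greeter and throws a checked exception.
    static class LegacyGreeter {
        public String greet(String name) throws Exception {
            if (name == null) {
                throw new Exception("no name given");
            }
            return "Hello " + name;
        }
    }

    // Build an adapter: delegate each interface call to the same-signature
    // method of the legacy object, wrapping checked exceptions as unchecked.
    @SuppressWarnings("unchecked")
    static <T> T adapt(final Object target, Class<T> iface) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                Method real = target.getClass().getMethod(method.getName(), method.getParameterTypes());
                try {
                    return real.invoke(target, args);
                } catch (InvocationTargetException e) {
                    Throwable cause = e.getCause();
                    if (cause instanceof RuntimeException) {
                        throw cause; // already unchecked: rethrow as is
                    }
                    throw new RuntimeException(cause); // "exception mapping"
                }
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        Greeter greeter = adapt(new LegacyGreeter(), Greeter.class);
        System.out.println(greeter.greet("world")); // Hello world
        try {
            greeter.greet(null); // checked Exception comes back as RuntimeException
        } catch (RuntimeException e) {
            System.out.println("translated: " + e.getCause().getMessage());
        }
    }
}
```

The caller only ever sees the Greeter interface and unchecked exceptions, even though the legacy class implements neither.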

Mapping SQLExceptions to more specific runtime exceptions

When using JDBC (directly or indirectly: this is true also for iBATIS), it is always a nightmare to deal with the infamous SQLException that can be raised. The problems are:

  • SQLException is a checked Exception so you have to catch it somewhere (generally in your DAOs)
  • There are many different reasons for a database access to fail: the DBMS is not running, the SQL syntax is incorrect, a row you tried to insert already exists... Whatever happens, you always catch a single SQLException, where the only way to differentiate the cause is to analyse the ErrorCode or the SQLState. The problem gets even more complex when you consider that those codes are not standardized; rather, every DBMS has its own set of codes!

That is why the hivetranse.core module has introduced (in version 0.4.2) a new hierarchy of DataAccessExceptions (all are RuntimeExceptions so you do not have to catch them if you don't need to) and new SQLExceptionMapper services to translate any SQLException into the specific DataAccessException.

Using SQLExceptionMapper along with AdapterBuilderFactory allows you to create DAOs that directly call JDBC (or iBATIS) methods and never catch any SQLException, and whose callers never receive an SQLException either, but a DataAccessException (or a subclass) instead.

To do this, you can proceed as follows (the description is based on iBATIS usage but is easily adapted to JDBC).

First of all, let's suppose you defined a DAO service interface as follows:

public interface MyDAO {
    public List selectMyObjects();
}

Here is how you can implement it with iBATIS:

public class MyDAOImpl {
    private SqlMapClient sqlMapClient; // injected at construction time
    public List selectMyObjects() throws SQLException {
        return sqlMapClient.queryForList("SelectAllMyObjects", null);
    }
}

Now you can declare the HiveMind service for MyDAO:

<service-point id="MyDAO" interface="com.acme.MyDAO">
    <invoke-factory service-id="hiveutils.AdapterBuilderFactory">
        <exception-mapper mapper="service:hivetranse.core.exceptions.MySQLMapper"/>
        <construct class="com.acme.MyDAOImpl"/>
    </invoke-factory>
</service-point>

Please note the reference to the hivetranse.core.exceptions.MySQLMapper service. This service does the mapping of the SQLExceptions into specialized DataAccessExceptions, based on the codes returned by MySQL 4.1 DBMS. If you were using PostgreSQL 8.0, then you would use hivetranse.core.exceptions.PostgreSQLMapper instead.

At the time of writing, MySQL, PostgreSQL, HSQLDB and Derby are supported, but support for other DBMS is easy to add (provided that you know exactly the meaning of the error codes for your DBMS). In the future, HiveTranse should support more DBMS, but for this the HiveMind Utilities count on you! If you have good knowledge of one DBMS and you think you can help in this area, take a look at the sql-exceptions.xml file in hivetranse.core. Please note that special integration test cases have been developed that can easily be extended to check SQLExceptionMappers for new DBMS.
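Conceptually, a SQLExceptionMapper does something like the following sketch. The exception classes here are simplified stand-ins for the real DataAccessException hierarchy, and a real mapper also inspects vendor-specific error codes; only the standard SQLState class "23" (integrity constraint violation) is shown:

```java
import java.sql.SQLException;

// Conceptual sketch only -- not the hivetranse.core implementation.
class SqlStateMapperSketch {
    static class DataAccessException extends RuntimeException {
        DataAccessException(SQLException cause) { super(cause); }
    }
    static class DataIntegrityException extends DataAccessException {
        DataIntegrityException(SQLException cause) { super(cause); }
    }

    // Translate a raw SQLException into a more specific runtime exception.
    static DataAccessException map(SQLException e) {
        String state = e.getSQLState();
        if (state != null && state.startsWith("23")) {
            // Standard SQLState class 23 = integrity constraint violation
            // (e.g. duplicate key, foreign key violation).
            return new DataIntegrityException(e);
        }
        return new DataAccessException(e);
    }

    public static void main(String[] args) {
        System.out.println(map(new SQLException("duplicate", "23000")).getClass().getSimpleName());
    }
}
```

The original SQLException is kept as the cause, so no diagnostic information is lost even though callers only see unchecked exceptions.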

Now you may wonder: "Why does MyDAOImpl not implement the MyDAO interface? How is that possible?" You are right, this is weird. Simply stated, MyDAOImpl cannot implement MyDAO because MyDAO defines methods that throw no checked exceptions, whereas MyDAOImpl throws checked exceptions (the SQLExceptions thrown by iBATIS, and by any JDBC API call). This strange design is necessary so that DAO callers do not have to catch SQLException (which would never occur anyway, since the AdapterBuilderFactory translates them into DataAccessExceptions).

Then you may ask: "What happens if MyDAOImpl does not implement all methods of MyDAO?" Actually, two things can happen:

  • a warning will be logged by the AdapterBuilderFactory to let you know about it.
  • if a service calls an interface method that has no implementation, it will receive an exception (thrown by the AdapterBuilderFactory).

Finally, you may say: "I don't like this design at all. I want a better design." Having thought about this problem, a potential solution (which I don't like much, in fact) would consist in defining two interfaces, MyDAO1 and MyDAO2: MyDAO1 would declare all its methods to throw SQLException, while MyDAO2 would derive from MyDAO1 and redeclare all of MyDAO1's methods without the throws clause. MyDAOImpl would then implement MyDAO1, while your DAO service would be declared as implementing MyDAO2.
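That alternative design can be sketched as follows (names are illustrative). It is legal Java because a subinterface may redeclare an inherited method with fewer checked exceptions:

```java
import java.sql.SQLException;
import java.util.List;

// Sketch of the alternative two-interface design discussed above.
interface MyDAO1 {
    List selectMyObjects() throws SQLException;
}

// Redeclares the method without the throws clause: callers coding against
// MyDAO2 never have to catch SQLException.
interface MyDAO2 extends MyDAO1 {
    List selectMyObjects();
}
```

The implementation class would implement MyDAO1 (and freely throw SQLException), while the HiveMind service would be declared with the MyDAO2 interface.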

Externalizing properties outside the war file

It is not uncommon to need to externalize some properties (like the database url, user or password) outside of a war file, so that the war can be deployed to different environments without being rebuilt.

HiveMind already provides SymbolSource to put some properties outside of hivemodule.xml files. However, the only SymbolSource services provided by HiveMind do not directly allow you to get properties from a property file.

The hiveutils module contains a special SymbolSource (PropertyFileSymbolSource) that does just that: it lets you define the path of a property file containing symbols to be substituted in hivemodule.xml files.

When using hiveutils module in your application, PropertyFileSymbolSource is automatically registered to HiveMind as a SymbolSource that will be used before all other registered SymbolSources.

All you need to do is add contributions to the hiveutils.PropertyFileSources configuration point, indicating which property file(s) must be used to resolve symbols.

You may just define an absolute path to your property file:

<contribution configuration-id="hiveutils.PropertyFileSources">
    <property-source file="c:/"/>
</contribution>

or you may decide that the path will be provided through a Java System property (almost every servlet container allows you to specify such properties in command line):

<contribution configuration-id="hiveutils.PropertyFileSources">
    <property-source property="mysettingspath"/>
</contribution>

If your container does not support setting system properties for a given war, or if that is not convenient for you, you can also use the SystemPropertyInitListener servlet listener in hiveutils to initialize system properties based on context-param entries defined in web.xml. Of course, you may wonder: "what is the point of defining that path in web.xml? If I want to change it, I need to rebuild the war!" That is right, in a sense. However, some servlet containers (like Jakarta Tomcat) allow you to define such parameters in a context.xml file that lives outside of the war.

To add the servlet listener to your web application, add the following to your web.xml file:
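The exact declaration was not reproduced in this document; structurally, it is a standard web.xml listener entry. The fully-qualified class name of SystemPropertyInitListener is not given here, so it is left as a placeholder (check the hiveutils javadoc for the actual package):

```xml
<listener>
    <!-- replace with the fully-qualified name of hiveutils' SystemPropertyInitListener -->
    <listener-class>...</listener-class>
</listener>
```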


With this listener installed, any context-param whose name starts with "init." will be added to the System properties (after "init." has been removed from its name). You could define this parameter in your web.xml this way:
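For instance, reusing the "init.mysettingspath" parameter name from the Tomcat example in this section (the property-file path is purely illustrative):

```xml
<context-param>
    <param-name>init.mysettingspath</param-name>
    <param-value>/path/to/mysettings.properties</param-value>
</context-param>
```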


And if you use Jakarta Tomcat, you can create a context.xml for your web application. This would look like:

<Context path="/mywebapp">
    <!-- Set specific properties (the value below is an illustrative path) -->
    <Parameter  name="init.mysettingspath"
                value="/path/to/mysettings.properties"
                override="false" />
</Context>

For more information about context.xml, please refer to Tomcat documentation.

Dependency Injection in POJOs

If you use HiveMind, you are probably convinced of the benefits of Dependency Injection.

However, one problem with HiveMind is that:

  • you cannot inject dependencies in normal POJOs (you need to define an interface)
  • defining a HiveMind service requires quite a lot of xml

When development of the hivegui module started, the need arose to define many objects (commands, dialogs, panels, tables...). Many of these objects needed to:

  • access other objects, services, or configuration points
  • be easily defined outside of Java code (for easy modification of some look & feel aspects for instance)

In addition, some objects need many instances, while others are better off as singletons that can be cached to make the application faster. Finally, some not only need access to dependencies but also take extra arguments at construction time.

The hiveutils.ObjectBuilder service is used just for that: creating (and optionally caching) objects, allowing dependency injection and optional runtime arguments to be passed as well.

All such objects must have a unique name and be defined as a contribution to the hiveutils.ObjectBuilderObjects configuration point:

<contribution configuration-id="hiveutils.ObjectBuilderObjects">
    <object name="modify-board-panel" cached="false" ... >
        <inject object="object:AccountsParticipantTable"/>
        <inject object="service:hiveboard.shared.WhiteBoardUserService"/>
    </object>
</contribution>

In the example above, an object named "modify-board-panel" is declared. This object will not be cached (ObjectBuilder will create a new one every time the object is requested). Three arguments are injected into its constructor: the first two are dependencies (another object and a HiveMind service), while the third argument must be passed explicitly to ObjectBuilder.

The following snippet shows how to get access to this object:

ObjectBuilder builder = ...;
Integer idBoard = ...;
JPanel boardPanel = (JPanel) builder.create("modify-board-panel", idBoard);

For objects that have no specific runtime arguments (only injected dependencies), it is not necessary to call ObjectBuilder to access them: these objects can also be injected into any other component defined in hivemodule.xml, thanks to the "object:" hivemind.ObjectProvider supplied by the hiveutils module. This was used in the example above, where an object named "AccountsParticipantTable" is injected into the "modify-board-panel" object.

It is worth noting that ObjectBuilder also supports setter injection, as in the next example:

<contribution configuration-id="hiveutils.ObjectBuilderObjects">
    <object name="modify-board-panel" cached="false"
            class="..."> <!-- fully-qualified class name elided -->
        <inject name="participantTable" object="object:AccountsParticipantTable"/>
        <inject name="userService" object="service:hiveboard.shared.WhiteBoardUserService"/>
    </object>
</contribution>

Exporting a service as a Web Service

hiveremoting.caucho gives you the possibility to export any of your HiveMind services as a Web Service, using the Hessian or Burlap protocol over http.

For that, you first need to install a special servlet in your war, by adding the necessary lines in web.xml (this is done only once):
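A sketch of the required web.xml declarations follows; the servlet class name and URL pattern are assumptions (they are not given in this document), so check the hiveremoting.caucho documentation for the actual values:

```xml
<!-- Sketch only: the servlet class name and url-pattern are assumed,
     not taken from the hiveremoting.caucho documentation. -->
<servlet>
    <servlet-name>CauchoRemoting</servlet-name>
    <servlet-class>net.sourceforge.hiveremoting.caucho.CauchoServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>CauchoRemoting</servlet-name>
    <!-- should match the url-path values you publish, e.g. /MyService -->
    <url-pattern>/MyService</url-pattern>
</servlet-mapping>
```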



Then let's suppose you have a service defined in your HiveMind-based application (the implementation is not shown here; there is nothing special about it):

<service-point id="SimpleService" interface="..."/> <!-- service interface elided -->

To expose it to the outside world, you just need to add a contribution to the hiveremoting.caucho.RemoteServices configuration point:

<contribution configuration-id="hiveremoting.caucho.RemoteServices">
    <publish url-path="MyService"/> <!-- other attributes (service id, protocol...) elided -->
</contribution>

Please note that you can expose the same service under two different URLs, with different protocols, if you want.
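Publishing the same service twice might look like the sketch below; note that the service-id and protocol attribute names are assumptions, not taken from the actual RemoteServices schema:

```xml
<!-- Sketch: the same service exposed under two URLs with two protocols;
     attribute names other than url-path are assumed. -->
<contribution configuration-id="hiveremoting.caucho.RemoteServices">
    <publish url-path="MyServiceHessian" service-id="SimpleService" protocol="hessian"/>
    <publish url-path="MyServiceBurlap" service-id="SimpleService" protocol="burlap"/>
</contribution>
```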

Now, for a client to access your exported service, if this client is also based on HiveMind, then you just have to declare the implementation of the service by using the hiveremoting.caucho.CauchoProxyFactory factory:

<implementation service-id="SimpleService">
    <invoke-factory service-id="hiveremoting.caucho.CauchoProxyFactory">
        <proxy url="http://localhost:8080/mywebapp/MyService"/> <!-- other attributes elided -->
    </invoke-factory>
</implementation>

hiveremoting.caucho also provides the ability to add your own serializers and deserializers to the Hessian/Burlap protocols (if you have specific classes). It also enables user authentication. For an example, you can consult the code of the HiveBoard project.

Since HiveMind Utilities 0.4.4, it is possible to use gzip compression in order to reduce the size of exchanged Caucho messages. This compression can be applied in both directions of transfer (client->server and server->client).

To achieve this, you first have to install the new net.sourceforge.hiveutils.web.util.GzipFilter servlet filter on the server side, by adding a few lines to your web.xml:
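Based on the filter class and parameter names given in the text, the web.xml addition might look like the following sketch (parameter values and URL pattern are illustrative):

```xml
<!-- Sketch: filter class and init-param names come from the text;
     values and url-pattern are illustrative. -->
<filter>
    <filter-name>GzipFilter</filter-name>
    <filter-class>net.sourceforge.hiveutils.web.util.GzipFilter</filter-class>
    <init-param>
        <param-name>gzip-threshold</param-name>
        <param-value>512</param-value>
    </init-param>
    <init-param>
        <param-name>gzip-in-buffer-size</param-name>
        <param-value>4096</param-value>
    </init-param>
    <init-param>
        <param-name>gzip-out-buffer-size</param-name>
        <param-value>4096</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>GzipFilter</filter-name>
    <url-pattern>/MyService</url-pattern>
</filter-mapping>
```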


The "gzip-threshold" parameter is the size limit (in bytes) above which output messages are gzipped (under this limit, the output is sent as is, without any compression). Please note that this size does not represent the full size of the http message but only its body (i.e. the actual Caucho protocol payload). If you set this parameter to "0", all outgoing data will be compressed regardless of its size.

GzipFilter will use gzip compression only if the http request allows it (i.e. includes an "Accept-Encoding: gzip" header).

The "gzip-in-buffer-size" and "gzip-out-buffer-size" parameters (versions 0.4.5 and above) let you define the size of the buffered input and output streams (to improve performance during gzip compression and decompression).

Then you have to enable gzip compression on the client side; for this, you just add one parameter to the previous "publish" configuration:

<contribution configuration-id="hiveremoting.caucho.RemoteServices">
    <publish url-path="MyService"
             gzip-threshold="512"/> <!-- threshold value illustrative; other attributes elided -->
</contribution>

The "gzip-threshold" parameter has the same meaning here as for GzipFilter above. No value, or any negative value, disables gzip compression. Please note that disabling gzip also means that http requests will not include the "Accept-Encoding: gzip" header, hence the server will not gzip responses either.

The "gzip-in-buffer-size" parameter (versions 0.4.5 and above) has the same meaning here as for GzipFilter above.

HiveMind Utilities 0.4.4 also adds support for secure connections through the https protocol. In order to use https for your published service, there are several actions to perform:

First you must setup your servlet container for https support (this point is out of the scope of this Quick Start document, refer to your container's documentation).

Then on the server side, setup your service to be secure:

<contribution configuration-id="hiveremoting.caucho.RemoteServices">
    <publish url-path="MyService"
             secure="true"/> <!-- attribute name assumed; other attributes elided -->
</contribution>

This will ensure that your service cannot be accessed through http (if http is used, an error will be returned to the client).

On the client side, you just need to set the correct url, using https instead of http:

<proxy url="https://localhost:8443/mywebapp/MyService"/> <!-- other attributes elided -->

When using https, you generally need a server certificate registered with a CA (Certificate Authority). You may also generate your own certificate without a CA (useful for testing environments). On the client side, hiveremoting.caucho has an option that determines whether, when connecting to a service through https, the server's certificate must be registered with a CA. The default is to be lenient (i.e. allow connection to a server whatever its certificate); you can change this option to enforce a strict check:

<contribution configuration-id="hivemind.ApplicationDefaults">
    <default symbol="hiveremoting.caucho.proxy.https-strict-certificate-check" value="true"/>
</contribution>

For more information on Hessian and Burlap please refer to the Caucho web site.

Defining and using Event Channels

Almost every application, from the simplest to the most complex, needs to notify components about events that occur on the system.

Unfortunately, in today's applications, almost everybody reinvents the wheel by creating their own framework (sometimes too simple, sometimes too complex) to manage event notification.

In order to remove this hassle from developers, hiveevents provides all that is necessary to manage event notification in a very easy and friendly way (as long as you use HiveMind of course).

hiveevents is based on the concepts of event channels. A Channel is a "tube" through which events of a given category pass, from supplier(s) to consumer(s). A Channel decouples consumers from suppliers and enables adding new suppliers or consumers dynamically and very easily.

hiveevents allows you to define as many channels as you want (based on the kind of events that may transit through them). Each Channel has a unique name and is defined by contributing to hiveevents.EventChannels configuration:

<contribution configuration-id="hiveevents.EventChannels">
    <channel name="ServerEventsChannel"/> <!-- other attributes (event type...) elided -->
</contribution>

A Channel can then be injected into a service by using the "channel:" ObjectProvider:

<service-point id="AccountRepository" interface="..."> <!-- interface elided -->
    <invoke-factory model="singleton">
        <construct class="net.sourceforge.hiveboard.model.AccountRepositoryImpl">
            <object>channel:ServerEventsChannel</object>
        </construct>
    </invoke-factory>
</service-point>

Once you have a Channel, you may supply (i.e. push) events to it, or register with it as a consumer. When you define a consumer, you may choose whether events should be pushed to it as soon as they are sent, or whether the consumer should pull them at times convenient to it. All this is easily seen in the Channel javadoc.

In addition, hiveevents introduces the notion of event Filters, so that events going through a Channel may or may not reach a consumer, based on the Filter associated with this consumer. Implementing a Filter is extremely easy, but it can be made even easier by using the predefined ConstraintFilter class, which lets you provide boolean expressions such as:

event.type.value == 1 && event.who != 0

In the sample expression above, "event" is dynamically replaced by the object in transit through the Channel, and its properties (type and who) are evaluated.
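As a plain-Java analogy (this is not the hiveevents API; the event classes below are invented purely for illustration), the sample expression behaves like the following predicate:

```java
public class FilterSketch {
    // Minimal event classes mirroring the "event.type.value" and
    // "event.who" properties of the sample expression (illustrative only).
    public static class EventType {
        public final int value;
        public EventType(int value) { this.value = value; }
    }

    public static class Event {
        public final EventType type;
        public final int who;
        public Event(int typeValue, int who) {
            this.type = new EventType(typeValue);
            this.who = who;
        }
    }

    // Equivalent of the ConstraintFilter expression
    // "event.type.value == 1 && event.who != 0"
    public static boolean accept(Event event) {
        return event.type.value == 1 && event.who != 0;
    }

    public static void main(String[] args) {
        System.out.println(accept(new Event(1, 42))); // true: type matches, who is non-zero
        System.out.println(accept(new Event(1, 0)));  // false: who is zero
    }
}
```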

The HiveBoard project makes heavy use of the possibilities of hiveevents; browsing its source code is quite instructive.

Where to go next?

  • check examples configuration (code has nothing really special)
  • check source code of HiveBoard project for effective use of most HiveMind Utilities (in particular HiveUtils and HiveGUI)
  • Hivedoc
  • Javadoc