Friday, October 7, 2011

Oracle AQ with Spring

In my current project we use a Java SEDA (staged event-driven architecture). The MOM supporting it is IBM WebSphere MQ. Our most used object is the queue, which lets us handle events asynchronously and with multiple consumers, which greatly improves scalability and robustness.

From a Java point of view the MOM implementation is really not that important, as it is accessed via the JMS API. So whether it's WebSphere MQ, JBoss MQ, ... as long as it has JMS support it's pretty transparent. We do use some minor MQ-specific extensions (PCF) to get queue depths and the like, but that is more from an operational management point of view.

The choice of MQ was made before I joined the project, probably because the other legacy subsystems have the least trouble dealing with MQ since they are already IBM based. We don't benefit a lot from its possibilities, though, since there is no QueueManager-to-QueueManager communication or the like, which is where MQ is really strong. But it has to be said that MQ is a solid and mature product with a lot of possibilities.

The downside is probably its price (especially if you under-use it) and that it requires specific MQ knowledge to operate and maintain a running instance. For example: moving messages from one queue to another natively on Solaris is not trivial if you're not into MQ administration (no, the 'MOVE' command is not supported on MQ Solaris).

Since we are using two resources most of the time, this also implies that our backends are running XA transactions to make 2PC work between our MOM and RDBMS (Oracle).
A while ago someone threw the idea on the table to switch to Oracle AQ (Advanced Queuing), which is Oracle's MOM implementation. I'm not going into comparing MQ vs AQ, but the fact is that AQ supports JMS and is a fully fledged MOM (it also has Topics, btw), so on paper it is more than enough for our usages.

A cool detail is that the JMS Connection you obtain is actually backed by a normal (JDBC) database connection. In fact, what happens is that the AQ driver uses a datasource under the hood. If you do a Queue.publish(), the AQ driver will translate that to stored procedure calls and send them through the SQL datasource you instantiated it with. This also means we could drop our XA, since we only need to enlist a single resource for both our MOM and RDBMS access.

To set this up, my first idea was to look for a resource adapter (RAR) which would enable AQ via the application server (WebSphere MQ also ships with a JEE RAR). At that point I did not know how it would handle the JDBC connection sharing if connections were made via the RAR, but anyway. I quickly found out that there is no real AQ resource adapter available for JEE servers other than Oracle AS itself (for this I was using Glassfish, btw).

There is genericjmsra, but you cannot use it "properties based", i.e. by simply entering the URI/username/password of the MOM. See here for its AQ-specific manual:


Oracle JMS client does not allow creation of ConnectionFactory, QueueConnectionFactory or
TopicConnectionFactory utilizing JavaBean approach. The factory creation is only possible through AQjmsFactory class provided in the Oracle jms client api. However fortunately, Oracle does support the JNDI lookup approach. We will be focusing on the JNDI approach for Oracle AQ and glassfish integration

This means you need an Oracle LDAP server in which some remote objects are published, which are then looked up by the RA. So sharing the same JDBC connection for relational access and AQ will certainly not be possible this way.

Fortunately you can use the AQjmsFactory (that's the main factory to which you feed a datasource and which gives you back a JMS ConnectionFactory) directly from your code, but that requires some boilerplate, as the AQjmsFactory checks that the actual connection is a direct Oracle connection.

If you are using a JDBC pool like C3P0 or Commons DBCP, it will wrap the connections (in order to suppress closes etc.) and these wrapped connections will be rejected because they are not direct instances of the Oracle connection. Thankfully a new Spring module was released at just the right time and comes to the rescue: Spring jdbc-extensions.

This is the boilerplate you want to seamlessly integrate Oracle AQ with your existing Spring-managed datasource and transactions. The extension makes sure the Oracle AQjmsFactory is given a proxy which is an instance of the Oracle connection. The proxy enables us to control what we hand to the Oracle AQ implementation.

For example, when it tries to call 'close' we will suppress the call, since we know it will be handled by the transaction manager (datasource, hibernate, JTA, ...) later on. If you're interested in this, check the source: it is the custom namespace handler for the AQ Spring XML config which creates the appropriate beans to do the boilerplate.

In this first example we create a scenario in which an event is received (Q1), a database record is inserted (T1) and a second event is published (Q2). All of this should run in one transaction, so if there is a failure at any point everything should be reverted (1 message back on Q1, no records in T1, and no messages on Q2). If everything succeeds, the message from Q1 should be processed, the record inserted and a new message published on Q2.
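The listener further down inserts the message text into a single-column table T1; the post doesn't show the DDL, so a minimal table for the example could look like this (column name and size are my assumption):

```sql
-- Hypothetical DDL for T1; the listener only needs one text column
CREATE TABLE T1 (data VARCHAR2(4000));
```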

To start, I'm going to set up the two AQ queues and their queue tables:

EXECUTE DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'Q1_T', queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
EXECUTE DBMS_AQADM.CREATE_QUEUE (Queue_name => 'Q1',  Queue_table => 'Q1_T', max_retries => 2147483647);

EXECUTE DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'Q2_T', queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
EXECUTE DBMS_AQADM.CREATE_QUEUE (Queue_name => 'Q2',  Queue_table => 'Q2_T', max_retries => 2147483647);
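One step worth mentioning: after DBMS_AQADM.CREATE_QUEUE a queue is not yet enabled for enqueue/dequeue; it has to be started before anything can consume from it:

```sql
EXECUTE DBMS_AQADM.START_QUEUE(queue_name => 'Q1');
EXECUTE DBMS_AQADM.START_QUEUE(queue_name => 'Q2');
```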

On AQ each queue needs a corresponding queue table. The queue table is where the data is physically stored. You never talk to a queue table directly via the API, but you can query it with DML from your favorite database IDE. On each you can specify additional properties: on the queue table you have to specify which payload type it will carry, and on the queue itself you can specify after how many unsuccessful dequeues a message is moved to the exception queue.

In our project we make use of an application-level failover and DLQ management system with separate queueing, so we don't need this feature. There is however no way to turn it off, so we've chosen the max setting (which is Integer.MAX_VALUE). Btw: the exception queues are generated automatically; you have no control over them.
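That max setting is literally Java's Integer.MAX_VALUE, which is where the 2147483647 in the CREATE_QUEUE calls above comes from:

```java
public class MaxRetries {
    public static void main(String[] args) {
        // The value passed as max_retries in the DBMS_AQADM.CREATE_QUEUE calls
        System.out.println(Integer.MAX_VALUE); // prints 2147483647
    }
}
```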

To check if everything is created:

select * from all_queues where name like 'Q1%' or name like 'AQ$_Q1%' or name like 'Q2%' or name like 'AQ$_Q2%'
The results:

NAME  QUEUE_TABLE  QID     QUEUE_TYPE    MAX_RETRIES
Q2    Q2_T         365831  NORMAL_QUEUE  2147483647
Q1    Q1_T         365816  NORMAL_QUEUE  2147483647

Next we'll setup our Spring config. The goal is to create a message consumer that listens for messages on Q1 and processes them. Our processing will consist of inserting a record in T1 and putting a message on Q2.
 <!-- Sets up the JMS ConnectionFactory, in this case backed by Oracle AQ -->
 <!-- The extractor must match the pool; this one matches the Commons DBCP datasource below -->
 <bean id="oracleNativeJdbcExtractor" class="org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor"/>
 <orcl:aq-jms-connection-factory id="connectionFactory" data-source="dataSource" use-local-data-source-transaction="true" native-jdbc-extractor="oracleNativeJdbcExtractor"/>

 <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" lazy-init="true">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:host:port:SID"/>
  <property name="username" value="Scott"/>
  <property name="password" value="Tiger"/>
 </bean>

 <!-- Using DataSourceTxManager, but could also be HibernateTxManager or JtaTxManager -->
 <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager" lazy-init="true">
  <property name="dataSource" ref="dataSource"/>
 </bean>

 <!-- You can also construct the JmsTemplate in code, but we'll do it here so it's all together in one place -->
 <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
  <property name="connectionFactory" ref="connectionFactory"/>
  <property name="defaultDestinationName" value="Q2"/>
  <property name="sessionTransacted" value="true"/>
 </bean>

 <bean id="myMessageListener" class="be.error.jms.MyMessageListener">
  <property name="dataSource" ref="dataSource"/>
  <property name="jmsTemplate" ref="jmsTemplate"/>
 </bean>

 <!-- Once it is started, it will try to read messages from Q1 and let 'messageListener' process them -->
 <bean id="messageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
  <property name="connectionFactory" ref="connectionFactory"/>
  <property name="transactionManager" ref="transactionManager"/>
  <property name="destinationName" value="Q1"/>
  <property name="messageListener" ref="myMessageListener"/>
  <property name="sessionTransacted" value="true"/>
 </bean>
As you can see, the magic is in orcl:aq-jms-connection-factory, which makes a JMS ConnectionFactory available under the id 'connectionFactory', using our datasource to do the AQ queueing.
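For completeness: the orcl namespace has to be declared on the beans element for the configuration above to resolve. My header looks roughly like this (the exact XSD version may differ depending on your spring-jdbc-extensions release):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:orcl="http://www.springframework.org/schema/data/orcl"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
         http://www.springframework.org/schema/data/orcl
         http://www.springframework.org/schema/data/orcl/spring-data-orcl-1.0.xsd">
 <!-- the beans from the configuration above go here -->
</beans>
```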

Very important: if you don't want to spend half a day investigating some weird transaction behaviour (I even mistakenly thought it was a bug and pointed that out here), I suggest you read this:

In my configuration you will see that 'sessionTransacted' is set to "true" for both the JmsTemplate and the DefaultMessageListenerContainer. This makes sense, as we are running outside of a JEE managed environment and we want local transactions for our JMS operations. The theory behind it is however a bit more complex.

When running outside of a JEE managed environment you have the choice of letting your session interaction be part of a local transaction. This is controlled by the sessionTransacted setting (it maps directly onto the JMS API). It means that if you consume messages from different objects belonging to the same session, they will be controlled in a single transaction.
For example: I create QueueSession #1 and use it to consume a message from Q1 and another message from Q2. After consuming both messages, I can issue a session.rollback() and everything is brought back to its initial state. If I had used no transactions, I would be working with an acknowledgement mode. Suppose I had chosen CLIENT_ACKNOWLEDGE: then I have to acknowledge on message level whether my message was successfully consumed. So I would first retrieve message #1 from Q1 and then message #2 from Q2 (all via QueueSession #1). In the end I would have to do:

messageOne.acknowledge();
//system crashes here
messageTwo.acknowledge(); //never reached, messageTwo stays unacknowledged
This could of course create inconsistency, as in my example messageOne was marked consumed but messageTwo wasn't. This is only a problem if your unit of work should be treated in an atomic way. If it should, you should use at least local transactions.

When you want to consume/produce messages from a queue and interact with another resource (an RDBMS, for example), you should normally use a distributed transaction manager (in our case that would mean JTA). But remember that we are not dealing with different resources here: it all comes down to a single database connection. So in our case the "local transaction" is a bit "longer local" than it would normally be, as it also includes all the (SQL) calls made to that same database connection the JMS infrastructure is using.

In our case the DataSourceTransactionManager will control the local transaction, and that includes JMS operations as well as SQL operations issued via JDBC. It is that component which will call commit or rollback; there is no need for intermediate commits on the QueueSession.

So basically: by setting sessionTransacted to true, no one performs intermediate commits and everything is left to whoever controls the transaction, in our case the DataSourceTransactionManager.
Make sure you use JdbcTemplate for direct JDBC access and JmsTemplate for MOM access. Make sure sessionTransacted is set to true when you create the JmsTemplate in code. Also, the DefaultMessageListenerContainer is a JMS receiver and must be sessionTransacted as well, for the same reason.

You might be tempted to remove the sessionTransacted from the JmsTemplate and DefaultMessageListenerContainer if you are running in a JEE environment. The JMS API says that the values of sessionTransacted and acknowledgementMode are ignored in that case.
While this is true in general, it is not true here. Oracle AQ will not properly detect that it is running in a JEE JTA environment if you are using anything other than Oracle AS. If you remove the property, the driver will perform intermediate commits and your transaction will be broken. So also in JEE mode you have to leave this set to true!

But don't worry: in the JTA case your datasource will be XA enabled and the transaction manager performing commits will be the JtaTransactionManager. As far as the AQ driver is concerned it sees no difference (all transaction handling and coordination is done at a higher level).

Also, I'm using a DataSourceTransactionManager here, since I only require direct JDBC access.
If you were using Hibernate, you could use HibernateTransactionManager. You could then do AQ, plain JDBC access and work with Hibernate's SessionFactory at the same time.
If you had yet another resource (maybe a second RDBMS) and still wanted XA, you could simply plug in the JTA transaction manager without any problem (it's just a matter of switching configuration).
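That switch would boil down to replacing the transaction manager bean; something like the following (JtaTransactionManager is standard Spring, though your server's JTA setup may need extra configuration):

```xml
<!-- Drop-in replacement for the DataSourceTransactionManager when XA is needed -->
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
```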

For the Java messageListener part, this is all standard:

public class MyMessageListener implements SessionAwareMessageListener<Message> {

 private DataSource dataSource;
 private JmsTemplate jmsTemplate;

 public void onMessage(Message message, Session session) throws JMSException {
  //Message received from Q1 via 'messageListenerContainer'
  TextMessage textMessage = (TextMessage) message;
  System.out.println("Received message with content:" + textMessage.getText());

  //Insert its content into T1
  new JdbcTemplate(dataSource).update("insert into T1 values (?)", textMessage.getText());
  System.out.println("Inserted into table T1");

  //Publish a message to Q2, forwarding the received content
  final String content = textMessage.getText();
  jmsTemplate.send(new MessageCreator() {
   public Message createMessage(Session session) throws JMSException {
    TextMessage textMessage = session.createTextMessage();
    textMessage.setText(content);
    return textMessage;
   }
  });
  System.out.println("Sended message to Q2");
 }

 public void setDataSource(DataSource dataSource) {
  this.dataSource = dataSource;
 }

 public void setJmsTemplate(JmsTemplate jmsTemplate) {
  this.jmsTemplate = jmsTemplate;
 }
}
I then created a small forever-blocking test case to quickly fire up the application context, so that the DefaultMessageListenerContainer starts looking for messages on Q1.

@ContextConfiguration(locations = { "classpath:/spring/aq-test.xml" })
public class OracleAqTransactionResourceTest extends AbstractTestNGSpringContextTests {

 @Autowired
 private DataSource dataSource;
 private JdbcTemplate jdbcTemplate;

 @BeforeMethod
 public void setup() {
  jdbcTemplate = new JdbcTemplate(dataSource);
 }

 @Test
 public void testSingleTransaction() {
  blockUntillReadyOrTimeout();
 }

 private void blockUntillReadyOrTimeout() {
  while (true) {
   try {
    Thread.sleep(1000);
   } catch (InterruptedException e) {
    throw new RuntimeException(e);
   }
  }
 }
}
After launching the test, I inject a message into Q1 (I use Oracle SQL developer):

DECLARE
    queue_options       DBMS_AQ.ENQUEUE_OPTIONS_T;
    message_properties  DBMS_AQ.MESSAGE_PROPERTIES_T;
    message_id          RAW(30);
    msg                 SYS.AQ$_JMS_TEXT_MESSAGE;
BEGIN
    msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
    msg.set_text('testing 123');
    DBMS_AQ.ENQUEUE(
        queue_name => 'Q1',
        enqueue_options => queue_options,
        message_properties => message_properties,
        payload => msg,
        msgid => message_id);
    COMMIT;
END;
/
And off we go:

Received message with content:testing 123
Inserted into table T1
Sended message to Q2
In Oracle we see that the message is present on Q2 (at least in its queue table):
And that a record is inserted into T1:

You are free to play with some transaction scenarios, such as creating multiple (possibly nested) transactions, letting them roll back, etc. I performed five scenarios and they all worked fine.

PS: make sure you use at least spring-jdbc 1.0_M2 (or up), since we discovered a small bug in M1 which could cost you some time to investigate :)

Tuesday, July 12, 2011

webOS app development for the Pre2 with JAX-RS and Atmosphere

Recently I tried out the development possibilities for my Palm Pre2. As it goes, this smartphone runs a modified Linux distro (aka webOS). By default you can only access the 'window manager': the graphical UI which makes your phone, well, 'your phone'.

So the first thing you need to do if you want to get deeper into the OS is enable developer mode. This mode allows you to install custom applications, gain console access and the like.
To put the phone in developer mode, see here at the bottom of the page: webOS 2.0 Devices

The first thing I tried after enabling developer mode was preware. From their wiki:

Preware is a package management application for the Palm Pre and the Palm Pixi.
Preware allows the user to install any package from any of the open standard package repositories on (or any other location that hosts an open standard package repository).
Preware relies on a custom written service developed from community research which allows the mojo app to talk to the built-in ipkg tool.

Do note that this is a community product and is not part of the 'official SDK', nor do you need it for development. You can basically use it to:
  • Install/uninstall apps from open repositories (a lot of apps are already available via the preware repo)
  • Install updates/check phone info
  • Gain terminal access to your pre
  • Install the novacom driver, which will also be required for the SDK (preware can install it automatically for you)
  • Probably more, see their site

Since I was going to try this on Linux (Ubuntu Maverick), I was already preparing for a long night of debugging, catting /var/log/messages, resolving and finding missing dependencies, compiling my kernel for some missing support, searching the web for USB problems and so forth. However, as it turned out, none of this was required (no, it really wasn't). Basically, you first enable developer mode, download preware, extract it, plug your Pre2 into the USB (select 'just charge' on the phone) and run preware:

~/Desktop/Palm Pre 2$ java -jar WebOSQuickInstall-4.2.0.jar

It will ask you to install novacom on first run and after that you'll get the main screen.
Launching the terminal looks like this:

I used preware to access the console of the phone's Linux distro via the built-in terminal. After that you could probably install or configure an SSH server on the phone. I did not explore this route any further, since I only have one phone and I don't want a lot of services draining my battery, lulzers hacking my phone, or to need to hard reset my phone every week.

Secondly, I also installed a terminal app on the phone itself via preware. Currently I have problems in my car when it's connected via bluetooth: I can see the phonebook, the network connection etc., but I cannot make phone calls. So next time I'm stuck in traffic I'm planning to do some on-the-fly debugging to find out what's wrong. The terminal app I installed was 'SDLTerminal'; the other ones, terminal and terminus, didn't work for me (at least not in an install-and-run lazy fashion).

As far as the SDK is concerned, it is really easy and works perfectly on my Ubuntu. The steps can be found here. As a Java developer I'm using it via Eclipse, although other possibilities exist, see their site.

  • Get the latest Eclipse; they advise you to use the Web developer profile
  • Install the webOS plugin via Eclipse
  • Install the Aptana plugin via Eclipse (optional)
  • Restart Eclipse

When installing the SDK itself I skipped: Java (already had Java 6), ia32-libs (already installed for some other 32-bit compatibility I required), novacom (already installed by preware). I also installed the latest version of VirtualBox (v4), and then removed it again to install 3.2, since the SDK requires a VirtualBox version >=3.0 && <=3.2. Now, after you type:

palm-emulator

you'll see VirtualBox popping up and booting webOS (the image came along with the SDK):

VirtualBox takes your NIC into account and sets up a NAT connection. The latter means that your emulated webOS can go out on your network: packets get the source address of your host NIC, but the VM does not have its own IP on your network. So by default you won't be able to access your webOS VM FROM the network. If you want that, the easiest thing is probably to switch the network adapter configuration in VirtualBox to "bridged" instead (or set up port forwarding).

To get a console to the webOS vm, you have two options:
  • The novacom driver acts as middleware both for your phone connected via USB and for webOS running in the VM. So you can access the console of the VM webOS via the terminal option in preware, just as you would do to access it on the Pre.
  • Even though NAT is used by default, there are some port mappings made in the VirtualBox image configuration. This means that certain ports on the host are forwarded by VirtualBox to the VM, which does allow you to connect to the VM without any extra setup when using NAT. But only for these ports, and only from your machine. For example, host port 5522 is mapped to port 22 on the webOS VM.
Do note that the latter is a difference with the 'production' webOS image on the Pre2: there is no such SSH daemon running by default, as I addressed previously. There are other differences between the VM and the phone, like no camera, gravity meter etc. For example, ssh'ing to your webOS VM:
ssh root@localhost -p 5522
(There is no password, just hit enter)

Next I followed this howto to create my first app. If you are lazy, you could just:
  1. Startup the emulator (commandline: 'palm-emulator')
  2. Startup your webos-plugin-enabled eclipse
  3. File->New->Palm webOS->Hello World application
  4. Run->Run
  5. Look on your virtualbox webOS VM, you'll see your application running
To run the app on your Pre2 instead, just plug the Pre2 into the USB, select 'just charge' and
  1. Run->Run
  2. Look on your pre2
Finally, I was eager to find out how hard it is to extend the hello world app and make it do something cool. Following the motto 'go hard or go home', my idea was to extend the example to send a photo (either taken directly or selected from the FS) in JSON format to a RESTful webservice via Ajax. There would also need to be a web page that queries the same webservice using Ajax push (Comet). So the flow should look like this:
  1. Startup the app in webOS
  2. Click a button which brings the user to a select/take picture menu
  3. Select (or select freshly taken) picture
  4. In the app the picture thumbnail and the path on the device should be shown
  5. After hitting a send button, the image is sent to the RESTful webservice
  6. A possible browser window connected to the server will get the newly uploaded picture automatically pushed
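The JSON payload mentioned in step 5 is a small document with the device path and the base64 data; based on the app code further down, it looks like this (values are illustrative):

```json
{
  "filePath": "/media/internal/photo.jpg",
  "data": "<base64 encoded image bytes>"
}
```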
To do this I first created a small RESTful webservice via JAX-RS. I used Atmosphere to do the Comet part.
@Path("/image")
public class ImageService {

 private static final String THE_IMAGE_LOC = "/tmp/theimage.jpg";
 private Broadcaster topic = new JerseyBroadcaster();

 @POST
 public Broadcastable uploadImage(FileContent fileContent)
   throws IOException {
  //Store the (base64) image data on disk
  FileUtils.writeByteArrayToFile(new File(THE_IMAGE_LOC),
    fileContent.getData().getBytes());
  //Broadcast the image data (with an "END" marker) to suspended clients
  return new Broadcastable(fileContent.getData() + "END", "", topic);
 }

 @GET
 public SuspendResponse<String> suspend() {
  //Suspend the connection until something is broadcast on the topic
  return new SuspendResponse.SuspendResponseBuilder<String>()
    .broadcaster(topic).build();
 }
}
The web.xml:

The browser calls the 'suspend' method, which blocks on the topic until new data is available. When an image is uploaded, the broadcast triggers the suspended request to resume for any connected clients. In the result they'll find the image in base64 format. Atmosphere also comes with a jQuery-based plugin, which makes it easy to make an Ajax connection to the webservice. Via the plugin you can easily switch between Comet types or even WebSocket. In this case I've used long polling.

As you can see, I directly injected the base64 string into an HTML img tag. I also applied some superior scaling algorithm to make the images a bit smaller.
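Injecting a base64 string into an img tag boils down to building a data URI; a minimal sketch of the idea (the helper name and the jpeg mime type are my assumptions):

```java
import java.util.Base64;

public class DataUriExample {

    // Wraps raw image bytes in an <img> tag with a base64 data URI,
    // the same trick the result page uses for the pushed image
    public static String toImgTag(byte[] imageBytes) {
        String base64 = Base64.getEncoder().encodeToString(imageBytes);
        return "<img src=\"data:image/jpeg;base64," + base64 + "\"/>";
    }

    public static void main(String[] args) {
        System.out.println(toImgTag(new byte[] { 1, 2, 3 }));
        // prints <img src="data:image/jpeg;base64,AQID"/>
    }
}
```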

At this point, there were some problems:
  • On my setup this was not doing long polling, but streaming. In case of long polling a new connection is made once data has been returned from the server.
    In my case the connection remained open forever (== streaming). Maybe I need some server-side config for long polling? I could not find this anywhere...
  • Every time the callback is called, you get the newly written data AND the previously written data back.
    So when sending a second image, the response still contains the first image. My guess is that this is a side effect of streaming?
    To get around this I'm keeping a 'lastRead' variable as you can see.
  • The callback is called multiple times for different chunks of the received data. This means I had to find a way to denote the end of a transmission.
    Apparently some characters are added to the end (CRLF I assume), so I could have used these, but I was too lazy to find out, so I added a token myself (namely "END").
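The 'lastRead' plus "END" token handling amounts to buffering chunks and splitting on the marker; here is a hypothetical Java version of that logic (the real thing lives in the page's JS callback, the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class EndTokenBuffer {

    private final StringBuilder buffer = new StringBuilder();

    // Feed one received chunk; returns every complete message terminated
    // by the "END" marker, keeping any trailing remainder buffered
    public List<String> onChunk(String chunk) {
        buffer.append(chunk);
        List<String> complete = new ArrayList<>();
        int idx;
        while ((idx = buffer.indexOf("END")) >= 0) {
            complete.add(buffer.substring(0, idx));
            buffer.delete(0, idx + "END".length());
        }
        return complete;
    }

    public static void main(String[] args) {
        EndTokenBuffer b = new EndTokenBuffer();
        System.out.println(b.onChunk("first ima"));  // prints []
        System.out.println(b.onChunk("geENDsec"));   // prints [first image]
        System.out.println(b.onChunk("ondEND"));     // prints [second]
    }
}
```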
For the JS imports, this is all I used:
<script src="js/jquery-1.4.2.js" type="text/javascript"></script>
<script src="js/jquery.atmosphere.js" type="text/javascript"></script>
The Atmosphere plugin came bundled with jQuery 1.4.2, so I used that version; no idea if it also works with newer ones. Btw, the jquery.atmosphere.js plugin can be found here (for the download link of this app see below).

Now, for the Pre2 application: the applications that plug into webOS are written in JavaScript (yes, that 0 == "" language). For the record, you can also develop in C/C++ if you need speed and very low-level access to the hardware. The developer site explains (in a very annoying and hard to use layout) that there is a broad JavaScript platform which contains all the components you need to build applications fast. I was quite impressed by the amount of components/services/widgets available; it is like jQuery/Prototype on steroids. There is even test support built in. If you know jQuery or Prototype a lot of these things will seem familiar.

All of this allows you to develop via JavaScript on a higher level. I knew the Ajax thing was going to be easy, so the first thing I tried out was how to read a (binary) file and transform it to base64. Apparently webOS (2+) uses Node.js, so that's a piece of cake. The hard part was learning that you can only access the Node functionality from a headless service rather than an application (not 100% sure on this, but that's how it looked). The next annoying thing to find out was that I could not get the Eclipse plugin to package my service along with my app, so I had to manually package and upload the service to the phone. The service code looks like this:
var fs = IMPORTS.require('fs');

var ReadFileAssistant = function(){
};

//The service's 'run' method: reads the file and returns it base64 encoded
ReadFileAssistant.prototype.run = function(future) {
   var file = fs.readFileSync(this.controller.args.filePath);
   var fileBase = file.toString("base64", 0, file.length);
   future.result = { reply: fileBase };
};
I used a hello world service sample (from the SDK docs & samples), removed the hello world stuff and added my code in its place. The "buildpackage" file contains the commands you need to build an ipk from it and get it on your phone/emulator. You can run the service, but that is not required; as it is headless it won't show you anything besides the entry page of the application. From the moment it is installed you'll be able to access the service from any application (at least when it is made "public"). Btw, apparently a service must be packaged inside an application.

So in the end I have two applications: one with the service inside and one with the UI. But this is only because Eclipse prevented me from packaging them together (and packaging the UI manually together with the service would take too much time for testing).

Next we have the UI part which is made out of HTML:
<div id="main" class="palm-hasheader">
 <div class="palm-header">Demo app</div>

 <div x-mojo-element="Drawer" id="drawerId" class="drawerClass" name="drawerName">
  <div class="palm-group">
   <div class="palm-group-title" x-mojo-loc=''>Image:</div>
   <div class="palm-list">
    <div class="first row">
     <div class="palm-row-wrapper textfield-group" x-mojo-focus-highlight="true">
      <div x-mojo-element="ImageView" id="ImageId" class="ImageClass" name="ImageName" align="center"></div>
     </div>
    </div>
    <div class="last row">
     <div class="palm-row-wrapper">
      <div id="File" style="font-size: 10px" x-mojo-element="TextField"></div>
     </div>
    </div>
   </div>
  </div>
 </div>

 <div id="Select" name="Select" x-mojo-element="Button"></div>
 <div id="Send" name="Send" x-mojo-element="Button"></div>
 <div id="result" class="palm-body-text"></div>
</div>
Then the "assistant", which forms the logic behind the view:
FirstAssistant.prototype.handleSelect = function(event) {
 var self = this;
 var params = {
  defaultKind : 'image',
  onSelect : function(file) {
   self.controller.get('File').innerHTML = file.fullPath;
   //Show the selected image and open the drawer (as described below)
   self.controller.get('ImageId').mojo.centerUrlProvided(file.fullPath);
   self.controller.get('drawerId').mojo.setOpenState(true);
  }
 };
 Mojo.FilePicker.pickFile(params, this.controller.stageController);
};

FirstAssistant.prototype.handleSend = function(event) {
 var that = this;
 var filePath = that.controller.get('File').innerHTML;

 //Ask our headless service to read the file and base64 encode it
 //(the service URI is whatever you named your own service)
 that.controller.serviceRequest("palm://be.error.filereader", {
    method : "readFile",
    parameters : {
     "filePath" : filePath
    },
    onSuccess : function(response) {
     var contentToSend = {filePath : filePath, data : response.reply};
     jQuery.ajax({
      type : "POST",
      url : "",
      data : Object.toJSON(contentToSend),
      cache : false,
      success : function(result) {
       that.controller.get("result").innerHTML = "File Sent!<br/>" + result;
      },
      error : function(result) {
       that.controller.get("result").innerHTML = "File sending failed.";
      }
     });
    },
    onFailure : function(response) {
     that.controller.get("result").innerHTML = "FAILURE:"
       + response.reply;
    }
 });
};

FirstAssistant.prototype.setup = function() {
 this.controller.setupWidget("Select", {}, {
  "label" : "Select",
  "buttonClass" : "",
  "disabled" : false
 });
 Mojo.Event.listen(this.controller.get("Select"), Mojo.Event.tap,
   this.handleSelect.bind(this));

 this.controller.setupWidget("Send", {}, {
  "label" : "Send",
  "buttonClass" : "affirmative",
  "disabled" : false
 });
 Mojo.Event.listen(this.controller.get("Send"), Mojo.Event.tap,
   this.handleSend.bind(this));

 this.controller.setupWidget("File", {}, {
  "disabled" : true
 });

 this.controller.setupWidget("drawerId",
     this.attributes = {
         modelProperty : 'open',
         unstyled : true
     },
     this.model = {open : false}
 );
};

function FirstAssistant() {
 var libraries = MojoLoader.require({
  name : "mediacapture",
  version : "1.0"
 });
 this.mediaCaptureObj = libraries.mediacapture.MediaCapture();
}

FirstAssistant.prototype.activate = function(event) {
};

FirstAssistant.prototype.deactivate = function(event) {
};

FirstAssistant.prototype.cleanup = function(event) {
};
The first function opens the file picker, which allows you to select a file (an image in our case) from the filesystem. The filesystem the picker defaults to is mounted under '/media'; this is also where normally all the user data is located. I was in luck that this file picker automatically shows a button to take a new picture, so with this I had everything in one. The result (see here) in case of an image contains the 'fullPath' to the image. Next, I set the path in the textfield and set it on an image viewer which will show the image. Finally I open the 'drawer' so that everything becomes visible.

The second method uses JQuery to do a POST Ajax call to the RESTFull webservice. It first calls the service to load our image and transforms it to base64, the rest is normal JQuery usage. Some notes:
  • The webOS application JavaScript imports (mojo/mojo-loader) have Prototype (the JS framework) built in. I don't know which version or how exactly, but you can easily do Ajax using Prototype syntax. Since I'm more familiar with jQuery and I wanted to see if that also worked together, I did it this way (so I explicitly imported jQuery in my index.html)
  • The loaded file is put into memory. Since it's base64 it will be about 33% bigger than the initial file. I have no idea how much the JS runtime in webOS (WebKit) can handle by default, but I doubt I'll be able to send 100MB files this way.
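That base64 overhead in the second note can be quantified: base64 encodes every 3 input bytes as 4 output characters, so the inflation is a fixed 33% (plus a little padding). A small standalone Java sketch; the 30 KB buffer is just an arbitrary stand-in for an image:

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] raw = new byte[30_000]; // stand-in for a ~30 KB image
        String encoded = Base64.getEncoder().encodeToString(raw);
        // base64 maps every 3 bytes to 4 characters
        System.out.println("raw=" + raw.length + " base64=" + encoded.length());
        System.out.println("overhead=" + (encoded.length() * 100 / raw.length - 100) + "%");
    }
}
```

So a 30 KB image becomes roughly 40 KB of base64 text before it is even posted.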

You can download the Pre2 app together with the Java backend here: DOWNLOAD

Btw; if you want to try this app (or any other app that requires your computer as server) on your phone, and you don't have any wireless connection, you can use the USB cable. See here.
On Ubuntu this works out of the box. You just type 'usbnet enable' on the phone, restart it, and Ubuntu will automatically add a new NIC (called usb0). You can then reach the host computer from your Pre2. If you want to reach the entire network, you'll need iptables rules like these:

iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface usb0 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward

To conclude:

Developing for the Pre2 is really fun. There is a lot of information to be found on the web, and the widget/service integration is really, really good. As demonstrated, picking an image or taking a new photo is all there out of the box. Getting the current GPS location is also just two lines of JS, etc. You can access almost every part of the Pre2 with a few lines of JavaScript. The fact that you write something close to HTML and JavaScript makes development easy and very lightweight. The SDK is also very good, works out of the box on Linux, and the emulator makes it easy to develop without having to use a real phone.

The dislikes: the Eclipse plugin is very limited and not very stable. Every change you make requires repackaging the app and installing it on the emulator or phone. This only takes a few seconds, but that's still too much. The debugging capabilities are poor. I was hoping I could put a breakpoint in the JS file in Eclipse, but that was not possible. You can set a breakpoint using the CLI and the palm-debug command, but that's not really appealing (and it's very time consuming).

Some images :

Start the app. The right icon is the app in which the service lives. If you launch it, you'll just get the empty screen of the application in which the service resides.

After pushing the 'select' button, the file picker opens. If running on a phone you can either select an existing photo or take a new one. We chose an existing one (a wallpaper):

After selecting the picture it is shown on the screen. Select it by tapping 'open photo'. After that the image path is passed to our JS. It is then displayed in the image viewer scaled down to a kind of thumbnail:

Tapping 'send' will submit it to our RESTful webservice. The status is shown in text at the bottom of the screen. After some seconds the browser automatically shows the uploaded image:

Saturday, May 28, 2011

Servlet 3.0: ARP, to push or not?

With the Servlet 3.0 API out for a while now, I wanted to check out its ARP (Asynchronous Request Processing) functionality. Briefly: ARP enables us to execute time-consuming processing while not holding on to our Servlet thread. So I started doing some browsing, and after a short while I ended up with more questions than I started with.

This was thanks to a lot of self-explanatory words such as: comet, cometd, tektite, meteor, asteroid, cavebear, grizzly, websocket, bayeux, ... ?!?! Btw; can you spot the two words that don't belong in the row? To focus back on the Servlet 3.0 ARP functionality, I learned a couple of things: first, it is NOT a solution for server side push ~ Comet (all the other terms ARE related to it). It took me a while to understand this.

If you look up any of the words I mention in combination with ARP, you will most definitely find a how-to that brings ARP into relation with server side push, or at least compares them. That made me confused, thinking that ARP was an implementation of it. Agreed, building a Comet implementation will benefit from ARP (the Grizzly Comet API, for example). But out of the box ARP is just not Comet, and IMHO it should not be used in the same sentence.

The Servlet 3.0 ARP functionality is limited to, well, asynchronous processing on the server side. If you look in the specification, it does not mention anything about underlying connection mechanisms. Terms such as 'long polling' or 'streaming' (which are vital for a Comet implementation) are not mentioned in the specification.

ARP is just a standardized mechanism for offloading your request processing to a different thread than the 'Servlet' thread. When the Servlet thread terminates, the connection is retained somewhere, and ARP gives you a standard mechanism for getting back to the initial Servlet response from the 'asynchronous' thread to write (extra or not) data to the client.

In fact, the client is not even aware that ARP is being used.
From the client's point of view the connection stays open and it takes a long time to get the response (if you have long running processing, of course). No changes on the client are required when you use ARP. When you use a real Comet implementation, however, changes are required one way or the other.

Second, I needed to cut down on my enthusiasm about what it does do.
This is what the specification says about it:

"Some times a filter and/or servlet is unable to complete the processing of a request
without waiting for a resource or event before generating a response. For example, a servlet
may need to wait for an available JDBC connection, for a response from a remote web service,
for a JMS message, or for an application event, before proceeding to generate a response.
Waiting within the servlet is an inefficient operation as it is a blocking operation that
consumes a thread and other limited resources. Frequently a slow resource such as a database
may have many threads blocked waiting for access and can cause thread starvation and poor
quality of service for an entire web container.

Servlet 3.0 introduces the ability for asynchronous processing of requests so that the
thread may return to the container and perform other tasks. When asynchronous processing
begins on the request, another thread or callback may either generate the response and call
complete or dispatch the request so that it may run in the context of the container using the AsyncContext.dispatch method"

Mmkay, at first this seems weird. We start another thread to do our asynchronous processing, and by doing so we can release the Servlet thread.
Isn't that like <insert funny zero operation analogy here> ?

On the resource usage level it is exactly the same. ARP gives you no extra means of saving resources, unlike the difference between blocking and non-blocking IO containers.
The advantage of ARP is that, when using a thread pool for its Servlets, the container might give another request a chance by freeing up a thread from that pool.
You moved the long running process to a thread from another thread pool. If that thread pool is full, it will not have any effect on (blocking) new requests.

To illustrate this: suppose you have a Servlet thread pool of 100 and you fire 200 requests at it at the same time.
The first 100 requests are destined for LongRunningServlet and take a long time to process. The other 100 requests are for DoAlmostNothingServlet, which processes very fast.
Since our Servlet pool is already full with the first 100 (slow running) requests, the other (fast running) requests are queued and need to wait until a thread is released to the pool.

If we added ARP with an additional thread pool (let's call it the ARP pool) of 10 threads, we would have this scenario:

A request ending up in LongRunningServlet publishes an additional processing request to the ARP pool and terminates. The Servlet thread is returned to the Servlet pool almost directly. The result is that our 100 requests for LongRunningServlet are "processed" within seconds, freeing up threads for processing the requests for DoAlmostNothingServlet. The requests for DoAlmostNothingServlet are then also processed within some seconds.

  • The requests for DoAlmostNothingServlet are processed in seconds; they (almost) did not have to wait for a Servlet thread
  • After a short time the Servlet thread pool is completely free again, open for serving new requests
  • In the meantime there is still a waiting queue of 90 requests for the ARP pool (10 requests are assigned to threads and are already processing).

So they will gradually get processed, but while they are being processed the Servlet thread pool is again completely free for new requests. To make some things clear:

The connections to the clients of LongRunningServlet are still maintained, so the client is waiting until its request for LongRunningServlet is processed by the ARP pool.
Holding the connection (while the thread is busy in the ARP pool) does not consume a thread by itself.
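The pool mechanics above are easy to reproduce with two plain ExecutorServices. A scaled-down sketch: two "servlet" threads, two "ARP" threads, and Thread.sleep standing in for the long running work (the pool sizes and timings are mine for illustration, not from any container):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ArpPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService servletPool = Executors.newFixedThreadPool(2);
        ExecutorService arpPool = Executors.newFixedThreadPool(2);
        CountDownLatch slowDone = new CountDownLatch(2);
        CountDownLatch fastDone = new CountDownLatch(2);

        // "LongRunningServlet": offload the slow work to the ARP pool and return
        for (int i = 0; i < 2; i++) {
            servletPool.submit(() -> {
                arpPool.submit(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                    slowDone.countDown();
                });
                // the servlet thread is free again almost immediately
            });
        }
        // "DoAlmostNothingServlet": completes instantly once it gets a thread
        for (int i = 0; i < 2; i++) {
            servletPool.submit(fastDone::countDown);
        }

        // the fast requests finish while the slow work is still running
        boolean fastFirst = fastDone.await(500, TimeUnit.MILLISECONDS)
                && slowDone.getCount() > 0;
        System.out.println("fast requests served before slow work finished: " + fastFirst);

        slowDone.await();
        servletPool.shutdown();
        arpPool.shutdown();
    }
}
```

Replace the two pools with the container's Servlet pool and an AsyncContext-driven executor and you get the Servlet 3.0 picture: the ARP pool's queue can grow, but the Servlet pool stays available.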
ARP behaves exactly the same in a blocking IO container as in a non-blocking IO container.
For a container using blocking IO, the 'Servlet thread' is the thread that got the socket connection.

A basic blocking server would do something like this:

while (true) {
 final Socket socket = serverSocket.accept();
 // Hand off the socket to a 'servlet thread'
 threadPool.submit(new Runnable() {
  public void run() {
   SomeHttpRequestObject httpRequestObject = parseRequest(socket.getInputStream());
   // ... dispatch the parsed request to the servlet ...
  }
 });
}

The moral for a blocking IO Servlet container is that the socket (connection) is read in the Servlet thread.
This means that a slow connection consumes a thread without anything happening.
Suppose you have 1000 slow clients, none of them has delivered a complete request yet, and at a given point 0 of those clients are sending data: you'll still take up 1000 threads doing virtually nothing (assuming our Servlet thread pool is >= 1000).

With a non-blocking IO server, reading a connection only takes resources when there is actually something to read. This reading might happen in the Servlet thread as well, but as long as a request is not completely read and there is nothing more to read at that time, the thread is returned.

So basically a Servlet thread only starts working (for a longer time) when a complete request has been read. Suppose again you have 1000 slow clients, none of them has delivered a complete request yet, and at a given point 0 of those clients are sending data: you'll use (virtually) 0 threads at that time.
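The slow-client difference can be observed with the JDK's own non-blocking primitives. A minimal java.nio sketch over loopback (no Servlet container involved): a client connects but sends nothing yet, and the selector reports nothing to do; only once bytes actually arrive is there work for a thread.

```java
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingDemo {
    public static void main(String[] args) throws Exception {
        Selector selector =;
        ServerSocketChannel server =;
        server.configureBlocking(false);
        server.bind(new InetSocketAddress("", 0));
        server.register(selector, SelectionKey.OP_ACCEPT);

        // a "slow client": it connects, but does not send its request yet
        SocketChannel client =;
        client.connect(new InetSocketAddress("", server.socket().getLocalPort()));

        // wait for the accept event and register the new connection for reads;
        selector.selectedKeys().clear();
        SocketChannel conn = server.accept();
        conn.configureBlocking(false);
        conn.register(selector, SelectionKey.OP_READ);

        // nothing has been sent, so nothing is ready: no thread has to block
        System.out.println("ready before client writes: " +;

        client.write(ByteBuffer.wrap("GET /".getBytes()));
        System.out.println("ready after client writes: " +;

        conn.close();
        client.close();
        server.close();
        selector.close();
    }
}
```

With blocking IO, that same idle connection would have pinned a thread inside `read()` the whole time.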

Now, as you see, the blocking/non-blocking scenario plays out before the request is handed over to the actual Servlet, so using ARP is not directly influenced by it.

To conclude: ARP allows you to offload a long running process to a separate thread pool. This releases the Servlet thread back to the pool, allowing it to service new requests. By doing this your long running processes are not hogging the Servlet thread pool. The connection from the client that triggered the long running process is kept alive, and the ARP API gives you the means to get back to that connection to write (additional) data to it when the long running process completes. ARP does not directly save you any resources, but being able to offload requests to a separate thread pool does give you more control over the maximum number of threads in use at a given point, without making other clients wait to get served.

Also, ARP serves as a scalable base for possible Comet implementations, since a long polling request (to name one of the Comet connection strategies) does not hog a thread from the Servlet pool while waiting.

Tuesday, March 29, 2011

Personal server

For personal use I need a small server that is reachable over the Internet. I use it to manage my personal email, queue long downloads, and serve access services (ftp, http, ...).

My requirements:
  • Reachability: it should be reachable from almost every location, even behind proxies and firewalls
  • Stability: it should be as stable as possible
  • Usability: it should be able to run any software that I like, without restrictions
  • Security: pretty clear I guess
  • Manageability: it should be manageable via different interfaces, offer possibilities to easy backup data etc
  • Pay-ability: at the lowest cost possible

Bandwidth is not a tight requirement, since it will only be serving, well, just me. I might put some pages on the web server, but even then images or heavy content will be placed somewhere else on the Internet. High availability is also not important, since it will only be for me (as long as there is enough stability). In the end I decided to install a server at home. Doing this I can also use it as a NAS to serve content, store backups and the likes via the home network. My standard ISP contract does not include a fixed IP address. But they do allow each port to be accessible from the Internet, so that's good (there are ISPs that block ports < 1024 without having an option to open them). The bandwidth is not very high, but that is OK for me. In physically measured speeds, it is about 7 Mbps down and 418 Kbps up. So with that in mind, I arranged the following:
  1. Registered a .be TLD with a provider that offers domain management
  2. Put an old laptop, aka 'server', somewhere on my desk (who needs a UPS if you have a laptop)
  3. Registered with a free domain alias service
  4. Configured the laptop with an OS and other required software
  5. Configured the home router to make the server accessible over the Internet
The only extra cost for me is about 15 euro/year for the TLD (domain + management), that's it.

Ok, so how did I wire this up:

First, the domain is a normal '.be' domain registered via a provider. The provider (aka registrar) registers the domain with the instance controlling the .be TLD zones. They supply their own name servers as NS records, so my domain will be resolved by my provider's name servers. Next, the provider gives me access to control my domain on their name servers. This access is pretty elaborate: it's a web interface right on top of the named zone configuration. So I have full control and can configure it as if it were running on my own name server, sweet. Since I do not have a fixed IP address, I'm not able to point a host name in my domain directly to my home server. Instead I use a free DNS alias service that can update a host name entry directly when my IP address changes. Before declaring me foobar, I'll give some clarifications at this point:

  • The dnsalias service is also a name provider, just like my .be domain name provider. For free, they only give you a host within one of their own domains and limited manageability. If you pay them, you can have both though: they register your domain and you can use their DNS services to update it as you want. However, they do not support .be domains, since they are not an official .be registrar.
  • I could have dropped the '.be' domain and used the dnsalias directly. Now I have two name resolves: a host in my TLD resolves to the dnsalias, which then resolves to my home IP address. By directly using the dnsalias I would save one resolve. But I like having my own .be domain. Furthermore, the dnsalias name is not as officially mine as a .be domain is, not even when I pay for their services. (And I can always decide to get a fixed IP address later and point directly to that from my .be domain.)
  • I could also have dropped the dnsalias and updated my IP directly with my domain service provider. But they don't offer an interface to do that in an automated fashion.
It took me 5 minutes to go through the free registration for a dnsalias, that's it. The fact that there are two hosts to be resolved is certainly not noticeable in normal usage. To map a host in my domain to the dnsalias, I cannot use a normal DNS 'A' record, which maps a host name to an IP address (or the other way around for reverse zones), since it is not possible to use a host name instead of an IP address in an A record. Luckily there exists something like a 'CNAME' record, which allows a host name in your domain to be linked to another host name. So at my domain provider I have something like this set up in my zone (host names are placeholders): IN CNAME
You can also use wildcards, so '* IN CNAME' would resolve everything under * to the dnsalias host. The dnsalias service has a normal A record mapping to my home IP address with a very low TTL. If I do a zone scan, it looks like this: 60 IN A w.x.y.z
This means that once the information is propagated, it will only be cached for 60 seconds on the intermediaries. Not what you typically want for a busy site. My server runs a small client that updates my IP address with the DNS alias service each time it changes. It does that by using an external IP check service, which returns the Internet-visible IP address. When that has changed since the last check, it sends out an update to the dnsalias service (+ my account information) with the new IP address. In the above zone snippet, the address w.x.y.z will then be updated.

As operating system I chose Ubuntu (10.04.2 LTS, 32bit). It's rock solid, supports all hardware on the server, has a great user support base and is pretty secure out of the box. However, I also like Windows for its Office and Outlook for my email, and some other Windows-only programs that just work better under Windows. The laptop was originally installed with Windows, so it had a Windows XP CD key. Windows XP also has nice native remote desktop support. The RDP protocol shares the clipboard and sounds, and it reverse-maps your hard drive over the same protocol (so on the VM you see a share of the hard drive of the client you are connecting from) and so on. Some will argue that a remote X server is maybe better, might be true, but an RDP client is available on any Windows client. On Ubuntu it's also available by default, and on other Linux distributions it's probably a simple download. Setting up remote X on a Windows client would be more work/require more privileges, I think. To resume, these are the steps I did to configure my Ubuntu:

  1. Installed VMware Server, v2.0.2-203138, using a bridged Ethernet connection for the VM
  2. Installed Windows XP on the VM, plus the necessary software, and enabled remote desktop
  3. Installed ddclient for automatic DNS updates
  4. Configured my physical wireless Ethernet connection to autostart without logging in
  5. Installed sshd. Adjusted the sshd config file to listen on two ports (22 and 443) and allow it to forward (more on that later)
VMware Server: on previous installs of Ubuntu (and VMware Server) I never had problems. This time however the installation failed. Thanks to the community I was able to pick up a patch for that. After that, VMware Server installed without any problems.


sudo apt-get install ddclient
sudo nano /etc/ddclient.conf

# Configuration file for ddclient generated by debconf
# /etc/ddclient.conf

use=web,, web-skip='IP Address'
login=<your login>
password='<your password>'
I followed this guide: ddns ubuntu

Wireless Ethernet connection autostart: the network manager is only started once you log on in X, so I needed something to connect the server to the wireless network the moment Ubuntu is booted. I followed this guide. Basically it came down to:

sudo gedit /etc/network/interfaces 

auto lo
iface lo inet loopback
auto wlan0
iface wlan0 inet static
wpa-driver wext
wpa-conf managed
wpa-ssid Gateway
wpa-ap-scan 2
wpa-proto RSN
wpa-pairwise CCMP
wpa-group TKIP
wpa-key-mgmt WPA-PSK
wpa-psk <the key>
The wpa-psk key is generated by this command:
wpa_passphrase <your_essid> <your_ascii_key>
With the command:
iwlist scan 
You should be able to find the information you need about the AP you want to connect to. After that, a network restart enabled my wireless on startup.


sudo apt-get install openssh-server
sudo vi /etc/ssh/sshd_config
And add this:
# What ports, IPs and protocols we listen for
Port 22
Port 443
GatewayPorts clientspecified
The last option allows the client to specify target ports to which ssh should forward packets (by default sshd can only forward to the host it is running on; to forward to other targets, you need 'GatewayPorts' as mentioned above).

Ok! The only thing left was accessibility. I could port map the RDP port from the XP VM directly to the Internet via the router, saying that 3389 should be forwarded to the VM. However, I only wanted to open one port (besides HTTP) to the Internet, and preferably I wanted to shield the Windows VM completely from the Internet. Also, as I want to access my services from everywhere: some places only give you Internet access via an HTTP proxy, and from those places I would not be able to connect directly to the RDP service. To solve this, I tunnel everything I need over SSH. My server exposes only 3 ports:

  • 22 (SSH)
  • 80 (http)
  • 443 (SSH)
The router is configured to port forward TCP 22, 80 and 443 to the host OS. Instead of running an HTTPS acceptor on 443, I configured sshd to listen on two ports simultaneously, 443 and 22, so no, that's not a typo. When I'm connecting from a remote location (me being the client) I just need putty. Putty can be configured to connect directly (using port 22) and make a forward tunnel. Doing this I can choose a local port on the client that maps to a port on the target. Even better, I can map it to a port on any target: the local network, or even back out to the Internet. So, to connect to RDP, I need a tunnel mapping from
<any local port> : <address of the XP VM> : 3389
Remember: the Internet does not know how to route the VM's private address, but that address is valid in the 'tunnel' that putty sets up. Putty first establishes a connection to my server, and over that connection it makes a connection to the VM; requests are sent to the sshd, which then delivers them on the local network. The total picture looks like:
  • I let putty connect to my server at port 22 (or 443, see below)
  • Putty creates a forward tunnel from localhost:xxxx (over the SSH connection) to the VM's RDP port via the tunnel (xxxx is 4000 in the screenshot below).
In putty that looks like this:

On the client computer I'm connecting from, I point my RDP client to localhost:4000 and the connection is established. The nice thing about all of this is that there is an option in putty to tunnel my tunnel over an HTTP proxy. So if I'm not allowed to go out on 22 directly, I configure putty to talk to the proxy to send out my packets. If you enable HTTP proxying in putty, it sends an HTTP CONNECT <targethost:port> to the proxy. The proxy will then tunnel your request further to the target.

However, proxies sometimes disallow tunneling to a target port like 22 (ssh). An easy trick is spawning the sshd on a second, 'SSL/TLS' port: 443. The HTTP handshake for an SSL/TLS connection is the same as for a proxied SSH request (it is also tunneled via HTTP CONNECT). So the proxy thinks you are doing SSL (because of the port), but you are not. To do this, just let putty connect to port 443 instead of 22 (as shown in the first image above). Next you have to tell putty that it should use a proxy:

Before declaring me foobar (again), I'll give some clarifications at this point:

  • What I'm doing here is building a "poor man's" VPN, to a certain extent. However, VPNs work with different protocols and require a VPN client (and a VPN server). They also require the network infrastructure to 'allow' setting up a VPN. So if you want to connect your (own) office with your home network, a real VPN would definitely be better and more scalable. However, this setup needs to work as lightweight as possible on any type of client (possibly not managed by me) and preferably on any type of network.
  • A really tightened-up proxy will quickly discover (even if you do it over 443) that you are not SSL/TLS-ing. In that case I could still add an extra module which tunnels my SSH tunnel in an SSL tunnel tunneled over the proxy. By doing that, the proxy is not able to distinguish your session from an SSL/TLS session that was, for example, started by a client's browser. Since I have not yet met such tightened proxies, and this setup would require some additional client software as well, I will leave it like this until it's really necessary some day.

Saturday, March 26, 2011


Some time ago I got introduced to a part of the Java text API that was unexplored territory for me: the Collator.

Languages imply more complexity than one would think at first sight (check this if you have any doubt).
The main usage of the Collator is to help us with a part of that linguistic complexity, more specifically locale sensitive comparison. It implements the collation specification defined by the Unicode standard.

The Java Collator roughly does these things:

  • Canonicalization of canonical equivalent characters
  • Multi level comparison

Comparing Java Strings works by comparing the Unicode code points that map to the characters. This would mean that the position of a character in the Unicode code charts specifies its sorting weight, but that is not the case: languages might have different sorting weights for the exact same characters.

For example, if you don't know anything about the German language, you might expect that ß (\u00DF) is sorted as if it were a 'b' or 'B'. That is not correct, since it actually represents the combination 'ss'. But even knowing this, a standard comparison with ß would yield false results, since its code point is higher than a normal 's'. So in the end it will not be sorted as 'ss' but as if it were 'higher' than 'z'.

Multi level comparison solves this by offering 4 comparison levels: base letters, accents, case, and punctuation. If the first level is used, only base character differences are considered. With the second level, base characters as well as accents are considered significant, etc.

Note: the Java Collator apparently does not support the punctuation level.

Collator collator = Collator.getInstance(Locale.GERMAN);

System.out.println("a equals b -> " + ("a", "b") == 0 ? "true" : "false"));
System.out.println("a equals à -> " + ("a", "à") == 0 ? "true" : "false"));
System.out.println("A equals a -> " + ("a", "A") == 0 ? "true" : "false"));

With collator.setStrength(Collator.PRIMARY):
a equals b -> false
a equals à -> true
A equals a -> true

With collator.setStrength(Collator.SECONDARY);
a equals b -> false
a equals à -> false
A equals a -> true

With collator.setStrength(Collator.TERTIARY);
a equals b -> false
a equals à -> false
A equals a -> false

Our first use case for the Collator was the first function: canonicalization. Unicode foresees different ways of representing certain characters. For example, ü is identified by a single code point \u00FC and thus a single character. However, it is also possible to form ü with a character + diacritical mark(°): u (\u0075) and ¨ (\u0308).

It makes sense if you think about it: on a classic typewriter you would also form ü by first printing u, going back one position, and then printing ¨. The ¨ is a so-called invisible character on the typewriter. On our keyboard we can do the same. You can press the button marked ü, or you can press <altgr> + <¨> + <u>, which gives you ü.

The end result (on your screen at least) is the same regardless of how you form ü; however, it is stored differently. The single character ü would be saved as 0x00FC. If you formed ü by typing ¨ followed by u, it would be saved as 0x0075 0x0308.

As long as you just want to print those characters in a browser, console, text editor, ... you can (hopefully) rely on that software to display them right. However, if you are writing Java and want to do operations on character streams containing such characters, it becomes tricky.

Example (°°): let's take a typical German word such as "abgaskrümmerdichtung":
String single = "abgaskr\u00FCmmerdichtung";
String combined = "abgaskr\u0075\u0308mmerdichtung";

System.out.println("Single equals combined? " + single.equals(combined));
System.out.println("Single: " + single);
System.out.println("Combined: " + combined);

The first line will say that they are not equal: Single equals combined? false
However, when both are displayed, they look exactly the same:

Single: abgaskrümmerdichtung
Combined: abgaskrümmerdichtung

Our software needs comparison on a higher level than pure code point comparison. The input needs to be canonicalized first, by something that knows that \u0075\u0308 is in fact \u00FC. Collator to the rescue:

Collator collator = Collator.getInstance(Locale.GERMAN);
// make sure \u0075\u0308 is decomposed the same way as \u00FC before comparing
collator.setDecomposition(Collator.CANONICAL_DECOMPOSITION);

String single = "abgaskr\u00FCmmerdichtung";
String combined = "abgaskr\u0075\u0308mmerdichtung";

System.out.println("Single equals combined? " + (, combined) == 0 ? "true" : "false"));

This will print: Single equals combined? true
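If all you need is canonicalization (and not locale-aware comparison), java.text.Normalizer can do the same trick by composing the combining mark into the precomposed character:

```java
import java.text.Normalizer;

public class NormalizeDemo {
    public static void main(String[] args) {
        String single = "abgaskr\u00FCmmerdichtung";
        String combined = "abgaskr\u0075\u0308mmerdichtung";
        // NFC composes u + combining diaeresis (\u0308) into the single \u00FC
        String composed = Normalizer.normalize(combined, Normalizer.Form.NFC);
        System.out.println("equal before NFC: " + single.equals(combined));
        System.out.println("equal after NFC: " + composed.equals(single));
    }
}
```

After NFC normalization, a plain String.equals works again, because both sides are stored in the same canonical form.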

A second use case where the Collator came in handy: we needed to map characters from ISO8859-1 to 7bit ASCII. ISO8859-1 contains several accented characters that do not exist in 7bit ASCII. Our goal is to map these characters to their canonical equivalents that are supported in 7bit ASCII. For example: "çéàëê" could be mapped to "ceaee". Of course, other characters for which no obvious equivalent exists cannot be mapped (and will be converted to '?').

Remember: Java uses Unicode and UTF16 as encoding. ASCII and the ISO8859 family are both character maps and encodings in one. Unicode and ISO8859-1 share the same code points for the first 256 glyphs. They are also compatible on the encoding level: if you take the letter 'A' and save it (code point + encoding = ISO8859-1) and decode it as Unicode/UTF8, it will still print 'A' (this is not the case with UTF16/32). ASCII only shares the first 128 glyphs (extended, 8bit, ASCII is not compatible with Unicode or ISO8859-1).
String ascii7Characters = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";

Collator ascii7Collator = Collator.getInstance(new Locale("nl", "BE"));
// only base letters should be significant, otherwise 'ç' and 'c' get different keys
ascii7Collator.setStrength(Collator.PRIMARY);

Map<CollationKey, Character> ascii7CollationMappings = new HashMap<CollationKey, Character>();

for (char c : ascii7Characters.toCharArray()) {
   ascii7CollationMappings.put(ascii7Collator.getCollationKey(String.valueOf(c)), c);
}

String accented = "çéàëê";
StringBuilder ascii7 = new StringBuilder();

for (Character character : accented.toCharArray()) {
   Character canonicalizedCharacter = ascii7CollationMappings.get(ascii7Collator.getCollationKey(String.valueOf(character)));
   ascii7.append(Character.isUpperCase(character) ? Character.toUpperCase(canonicalizedCharacter) : canonicalizedCharacter);
}
System.out.println("ISO8859-1 converted to 7bit ASCII:" + ascii7.toString());
This will print: ISO8859-1 converted to 7bit ASCII:ceaee

The accented String created above is of course Unicode (and encoded in UTF16, but that is not relevant now).
Since these characters are in the range which is equal between ISO8859-1 and Unicode, it actually does not matter. In real life we would be reading in a byte stream which is explicitly decoded as ISO8859-1:
String accented = new String(inputInIso8859_1, "ISO8859-1");

What we did here is use the Collator's CollationKey and bind it to our normalized 7bit ASCII character. These keys are the canonicalized form of the character, depending on the strength and decomposition values you configured the Collator with. Characters that are canonically equal will have the same CollationKey. You can also use the CollationKey for linguistically correct sorting/searching, since it will yield the correct order based upon the Locale you initialized the Collator with (it implements Comparable).
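Since Collator itself implements Comparator as well, the sorting claim is easy to demonstrate. A small sketch with German words (the word list is made up for illustration):

```java
import java.text.Collator;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Locale;

public class CollatorSortDemo {
    public static void main(String[] args) {
        List<String> words = new ArrayList<String>(Arrays.asList("Zebra", "\u00C4pfel", "Apfel"));

        // natural String order compares code points: '\u00C4' sorts after 'Z'
        Collections.sort(words);
        System.out.println("code point order: " + words);

        // locale-aware order treats '\u00C4' as 'A' plus an accent
        Collections.sort(words, Collator.getInstance(Locale.GERMAN));
        System.out.println("collated order: " + words);
    }
}
```

With plain code point comparison "Äpfel" ends up after "Zebra"; the German Collator puts it right next to "Apfel", which is what a dictionary would do.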

(°)Diacritical marks are special glyphs, in that way that they are combined with other glyphs and say something about the intonation. For example: <`> can be considered a diacritical mark.

(°°)The so called code points named in this text refer to glyphs in the Unicode map. They are shown in Java escaped hexadecimal form notation, so \u + 16bit hex. The encoded form is UTF16, with the given examples this means that it is 16bit per character and the code point matches with the encoded form (since the code points in the examples are between \u0000...\uD7FF and \uE000...\uFFFF)