Monday, August 31, 2009

Alternative to Wicket Ajax serialization

In the last post, I talked about an issue I found with Wicket. The JIRA I filed with the Wicket team (WICKET-2437) was resolved as "Won't Fix". The reason is that they do not want to make the compromise of having users synchronize the page themselves, which is quite understandable.

Igor (one of Wicket's authors) suggested that I create a shared resource and point all my images to it. That is done by removing my panels and modifying the img src attribute directly. Of course, that was painful, as I needed to serialize all the parameters to my panels as URL parameters and then parse them on the other end (in the shared resource). I tried this path and it worked beautifully.

I know that Wicket supports multiple content types for a DynamicWebResource, so I wonder if I can make some other parts of my site (which were supposed to be async) truly async by moving them to shared resources as well. For now I can stick with Wicket, which is pretty good.
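For reference, here is a sketch of the shared-resource approach, assuming the Wicket 1.3/1.4 API; the resource name, the "type" parameter, and renderPng are placeholders of mine, not anything from my actual code:

```
// Hypothetical image resource; all former panel parameters now arrive as URL query parameters.
public class ChartImageResource extends DynamicWebResource {
    protected ResourceState getResourceState() {
        final String type = getParameters().getString("type");
        final byte[] png = renderPng(type); // your image-rendering code goes here

        return new ResourceState() {
            public String getContentType() { return "image/png"; }
            public byte[] getData() { return png; }
        };
    }

    private byte[] renderPng(String type) { return new byte[0]; }
}
```

The resource is registered once in the Application subclass (e.g. getSharedResources().add("chart", new ChartImageResource())), and each img src is built from urlFor(new ResourceReference("chart")) plus the serialized parameters.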

Thursday, August 27, 2009

Apache Wicket and Ajax: deal-breaker?

I've been using Wicket for the past months and found something that is a deal-breaker if you have a web application with lots of Ajax where you rely on asynchronicity. Wicket serializes client- and server-side requests for the same page. You read it correctly: serialized. So, if you have multiple panels making Ajax requests, all the requests will be serialized, and the whole purpose of using Ajax for asynchronicity is defeated.

So, this is a big disadvantage and, although I've been a big Wicket advocate, I would not consider it unless I find a good and simple workaround.

For more context, see the JIRA I filed with the Wicket team (WICKET-2437):

Update: I found an alternative. See this post.

Saturday, July 25, 2009

Wicket + Spring Security: do NOT post info to j_spring_security_check as part of the URL

This week, when working on the integration of Spring Security with Wicket, I was trying to understand the best approach to create a customized login page. Then I came across this Apache wiki page:

It suggests that you post the info through a Wicket form and, in the Wicket class, validate the info and post it to j_spring_security_check. This looks very nice at first, but later I realized a major problem: it passes the username and password as part of the URL, something like /j_spring_security_check?j_username=bob&j_password=secret.
What's wrong with that? If your server keeps an access log, the full URL, credentials included, shows up in the log files. I updated the wiki page with this info and definitely did not follow this path.

It turned out that, after searching through everything I could find on the web, the solution was pretty straightforward. I added a regular HTML form to my LoginPage with the action set to j_spring_security_check and did not intercept this request through Wicket. That simple: no validation or check in my Wicket code at all.
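The markup I mean is a plain HTML form in LoginPage.html that Wicket does not process; j_username and j_password are Spring Security's default parameter names, and the rest is just a sketch:

```
<form action="j_spring_security_check" method="post">
  Username: <input type="text" name="j_username"/><br/>
  Password: <input type="password" name="j_password"/><br/>
  <input type="submit" value="Log in"/>
</form>
```

Because this is a POST, the credentials travel in the request body rather than in the URL, so they never reach the access log.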

If you have any questions about this, don't hesitate to send me an email.

Monday, July 20, 2009

Wicket: how to output text (like a Servlet)

I wanted to output plain text, like a servlet does, rather than HTML, which is Wicket's default. This was to be used by a PingPage, to make sure the service is up and running. This is one way to do that:

public class PingPage extends WebPage {
    public PingPage() {
        getRequestCycle().setRequestTarget(new IRequestTarget() {
            public void detach(RequestCycle requestCycle) {}
            public Object getLock(RequestCycle requestCycle) { return null; }

            public void respond(RequestCycle requestCycle) {
                WebResponse r = (WebResponse) requestCycle.getResponse();
                r.setContentType("text/plain");

                PrintStream printStream = new PrintStream(r.getOutputStream());
                printStream.print("pong"); // or whatever text your monitor expects
                printStream.close();
            }
        });
    }
}

Thursday, July 16, 2009

Wicket: link/url relative to the context path

I was trying to figure this out, and it took me long enough to find how to do it in Wicket that it seems worth posting the solution.

ExternalLink logoutLink = new ExternalLink("logout_link", "/j_spring_security_logout");
logoutLink.setContextRelative(true);

I was integrating Spring Security and wanted to add a logout link. There is no need to figure out the context path yourself, as the ExternalLink class has an option (setContextRelative) to make the link relative to the context.

You can find more info here: ExternalLink (javadoc)

Thursday, May 07, 2009

Wicket, Form and GET method

If you are using Wicket and have previous web development experience, one of the good things is that, every time you submit a form, Wicket takes care of everything. However, the URL after a form submission looks strange, as it leads to Wicket URLs that depend on the user's session. The question is: how do you make Wicket behave like a plain form that submits using the GET method?

First, I set my URL strategy to make it bookmarkable:

MixedParamUrlCodingStrategy mypageURL = new MixedParamUrlCodingStrategy(
        "mypage", MyPage.class, new String[]{"type"}); // "mypage" and MyPage stand for your mount path and page class
mount(mypageURL);
Then, in the page code, I override the form onSubmit() method to do the following:

form.add(new SubmitLink("update") {
    public void onSubmit() {
        PageParameters parameters = new PageParameters();
        parameters.add("period", Integer.toString(getPeriod()));
        parameters.add("unit", getUnit().toString());
        parameters.add("type", getType());
        setResponsePage(getPage().getClass(), parameters);
    }
});

So, after the user clicks the submit link, the page submits similarly to a GET request, and I get a nice, readable URL with the parameters in the query string. The result is that URLs are much more bookmarkable throughout the session.
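What the MixedParamUrlCodingStrategy is doing under the hood is ordinary query-string encoding of the page parameters. As a plain-JDK illustration of that mapping (nothing here is Wicket API; the class and parameter values are mine):

```java
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringBuilder {
    // Encodes the given parameters as a bookmarkable query string.
    public static String toQueryString(Map<String, String> params) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sb.length() == 0 ? '?' : '&')
              .append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("period", "30");
        params.put("unit", "DAYS");
        params.put("type", "summary");
        System.out.println("/mypage" + toQueryString(params));
        // prints /mypage?period=30&unit=DAYS&type=summary
    }
}
```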

Tuesday, May 05, 2009

Extracting SOAP attachments with Axis

It was a little hard to find this information today, so I think it is worth sharing here. What I wanted was to get the attachment of a SOAP response. The response body was empty, but the attachment had a PNG I needed to display in my Wicket application.

It turns out that it is very simple to extract SOAP attachments. All you have to do is write a handler that will be called during the web service call. Any number of handlers can be set on the BindingProvider. For example, this is my code to set the handlers:

MyServicePort port = mws.getMyServicePort();
BindingProvider bp = (BindingProvider) port;
Binding binding = bp.getBinding();

// Add our attachment handler to the chain
List<Handler> handlerList = binding.getHandlerChain();
if (handlerList == null) {
    handlerList = new ArrayList<Handler>();
}
handlerList.add(new SOAPAttachmentHandler());
binding.setHandlerChain(handlerList);


With this code in place, we only need to write our handler, which will receive the message and can do whatever it wants with it, including accessing attachments.

public class SOAPAttachmentHandler
        implements SOAPHandler<SOAPMessageContext> {
    private Collection<Attachment> attachments;

    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    public boolean handleMessage(SOAPMessageContext context) {
        // Implementation-specific: the JAX-WS RI's SOAPMessageContextImpl
        // exposes the message attachments directly.
        attachments = ((SOAPMessageContextImpl) context).getAttachments();
        return true;
    }

    public Set<QName> getHeaders() {
        return null;
    }

    public void close(MessageContext context) {
        // blank
    }

    public Collection<Attachment> getAttachments() {
        return attachments;
    }
}

The class above extracts the attachments and stores them in a class variable. After invoking the web service, I can access the attachments through the getter.
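If you would rather not cast to the implementation class, I believe the standard SAAJ API can reach the same data through the message itself; a sketch of an alternative handleMessage:

```
public boolean handleMessage(SOAPMessageContext context) {
    // Standard SAAJ API: iterate over the message's AttachmentParts.
    Iterator<?> it = context.getMessage().getAttachments();
    while (it.hasNext()) {
        AttachmentPart part = (AttachmentPart) it.next();
        // part.getContentType() and part.getRawContentBytes() give you
        // the attachment's type and bytes (e.g. the PNG).
    }
    return true;
}
```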

Tuesday, April 28, 2009

Lack of memory

A very interesting thing happened a few days ago. After searching for a solution to an error message I was getting when running Apache Tomcat, I came across the following post:

And I found the solution to my problem in comment #5. By accident, I noticed the author of the comment and, to my surprise, I had been the author, back in 2006! Yes, that's right: I had absolutely no memory of writing this comment. And this solution has been mentioned in many other places since I added it there.

Actually, I ended up using Tomcat 6 due to the nuisance of having to change a lot of stuff in my project at Amazon, but this finding was funny anyway.

Saturday, April 04, 2009

Cygwin: created files (e.g. tar) have shared icon on Windows Vista

Every time I create a file in a Cygwin shell, it ends up with the shared-folder icon on Windows Vista, and it is a hell of a pain to remove this icon (unsharing a folder takes a whole lot of time). This happens with files under your Documents folder, not with files created elsewhere.

In order to fix this behavior, you can do the following in a Cygwin shell:

export CYGWIN=nontsec

Or, even better, edit your .bashrc and add this export so it runs every time you launch a new shell.

Thursday, April 02, 2009

Hudson Plugin 2: Adding a Post-Build action (for a Reporter/Publisher)

This is a follow-up to Hudson Plugin 1. As mentioned before, you create the plugin skeleton using:
mvn hpi:create
This creates a plugin skeleton through the maven-hpi-plugin. As of this writing, the latest version (1.34) generates code that uses a deprecated way of doing so, and it is the skeleton of a Builder plugin.

Since we are building a Recorder/Publisher, and we will use the recommended way of defining a plugin (through the Extension annotation), we will have to change most of the code. But it is still important to understand the structure, and that is the value of generating the skeleton, especially given Hudson's choice of using Jelly for the plugin portions of the HTML page. The skeleton contains:
  • src/main/java
    plugin Java code
  • src/main/resources
    Jelly files where you specify the markup for the configuration of the plugin (global and project configuration).

Java Code
We need to create at least two classes to start testing our plugin. Since we want to see results quickly, let's do the minimum possible here. Let's remember what we want to do here: to add a new "Post-build action" where we can configure the file pattern that we will use to find files to report the test results.
  1. Subclass of hudson.tasks.Recorder.
  2. Descriptor class
For 2, there are many options, but we will create a subclass of BuildStepDescriptor<Publisher>. More details below.

In src/main/resources, we need the following files:
  1. config.jelly: jelly code that will be shown in the "Post-build action" section for our plugin
  2. global.jelly: jelly code that will be shown in the Manage Hudson/Configure System section (this file is not used here since we don't have global configuration so far)
  3. help.html: html code with help text for the "Post-build action". This is shown if you click on the ? on the right of your action.
  4. help-artifact.html: html code with help text for the plugin option that will be entered by the user. In our case, the file pattern for the report files.

Files and Details
1. Since our Descriptor class is an inner class, this is the only Java class so far:

public class DistributedTestRecorder extends Recorder {
    public static String DISPLAY_NAME = "Distributed Test Report";

    private final String report;

    public DistributedTestRecorder(String report) {
        this.report = report;
    }

    public String getReport() {
        return report;
    }

    public boolean perform(AbstractBuild build, Launcher launcher,
            BuildListener listener) {
        return true;
    }

    public static final class DescriptorImpl extends
            BuildStepDescriptor<Publisher> {
        public String getDisplayName() {
            return "Publish " + DistributedTestRecorder.DISPLAY_NAME;
        }

        public void doCheck(StaplerRequest res, StaplerResponse rsp)
                throws IOException, ServletException {
            new FormFieldValidator.WorkspaceDirectory(res, rsp).process();
        }

        public boolean isApplicable(Class arg0) {
            return true;
        }
    }
}
2. Jelly file with the code for our Post-Build action

<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:d="jelly:define" xmlns:l="/lib/layout"
         xmlns:t="/lib/hudson" xmlns:f="/lib/form" xmlns:bh="/lib/health">
  <f:entry title="Distributed Test Report pattern" field="report"
           description="This is a file name pattern that can be used to locate the Distributed Test
           report files (for example **/performance/perf*). The path is relative to the module root
           (the ws/ directory) unless you are using Subversion as SCM and have configured multiple
           modules, in which case it is relative to the workspace root.">
    <f:textbox />
  </f:entry>
</j:jelly>
3. HTML code for help files

(main help file)
(help file for "report" configuration field - see above in the config.jelly file)
This is the distributed test plugin help to be added later.
Yes, that's all for now. You should already see a post-build action displayed when you launch Hudson (remember to use "mvn hpi:run" to run Hudson, and do not run "mvn package" before that).

Wednesday, April 01, 2009

Google Guice
An interesting dependency injection framework called Guice (pronounced "juice") was developed within Google. The following video provides a good introduction to it:

I found the following features particularly interesting, which will probably be taken aboard by Spring in the future:
  • Provider: you can inject a provider rather than the dependency itself. That allows the class to instantiate multiple copies of the dependency, or to instantiate it lazily or conditionally. Also, if you have dependencies with different scopes (like a request-scoped dependency in a session-scoped object), you can handle this case much better.
  • Development stages: this seems to be something for the next version, but it is pretty cool. You can specify beans to be loaded according to the stage (devel/prod), not loading unnecessary beans while you are developing.
  • Constructor listener: another feature for the next version. In short, to be able to intercept the construction of any of the dependencies to be injected.
For a comparison with Spring, check out the following link:

I guess it will be hard to come up with a framework that beats Spring, but it seems that Guice and Google products have much better political acceptance and may have a better chance of getting their dependency injection approach officially supported in the JDK and sponsored by the JCP.

Update: I found an article that supports my comment above about Web Beans + Guice where one of the readers commented:
There's been a lot of talk over the past few years that perhaps Interface 21 should push to formally make the Spring Framework a part of the JEE specs -- it seemed like it might be possible with Rod Johnson officially declaring his support for JEE 6... well it looks like "Crazy" Bob Lee and the team behind Guice may have found a back door to get themselves into the party first -- according to a new series of articles about the upcoming Web Beans, the new spec is actually influenced by a combination of Seam and Guice ... I find these articles interesting in that Google has apparently taken the JBoss approach to supporting the JCP -- that is, create an independent product to fill a hole in the JEE specs, and then use the JCP to make that product into a spec itself (take a look at the JPA for a previous example)...

Tuesday, March 31, 2009

The Grinder Framework: 10 limitations

The Grinder framework looked very promising when I found it two or three weeks ago. I thought I had found the solution for distributed performance testing, but after understanding how it works and testing it for some time, I came across some limitations that made me code my own client/server protocol and go from there:

  1. GUI-centric: Grinder should have been developed with an API-centric view, with the GUI built on top of that, not the reverse. I want to launch and run performance tests in an automated fashion. I admit that this is possible with the experimental Console API, but it is frustrating that this is still experimental, and apparently without enough focus.
  2. Grinder does not provide a way of distributing the agents. You must distribute them and start them manually. Of course I didn't expect Grinder to provide implementations for all possible agents distributions, but I would expect at least having a way of plugging in a class to distribute the agents or scripts that could do that. That way I could start my test, have my agents distributed (according to my needs), have them started, and then actually kick off the test execution. Some implementations for common cases (like a passwordless ssh connection) could be provided.
  3. Grinder doesn't allow agents to collect additional metrics. For example, I don't want to collect only the total execution time, but also times for individual operations that are part of the tests (and may not even be called at all). As long as you find a way to wrap this in a Test class, you are fine, but that is not always the case; besides, we may want some metrics hierarchy. And my use case includes not only times, but possibly counts as well.
  4. Grinder keeps the summary of statistics for the test run, not all the agents' metrics. There may be a way of doing it, but my initial tests showed that The Grinder only stores the summary of the test execution. What about collecting and making available all the metrics, so you can run your own calculations on them, or plot a graph based on them with correlations specific to your case?
  5. Grinder does not allow you to use heterogeneous agents. I would like to run different agents in the same test execution.
  6. Grinder requires all the agents to have the same number of threads, and this number is not exposed to the test code, so you can't perform more complex tests or vary the behavior depending on the number of threads.
  7. Grinder requires Python. Although Python is pretty cool and can be very useful, do I need to create a Python script even if I have my test ready to run? For example, I'd like to use a JUnit test seamlessly.
  8. Grinder does not provide an API to store the statistics; the only way of seeing them is through the console. This is related to item 4, but goes beyond that limitation: since I need access to all the metrics and need to store them for further processing, an API for this is needed.
  9. Grinder does not provide a timed test resource. You can work around with the Console API, sleeping after starting the test and then stopping it. This should be part of the standard API.
  10. Grinder does not provide a way of stopping a test early when a certain condition is met.
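To illustrate what I mean by a timed test resource (item 9), here is a small, Grinder-independent sketch in plain Java: run a task from several worker threads for a fixed duration, then signal all workers to stop. The class and its names are mine, not Grinder API:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class TimedRun {
    // Runs the task repeatedly from several threads for the given duration,
    // then stops the workers and returns the number of completed iterations.
    public static long run(final Runnable task, int threads, long millis)
            throws InterruptedException {
        final AtomicBoolean running = new AtomicBoolean(true);
        final AtomicLong iterations = new AtomicLong();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    while (running.get()) {
                        task.run();
                        iterations.incrementAndGet();
                    }
                }
            });
            workers[i].start();
        }
        Thread.sleep(millis);   // let the test run for the requested time
        running.set(false);     // then break all the workers out of their loops
        for (Thread w : workers) {
            w.join();
        }
        return iterations.get();
    }
}
```

This is the kind of building block I would have liked to see in the standard API, rather than having to sleep and stop the test myself through the Console API.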

Grinder seems very useful for standard HTTP GET tests, which seem to be the most common use case, but it has serious limitations for testing more complex distributed systems.

Spring 3 and REST

After listening to Rod Johnson's interview on the JavaPosse, I was trying to catch up with the Spring 3 improvements and I found this blog entry about the upcoming REST support, which seems pretty interesting:

I really liked the workaround for HTML forms only supporting GET/POST: the desired operation is added as a hidden field (e.g. "_method"), and Spring provides a servlet filter (HiddenHttpMethodFilter) to change the request to the desired method.
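Wiring this up takes a hidden field plus a filter registration; the filter class and the default "_method" field name are Spring's, while the servlet name and paths below are placeholders of mine:

```
<!-- web.xml: let Spring rewrite POSTs that carry a _method field -->
<filter>
  <filter-name>hiddenHttpMethodFilter</filter-name>
  <filter-class>org.springframework.web.filter.HiddenHttpMethodFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>hiddenHttpMethodFilter</filter-name>
  <servlet-name>dispatcher</servlet-name>
</filter-mapping>

<!-- HTML: a form that effectively issues a DELETE -->
<form action="/books/42" method="post">
  <input type="hidden" name="_method" value="DELETE"/>
  <input type="submit" value="Delete"/>
</form>
```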

Introduction to REST

Good introduction to REST:

Saturday, March 28, 2009

Hudson Plugin 1: Setting up the system

This is an updated version of the tutorial written by Stephen Connolly on writing a Hudson plugin. The plugin I create here was developed against Hudson ver. 1.293.

Hudson is a continuous build system that I've been using for the past several months. It's been very useful, but my needs require a customized plugin. Although I have some ideas on how to create a great plugin, let's start with the following goal in mind:
  • Add a Post-Build Action, which is persisted. Example: default JUnit plugin
  • Have a trend graph for the build and project results. See an example below.
The only prerequisites for working on a Hudson plugin are Maven 2 and Java 1.5 (Java SE) or later. Yes, you don't need to worry about downloading or installing Hudson itself, as Maven takes care of downloading the Hudson version you're developing for. You can launch Hudson with your development code using Maven, which speeds up development greatly.

So, to set up your system, do the following:
  • Install Java SDK 1.5 or later
    Add it to your path and set up JAVA_HOME environment variable
  • Install the latest version of Maven 2
    Add it to your path and set up MAVEN_HOME environment variable
  • Install your IDE (in this example, we will use Eclipse)
Let's configure Maven to find Hudson. To do it, let's edit Maven's configuration file:
  • Find Maven's configuration file depending on your environment:
    Unix: $HOME/.m2/settings.xml
    Windows: %USERPROFILE%\.m2\settings.xml
  • Add the following to your file:
    <activeByDefault>true</activeByDefault>
Time to create the plugin skeleton (with Maven in the path):
mvn hpi:create
Note that groupId is the equivalent of a package name and artifactId is the equivalent of a project name. We used "com.sacaluta" for the groupId and "performance" for the artifactId.

Let's update the Hudson version this plugin will be for. As of this writing, the version that is set up in the pom.xml file (inside the plugin directory) is 1.279. Let's change it to 1.293.
<!-- which version of Hudson is this plugin built against? -->
<version>1.293</version>
And now we use Maven to make a Eclipse project out of it to open in our IDE:
cd <artifactId> (e.g. cd performance)
mvn -DdownloadSources=true eclipse:eclipse

Finally, let's learn how to run Hudson from the command-line with your plugin code.
mvn hpi:run
Once we are done with the plugin development, we can package and distribute our .hpi plugin. This is the command to do that:
mvn package
DO NOT run "mvn package" while developing the plugin. If you do, and then want to run "mvn hpi:run", make sure to run "mvn clean" before that.

In the next post, we will start creating the plugin classes.

This first part of the plugin creation was heavily inspired by the Hudson Plugin Tutorial.