Tuesday, March 31, 2009

The Grinder Framework: 10 limitations

3 comments
The Grinder Framework looked very promising when I found it 2 or 3 weeks ago. I thought I had found the solution for distributed performance testing, but after understanding how it works and testing it for some time, I came across limitations that made me write my own client/server protocol and go from there:

  1. GUI-centric: Grinder should have been designed API-first, with the GUI built on top of that API, not the reverse. I want to launch and run performance tests in an automated fashion. I admit this is possible with the experimental Console API, but it is frustrating that this is still experimental - and apparently not getting much focus.
  2. Grinder does not provide a way of distributing the agents. You must distribute and start them manually. Of course I didn't expect Grinder to provide implementations for all possible agent distributions, but I would expect at least a way of plugging in a class or script that distributes them. That way I could start my test, have my agents distributed (according to my needs) and started, and then actually kick off the test execution. Implementations for common cases (like a passwordless SSH connection) could be provided.
  3. Grinder doesn't allow agents to collect additional metrics. For example, I don't want to collect only the total execution time, but also timings for individual operations that are part of the test (and may not even be called at all). As long as you find a way to wrap the operation in a Test class (see the sketch after this list), you are fine, but that is not always the case - besides, we may want a hierarchy of metrics. And my use case includes not only times, but possibly counts as well.
  4. Grinder keeps only a summary of statistics for the test run, not all of the individual agent metrics. There may be a way of doing it, but my initial tests showed that The Grinder only stores the summary of the test execution. What about collecting and making available all the metrics, so you can run your own calculations on them, or plot a graph based on correlations specific to your case?
  5. Grinder does not allow you to use heterogeneous agents. I would like to run different agents in the same test execution.
  6. Grinder requires all the agents to have the same number of threads - and this number is not available to the test code, so you can't perform more complex tests or have different behavior depending on the number of threads.
  7. Grinder requires Python. Although Python is pretty cool and can be very useful, do I need to create a Python script even when my test is already written and ready to run? For example, I'd like to reuse a JUnit test seamlessly (the sketch after this list shows the kind of script Grinder expects instead).
  8. Grinder does not provide an API to store the statistics. The only way of seeing them is through the console. This is related to item 4, but goes beyond that limitation: I need access to all the metrics and a way to store them for further processing, and an API is needed for that.
  9. Grinder does not provide a timed test (one that runs for a fixed duration) out of the box. You can work around this with the Console API, sleeping after starting the test and then stopping it, but this should be part of the standard API.
  10. Grinder does not provide a way of stopping the test when a certain condition is met.
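
To make items 3 and 7 more concrete, this is roughly what a Grinder 3 script looks like - a minimal Jython sketch in the style of the standard examples, with made-up names (SearchOperation is hypothetical; the TestRunner class is what Grinder requires). Anything you want timed has to be wrapped in a Test, and the script has to be Jython even if the real test logic already exists in Java:

# grinder.py - a minimal sketch of a Grinder 3 script (Jython)
from net.grinder.script import Test

# A numbered Test; The Grinder aggregates timing statistics per Test.
searchTest = Test(1, "Search operation")

# Hypothetical operation - in my case this logic already exists as Java/JUnit code.
class SearchOperation:
    def run(self):
        # ... call the system under test here ...
        pass

# wrap() returns a proxy whose method calls are timed and reported against searchTest.
searchOperation = searchTest.wrap(SearchOperation())

# Grinder instantiates one TestRunner per worker thread and calls it once per run.
class TestRunner:
    def __call__(self):
        searchOperation.run()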

Grinder seems very useful for standard HTTP GET tests, which seem to be the most common use case, but it has serious limitations for testing more complex distributed systems.

Spring 3 and REST

0 comments
After listening to Rod Johnson's interview on the JavaPosse, I was trying to catch up with the Spring 3 improvements and I found this blog entry about the upcoming REST support, which seems pretty interesting:

http://blog.springsource.com/2009/03/08/rest-in-spring-3-mvc/

I really liked the workaround for HTML forms only supporting GET and POST: the real operation is sent as a hidden field (e.g. "_method"), and Spring provides a servlet filter (HiddenHttpMethodFilter) that changes the request to the desired method.
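
Roughly, the workaround looks like this (a sketch only - the servlet name "dispatcher" and the /pets/1 URL are made-up examples; HiddenHttpMethodFilter and the "_method" parameter name come from Spring):

<!-- web.xml: register the filter in front of the Spring dispatcher servlet -->
<filter>
  <filter-name>hiddenHttpMethodFilter</filter-name>
  <filter-class>org.springframework.web.filter.HiddenHttpMethodFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>hiddenHttpMethodFilter</filter-name>
  <servlet-name>dispatcher</servlet-name>
</filter-mapping>

<!-- HTML form: the browser sends a POST, the filter turns it into a DELETE -->
<form action="/pets/1" method="post">
  <input type="hidden" name="_method" value="DELETE"/>
  <input type="submit" value="Delete"/>
</form>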

Introduction to REST

0 comments
Good introduction to REST:
http://www.infoq.com/articles/rest-introduction

Saturday, March 28, 2009

Hudson Plugin 1: Setting up the system

0 comments
This is an updated version of the tutorial written by Stephen Connolly on writing a Hudson plugin. The plugin created here was developed against Hudson ver. 1.293.

Hudson is a continuous build system that I've been using for the past several months. It's been very useful, but my needs require a custom plugin. Although I have some ideas on how to create a great plugin, let's start with the following goals in mind:
  • Add a Post-Build Action, which is persisted. Example: default JUnit plugin
  • Have a trend graph for the build and project results (similar to the test result trend graph of the JUnit plugin).
The only prerequisites to start working on a Hudson plugin are Maven 2 and Java 1.5 (Java SE) or later. You don't even need to worry about downloading or installing Hudson itself, as Maven takes care of downloading the Hudson version you're developing against. You can also launch Hudson with your development code using Maven, which speeds up development greatly.

So, to set up your system, do the following:
  • Install Java SDK 1.5 or later
    Add it to your path and set up JAVA_HOME environment variable
  • Install the latest version of Maven 2
    Add it to your path and set up MAVEN_HOME environment variable
  • Install your IDE (in this example, we will use Eclipse)
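Once everything is installed, a quick way to confirm the setup is to check the versions from a shell (both commands should print the versions you just installed):
java -version
mvn -version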
Let's configure Maven to find Hudson. To do it, let's edit Maven's configuration file:
  • Find Maven's configuration file depending on your environment:
    Unix: $HOME/.m2/settings.xml
    Windows: %USERPROFILE%\.m2\settings.xml
  • Add the following to your file:
    <settings>
      <profiles>
        <profile>
          <id>hudson</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <pluginRepositories>
            <pluginRepository>
              <id>java.net2</id>
              <url>http://download.java.net/maven/2</url>
            </pluginRepository>
          </pluginRepositories>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>hudson</activeProfile>
      </activeProfiles>
      <pluginGroups>
        <pluginGroup>org.jvnet.hudson.tools</pluginGroup>
      </pluginGroups>
    </settings>
Time to create the plugin skeleton (with Maven in the path):
mvn hpi:create
Note that groupId is the equivalent of a package name and artifactId is the equivalent of a project name. We used "com.sacaluta" for the groupId and "performance" for the artifactId.

Let's update the Hudson version this plugin will be for. As of this writing, the version that is set up in the pom.xml file (inside the plugin directory) is 1.279. Let's change it to 1.293.
<properties>
  <!-- which version of Hudson is this plugin built against? -->
  <hudson.version>1.293</hudson.version>
</properties>
And now we use Maven to make an Eclipse project out of it to open in our IDE:
cd <artifactId> (e.g. cd performance)
mvn -DdownloadSources=true eclipse:eclipse

Finally, let's learn how to run Hudson from the command-line with your plugin code.
mvn hpi:run
Once we are done with the plugin development, we can package and distribute the .hpi plugin file. This is the command to do it:
mvn package
DO NOT run "mvn package" while developing the plugin. If you do, and you then want to run "mvn hpi:run", make sure to run "mvn clean" first.
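In other words, if you have already packaged the plugin and want to go back to development mode:
mvn clean
mvn hpi:run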

In the next post, we will start creating the plugin classes.

This first part of the plugin creation was heavily inspired by the Hudson Plugin Tutorial.