Tuesday, March 31, 2009

The Grinder Framework: 10 limitations

The Grinder Framework looked very promising when I found it two or three weeks ago. I thought I had found the solution for distributed performance testing, but after understanding how it works and testing it for some time, I came across some limitations that made me code my own client/server protocol and go from there:

  1. GUI-centric: Grinder should have been developed API-first, with the GUI built on top of the API, not the reverse. I want to launch and run performance tests in an automated fashion. I admit that this is possible with the experimental Console API, but it is frustrating that it is still experimental - and apparently without enough focus.
  2. Grinder does not provide a way of distributing the agents. You must distribute and start them manually. Of course I didn't expect Grinder to ship implementations for every possible distribution scheme, but I would expect at least a way of plugging in a class or script that distributes the agents. That way I could start my test, have my agents distributed according to my needs, have them started, and then actually kick off the test execution. Implementations for common cases, like a passwordless ssh connection, could be provided (a sketch of such a script follows this list).
  3. Grinder doesn't allow agents to collect additional metrics. For example, I don't want to collect only the total execution time but also timings for individual operations that are part of the test (and may not even be called at all). As long as you find a way to wrap each operation in a Test class you are fine (see the wrapping sketch after this list), but that is not always possible - besides, we may want a metrics hierarchy. And my use case includes not only times but possibly counts as well.
  4. Grinder keeps only summary statistics for the test run, not all the agents' individual metrics. There may be a way of doing it, but my initial tests showed that The Grinder only stores the summary of the test execution. What about collecting and exposing every measurement, so you can run your own calculations on them, or plot a graph with correlations specific to your case? (A do-it-yourself logging sketch follows this list.)
  5. Grinder does not allow you to use heterogeneous agents. I would like to run different kinds of agents (for example, agents executing different behaviors) in the same test execution.
  6. Grinder requires all the agents to run the same number of threads - and this is configured outside the test code, so you can't perform more complex tests or vary the behavior depending on the number of threads.
  7. Grinder requires Python. Python is pretty cool and can be very useful, but do I need to write a Python script even when I already have my test ready to run? For example, I'd like to reuse a JUnit test seamlessly (see the JUnit sketch after this list).
  8. Grinder does not provide an API to store the statistics. The only way of seeing them is through the console. This is related to item 4 but goes beyond it: I need access to all the metrics and a way to store them for further processing, and that requires an API.
  9. Grinder does not provide a timed test facility. You can work around it with the Console API by sleeping after starting the test and then stopping it (see the last sketch after this list), but this should be part of the standard API.
  10. Grinder does not provide a way of stopping the test when a certain condition is met.
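
On item 2, this is the kind of pluggable distribution hook I have in mind: a script that pushes the test files to each agent machine over passwordless ssh and starts the agents. A minimal sketch; the host names, paths and file names are assumptions, not part of Grinder:

    # Sketch of an agent distribution step: copy the test files to
    # each agent machine over passwordless ssh and start the agents,
    # which then connect back to the console. Hosts and paths are
    # placeholders.
    import subprocess

    HOSTS = ["agent1.example.com", "agent2.example.com"]
    GRINDER_HOME = "/opt/grinder"

    for host in HOSTS:
        # Push the properties file and the test script to the agent box.
        subprocess.check_call(["scp", "grinder.properties", "test.py",
                               "%s:%s" % (host, GRINDER_HOME)])
        # Start the agent; net.grinder.Grinder is the agent entry point.
        subprocess.check_call(["ssh", host,
            "cd %s && nohup java -cp lib/grinder.jar net.grinder.Grinder "
            "grinder.properties > agent.log 2>&1 &" % GRINDER_HOME])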
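On item 3, this is the wrapping workaround I mentioned: one Test object per operation you want timed separately. A minimal Jython sketch; MyService and its methods are placeholders:

    # One Test per operation so each gets its own statistics.
    from net.grinder.script import Test

    loginTest = Test(1, "login")
    searchTest = Test(2, "search")

    class MyService:
        def login(self):
            pass   # the real operation would go here

        def search(self):
            pass   # the real operation would go here

    service = MyService()
    # wrap() returns a proxy; calls through it are timed under the Test.
    timedLogin = loginTest.wrap(service)
    timedSearch = searchTest.wrap(service)

    class TestRunner:
        def __call__(self):
            timedLogin.login()     # recorded under Test 1
            timedSearch.search()   # recorded under Test 2

But as said above, this only records times for operations you manage to wrap - no counts, no hierarchy.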
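On items 4 and 8, the best workaround I can see is writing the raw measurements myself from the worker threads and post-processing the logs offline - hardly a substitute for a real statistics API. A sketch, with doOperation standing in for the real work:

    # Do-it-yourself raw metrics: each run appends one line to the
    # worker log, to be grepped and aggregated offline.
    from net.grinder.script.Grinder import grinder
    import time

    def doOperation():
        pass   # placeholder for the real operation under test

    class TestRunner:
        def __call__(self):
            start = time.time()
            doOperation()
            elapsedMs = (time.time() - start) * 1000
            # One line per run; thread and run numbers come from the
            # script context.
            grinder.logger.output("RAWMETRIC thread=%d run=%d ms=%.1f" %
                (grinder.threadNumber, grinder.runNumber, elapsedMs))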
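On item 7, a possible (if clumsy) bridge is to drive the existing JUnit 4 test from the Jython script. Here com.example.MyJUnitTest is a placeholder and must be on the worker's classpath:

    # Driving an existing JUnit 4 test from a Grinder Jython script.
    from net.grinder.script.Grinder import grinder
    from org.junit.runner import JUnitCore
    from com.example import MyJUnitTest

    class TestRunner:
        def __call__(self):
            result = JUnitCore.runClasses([MyJUnitTest])
            if not result.wasSuccessful():
                grinder.logger.error(
                    "JUnit failures: %d" % result.getFailureCount())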
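And for item 9, the workaround has roughly this shape. Note that start() and stop() below are hypothetical placeholders for whatever the experimental Console API actually exposes, not real Grinder calls:

    # Shape of the "sleep then stop" workaround for a timed test.
    # console.start()/stop() are hypothetical placeholders for the
    # experimental Console API calls, NOT real Grinder methods.
    import time

    def runTimedTest(console, durationSeconds):
        console.start()              # kick off the worker processes
        time.sleep(durationSeconds)  # let the test run for a fixed time
        console.stop()               # stop it and collect the summary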

Grinder seems very useful for standard HTTP GET tests, which seem to be the most common use case, but it has serious limitations for testing more complex distributed systems.