Today marked the first day of one of London’s newest and most anticipated technology conferences, QCon. It was the first tutorial day of the conference, with a morning session on JMX and an afternoon session on Test Driven Development. The crowds weren’t huge, but they had come from all over Europe and as far as Saudi Arabia. I had a great time at both sessions. Here’s my rundown of the two.
Session I: JMX with Simon Brown
I wasn’t sure exactly what to expect from this seminar, as I was only vaguely familiar with JMX before today; however, it turned out to be a really interesting session. JMX is a set of Java management extensions that allow an application to report various metrics back to the JConsole management console or to a bespoke console/application. These can be things like how many concurrent users are accessing a system component, how much memory the components are using, the Java heap size, or the CPU utilization of each Java process within a JVM. It can also expose the values of properties or variables and allow users to invoke class methods via the console (e.g. to start or stop a service).
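To make that concrete, here is a minimal sketch of the standard MBean pattern that JConsole picks up. The ServiceMonitor name and the com.example object name are my own illustration rather than anything shown in the session:

    // ServiceMonitorMBean.java -- the management interface; by convention it is
    // named after the implementation class with an "MBean" suffix.
    public interface ServiceMonitorMBean {
        int getConcurrentUsers();   // exposed as a read-only attribute in JConsole
        void stopService();         // exposed as an operation invokable from the console
    }

    // ServiceMonitor.java -- the implementation, registered with the platform MBean server.
    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class ServiceMonitor implements ServiceMonitorMBean {
        private volatile int concurrentUsers;

        public int getConcurrentUsers() { return concurrentUsers; }
        public void stopService() { /* shut the component down */ }

        public static void main(String[] args) throws Exception {
            ManagementFactory.getPlatformMBeanServer().registerMBean(
                    new ServiceMonitor(), new ObjectName("com.example:type=ServiceMonitor"));
            Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so JConsole can attach
        }
    }

Run the class, attach JConsole to the process, and the attribute and operation appear under com.example.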
Simon stated that JMX is best used to instrument coarse-grained architectural components in this way. He instruments Java applications with JMX for London-based banking clients primarily so that they can be monitored in production.
Simon’s blog can be found here: http://www.simongbrown.com/blog/
Session II: Test Driven Development with Erik Doernenberg
Unfortunately, Martin Fowler was not on hand for this tutorial as promised, but it was still an interesting session. We’re all pretty familiar with Test Driven Development by now, the practice of writing unit tests before authoring source code, and really I had been curious to see how Fowler would address the topic and to pick up tips for SQS Agile training.
Erik started with the premise that TDD is primarily a design technique, one that allows developers to focus on delivering only what’s required and on thinking about how best to model their application. Naturally, it has the additional benefit of producing lots of unit tests! Fortunately this session went a little further than your average TDD session. We started with the basics (writing tests, then producing source code) and quickly advanced to the more interesting topics, one of which became a recurring theme: state verification testing versus behaviour verification testing (i.e. the TDD classicists vs. the mockists). Older techniques such as using stubs or making state-based assertions are popular and still have their place. However, newer techniques such as mocking, which make assertions about how an object interacts with its collaborators, can also be useful. There was some discussion of the pros and cons, but Erik was careful not to place himself too firmly in either camp. The outcome was, as with many things, that the context should drive which to use and when. Martin Fowler has an interesting article on this.
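To illustrate the distinction, here is a small sketch of the two styles. The OrderService, PaymentGateway and the hand-rolled test double are my own hypothetical examples (a real project would more likely use jMock or EasyMock), but they show the shape of each approach: the classicist test asserts on resulting state, the mockist-style test asserts on the interaction with a collaborator.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class OrderServiceTest {

        interface PaymentGateway { void charge(String account, int amountInPence); }

        static class OrderService {
            private final PaymentGateway gateway;
            private int ordersPlaced;
            OrderService(PaymentGateway gateway) { this.gateway = gateway; }
            void placeOrder(String account, int amountInPence) {
                gateway.charge(account, amountInPence);
                ordersPlaced++;
            }
            int getOrdersPlaced() { return ordersPlaced; }
        }

        // A hand-rolled test double that simply records how it was called.
        static class RecordingGateway implements PaymentGateway {
            String chargedAccount;
            int chargedAmount;
            public void charge(String account, int amountInPence) {
                chargedAccount = account;
                chargedAmount = amountInPence;
            }
        }

        // Classicist style: exercise the object, then assert on its resulting state.
        @Test
        public void stateVerification() {
            OrderService service = new OrderService(new RecordingGateway());
            service.placeOrder("ACC-1", 500);
            assertEquals(1, service.getOrdersPlaced());
        }

        // Mockist style: assert on how the object talked to its collaborator.
        @Test
        public void behaviourVerification() {
            RecordingGateway gateway = new RecordingGateway();
            new OrderService(gateway).placeOrder("ACC-1", 500);
            assertEquals("ACC-1", gateway.chargedAccount);
            assertEquals(500, gateway.chargedAmount);
        }
    }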
Another purpose of this tutorial was to talk about software patterns that lend themselves to TDD. A natural one to open with is the red-green-refactor pattern, which simply involves writing failing tests, making them pass by developing the appropriate source code (i.e. taking the status of your build from red to green), and finally refactoring to improve the code. Erik recommended checking the code in once everything was green and then giving yourself a strict time limit for the refactoring so that you don’t waste time going nowhere. The object mother pattern also came up, which can be used to generate the objects that tests need. The dependency injection pattern also reared its head: the practice of passing a class’s dependencies (more accurately, its resource providers) in via the constructor, which makes the class inherently more testable because dummy objects can easily be passed in.
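Constructor injection is already visible in the earlier sketch (OrderService receives its PaymentGateway through its constructor, so the test can hand it a dummy). The object mother is worth a tiny sketch of its own; the Customer class and TestCustomers factory here are hypothetical, just to show the shape of the pattern:

    // A hypothetical domain class used by the tests.
    public class Customer {
        private final String name;
        private final String country;
        public Customer(String name, String country) {
            this.name = name;
            this.country = country;
        }
        public String getName() { return name; }
        public String getCountry() { return country; }
    }

    // The object mother: one central place that hands out fully populated,
    // meaningfully named test objects, so individual tests stay short.
    class TestCustomers {
        static Customer aUkCustomer() { return new Customer("Alice Smith", "UK"); }
        static Customer anOverseasCustomer() { return new Customer("Omar Hassan", "SA"); }
    }

    // In a test:  Customer customer = TestCustomers.aUkCustomer();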
Erik also recommended not testing private methods indirectly through the public methods that call them; instead, make such methods protected and keep the tests within the same package, which puts them within the same scope. Test coverage came up too. Erik thought it was of particular value for teams learning TDD and commented that the aim in such cases was for coverage to keep increasing; for experienced TDD teams he was not so sure of its value. He was certain that it was not a good thing to become too obsessed with coverage, and he did not advocate aiming for 100%.
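A quick sketch of that package-level trick; the class, package and figures are my own, purely illustrative:

    // src/main/java/com/example/pricing/PriceCalculator.java
    package com.example.pricing;

    public class PriceCalculator {
        // Protected rather than private, so a test in the same package can call it
        // directly without it becoming part of the public API.
        protected int applyDiscount(int amountInPence, int percent) {
            return amountInPence - (amountInPence * percent / 100);
        }
    }

    // src/test/java/com/example/pricing/PriceCalculatorTest.java -- same package as the class under test.
    package com.example.pricing;

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {
        @Test
        public void appliesPercentageDiscount() {
            assertEquals(900, new PriceCalculator().applyDiscount(1000, 10));
        }
    }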
Some other points:
- Copying test code is a no-no in Erik’s view. Tests should vary to suit what they are testing, and copying can mean you mistakenly end up with a very homogenized test code base.
- EasyMock is an alternative to jMock that may improve on it to some degree.
- We briefly discussed domain driven design and ubiquitous language. UL basically states that the language of the business should match the language used in the code. An interesting question came up: what do you do when the business language is not English? This can cause problems, as IDEs and programming languages don’t lend themselves well to non-English identifiers, though it was mentioned that you can get around Java’s reserved word restriction on “class” by placing an umlaut over the a, i.e. “cläss.” It certainly does look cool, as Erik pointed out! (There’s a small sketch of this after the list.)
- Symmetry in design: natural symmetries should appear in application code. A simple example would be that if a class has a setter for a property it should also have a getter.
- We were all entertained when Erik’s MacBook crashed mid-presentation.
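Coming back to the ubiquitous language point above, here is a playful sketch of what Java’s Unicode identifiers allow. Überweisung (“bank transfer”) is my own example of a non-English domain term; the cläss line is the umlaut trick mentioned in the session:

    // Java identifiers may contain Unicode letters, so domain terms from the
    // business language can be used directly as type and field names.
    public class Überweisung {          // "bank transfer"
        private long betragInCent;      // "amount in cents"
    }

    // And, as Erik joked, the umlaut is even enough to dodge a reserved word:
    class cläss { }                     // compiles, because "cläss" is not the keyword "class"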
Lecture slides will eventually be posted on Erik’s blog: http://blogs.doernenburg.com/