
Joe's Jelly

Joe Walnes rambling on and on about Software Development, Java and Extreme Programming.



joe@truemesh.com

:. 05 February 2003

  7:07:20 AM  

We first set off by requiring 95% coverage before commit. This was a bit of a pain in the ass, but it was basically doable. It made sure all features in the system had tests; it's simply not possible to commit anything without tests when you set the bar that high. But that doesn't necessarily mean it's test-first development, it's just tested development. [jutopia]

I love test coverage tools. Another team at work was deciphering and robustifying a hairball project by slowly retrofitting tests and refactoring. Each day they managed to push the Clover bar a tiny bit higher, and it served as a goal: today we'll get it to 20%, then 30%, then 90%...

Unfortunately, coverage tools can also be misleading. It's easy to write tests that execute the lines of code but don't actually test them.
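For example, here is a contrived sketch (JUnit 3 style, with a made-up PriceCalculator class that exists only for illustration). The test below executes every line of discount(), so a coverage tool reports those lines as covered, yet it never asserts anything about the results:

    import junit.framework.TestCase;

    // PriceCalculator is a made-up class, only here to illustrate the point.
    class PriceCalculator {
        public double discount(double price) {
            if (price > 100) {
                return price * 0.9; // 10% off big orders
            }
            return price;
        }
    }

    public class PriceCalculatorTest extends TestCase {
        // Both branches of discount() are executed, so a coverage tool
        // reports the method as fully covered, but nothing is asserted:
        // this "test" can never fail.
        public void testDiscount() {
            PriceCalculator calc = new PriceCalculator();
            calc.discount(50);
            calc.discount(200);
        }
    }

The coverage report shows discount() as fully covered, even though a broken discount calculation would go completely unnoticed.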

A complementary approach is to use a code mutation tool such as Jester - The Test Tester. Jester takes a class and a test case and runs the tests to check they all pass. It then makes tiny changes to the class under test, one at a time, rerunning the tests after each change. If the lines of code are tested properly, every little change should cause at least one test to fail. If the tests still pass, Jester tells you that the line is not being tested properly.
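To make that concrete, here is a hand-rolled illustration of the kind of mutation involved (not Jester's actual output), reusing the made-up PriceCalculator from the sketch above along with the kind of test that would catch the change:

    import junit.framework.TestCase;

    // The sort of tiny change a mutation tool makes (illustration only):
    //
    //   original:  return price * 0.9;
    //   mutant:    return price * 0.0;
    //
    // The coverage-only test above still passes against the mutant, so the
    // line would be flagged as not properly tested. Tests that pin down the
    // expected values kill the mutant:
    public class PriceCalculatorBehaviourTest extends TestCase {
        public void testBigOrderGetsTenPercentOff() {
            assertEquals(180.0, new PriceCalculator().discount(200), 0.001);
        }

        public void testSmallOrderIsNotDiscounted() {
            assertEquals(50.0, new PriceCalculator().discount(50), 0.001);
        }
    }

With assertions like these in place, mutating the constant makes a test go red, which is the kind of feedback Jester is looking for.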

Although Jester is harder to use and not something I'd typically run every day, it has told me some very interesting things about my code that no other tool has pointed out. Aside from highlighting lines of code that my tests execute but don't actually test, it has also pointed me at flawed algorithms that could be greatly simplified.

Ivan Moore has written some good stuff on the subject here and here.

