Agile Development : Better software through craft

Friday, January 03, 2003

Peter Van Dijck in Ease uses the term "Sovjet Design" (Soviet Design).  I love that term!!

There is an additional point.  Process-centeredness is based on a mechanistic view of the world that holds, at its worst, that the skills of the individuals involved in the work don't matter so long as "they follow the process".  It is frequently the result of over-internalizing the lessons learned from past problems (otherwise known as fighting the last war).  Excellence in any endeavor is rarely the result of unskilled or moderately skilled people looking hard in the rear-view mirror.


2:57:25 PM    

A paper on TDD for web apps is online at the Agile Alliance site. Edward Hieatt and Robert Mee have written a nice experience report on how they used TDD within their world at Evant.  There were just lots of great insights in this article!!

First, how did they get started with TDD?  They sat down as a group and groped through the process of writing the first tests together.  Now I am sure one could get mentoring and training, but I wonder if anything is better than fighting through it on your own.  What a great model for kicking off the team effort!

Second, TDD allowed them to practice what Ron Jeffries calls emergent design leading to emergent architecture.  It allowed them to start with EJBs, then move off of them with minimal impact and high confidence.  It allowed them to move into JSPs and then out of them with minimal impact.  It forced them to confront some of the harder TDD problems, like how to test JavaScript effectively.  It forced them to explore separation of concerns between servers and clients from the beginning. It forced them to create an acceptance testing framework that also served as an EAI integration framework when that was required.

Third, it allowed them to minimize documentation on the development side by forcing them to write tests well enough that the tests themselves serve as the documentation.
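To make that tests-as-documentation point concrete, here is a minimal sketch of what a first test-first class might look like.  This is my illustration, not code from the Evant paper; the Inventory class and its methods are invented for the example.

import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;

// The simplest Inventory that passes the tests below, written after
// the tests in proper test-first order. (Inventory is a made-up
// example, not from the Evant paper.)
class Inventory {
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    void receive(String sku, int quantity) {
        Integer current = counts.get(sku);
        counts.put(sku, (current == null ? 0 : current) + quantity);
    }

    int onHand(String sku) {
        Integer current = counts.get(sku);
        return current == null ? 0 : current;
    }
}

// The tests double as documentation: a new reader can see exactly how
// Inventory is meant to be used, and the test names state the
// intended behavior in plain language.
public class InventoryTest extends TestCase {
    public void testReceivingStockIncreasesOnHandCount() {
        Inventory inventory = new Inventory();
        inventory.receive("SKU-42", 10);
        assertEquals(10, inventory.onHand("SKU-42"));
    }

    public void testUnknownSkuHasZeroOnHand() {
        assertEquals(0, new Inventory().onHand("SKU-99"));
    }
}

Write the test, watch it fail, write just enough Inventory to pass, and the documentation has written itself.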

Let's look at those emergent architecture points just a bit more. First, what is emergent architecture? Eric Scheid has a good site for emergent architecture linkage in the context of IA.  But from my standpoint, emergent architecture is the result of TDD plus the "You aren't gonna need it" (YAGNI) principle plus the "Do the simplest thing that could possibly work" (DTSTTCPW) principle.

Let's get rid of the problems that probably shouldn't have been problems had YAGNI and DTSTTCPW been in force from the beginning.  It is likely that the EJB problem would not have occurred, because EJBs violate both principles.
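A quick sketch of what DTSTTCPW looks like in code (again my invention, not from the paper): start with a plain object the tests can drive directly, and skip the remote interfaces, homes, and deployment descriptors until a story actually demands them.

// YAGNI: no remote interface, no home interface, no deployment
// descriptor, because no story has demanded any of it yet.
// DTSTTCPW: a plain Java class the tests can drive directly.
public class OrderCalculator {

    // Start concrete; extract an interface only when a second
    // implementation actually shows up.
    public long totalInCents(long unitPriceInCents, int quantity) {
        return unitPriceInCents * quantity;
    }
}

If a later story demands distribution or transactions, the tests written against this class make the move relatively safe, which is exactly the EJB experience the Evant paper reports.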

For every other point, one can argue that a well-executed predictive, big-design-up-front (BDUF) development process could have prevented the problem by "considering it from the beginning".  The underlying assumption in that statement is that the engineering involved in refactoring is greater than the engineering involved in up-front infrastructure work.  But that equation only holds if the emergent architecture is ignoring critical customer requirements.  In any case where the stories have been identified and prioritized, the critical requirements will emerge and be dealt with.  And the added benefit of the constant refactoring is that the team acquires the confidence to do serious refactoring in ways that are efficient and effective.
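What does refactoring with confidence look like in practice?  One hedged sketch (the names are invented for illustration): extract persistence behind a small seam, so that an in-memory store can later be swapped for a JDBC- or EJB-backed one while the existing tests stay green.

import java.util.ArrayList;
import java.util.List;

// Minimal domain object for the sketch.
class Order {
    final String sku;
    final int quantity;
    Order(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }
}

// The seam the refactoring introduces: persistence behind an
// interface instead of wired directly into the domain logic.
interface OrderStore {
    void save(Order order);
}

// An in-memory implementation keeps the tests fast; a JDBC- or
// EJB-backed store can be swapped in later without touching the
// domain code, and the same tests verify both.
class InMemoryOrderStore implements OrderStore {
    private final List<Order> saved = new ArrayList<Order>();

    public void save(Order order) {
        saved.add(order);
    }

    public int count() {
        return saved.size();
    }
}

The refactoring is behavior-preserving; the tests are what tell you that it actually was.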

Commitment to emergent architecture means taking away the blame game (why didn't the customer, the requirements analyst, the development team, the management, the testers, the vendors, etc. foresee this problem and do something about it?).  Commitment to emergent architecture does not mean ignoring requirements, does not mean not thinking about design choices, and does not mean making poor design choices.  The commitment simply means that we will make the best choices we can based on what we know right now.  It means that we trust the participants to make needed changes as they are needed, with high confidence that the result is operationally equivalent and still runs.


10:41:32 AM    

Tuesday, December 31, 2002

Stefano Mazzocchi (via Sam Ruby) states the case eloquently for small increments of code in an open source project.

"The hardest and best the code is, the more harm it creates to the
community; this is because people will rather use the software rather
than extend it. Normally, if more than one blackboxware submission is
donated, the community will ask for a complete refactoring. (see
Xerces2)"

This is a great insight.  Every time this has happened on a project I am working on, the incentive to stay away from that code has been high.  When a large chunk of code drops in (I would argue even if it is relatively easy to understand) it creates a large barrier to understanding and thus to extension.  Now this may be a virtue if you don't want extension:), but if the expectation is that things will be changing over time then incrementalism is a good friend.

"The good old Software Engineering practices they teach you in college
are bullshit: making architecture decisions without continous
reversibility is expensive because design constraints change too much.
Those who want to apply hardware engineering practices miserably fail.
Open source is here to prove that such a "messy" way to do code is
actually the only one that works and scales."

This is another key insight: architecture as an emergent property rather than a static set of constraints or characteristics.  This is particularly important where adaptation is more important than optimization (see Jim Highsmith here). I like that! Architecture should follow design whenever possible to allow the system to evolve rapidly to where it wants to go.  Build it often, build it to run, and make sure you can always go back to what ran previously.

 


10:05:46 AM    

Thursday, November 14, 2002

Joel Spolsky has an excellent piece on Leaky Abstractions; if you haven't read it, go read it now.

First, one of the great things about agile methods is the focus on early deployment of real user capabilities.  If anything will surface the leaky abstractions in your toolset, it is testing early and often.

My favorite bits:

Abstractions fail. Sometimes a little, sometimes a lot. There's leakage. Things go wrong. It happens all over the place when you have abstractions.

All products/frameworks/tools make assumptions, and those assumptions get driven up into the abstractions upon which the products/frameworks/tools are based.  We cannot predict the system failures that will occur from any arbitrary assembly of software parts, because it is nearly impossible to understand how the abstractions will leak, particularly at the seams between the parts.
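Here is a small Java illustration of the kind of leak Joel is talking about (my example, not one from his article): java.util.List abstracts away the storage, but the cost model of the implementation leaks straight through the interface.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// List promises uniform access, but get(i) is O(1) on an ArrayList
// and O(n) on a LinkedList, so the same loop is linear in one case
// and quadratic in the other. The abstraction leaks.
public class LeakyListDemo {
    public static void main(String[] args) {
        time(new ArrayList<Integer>());   // fast
        time(new LinkedList<Integer>());  // dramatically slower
    }

    static void time(List<Integer> list) {
        for (int i = 0; i < 30000; i++) {
            list.add(i);
        }
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);  // the leak: cost depends on the implementation
        }
        long millis = (System.nanoTime() - start) / 1000000;
        System.out.println(list.getClass().getSimpleName()
                + ": " + millis + " ms (sum=" + sum + ")");
    }
}

The interface promised uniformity; the clock tells you which implementation you actually got.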

Why do leaky abstractions matter in web services land?  Remember that standards and tools can only shield you a bit from the harsh unabstracted world.  And remember that a good deal of the glue-ware in existence is specifically designed to be the bandage for leaky abstractions that are bleeding all over the place:).


9:21:03 AM    

Tuesday, November 12, 2002

In from Wes Felter: if you think you might be interested, submissions are welcome until December 13, 2002 for the O'Reilly Emerging Technology Conference 2003, running April 22-25, 2003 in Santa Clara.
9:16:50 AM    

Monday, November 11, 2002

I have written a short article based upon an excellent article in the November Communications of the ACM by Phillip Armour.  I love his metaphor of software as a knowledge creation/extraction activity, which he then links to many common estimation problems in software land.
11:00:03 AM    

Tuesday, November 05, 2002

strategy + business (free subscription required) has a nice longish article on trust and social network analysis within corporations.  First, on trust.

“People have at their very fingertips, at the tips of their brains, tremendous amounts of tacit knowledge, which are not captured in our computer systems or on paper,” says Professor Stephenson. “Trust is the utility through which this knowledge flows.”

Bingo #1!! Information sharing slows to zero as trust degrades.  This is critical to understand in any agile development effort.  Agility requires that both customer and developer be able to share a lot of information.  Without trust, such sharing is difficult to achieve.

 Such social scientists as Francis Fukuyama, Mark Granovetter, and Robert Putnam have made strong cases that high-trust societies have an enormous competitive advantage over legalistic societies, in which suspicion of people is a cultural value, because the transaction costs go down. In high-trust organizations, transaction costs are similarly lower.

Bingo #2.  Corporate cultures usually come down as either legalistic or trust-based.  If your corporate culture is not trust-based, then agile methods are likely to be met with a great deal of resistance.

Now, social network analysis.  This is a lot fuzzier.  Prof. Stephenson is a prime mover behind NetForms International, which appears to provide tools and services related to social network analysis. While I can easily understand how such analysis might help visualize the ecosystems inside a company, it is not as clear to me how much help comes from techniques that require this kind of commitment just to form a static picture that is eroding from day one.  I am not a big fan of static views in agile worlds, and I doubt that trust is a static value:)


10:28:08 AM    

Saturday, November 02, 2002

Platforms have been in the blogs a bit in the past couple of months, and after this week they are on my mind as well. Trying to decide which platforms to understand, support, and follow is a key problem for anyone in this business.  I recommend all these links for a good range of thought on what platforms are and how platform developers think differently than application developers.

Brent Sleeper jumps in this week with his take on platforms.  Ray Ozzie, Joel Spolsky, and Dave Winer recently discussed platforms (Joel, Ray's first response, Joel's response, Dave's input, Ray's platform piece). 

The platform vendor (whether a single vendor or a consortium of vendors) has a vested interest in my having NO MORE THAN limited success.  If I create an application on top of the platform that is too profitable, they will try as hard as they can to pull that revenue stream back into the platform, which means I have to respond to the customer faster than the platform vendor to remain viable.  If my needs at the platform level don't generate enough revenue, I will hear "it's coming in a future release" until I run out of money:).

So where are my bets in the short term?  Well, I am betting on platforms that are globally transparent (licensing and pricing models that scale globally), meaning that I can leverage work being done around the globe.  I don't think those will be single-vendor proprietary platforms, but that is just me:).


12:34:04 PM    

© Copyright 2003 Craig Johnson.




 

