Sunday, June 8, 2003

Craig Andera on why AOP is broken, why I surprisingly agree, and what I am doing about it

Craig Andera has some interesting thoughts on AOP and specifically mentions the work I have been doing in that area. He says that it doesn't work and never has, because services are never truly orthogonal and have various interdependencies. In essence he's saying (I guess) that because those interdependencies create a whole new level of complexity, the AOP approach is broken and it's better to generate explicit code than to use interception techniques. I partially agree, and I always put a warning at the end of my talks about this issue: there is a limited set of use cases for which an aspect'ish approach is useful. Security, logging, monitoring, billing, transaction enlistment, and a few others.

One of the biggest problems is service ordering. You need to run the decryption and signature-verification services before you can even evaluate a header that any other service can use. And even then, when you have something like a transaction-enlistment filter, do you open the transaction before or after a logging service wants to write something to a database? Does the logged data need to stay in the logging store when a transaction aborts? Yes? What if the log is used for billing? No? What if the log is used for diagnostics?
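To make the ordering problem concrete, here is a minimal sketch. The interceptor names and the `Interceptor` interface are my own invention for illustration, not from any real stack; the point is simply that the pipeline only works when decryption runs before anything that wants to read a header.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical message whose headers only become readable after decryption.
class Message {
    Map<String, String> headers = new HashMap<>();
    boolean decrypted = false;
}

interface Interceptor {
    void handle(Message msg);
}

class DecryptionInterceptor implements Interceptor {
    public void handle(Message msg) {
        msg.decrypted = true;                   // decrypt payload, expose headers
        msg.headers.put("txn-id", "42");
    }
}

class TransactionInterceptor implements Interceptor {
    public void handle(Message msg) {
        if (!msg.decrypted)                     // ordering bug: header not readable yet
            throw new IllegalStateException("cannot read txn-id before decryption");
        System.out.println("enlisting in txn " + msg.headers.get("txn-id"));
    }
}

public class Pipeline {
    public static void main(String[] args) {
        List<Interceptor> chain = new ArrayList<>();
        chain.add(new DecryptionInterceptor()); // swap these two lines and it breaks
        chain.add(new TransactionInterceptor());
        Message msg = new Message();
        for (Interceptor i : chain) i.handle(msg);
    }
}
```

Nothing in the chain itself expresses that constraint; it lives entirely in the order of two `add` calls, which is exactly the fragility being discussed.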

However, being explicit when chaining services together doesn't make things any better than using interception:

try
{
   handleServiceA(msg);
   handleServiceB(msg);
   handleServiceC(msg);
}
catch( Exception e )
{
   // do proper handling
}

is just as broken. I don't think it fundamentally matters how code gets woven into the call chain. Setting up contexts is just one issue. What's even more difficult is finding a way to deal with errors in the presence of cooperating aspects (or, in more general terms, interception services). What's clear is that there's no way around interception-driven services in a web-services world. It's all pipeline-based and, even worse, the pipelines are distributed pipelines of pipelines. It's too easy to say "it's broken, get over it". That doesn't help solve what is an actual problem.
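The error-handling problem can be sketched like this (the `Service` interface and its `undo` method are placeholders of my own, not an actual API): when one handler in the chain fails, the handlers that already ran have to be unwound somehow, and no generic chaining code can decide *how* on their behalf.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical chained service with a hand-written compensation step.
interface Service {
    void handle(String msg);
    void undo(String msg);   // every service must invent its own undo semantics
}

public class Chain {
    public static void run(String msg, Service... services) {
        Deque<Service> done = new ArrayDeque<>();
        try {
            for (Service s : services) {
                s.handle(msg);
                done.push(s);   // remember what has run, for unwinding
            }
        } catch (RuntimeException e) {
            // Unwind in reverse order. But should a logging service really
            // un-log on failure? That is exactly the billing-vs-diagnostics
            // question from above, and it cannot be answered generically here.
            while (!done.isEmpty()) done.pop().undo(msg);
            throw e;
        }
    }
}
```

Whether the chain is written out explicitly or assembled by an interception framework, this unwinding logic is the same, which is why I don't think explicitness by itself fixes anything.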

A promising approach is to make aspects/interceptors act like resource managers and coordinate their work using a very lightweight 2PC protocol ("AC" guarantees only; no "ID"). Using 2PC allows interceptors/aspects to coordinate their work and know about each other before any work actually gets done. I have discussed these issues with a couple of people in depth, and we put some code together that essentially implements a little in-memory "DTC" for that purpose. We call it a "WorkSet" instead of a transaction. There's still some work to be done there, but I think I'll be able to post an example in a little while. Maybe around TechEd Europe time.
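The WorkSet code isn't posted yet, so the following is only my sketch of the idea as described above; the names `WorkSet` and `Participant` are placeholders. Each interceptor does its work tentatively in a prepare phase and votes, and only when everyone votes yes do all outcomes become effective together. There is no durability or isolation here, just the "AC" part.

```java
import java.util.ArrayList;
import java.util.List;

// A participant is an interceptor acting like an in-memory resource manager.
interface Participant {
    boolean prepare();   // do the work tentatively; vote yes/no
    void commit();       // make the tentative work visible
    void rollback();     // discard the tentative work
}

// A tiny in-memory "DTC": two-phase coordination with "AC" only, no "ID".
public class WorkSet {
    private final List<Participant> participants = new ArrayList<>();

    public void enlist(Participant p) { participants.add(p); }

    public boolean complete() {
        // Phase 1: every participant works tentatively and votes.
        for (Participant p : participants) {
            if (!p.prepare()) {
                for (Participant q : participants) q.rollback();
                return false;
            }
        }
        // Phase 2: all voted yes, so all outcomes take effect together.
        for (Participant p : participants) p.commit();
        return true;
    }
}
```

The payoff for the logging dilemma above: a logging interceptor that buffers its entry in `prepare()` and only writes it in `commit()` no longer has to guess during the request whether the transaction will abort.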


4:40:43 PM

Sam on Perf

Sam outs himself as a fan of low-level performance optimization. That's all good and fair, but micro-optimization often takes way too much time for way too little effect on overall application throughput and scalability. For distributed apps, the true optimization happens during the architecture phase. Or, as my friend Steve Swartz put it during our "Scalable Apps" tour: when you are stuck in a traffic jam in a Porsche, all you do is burn more gas idling. Scalability is about building wider roads, not about building faster cars.


5:56:09 AM