Pete Wright's Radio Weblog
Musings on anything and everything, but mainly code!

02 June 2004

Extreme Programming Adventures in C#

I was lucky enough to pick up four new books for work yesterday and started to work my way through them on the commute home. One day later and I'm halfway through Extreme Programming Adventures in C# by Ron Jeffries. ExPAC (as I'll call it from now on to save my fingers) is one of a line of new-look, super-slick titles from Microsoft Press focused on development best practices with .NET, and this particular one is turning out to be a stunner. Ron is one of the original advocates of eXtreme Programming, and in the book he shows how XP can be applied not only to learn C# and the .NET Framework, but also to develop a complete application.

The book reads almost like a tech-heavy blog, and Ron has a fantastic writing voice that keeps you awake and on your toes no matter how deep he goes. I've done a lot of research into XP in order to build up Edenbrook's own internal development methodology, but it's still fantastic to virtually sit alongside someone who's been there and done that as they work their way through a project. Unlike many authors, Ron also chose to expose his mistakes as well as his successes, illustrating both the strengths and weaknesses of XP along the way.

If you want a weighty but light-hearted look into just what XP is all about and how it can be applied to a project, you owe it to yourself to check this book out. In fact, even if you aren't interested in XP, there are some great development best practices in this book, including some fantastic tips and code showing how to unit test a graphical user interface (he uses mock objects, something I'd come across when testing data tiers, but seeing them applied to a GUI is, while obvious in hindsight, a great eye opener).
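To give you a flavour of the mock object trick, here's a minimal sketch of the general technique (my own invented names and NUnit, not code from the book): hide the form behind an interface, then unit test the logic against a mock implementation instead of a real Windows Form.

using NUnit.Framework;

// The form's surface, abstracted. The real Windows Form implements
// this interface; so does the mock below.
public interface IOrderView
{
    string StatusText { get; set; }
    void ShowError(string message);
}

// The mock records what the logic did to it; no real GUI required.
public class MockOrderView : IOrderView
{
    private string status = "";
    public string LastError = "";

    public string StatusText
    {
        get { return status; }
        set { status = value; }
    }

    public void ShowError(string message)
    {
        LastError = message;
    }
}

// The logic under test only ever talks to the interface.
public class OrderPresenter
{
    private IOrderView view;

    public OrderPresenter(IOrderView view)
    {
        this.view = view;
    }

    public void SaveOrder(int itemCount)
    {
        if (itemCount == 0)
            view.ShowError("An order needs at least one item.");
        else
            view.StatusText = "Order saved.";
    }
}

[TestFixture]
public class OrderPresenterTests
{
    [Test]
    public void EmptyOrderShowsError()
    {
        MockOrderView view = new MockOrderView();
        new OrderPresenter(view).SaveOrder(0);
        Assert.AreEqual("An order needs at least one item.", view.LastError);
    }
}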

You can find out more about the book, Ron, and eXtreme Programming in general at www.xprogramming.com.


UPDATE: The book is actually an edited version of Ron's online diary/article series at www.xprogramming.com. So, if you want to learn all about eXtreme Programming from a master without actually supporting him financially by buying his book (SHAME ON YOU... SHAME!), then check out the Adventures in C# section of that site.



11:58:42 PM

Typed Datasets and the object model

Every few months a friend and I go back over the old "ADO.NET is super cool" versus "ADO.NET sucks" argument. My friend, let's call him Bob, is a staunch advocate of the latter and a strong believer that we should never waste our time writing data access code; instead, code generators should do all the work for us. I'm firmly in the ADO.NET RULES camp, and I detest the idea of using code generators to drive it. ADO.NET at a low level offers so much flexibility in how you work with your data, including how you access the appropriate data sources, that I fear a code generator would place me at too great a distance from the code to produce an effective, efficient data layer.

This is all beside the point, though. Today's discussion centered around Typed DataSets. Bob has inherited an application that uses them to deal with a fairly complex relational database at the back end. One element of the database can be likened to part of the Northwind database. In Northwind you have Orders, Order Items and Products. An order contains order items, and each order item references a single product. In addition, Northwind includes a Suppliers table; suppliers sell products.
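Just so we're all picturing the same shape, here's that model built up as an untyped DataSet (the column names are my own, for illustration only):

using System.Data;

public class NorthwindShape
{
    public static DataSet Build()
    {
        DataSet ds = new DataSet("Northwind");

        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));

        DataTable orderItems = ds.Tables.Add("OrderItems");
        orderItems.Columns.Add("OrderID", typeof(int));
        orderItems.Columns.Add("ProductID", typeof(int));

        DataTable products = ds.Tables.Add("Products");
        products.Columns.Add("ProductID", typeof(int));
        products.Columns.Add("SupplierID", typeof(int));

        DataTable suppliers = ds.Tables.Add("Suppliers");
        suppliers.Columns.Add("SupplierID", typeof(int));

        // An order contains order items...
        ds.Relations.Add("Order_OrderItems",
            orders.Columns["OrderID"], orderItems.Columns["OrderID"]);

        // ...each order item references a single product...
        ds.Relations.Add("Product_OrderItems",
            products.Columns["ProductID"], orderItems.Columns["ProductID"]);

        // ...and suppliers sell products.
        ds.Relations.Add("Supplier_Products",
            suppliers.Columns["SupplierID"], products.Columns["SupplierID"]);

        return ds;
    }
}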

In Bob's inherited application he has two typed datasets: one to work with, for the sake of the example, Order Items and their associated Products, and another to work with Suppliers and their Products. Sounds fair enough. Bob came to me frustrated, though, that he couldn't pass a Product (be it a Product from the Order Items dataset or a Product from the Suppliers dataset) down to the business tier.

Two things struck me about this. First up, Bob's object model was wrong. When you create a typed DataSet in Visual Studio, you are subclassing DataSet, DataTable, DataRow and a few other System.Data objects to give yourself nice strongly typed equivalents that the compiler can syntax and type check for you at compile time. If you have two typed DataSets in the application, each containing the Products table, then you have effectively produced two (in fact it's a heck of a lot more than two) identical classes in your code. From a pure OO view, that's not good. From an eXtreme Programming point of view, that level of replication in the code is simply abhorrent. As Kent Beck warns us in the Code Smells chapter of Martin Fowler's wonderful book "Refactoring":

"Number one in the stink parade is duplicated code. If you see the same code structure in more than one place, you can be sure that your program will be better if you find a way to unify them." - Kent Beck, Refactoring published by Addison Wesley.

The second thing that struck me was that there should be no problem passing an instance of either of the duplicated classes down to a business tier. After all, the classes just subclass System.Data.DataTable and co., so all you'd need to do is pass the typed DataTable in as a generic DataTable object. Bob was having none of this, though. He was using Typed DataSets to benefit from the compiler catching field spelling mistakes and other such wonders. He'd rather catch those errors at compile time than at run time, and any solution I could suggest (creating an instance of a typed dataset and "Merge"ing the untyped DataTable into it) did not fit his requirements. "Surely there must be some way to achieve this; surely it's a common problem," Bob pleaded.
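For the record, the Merge suggestion looks something like this (reusing the hypothetical names from the sketch above):

using System.Data;

public class ProductsService
{
    // Accept the lowest common denominator...
    public void Process(DataTable products)
    {
        // ...then merge it into a fresh typed dataset to get typed
        // access back. The table and column names have to match the
        // typed schema, and that match is only checked at run time,
        // which is exactly what Bob objected to.
        OrderItemsDataSet typed = new OrderItemsDataSet();
        typed.Merge(products);
    }
}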

I spent the afternoon thinking about this and mulling over the inevitable comments that this was yet another example of the uselessness of ADO.NET. Was I missing something? I've not actually used typed datasets in a full production application, preferring instead the flexibility afforded me by untyped datasets. I thought about the problem long and hard.

The solution I kept coming back to told me that Bob's model was wrong. The best solution, from a code clarity and elegance point of view at least, would be a single typed dataset with just one definition of the Products class. Sure, there would be times when the typed dataset would be less than full, but there's not much overhead in that. Admittedly, when you create the typed dataset, the various other tables defined in the XSD are created empty, but there are ways around that. If you only want to work with the one table, just create an instance of the typed table class. What's the issue there?
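In code, the shape I mean is something like this, where NorthwindDataSet stands in for a hypothetical single typed dataset generated from the whole schema:

public class OneDataSetExample
{
    public void Example()
    {
        // One typed dataset, one definition of Products. The other
        // tables defined in the XSD just sit there empty.
        NorthwindDataSet ds = new NorthwindDataSet();

        // And if you only care about the one table, create just the
        // typed table class on its own (depending on how the code was
        // generated you may need to be in the same assembly, or make
        // the constructor public).
        NorthwindDataSet.ProductsDataTable products =
            new NorthwindDataSet.ProductsDataTable();
    }
}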

An alternative solution that occurred to me would be to manually edit the typed dataset class and extract the Products class definition into a standalone class, then have each typed dataset subclass the extracted class. Then you have no problem at all passing a typed table down to a business tier method.
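Roughly, the refactoring would come out like this (a hand-rolled sketch, nothing like the full generated code):

using System.Data;

// The one, extracted definition of the Products table.
public class ProductsDataTable : DataTable
{
    public ProductsDataTable() : base("Products")
    {
        Columns.Add("ProductID", typeof(int));
        Columns.Add("ProductName", typeof(string));
    }

    // Typed helpers keep the compile-time checking Bob wants: a
    // misspelled column name at the call site won't compile.
    public DataRow AddProduct(int productId, string productName)
    {
        DataRow row = NewRow();
        row["ProductID"] = productId;
        row["ProductName"] = productName;
        Rows.Add(row);
        return row;
    }
}

// Each typed dataset then subclasses the extracted definition
// instead of carrying its own private copy...
public class OrderItemsProductsTable : ProductsDataTable { }
public class SuppliersProductsTable : ProductsDataTable { }

// ...and one business tier method happily accepts either.
public class ProductLogic
{
    public void Process(ProductsDataTable products)
    {
        // works for both subclasses
    }
}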

Personally, though, I'd rather stick with untyped datasets all the way. If you want a table, create a DataTable. If you just want to read and process a bunch of data, use the firehose DataReader. If you want to process and work with a bunch of related tables, fire up a DataSet. Why on earth would you want to bind your code, the most fragile aspect of your application, to a database schema that could change with just a flick of SQL Server Enterprise Manager?
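For completeness, this is all I'm talking about, using nothing but the standard ADO.NET classes (SQL Server flavour; the connection string is assumed to come from configuration):

using System.Data;
using System.Data.SqlClient;

public class ProductsData
{
    private string connectionString;

    public ProductsData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Firehose: forward-only, read-only, and fast.
    public void DumpProductNames()
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT ProductName FROM Products", conn);
            conn.Open();
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
                System.Console.WriteLine(reader.GetString(0));
            reader.Close();
        }
    }

    // A disconnected table for when you need to hold on to the data.
    public DataTable LoadProducts()
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT ProductID, ProductName FROM Products",
            connectionString);
        DataTable products = new DataTable("Products");
        adapter.Fill(products);
        return products;
    }
}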

Just my opinions, mind you, and I'd really welcome some comments on this. Have you found no problems working with Typed DataSets in your enterprise-grade multi-tier applications? Do you find Typed DataSets too restrictive? Do you create multiple datasets containing the same tables? Tell me, I'm really curious to know.



6:18:21 PM


© Copyright 2004 Pete Wright.
Last update: 01/07/2004; 16:45:12.