Saturday, June 7, 2003

Ingo has WSE 2.0 and is obviously all excited about it.
9:05:44 AM      comments []

On 400 level sessions and scores

Samer Ibrahim writes "I believe that a 400 level session should present 400 level material regardless of how many people have never wrote a single line of code in their entire life.  That's not my problem and that's not fair to those of us who are here to get an edge.  Find 100-200 level sessions instead."

My WEB404 session at TechEd US was probably a 500 because I really had lots (too much) of code. The downside of doing 400 level sessions at an event with a very broad audience spectrum is that you are getting killed in the feedback and scores after the talk, no matter what you do. Either you're too shallow for some or you are too deep down in the bits for others. Now, what needs to be understood is that speakers will often scale back on content if they feel that the content is too deep for the audience they have, just because it'll kill their average score. There's lots of competition behind the scenes on that.

What was new at this TechEd was that the written comments are now available to MS in "softcopy", which means that they get printed up with the numbers. And if you have only 10 people in an audience of 300 who write "Thank you, this session was really helpful for me", you feel like you have done your job right and MS sees that too, which is of much higher importance for "us" external speakers than the average score.

So, here's a hint: My understanding is that the scoring system is still open over the weekend at www.mymsevents.com. If you attended a session that you found helpful and on which you haven't given a score so far, do so and don't forget to write a comment stating what you liked or what you would like to see improved. That's especially true for sessions with deep and focused technical content and lots of people in the audience. These will typically get comparatively bad scores, because it's nearly impossible that the content is absolutely relevant for 400 or 600 people in a room at a conference like that. So, if you think that the speaker did a good job, say so. You'll be heard.

(I should add that I am fairly happy with my scores already and I am not begging ;)


5:38:04 AM      comments []

Andres observes that Steve and I are in agreement on very many things, including what to put on slides in talks covering services, layering and tiering.  ;)


4:27:44 AM      comments []

Clemens - I attended your two sessions, AOP and "I didn't know you could do that". Excellent stuff. A couple of questions I have:
1. I heard, and I might be wrong (and please correct me if I am), that you have serious issues with .NET Remoting. Is that true, and if it is, why?
2. In an app where you want to cache objects, would you use COM+ object pooling, or are there better ways to cache your objects?
And last, but not least, have you written any papers? And can you tell me any good book to go deep into the stuff that you talked about?

Thanks Ali / Ali Khawaja • 6/6/03; 5:53:13 PM

1. I don't have serious issues with Remoting as such. I am just saying that it is the successor of Automation and not of the full-blown DCOM model. Hence, it is useful in all the scenarios (mostly on-machine) where Automation is useful in the unmanaged world. Once you go across machines, where security plays a role and you need an appropriate hosting and process model for your objects, there is Enterprise Services. Whenever you see a need to add a custom channel sink to Remoting for authentication, authorization, encryption, or signatures, there is a fair chance that you are using the wrong technology set. Whenever you think you need to write a custom host for your app in order to tune the thread pool and up the number of available threads for Remoting, you are using the wrong technology set. There's nothing fundamentally wrong with Remoting -- there's just a limited set of use cases where it is applicable. My issue is only with how many people are using it and how it is being portrayed as the successor to DCOM, which it is not.

One thing is important to keep in mind: The COM transport sits on top of Microsoft RPC, which is, in turn, the core technology stack that essentially powers most call-level communication between the components of Windows and hence has had full kernel support ever since the NT kernel saw the light of day. RPC supports virtually all network protocols as well as shared-memory marshaled L(R)PC [read!] for on-machine calls. Remoting sits on top of the CLR and on top of the Framework, which, in turn, sits on the Win32 user-level API. That's a wholly different ballgame.

Enterprise Services has a very elegant solution for mixing the two models in that it uses Remoting to do almost all marshaling work (with two exceptions: QC and calls with isomorphic call sigs) and then tunnels the serialized IMessage through DCOM transport, which means that you get full CLR type fidelity while using a rock solid transport that has been continuously optimized ever since 1993. I understand that some people consider a 10 year old protocol boring; I just call it "stable". Also I see people complaining about COM being hard to deploy, because it requires use of the registry and distribution of proxies. Admittedly, there's some truth to that, but in the end, you will also have to deploy and customize config files for Remoting and distribute proxies there. That's true for any RPC-type technology and is, as per current practice, even true for most Web Services. For distributed systems of any scale, "xcopy deployment" is a sweet dream. There's work to do.
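The tunneling idea described above can be sketched in a few lines. This is not Enterprise Services internals, just a toy illustration of the general pattern: serialize the call with the rich type system's own serializer into an opaque blob, carry the blob over an unrelated, established transport, and deserialize with the same serializer on the far side, so the transport never needs to understand the payload's types. Python's pickle stands in for the CLR serializer; the `transport_send` function is a hypothetical stand-in for the wire.

```python
import pickle

# "Remoting side": the rich serializer (pickle standing in for the CLR
# formatter) turns the call message into an opaque byte blob, keeping
# full type fidelity for the payload.
call_message = {"method": "GetBalance", "args": (42,), "call_id": 7}
blob = pickle.dumps(call_message)

def transport_send(payload: bytes) -> bytes:
    # Hypothetical stand-in for the established transport (DCOM/RPC in
    # the Enterprise Services case): it only ever sees bytes and neither
    # knows nor cares about the payload's type system.
    return payload

received = transport_send(blob)

# Receiving end deserializes with the same serializer, so nothing is
# lost between the two stacks.
roundtripped = pickle.loads(received)
assert roundtripped == call_message
```

The point of the split is that each layer does only what it is good at: the serializer owns type fidelity, the transport owns wire-level robustness and security.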

2. Yes. Enterprise Services object pooling is great to pool object instances and guard access to limited resources.
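The pattern behind that answer is simple enough to sketch in any language. The following is a minimal, language-neutral illustration of what object pooling buys you (not the COM+/Enterprise Services implementation itself): a fixed number of pre-built instances cap concurrent use of a limited resource, and callers block until an instance is free instead of constructing a new one. The `ExpensiveConnection` class is a hypothetical stand-in for whatever scarce resource the pooled object guards.

```python
import queue

class ResourcePool:
    """Toy object pool: a fixed set of instances guards access to a
    limited resource; callers block until an instance is free."""

    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # pre-build all instances up front

    def acquire(self, timeout=None):
        # Blocks (up to timeout) until an instance is available, which
        # is how the pool caps concurrent use of the resource.
        return self._pool.get(timeout=timeout)

    def release(self, obj):
        # Return the instance for reuse instead of discarding it.
        self._pool.put(obj)

class ExpensiveConnection:
    created = 0  # counts constructions to show reuse
    def __init__(self):
        ExpensiveConnection.created += 1

pool = ResourcePool(ExpensiveConnection, size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()                   # reuses "a"; nothing new is built
print(ExpensiveConnection.created)   # → 2
```

Note that this is pooling for resource guarding, not caching by key: every pooled instance is interchangeable, which is exactly why it fits connections and similar scarce resources better than general object caches.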

Finally, I have written a book on Enterprise Services, which is, for a variety of historical reasons, in German. However, I am talking to a publisher about a translation, and once that happens I will definitely rev it so that it incorporates all of my "current" thinking (of course).


4:18:36 AM      comments []