It's Like Déjà Vu All Over Again
"You could probably waste an entire day on the preceding links alone. But why take chances? We also give you Paul Snively..." — John Wiseman, lemonodor
Hun Boon Teo @ 05/12/2002 11:00 PM. Talking about philosophy, I can only say that a free man on the street is not necessarily free, whereas a man locked behind bars could be as free as a bird in the sky.
At the end of the day, a combination of perception, business decisions, and pragmatism will decide the approach you adopt. Or, philosophically speaking: to each his own, live and let live :-).
Well put! My perception is that I stand to gain nothing by limiting myself to Microsoft or Sun technologies, therefore I don't.
9:44:59 PM
just a few strings. How did version 2.0 of Pivia's software allocate any more memory?
So far it doesn't look like we're leaking. We delete all the new stuff.
The problem is that the allocations occur at all. They should be rare.
My original design called for reaching a steady state of no allocation.
Since this was considered overkill, folks said a few strings can't hurt.
Then a few more, and a few more. Oh heck, let's do what we want.
Okay, now you've got hundreds of allocations per request. Clever.
Not surprisingly, from my perspective, we now see lock contention.
Of course, we can't tell exactly where without a thorough analysis.
But first thing, we can get a handle on the unnecessary allocations.
The number of threads running should not degrade speed per thread.
I've got folks allocating heap-based strings just for smallish appends.
And the free use of STL collections crept into numerous places as well.
After we replace their allocators, we'll measure the size of that impact.
The reward for being sloppy is getting to do it twice when it's slow.
Pretty funny. Your experience (when generalized) parallels mine: people say you're being a purist, an "architecture astronaut," and then go build a system that has some kind of unacceptable property that could have been avoided with the "overkill" architecture.
The moral, for those who didn't catch it: sometimes it's cheaper and more efficient to stop, think, and do it right the first time. You wouldn't think that would need explication. But history has demonstrated repeatedly that nothing could be further from the truth.
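For the morbidly curious, here's a minimal C++ sketch of the steady-state discipline in question: reserve one buffer up front and reuse it, instead of constructing fresh heap-based strings for every smallish append. The Request and buildHeader names are mine, purely for illustration; this is emphatically not Pivia's code.

    #include <cstdio>
    #include <string>

    // Hypothetical per-request context: the scratch buffer's capacity
    // survives across requests, so after warm-up the appends below do
    // no heap allocation at all.
    struct Request {
        std::string scratch;

        Request() { scratch.reserve(4096); }   // allocate once, up front

        const std::string& buildHeader(const char* name, const char* value) {
            scratch.clear();                   // keeps capacity; no free/alloc
            scratch += name;                   // small appends stay in capacity
            scratch += ": ";
            scratch += value;
            return scratch;                    // steady state: zero heap traffic
        }
    };

    int main() {
        Request r;
        for (int i = 0; i < 3; ++i)
            std::printf("%s\n", r.buildHeader("Content-Type", "text/html").c_str());
        return 0;
    }

The same idea extends to the STL collections: hand them a pooling allocator, or at least reserve capacity once, and the per-request allocation count drops back toward zero.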
9:30:27 PM
As a joke, I started telling folks this object style was Turing-complete.
You can get anything you need done by making and unmaking objects.
Oh dear God. Now someone will feel compelled to go off and actually implement a Universal Turing Machine this way.
And then someone else will feel compelled to demonstrate that it can all be done at compile time with template metaprogramming. Your compile times will be excruciating, but your runtime will simply send the value of some struct's "value" member to cout.
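Just to make the joke concrete, here's the canonical toy version in C++, a compile-time factorial: the compiler does all of the work, and the runtime, exactly as advertised, does nothing but send a struct's "value" member to cout.

    #include <iostream>

    // All of the "computation" happens during template instantiation.
    template <unsigned N>
    struct Factorial {
        enum { value = N * Factorial<N - 1>::value };
    };

    template <>
    struct Factorial<0> {
        enum { value = 1 };
    };

    int main() {
        // The only runtime work: print a struct's value member.
        std::cout << Factorial<10>::value << std::endl;
        return 0;
    }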
9:21:15 PM
Thanks, but I doubt it: I don't have as much to say as I thought I did (or rather, my feelings are still so amorphous and vague that I'm reluctant to recapitulate them, which amounts to the same thing). John Wiseman actually did a vastly better job of summarizing the criticisms of Python than I expect I could, given my work with it to date.
If I understand this terminology, then IronDoc is a hierarchical db.
(Or, it would be if it were not still vaporware at the present time.)
Normally I call IronDoc a structured storage system. Same thing.
It's not an object database, because I see no need for an object-based one.
I think class-oriented persistence frameworks are tedious and ugly.
Instead, I like a system that presents mechanisms for persistence.
Then you can build whatever you want from graphs and hierarchy.
You can always add object-based framework layers on the very top.
But I think it's idiotic to lace all data storage with code dependency.
Why would you tie data integrity to fragile class and object issues?
I can't wait for IronDoc. Literally. So I use e4Graph and MetaKit. All joking aside, I very much look forward to a side-by-side comparison. I have to confess that I'm now quite taken with e4Graph/MetaKit; converting will be a hard sell, even if I really want to on the basis of high expectations of reliability and performance. Maybe once IronDoc and Mithril are both done...
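To illustrate the mechanism-versus-framework distinction being drawn above, here is a purely hypothetical C++ sketch (not IronDoc's or e4Graph's actual API): the storage layer knows only about nodes, children, and named string attributes, and any class-oriented persistence is an optional layer built on top, so the stored data never depends on your class definitions.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical "mechanism" layer: nodes, children, named attributes.
    // No application classes anywhere in sight.
    struct Node {
        explicit Node(const std::string& t) : tag(t) {}
        ~Node() { for (size_t i = 0; i < kids.size(); ++i) delete kids[i]; }

        Node* addChild(const std::string& childTag) {
            kids.push_back(new Node(childTag));
            return kids.back();
        }
        void setAttr(const std::string& key, const std::string& val) {
            attrs[key] = val;
        }

        std::string tag;
        std::map<std::string, std::string> attrs;
        std::vector<Node*> kids;
    };

    // Optional object layer on top: the class decides how to map itself
    // onto the hierarchy, rather than the store depending on the class.
    struct Person {
        std::string name;
        int age;
        void saveTo(Node* parent) const {
            Node* n = parent->addChild("person");
            char buf[16];
            std::sprintf(buf, "%d", age);
            n->setAttr("name", name);
            n->setAttr("age", buf);
        }
    };

    int main() {
        Node root("people");
        Person p = { "Ada", 36 };
        p.saveTo(&root);
        std::printf("stored %u child node(s) under <%s>\n",
                    (unsigned)root.kids.size(), root.tag.c_str());
        return 0;
    }

If the Person class changes or disappears tomorrow, the stored hierarchy is still perfectly readable; that's the point of keeping data integrity free of code dependencies.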
Early on I described IronDoc to folks as a way to "compile" XML.
Basically it aims to be an efficient binary form of what XML does.
So there's little inherent impedance involved in any data translation.
And IronDoc is designed to perform well despite arbitrary scaling.
Obviously linear plain-text formats like XML are not scale-friendly.
Clearly. As you and I have discussed before, it's amazing what you can accomplish with zlib at compression level 3. Of course, there's no accommodation there for searching and the like, so there's still a need for IronDoc.
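Here's a toy demonstration of the zlib point (my own throwaway, nothing to do with IronDoc or e4Graph): compress2() at level 3 squeezes repetitive XML-ish text down nicely, but the output is an opaque blob, which is exactly why searching and random access still want something like IronDoc.

    // Build with something like: g++ zlibdemo.cpp -lz
    #include <cstdio>
    #include <string>
    #include <zlib.h>

    int main() {
        // Some deliberately repetitive XML-ish input.
        std::string xml;
        for (int i = 0; i < 200; ++i)
            xml += "<item id=\"42\"><name>widget</name><qty>7</qty></item>\n";

        uLongf destLen = compressBound(xml.size());
        std::string out(destLen, '\0');

        int rc = compress2(reinterpret_cast<Bytef*>(&out[0]), &destLen,
                           reinterpret_cast<const Bytef*>(xml.data()),
                           xml.size(), 3);          // compression level 3
        if (rc != Z_OK) {
            std::fprintf(stderr, "compress2 failed: %d\n", rc);
            return 1;
        }

        std::printf("%lu bytes in, %lu bytes out\n",
                    (unsigned long)xml.size(), (unsigned long)destLen);
        return 0;
    }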
9:11:45 PM