Monday, May 06, 2002
Google's bias is a temporary anomaly
Dave Sims, my editor at oreillynet.com, has blogged something that's been nagging me for a while now. Playing devil's advocate, Dave wonders about how and why Dave Winer and Tim O'Reilly have become Google's authoritative sources on (among many other things) Michael Eisner. Writes Dave:
Tim's blog is interesting, but even though he signs my paycheck, I can't convince myself that his comments on Eisner's testimony are the most important source on Eisner on the Web. [Dave Sims]
It's a great point. I am, for example, the seventh Jon on Google. Ego-gratifying though that may be, I don't kid myself as to why. My web footprint just happens to be larger than that of many more famous and accomplished Jons, including two US Senators (1, 2, 3, 4).
Another example: I recently discovered that Glenn Fine, who went to junior high school with me, is now the Inspector General of the US Department of Justice. He's nowhere near the top of Google's list of Glenns. However, Glenn Fleishman (a brilliant guy, I hasten to add) is Google's fourth Glenn.
These are anomalies. I don't, however, think Google's doing anything wrong. Rather, I'd say we're living through an odd historical moment in which web pundits, simply by virtue of abnormally large web surface area, wield disproportionate influence.
In the long run, the problem is not with Google, but with a world that hasn't yet caught up with the web. I'm certain that in 10 years, US Senators and Inspectors General will leave web footprints commensurate with their power and influence. I hope that future web will, however, continue to even the odds and level the playing field.
9:59:35 PM
Direct SOAP calls from Flash MX
In an InfoWorld article published today, I predicted that users of Flash MX would soon find a way to call web services directly, without going through the ColdFusion-based gateway. Jeremy Allaire wrote me to point out that this has already started to happen.
As Jeremy points out, the sandbox limits this approach to desktop apps that embed the interpreter. Or (Flash experts please chime in) to browsers that locally source the .SWF file? Anyway, it was inevitable. Give people an XML-capable scripting engine and there's no limit to the uses they'll put it to.
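To make concrete what a "direct" SOAP call amounts to once you have an XML-capable scripting engine, here is a minimal sketch in Python: build a SOAP 1.1 envelope, POST it, parse the reply. The endpoint, method name, and parameters are hypothetical, not taken from Flash MX or any particular service.

```python
import urllib.request
import xml.etree.ElementTree as ET

def build_envelope(method, params, ns="urn:example"):
    # Serialize the parameters as child elements of the method call.
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0"?>'
        '<SOAP-ENV:Envelope xmlns:SOAP-ENV='
        '"http://schemas.xmlsoap.org/soap/envelope/">'
        '<SOAP-ENV:Body>'
        f'<m:{method} xmlns:m="{ns}">{body}</m:{method}>'
        '</SOAP-ENV:Body></SOAP-ENV:Envelope>'
    )

def call(url, method, params):
    # POST the envelope with the headers SOAP 1.1 expects.
    req = urllib.request.Request(
        url,
        data=build_envelope(method, params).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": f'"{method}"'},
    )
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

# Show the envelope without touching the network:
envelope = build_envelope("getTemperature", {"zipcode": "03431"})
print(envelope)
```

Nothing here is specific to any one client; any runtime that can assemble XML and speak HTTP can do the same, which is the point.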
5:34:36 PM
How easy should web services be?
Sam Ruby and Jeffrey P Shell, in separate items (1, 2), raise questions about how easy exporting web services can or should be.
Sam cites an article by David Bau which notes that auto-generation of WSDL may promote brittleness. The reason is that automatically mapping classes to exported services fails to separate interface from implementation. "One of the goals of Web services technology," writes Bau, "is to break the gridlock of tight coupling by providing tools to simplify, clarify, and rationalize the information flow between systems." I suspect such tools will be hard to come by. Meanwhile, it's certainly true that vendors who tout features like WSDL auto-generation are overstating the case. The classes you'll want to annotate for such treatment should, in many cases, be abstractions of those that implement behavior. Creating such abstractions is, of course, more work -- and the kind of work that doesn't easily yield to automation.
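A sketch of the separation Bau argues for: the class you hand to the WSDL generator is a deliberately thin facade, while the class that actually does the work stays free to change. The names here are illustrative, not from any real toolkit.

```python
class OrderStore:
    """Implementation class: internal details, free to change."""
    def __init__(self):
        self._orders = {}
    def insert(self, order_id, items, _audit=True):
        self._orders[order_id] = list(items)
    def raw_rows(self):
        # An internal accessor that should never appear in a WSDL.
        return self._orders

class OrderService:
    """Published interface: only these methods define the contract
    that auto-generated WSDL would expose."""
    def __init__(self, store):
        self._store = store
    def place_order(self, order_id: str, items: list) -> bool:
        self._store.insert(order_id, items)
        return True

svc = OrderService(OrderStore())
print(svc.place_order("A-100", ["widget"]))   # → True
```

Auto-generating WSDL from OrderStore would leak insert's audit flag and raw_rows into the contract; generating it from OrderService keeps the coupling loose, at the cost of writing the facade by hand.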
Jeffrey P Shell, meanwhile, discusses the tradeoffs between Zope, which in effect publishes everything to the web, and Radio, which externalizes only that which passes through a services wrapper (i.e., scripts in the "/web services" directory). Writes Jeffrey:
The way that Radio seems to promote for Web Services is easy. And simple. But you're effectively writing scripted adapters and bridges into other code, something that could turn out to be a maintenance nightmare. The way that Zope has promoted for years (even though it's not a terrific Web Services player, sadly) says that "Objects and their methods are already published on the web. No gateway scripts needed.", but with that comes the price of maintaining security declarations similar to C#.NET's ASPX's declarative Web Service calls.
Both articles remind me that overhead and complexity are subtle issues. I don't think there is One Right Way. During a bootstrapping phase, a certain amount of tight coupling may be necessary to push things over the activation threshold. Will brittleness result later? Yes, but it's easier to make a popular-but-brittle system more flexible, than to make an unpopular-but-flexible system more popular.
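The two publishing styles Jeffrey contrasts can be sketched in a few lines. In the Zope-like style, any public method of a published object is reachable unless a security declaration forbids it; in the Radio-like style, only scripts explicitly registered in a services table are reachable. This is a toy model, not either product's actual dispatch machinery.

```python
class Notebook:
    def add(self, text):
        return f"added: {text}"
    def _purge(self):            # underscore = not for the web
        return "gone"

# Zope-like: everything public is published by default.
def zope_dispatch(obj, name, *args):
    if name.startswith("_"):
        raise PermissionError(name)
    return getattr(obj, name)(*args)

# Radio-like: an explicit whitelist of wrapper scripts.
services = {}
def export(name):
    def register(fn):
        services[name] = fn
        return fn
    return register

@export("notebook.add")
def add_entry(text):
    return Notebook().add(text)

def radio_dispatch(name, *args):
    return services[name](*args)

print(zope_dispatch(Notebook(), "add", "hello"))
print(radio_dispatch("notebook.add", "hello"))
```

The whitelist is easy to reason about but is exactly the "scripted adapters and bridges" Jeffrey worries about maintaining; the publish-everything style trades that for the burden of security declarations.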
3:05:08 PM
Blogspace under the _Macroscope_
Jon Schull makes a convincing case that my recent OReillyNet article should have been called Blogspace under the Macroscope rather than Blogspace under the Microscope. A biological psychologist and entrepreneur, Schull has interests -- in the biological roots of group information processing, and in the need for tools to visualize those information flows -- that would surely have informed my own reflections on these topics.
Happily, he has instituted a weblog to which I am now subscribed. There I found, among other connections, one to the TouchGraph software I mentioned a few days ago. Elsewhere I found a connection through Tim O'Reilly (who knows everybody interesting, it seems) by way of e-books and digital rights.
Writes Jon Schull:
I want to develop a perspective and a community which could help some of the most brilliant minds of the era grow and cross-fertilize their disciplines, and apply some of the most exciting ideas in contemporary science to some of the most pressing problems of our day.
That's something I'd love to watch happen, learn from, and perhaps play some role in.
1:35:10 PM
Backlink display in Radio
David Watson has a service that can be used to fetch referral information into Radio pages. "I don't believe it'll work outside my firewall without redirects," writes David. That seems to be true. From outside his firewall, the URL http://www.watsondesign.com/soap/urn:rcsproxy/getReferers?site=0100887&group=radio1 works in my browser, but not by way of a Frontier tcp.httpClient call. When I use the redirect reported by my browser, though -- i.e., scratchpad.s = tcp.httpClient (server: "24.154.119.166:8080", path: "/soap/urn:rcsproxy/getReferers?site=0100887&group=radio1") -- it does work.
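The difference between the browser and the raw tcp.httpClient call is just redirect handling: the browser follows the 302 to the origin server, the Frontier call does not. A sketch of that loop, with the transport injected as a stub so the logic is visible (the live 2002 servers are long gone, and the stub's responses are invented):

```python
def follow_redirects(url, fetch, limit=5):
    """fetch(url) -> (status, headers, body); chase 3xx Locations."""
    for _ in range(limit):
        status, headers, body = fetch(url)
        if status in (301, 302, 307) and "Location" in headers:
            url = headers["Location"]
            continue
        return url, body
    raise RuntimeError("too many redirects")

# Stub standing in for the servers described in the post:
def fake_fetch(url):
    if url.startswith("http://www.watsondesign.com/"):
        return 302, {"Location":
                     "http://24.154.119.166:8080/soap/"
                     "urn:rcsproxy/getReferers"}, b""
    return 200, {}, b"<referers>...</referers>"

final, body = follow_redirects(
    "http://www.watsondesign.com/soap/urn:rcsproxy/getReferers",
    fake_fetch)
print(final)
```

Any HTTP client that follows redirects (as Python's urllib.request does by default) would behave like the browser here.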
As David points out, his use of The Mind Electric's GLUE, a Java-based SOAP toolkit, means that this SOAP-style web service is also callable using HTTP GET, a really useful paradigm which seems to blunt the argument that SOAP necessarily disrupts the fabric of the web.
Since Radio sites are dynamically updated but statically served, the behavior seen at disenchanted and elsewhere -- that is, per-item referrals shown after items -- ultimately calls for a solution that fetches referral data from the community server, correlates it by referred-to page over time, and then updates the site accordingly.
What kind of web service this could be is an interesting question. Undoubtedly many Radio users would prefer not to install software to achieve backlink display. A lot of the steps in the process could well be centralized. As the Radio referral data is public, some central service could amalgamate it, so that the task for the Radio client would reduce to including per-item fragments. The Radio pages would still need to be regenerated by the client, though, and this starts to get into a level of overhead that pushes you into the realm of a dynamically served site.
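The centralized step could be quite simple: take the public referral log entries, correlate them by referred-to page, and emit a per-item HTML fragment for the Radio client to include at regeneration time. A sketch with invented data, assuming entries that pair a referred-to page with a referring URL:

```python
from collections import defaultdict

# Invented sample of amalgamated referral data:
referrals = [
    {"page": "/2002/05/06.html#item1", "referer": "http://example.org/a"},
    {"page": "/2002/05/06.html#item1", "referer": "http://example.org/b"},
    {"page": "/2002/05/06.html#item2", "referer": "http://example.org/c"},
]

def fragments(entries):
    """Correlate referrals by page, return per-item HTML fragments."""
    by_page = defaultdict(list)
    for e in entries:
        by_page[e["page"]].append(e["referer"])
    return {
        page: "<ul>" + "".join(f'<li><a href="{r}">{r}</a></li>'
                               for r in sorted(set(refs))) + "</ul>"
        for page, refs in by_page.items()
    }

frags = fragments(referrals)
print(frags["/2002/05/06.html#item2"])
```

The correlation is the cheap part; the expensive part, as noted above, is that the static pages still have to be regenerated to include the fragments.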
A per-item JavaScript snippet would be an attractive alternative. Not completely universal, but a reasonable solution for most browsers, and far simpler on the whole.
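The server side of the JavaScript approach is small: each item embeds a script tag pointing at the central service, which answers with a document.write call carrying the backlinks. A hedged sketch of the response generator (the service and its URL scheme are hypothetical):

```python
import json

def backlink_script(referers):
    """Render a list of referring URLs as a JavaScript response
    suitable for a <script src=...> include."""
    html = "".join(f'<a href="{r}">{r}</a> ' for r in referers)
    # json.dumps produces a correctly escaped JS string literal.
    return f"document.write({json.dumps(html)});"

print(backlink_script(["http://example.org/a"]))
```

Because the response is fetched at page-view time, the static Radio page never needs regenerating to show fresh backlinks, which is what makes this variant so much simpler.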
This kind of loosely-coupled solution might or might not involve SOAP. The truth is that a third-party backlink-display system for Radio users might use nothing more than HTTP GET, both to fetch the data in the first place, and then to answer JavaScript calls. But since HTTP GET and SOAP are in fact quite compatible, I'd argue that building a SOAP API is not a waste of time. It would create a richer set of integration possibilities, without precluding the simple and basic approach.
1:10:52 PM
© Copyright 2002 Jon Udell.