Saturday, September 19, 2009
Radio UserLand was an innovator in the blogging field, but now they are packing it in. So I've had to move. The new location for my blog is: http://toddhoff.com/blog/
Please come by and visit me at my new home. No home warming gift is necessary :-)
2:12:06 PM
Wednesday, December 10, 2008
Google Chrome's Agile Design and Development
With Google Chrome doing something strange for a Google app, exiting beta, BayCHI's December talk by Glen Murphy, Google Chrome's designer and an engineer on its front-end team, becomes a little more topical. Glen gave a very good presentation. Nothing revelatory, but I thought there was a lot to learn from how they organized their development team, especially for those looking to run successful agile projects inside big companies.
- The reason for building Chrome was to create a platform that can run increasingly complex web applications in a browser as well as an OS runs desktop applications. To make this a reality they identified two primary goals: speed and reliability. All included features had to be fast and reliable.
- Building compelling web applications requires making features feel fast. The user should be able to do everything they need to do quickly. We'll see how this impacted the OmniBox design.
- The need for reliability is what made them decide on the isolated tab model. Content is organized around tabs and each tab runs in an isolated environment so the failure of one part won't bring down another. The idea is each tab could be considered a separate application.
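The isolated-tab model can be illustrated with a small sketch. This is not Chrome's actual code (Chrome runs each tab in a separate OS process with its own renderer); here the idea is approximated with Python's multiprocessing, and the URLs and the "crash" trigger are invented. The point is that a crash in one tab's process leaves the parent and the other tabs untouched.

```python
import multiprocessing

def render_tab(url):
    """Hypothetical per-tab renderer; a crash here kills only this process."""
    if url == "crash://":
        raise RuntimeError("renderer crashed")
    return f"rendered {url}"

if __name__ == "__main__":
    tabs = ["http://example.com", "crash://", "http://example.org"]
    procs = [multiprocessing.Process(target=render_tab, args=(u,)) for u in tabs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Only the crashed tab's process has a non-zero exit code; the parent
    # "browser" and the other tabs carry on and can show a "sad tab" page.
    for url, p in zip(tabs, procs):
        print(url, "sad tab" if p.exitcode != 0 else "ok")
```

Process isolation is heavier than threads, but it buys exactly the reliability goal described above: one misbehaving page cannot corrupt or crash its neighbors.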
- The team had a single organizing principle: content, not chrome. The idea here is that content, what people care about in your typical web app, is wrapped in too many layers. You have an OS, a browser application, a web application, and then content. Each of these layers distances the user from the content and imposes another layer of interface to navigate. Chrome wants to collapse the layering. This vision is what guides them when making design choices.
- The content vision led them to some places you might not otherwise go. That's the power of a central organizing vision. It's generative. It helps you creatively form something that feels natural, like that's the only way it could have been implemented. You see this in the combo box, tabs, coloring, and lack of many traditional browser features.
- One goal was no dialog boxes. It didn't work out in practice, but through user testing they found nobody paid attention to dialogs, so it's better to figure out how not to use them. One audience member asked if they had removed the evil Firefox dialog box that appears after a crash on startup asking if you want to restore previous sessions. He seemed quite peeved that Firefox wouldn't start until this dialog box was appeased. The Firefox team assured him it has been removed in 3.1. I was never annoyed by the dialog box; what annoyed me is that it pointed out a design weakness: your open windows were not treated as a collection with convenient operations on that collection. Chrome solves this problem by not remembering your open windows (boo) and using tabs as the central content-organizing metaphor.
- User testing found that most people don't use most of the commands available in the browser, so they did away with them. 85% of people use back, 50% use reload, 33% open a bookmark. So do away with the rest and focus on the content.
- The team is multi-disciplinary: designers, users, and engineers. This allowed them to develop a tight feedback loop where changes happened in Chrome quickly because there wasn't a layering of responsibilities that required long cycles to work through. Engineers who are designers can be trusted to make good implementation decisions. Designers who are engineers can be trusted to keep in mind how feasible proposed features are to build. So there were no large design documents. There were no wireframes or prototypes. Chrome started very simple (a few draggable tabs) and evolved over countless iterations of feedback from the team, Google employees, and user studies.
- Engineers attended usability tests so they could see how people were using the product in real life. It turned out even 1-pixel changes in tab height, for example, could radically change usability because it affected the ability to select a tab. These are the sort of things you only get from real-life observation.
- Usability studies were used as a data-driven approach for settling arguments about how features should be implemented. What angle is best for tabs, for example? They generated over 1600 different versions and tested them. Yet they also used their gut for decisions like what color the frame should be (blue). No testing was done.
- Their goal was to have Chrome feel natural and intuitive. So they went with native Windows conventions and implementations wherever possible. Which is why I assume the Mac version will take some time to implement. They aren't just porting a single look and feel. Chrome will have to change to match how the Mac works so it feels natural on the Mac.
- The OmniBox design brings all the goals of natural, intuitive, speed, and reliability together. This is the single box at the top of the browser that allows you to enter a URL or a search term and Chrome will do the right thing. There's no difference between navigation and search. Through user testing they found multiple boxes for different kinds of searches just made people randomly try different boxes, so they went with one box. The design, though, is not obvious. When multiple terms are entered it's relatively easy to tell if it's a URL or a search term. They look for top-level domains, etc. When a single word is entered, like "pie", it becomes impossible to tell, because there could be a local machine called pie that someone wants to connect to, not a search for pie. Since 99% of people are really searching for pie they make search the default action. In the background they check if there's a machine called pie on the network, and if there is they'll offer that as an option, remember the selection, and make that the default interpretation going forward. A very nice solution. It takes a lot of often unappreciated work to bring this seamless a feel to features.
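A rough sketch of that decision logic, as I understand it from the talk; this is my reconstruction, not Chrome's implementation, and the function name, TLD list, and `known_hosts` parameter are all invented for illustration:

```python
# Heuristic: multi-word input or anything URL-shaped navigates; a lone bare
# word defaults to search, unless the user previously chose it as a host.
KNOWN_TLDS = {"com", "org", "net", "edu", "gov"}

def classify(text, known_hosts=frozenset()):
    """Decide whether OmniBox input is a navigation or a search."""
    text = text.strip()
    if " " in text:
        return "search"                 # multiple terms: clearly a query
    if "://" in text or text.startswith("www."):
        return "navigate"               # explicit scheme or www prefix
    if "." in text and text.rsplit(".", 1)[-1].lower() in KNOWN_TLDS:
        return "navigate"               # looks like example.com
    if text in known_hosts:
        return "navigate"               # user previously picked host "pie"
    return "search"                     # single bare word: search by default
```

So `classify("pie")` gives "search", while after the background host check succeeds and the user opts in, `classify("pie", known_hosts={"pie"})` gives "navigate".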
- Another design point of the OmniBox is what options are displayed in the drop-down box. They found that people don't pay attention to drop-down boxes and most people only go to 10 or 15 different sites a day. So the autosuggest function isn't a surrogate search function. Search results are not displayed in the drop-down; that's what the tab is for. Autosuggest is designed to shorten what users want to type, which shortens their work. Only a few of the most plausible options are displayed in the drop-down list. This feature was developed in a tight loop, which allows the feature to behave naturally to the user. No big spec was developed over several months of tedious meetings. Instead, what was natural was discovered through use. What didn't work was quickly iterated on until the solution converged.
- If people on the team show an aptitude in a certain area their interests are fostered. People aren't locked into a single role.
- Without detailed design docs QA is difficult. Developers write extensive unit tests. The QA people are engineers who would watch code changes, read the changes, and write tests against the code. As new features are implemented developers are expected to announce them so everyone stays synced. This is a very difficult way of working for QA.
- Development and design happen simultaneously. Changes are made to an actual working product. When developing new features the thought is that using the real thing is the only way you truly know if it works or not.
- An emphasis was put on "surfacing" features. It's not like tabs are original to Chrome. Far from it. What Chrome does is surface tabs as a first-class feature. When Chrome starts, two tabs are created so people know the tabs are there. When a new link is opened it opens in a tab. There's a + sign next to the tabs so it's obvious how to create a new tab. Tabs can be selected and dragged. Tabs are at the top of the frame instead of below several browser toolbars. This is part of the content-not-chrome ideal. So using tabs is natural and obvious in Chrome and that's a very conscious part of the design.
- Chrome is Open Source but they aren't sure what that means yet or how it will work. It's hard to be open and control everything at the same time.
- Google has lots of groups like usability, graphics, security, QA, etc. It's hard to compete with all those resources even though the core Chrome team is probably pretty small. Google strangely doesn't like to say how many people are in a group. It's that whole open-closed tension again.
- Extensions are on the roadmap, but it's not clear how to do extensions in a way that meets all their goals for Chrome. Adding 10 layers of toolbars to the browser hides the content. How can that be done cleanly?
The word "agile" wasn't mentioned in the presentation, but they are describing a very agile way of working. Small multi-talented groups working in tight iterations implementing features based on vision and customer feedback. Working code is valued over documentation. The result is a quality product with a very comfortable and natural feel. Thanks Glen for the excellent talk.
12:30:33 PM
Tuesday, October 14, 2008
Web 2.0 Won't Die Because it Excites Young Minds
With the recent financial crisis we're continually exhorted to grow up and drop this Web 2.0 nonsense. Move into more dignified niche revenue opportunities. Stop wasting everyone's time with this new-age hippie free ad stuff. There's no time for such foolishness. Be serious. It's as if I can hear my Grandpa whispering in my ear. Well, Web 2.0 ain't going anywhere because it excites young minds.
I attend meetups around Silicon Valley and I'm amazed at the youth and vitality I see at the Facebook and other Web 2.0ish events. People are excited. It's not just about early exits and large cash. People are genuinely excited about the tech, even if nobody is quite sure how it works or what to do with it--yet.
A historical parallel exists: the discovery and practical application of electricity. A microcosm of the excitement electricity generated in young soon-to-be scientists can be found in the life of Hans Christian Oersted. Oersted in the 1800s was ready to follow in his father's footsteps as a respected Danish pharmacist. But the new phenomenon of electricity captivated his thoughts and he shifted careers. At that time the wonder electricity would become wasn't obvious at all. Studying it was a risk, as there were no practical applications of electricity, but minds were drawn to it because they sensed in electricity something new, different, and interesting. And in 1820 Oersted discovered electricity and magnetism were a unified force. Until that time they had been considered different forces. As a Kantian philosopher Oersted assumed there were deep unifying links behind phenomena, so he was able to find the unification of electricity and magnetism when the more conventionally minded did not.
Fast forward to Web 2.0 and the constant heap of disdain shoveled on making "stupid" zombie applications on Facebook. The first electrical devices were simple too, devices like buzzers and telegraphs. These simple devices were made possible by understanding the nature of electricity and magnetism. With that knowledge it was possible to translate electrical potential into magnetic and kinetic energy. As understanding deepened, the miracles worked with electricity came to define the 20th century and make it different than any time before.

While Facebook zombie apps may not seem impressive, they are similar to the buzzer in that they show practical applications of phenomena still being researched and unified. Only this time it's not using electricity to turn a clapper on and off as in a buzzer, it's working out viral marketing, viral distribution, viral program design, viral loops, social networks, lifestreaming, sites as platforms, platforms as APIs, data portability, monetization strategies, mobile applications, friending, long tails, and so on.
Web 2.0 isn't going anywhere. There's a deep sense something is going on here and young minds want to figure out what it is. Our James Clerk Maxwell, who found the four equations codifying every aspect of electromagnetism, has yet to be found for Web 2.0, but that won't stop the young from looking and being unapologetically excited about it. When historians define the 21st century, the roots of its miracle technology may just have started in silly zombie games.
7:00:40 AM
Thursday, September 04, 2008
The Lifecycle of a Typical New Product Announcement
Look at enough new product announcements and there appears to be a pattern. The same sorts of articles are posted for every product. So why not jump ahead of the curve? When a new product comes out, see which of the following you want to sign up for:
- Rumor of X's Imminent Release. Oh Joy!
- X Has Just Launched! Live Blogging Now.
- How X Will Change Everything
- The Real Reason Behind X
- X First Impressions
- Warning: X Has Serious Issues (performance, security, privacy, crash, design, licensing, etc.)
- X: Who Wins and Who Loses
- X FAIL
- Why X Sucks
- X is Better Than Everything Before and After Forever
- Why Y is Really Better than X
- The Story Behind Project X
- X Will Get These New Features Eventually
- Company Y Announces Support for X
- In-depth Review of X Here First
- X Looks Good But Not Yet Ready
- What X Means for the Plans of Company Y
- How You Can Make Old Product Z Work Like X Now
- X Over-Hyped and Under-Performs
- X is Now Bigger than Product Y
- Why Did We Ever Care About X in the First Place?
- I Wasted an Hour of My Life Using X
- X: The Video
- What X Means for the Future of Humanity
- Tips for Using X
1:48:24 PM
Sunday, July 13, 2008
Do Small iPhone Screens Lead to Small Minds?
In On a Small Screen, Just the Salient Stuff they make the interesting observation that web browsing on an iPhone can actually be superior to browsing on its big-screen cousins:
A quick trip to Web sites like Facebook, Twitter, Zillow or Powerset, all of which have been redesigned to take advantage of the iPhone, makes it clear that bigger is not necessarily better when it comes to exploring cyberspace. By stripping down the Web site interface to the most basic functions, site designers can focus the user's attention and offer relevant information without distractions.
This sounds great at first. Dump clutter. Get me just what you think I need. Be ruthless. Edit the world for me. The problem is you know what will show up first: the most popular and profitable items. So in aggregate we end up seeing much less of the world than we would on a big screen. It's the small-world phenomenon, where new nodes are more likely to attach themselves to existing popular nodes, and we'll see the same power-law-type friend and follower figures we see on Twitter, Facebook, and FriendFeed. A few people have many thousands of followers. Most have close to none. And the world is far smaller than it would have otherwise been because it is being viewed through a small screen. To control information flows you'll just need to control the first small screenful of information because, as with Google search results, few people venture past the first page.
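That rich-get-richer dynamic is easy to demonstrate. A minimal preferential-attachment simulation (my illustration, not from the article; each new user follows one existing user chosen in proportion to followers already held):

```python
import random

def preferential_attachment(n_users, seed=0):
    """Each new user follows one existing user, chosen with probability
    proportional to that user's current follower count (+1 so newcomers
    have a chance). Returns the follower count of every user."""
    rng = random.Random(seed)
    followers = [0, 0]                      # start with two users
    for _ in range(n_users - 2):
        weights = [f + 1 for f in followers]
        target = rng.choices(range(len(followers)), weights=weights)[0]
        followers[target] += 1
        followers.append(0)                 # the newcomer starts with none
    return followers

counts = preferential_attachment(10_000)
# A few hubs accumulate huge follower counts; the typical user has ~none.
print("max followers:", max(counts))
print("median followers:", sorted(counts)[len(counts) // 2])
```

Run it and the distribution comes out exactly as described: a handful of heavily followed hubs, and a long tail of users with zero or one follower.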
So small iPhone screens mean the rest of us are banished even further out into the long-tail Galapagos.
2:22:48 PM
Monday, July 07, 2008
Are Web Icons a Modern Form of Illiterate Communication for the Dumbest Generation?
How do you communicate with an illiterate population? That's a problem I hadn't thought of before, but on a recent trip to Europe I was fascinated to learn how medieval towns and merchants solved the problem of communicating with a population that couldn't read. Their solution was to use elaborate symbols that reminded me a lot of the iconography developed for websites and other computer devices. I couldn't help putting this together with the idea of Mark Bauerlein's new book The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future.
Complex Store Signs in Salzburg, Austria

Another example of using pictures to communicate with non-readers is the amazing Salzburg street market pictured on the left. This is a very long street with markets running seemingly forever on either side. Imagine yourself a worker who couldn't read. How would you know what stores were available just by looking down the street? You couldn't, so elaborately descriptive store signs evolved so people could tell what a store sold. Here's the sign for a McDonalds:
German Maypoles Use Pictures to Represent Town Services
Many German towns feature a maypole in the town square. In addition to being big and beautiful, a maypole communicates to an illiterate population what services can be found in the town, with a picture symbolizing each service. Take a look at the maypole in Munich. It's gorgeous. Look closely and you'll see pictures of beer barrels, which would tell you Munich has beer available. And oh boy is that true! If there's a bakery you'll see a picture of a baker. If there's a wood cutter you'll see a picture of a wood cutter. It's all picture based, so you can just look and immediately understand what you'll find in a town.
Scan a webpage, an OS GUI, or a cell phone interface and I think you get a very similar feel to the ancient maypole symbols and store signs. I can't help but wonder if, over time, text will drop out as people stop reading and we develop ever more intricate graphical symbol systems to communicate instead of relying on text. Everything old is new again.
6:45:35 PM
Sunday, May 25, 2008
Why Stressed Out-of-Control Americans Won't Carpool
Gas now looks like it will be expensive until the sun burns dark. SUV and truck sales have flopped while sales of the tiny cars we've always sneered at have pulled a Robert Downey Jr. and become stars once again. So why don't we Americans do the smart and logical thing and carpool? Because we Americans need to feel like we are in control. Without that control we'll stay in our cars, all lined up one by one in endless traffic jams, even if at first it doesn't make rational sense. But this strange affliction does make sense, and once we understand why, we can design a mass transit system Americans are more likely to embrace, namely: A People Pod Pool of On Demand Self Driving Robotic Cars Automatically Refueled from Cheap Solar.
The question of why we don't carpool was asked by a commenter on the FuturePundit article American Car Drivers Cut Back Distance Traveled. When you read how the question is asked you'll wonder why you don't carpool either. Now, what's your answer to this?
In the short run, I'm fascinated by the potential for carpooling. I don't understand why someone would switch jobs or homes in preference to carpooling (unless they wanted to anyway). It's easy, it's fast, it has no capital cost; 9% of Americans already do it. Modern telecom makes it easy to match people up; it used to be based on work site communication, but no more. It could reduce fuel consumption for an individual by 85% (4 people in a Prius), or for the nation by 25% (50% of US fuel consumption is light vehicles, and carpooling can be used for more than commuting) in a period of months, if we got serious. Also, car-sharing (igocars, zipcar) could share scarce PHEV/EVs; the average car is only used 1 hour per day, so 5M PHEV/EVs could be used by 50M people.
My first reaction was well don't I feel like an oily dipstick. It's all so clear. So sensible. So reasonable. Carpooling is the future. Carpooling is smart, responsible, and good. Don't you want to be good?
But I don't want to do it. I don't want to carpool. There, I said it. I don't hate the environment (as evidence of my virtue, I both compost and recycle!). And I don't want to see mother nature stripped and turned out into the cold lonely night. But as one of those ugly Americans, I feel deep in my plush leather seats and fine German engineering that I would rather starve my characteristically overweight American self into the normal weight range than give up and share MY car!
Yes, I am well aware that this is totally irrational and irresponsible. It won't be the first or last time you notice this about me. Could there be some deeper psychological reasoning behind my madness? Let's hope so, because a lot of people don't seem to like carpools and they don't like mass transit either. The Metro, a local San Francisco Bay Area weekly, published a wonderful article, Fueling the Fire, on how we need to cure our car addiction using the same marginalization techniques used to "stop" smoking.
A telling quote shows how difficult going cold turkey off our cars will be:
Mitch Baer, a public policy and environment graduate student at George Mason University in Virginia, recently surveyed more than 2,000 commuters in the Washington, D.C., area. He found that people who drove to work alone were more emotionally satisfied with their commute than those who rode public transportation or carpooled with others. Even stuck in traffic jams, those commuters said they felt they had more control over their arrival and departure times as well as commuting route, radio stations and air conditioning levels. Commuters said that driving alone was both quicker and more affordable, according to the study. "They will have a tougher time moving people out of their cars," Baer said. "It's easier for most people to drive than take mass transit."
The key phrase for me is: people who drove to work alone were more emotionally satisfied. How can people jostled in the great pinball machine that is our roadways be emotionally satisfied? That's crazy talk. Shouldn't we feel less satisfied?
We Feel Good in Our Cars Because We Are in Control
Solving the mystery of why we feel satisfied while stuck in traffic turns on an important psychological clue: the more we perceive ourselves in control of a situation, the less stress we feel. Robert Sapolsky talks about this surprising insight into human nature in Why Zebras Don't Get Ulcers.
Notice we simply need more "perceived" control. Take control of a situation in your mind and stress goes down. You don't actually need to be in more control of a situation to feel less stress. If you have diabetes, facing your possibly bleak future can be less stressful if you try to control your blood sugars. If you are a speed demon, buying a radar detector can make you feel more in control and less stressed as you zoom along the seldom-empty highways. If you are bullied, figuring out ways to avoid your torturer puts you more in control and therefore less stressed.
Figure out a way to control an out-of-control situation and you'll feel happier. That's what I think we are accomplishing by driving alone in cars. In our car we have complete control. Cars are our castles with a 2-inch air-moat cushion. Most cars are plusher than any room in your average house. Fine leather, a rad sound system, perfect temperature control, and a nice beverage of choice within easy reaching distance. In our cars we've created a second womb. The result is we feel more control, less stress, and more satisfaction, even when outside, across the moat, a tempestuous sea of stressors awaits.
Our Mass Transit System Must Supply Perceived Control
Given the warm inner glow we feel from being wrapped in the cold steel of our cars, if you want people to get out of their cars and onto mass transit you must provide the same level of perceived control. None of our mass transit options do that now. Buses are on fixed schedules that don't go where I want to go when I want to go. Neither do trains, BART, or light rail. So the car it is. Unless a system could be devised that provided the benefits of mass transit plus the pleasing sense of control our cars give us.
With Recent Technological Advances We Can Create a New Type of Mass Transit System
New technologies are being developed that will allow us to create a mass transit system that matches our psychological and physical needs. Just berating people and telling them they should take mass transit to save the planet won't work. The pain is too near and the benefits are too far for the mental cost-benefit calculation to go the way of mass transit.
The technologies I am talking about are:
- Inexpensive solar, with $1/watt solar panels. Our mass transit must of course be green and cost effective.
- Breakthrough batteries that could boost electric cars. Toshiba promises an 'energy solution' with a nearly full recharge in 5 minutes.
- Personal transportation pods. A reusable vehicle that can take anyone anywhere they want to go.
- Self-driving vehicles. We are making great strides in creating robot cars that can drive themselves in traffic. Already they drive better than most humans can drive (low bar, I know).
Mix these all together and you get a completely different type of mass transit system. A mashup, if you will.
Create a People Pod Pool of On Demand Autonomous Self Driving Robotic Cars Automatically Refueled from Cheap Solar
Many company campuses offer a pool of bicycles so workers can ride between buildings and make short trips. Some cities even make bikes available to their citizens. The idea is to do the same for cars, but with a twist or two.
The cars (people pods) can be stored close to demand points and you can call for one anytime you wish. The cars are self-driving. You don't actually drive them and are free to work or play during transit. Different kinds would be available depending on your purpose. Just one person on a shopping trip would receive a different car than a family. The pods would autonomously search out and find energy sources as needed to recharge. There's no reason to assume a centralized charging and storage facility. When repair was needed they could drive themselves to a repair depot or wait for the people pod ambulance service.
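As a flavor of how "call for one anytime you wish" might work, here is a toy dispatcher that greedily assigns the nearest idle pod to each request. Everything here is my invention (the names, the greedy first-come-first-served strategy); a real system would do much smarter fleet balancing:

```python
def dispatch(pods, requests):
    """pods: {pod_id: (x, y)} locations of idle pods.
    requests: list of (x, y) pickup points, in arrival order.
    Returns {request_index: pod_id} for the requests that got a pod."""
    assignments = {}
    idle = dict(pods)
    for i, (rx, ry) in enumerate(requests):
        if not idle:
            break                       # no pods left; request waits
        # Pick the idle pod with the smallest squared distance to the rider.
        pod = min(idle, key=lambda p: (idle[p][0] - rx) ** 2 + (idle[p][1] - ry) ** 2)
        assignments[i] = pod
        del idle[pod]                   # pod is now busy
    return assignments

pods = {"pod_a": (0, 0), "pod_b": (5, 5)}
print(dispatch(pods, [(4, 4), (1, 0)]))  # → {0: 'pod_b', 1: 'pod_a'}
```

The same greedy loop could fold in battery level or charging-station proximity when choosing a pod, which is where the "autonomously search out energy sources" behavior would plug in.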
The advantages of such a system are:
- Perceived control. You have a personal "car" whose destination, interior environment, and your own actions inside it you control. This gets over the biggest hurdle with current mass transit options.
- Better regional traffic flow. The autonomous cars could drive cooperatively to smooth out traffic jams. Traffic jams are largely caused by people speeding up and slowing down, which causes ripples of slowness up and down the road. An automated system could prevent that.
- Go where you want to go. It would be used because people can go exactly where they need to go and be picked up exactly where they need to leave from, at exactly the time they wish. None of these are characteristics of current systems.
- Leverage existing roadways. Creating light rail and trains is expensive and wasteful (except for the high-speed point-to-point variety). They don't extend to where people live and they don't go where people go. So they create a multi-hop mess out of every trip. We already have an expansive road system that goes where everyone wants to go. Using the road infrastructure more efficiently makes a lot more sense than creating hugely expensive partial solutions. And since these cars would be eco-friendly, most arguments against using cars fall away.
- Cheaper delivery. One force keeping truly distributed manufacturing and retailing from blossoming is high delivery costs. A $2 item is simply too expensive to buy remotely and ship because shipping costs more than the product. An automated transportation system would make this model more affordable.
- Live where you want to live. Most mass transit systems are based on trying to socially reengineer our current suburban and exurban living pattern into a high-density live-work pattern. While this should be an option, most mass transit proposals assume this pattern as a given and can't deal with current realities. For the foreseeable future people will not give up their houses or their lifestyles. The People Pod approach solves the mass transit problem and the "difficulties" of having to change a whole populace to behave in a completely different way for less than compelling reasons.
- Still own your own car. This isn't a replacement for the current car culture. It's leveraging the car culture. You can still own and drive your own car. Nobody is trying to steal your car away from you.
- Cleaner and safer. Mass transit is disliked by many because it is perceived as dirty and unsafe. The pods would be safe and clean.
- Road safety. Our new robot overlords will make our lives safer. Hopefully, possibly, maybe...
It's a Usable Mass Transit System so People Might Just Use It
After a lot of reading on the topic and a lot of self-examination on why I am such a horrible person that I don't want to carpool or use mass transit, this is the type of system I could really see myself using. It doesn't try to change the world, it uses what we've got, and it gives people what they want. It just might work.
12:58:26 PM
Wednesday, March 19, 2008
Secret Teachings: The Programmer's One Inch Death Punch
How do programmers get better at their job? Few programmers think of programming as a true profession. For most it's just a job. They go to work, do what's asked, and go home. They don't read. They don't attend conferences. They don't train. In fact, the training business for software is pretty much dead and has been for years.
Other professions have different attitudes. Professional athletes train constantly to improve their skills. Even doctors and accountants have stringent continuing education requirements so they stay current.
What can we programmers do? Let's first look at how expert performers are created in the first place. That might help us figure out how we can become better. Time, in an article called The Science of Experience, has a lot of potentially fruitful ideas:
- The number of years of experience in a domain is a poor predictor of attained performance.
- Rather than mere experience or even raw talent, it is dedicated, slogging, generally solitary exertion (repeatedly practicing the most difficult physical tasks for an athlete, repeatedly performing new and highly intricate computations for a mathematician) that leads to first-rate performance. And it should never get easier; if it does, you are coasting, not improving.
- The key is "deliberate practice," by which is meant the kind of practice we hate, the kind that leads to failure and hair-pulling and fist-pounding. You like the Tuesday New York Times crossword? You have to tackle the Saturday one to be really good.
- Great performance comes mostly from deliberate practice but also from another activity: regularly obtaining accurate feedback.
- Experts tend to be good at their particular talent, but when something unpredictable happens, something that changes the rules of the game they usually play, they're little better than the rest of us.
- Entire classes of experts (for instance, those who pick stocks for a living) are barely better than novices. (Experienced investors do perform a little ahead of chance, his studies show, but not enough to outweigh transaction costs.)
- Researchers found that élite skaters spent 68% of their sessions practicing jumps, one of the riskiest and most demanding parts of figure-skating routines. Skaters in a second tier, who were just as experienced in terms of years, spent only 48% of their time on jumps, and they rested more often.
- Experience is not only insufficient for expert performance; in some cases it can hurt. Highly experienced people tend to execute routine tasks almost unconsciously. Experience in a particular task frees space in your mind for other cognitive pursuits (wondering what's for dinner, answering your cell, singing along with Justin Timberlake) but those things can distract you from the accident you're about to have.
- Experience can also lead to overconfidence: a study in the journal Accident Analysis & Prevention found that licensed race-car drivers had more on-the-road accidents than controls did.
None of this trips my BS meter. It makes a lot of sense and jibes with experience. We all know people with 10 years of résumé experience in X who couldn't program their way out of a paper bag, while someone with just one year of experience continually delivers quality results. And I know I've grown the most when truly challenged to solve new problems.
Given The Science of Experience article, I think Coding Dojos seem like a potential solution to the programmer training dilemma. A coding dojo is "a meeting where a bunch of coders get together to work on a programming challenge." It's a form of deliberate practice, which is "not the same as experience gained while doing your job. It is when you actually seek out experiences that will stretch your skills just the right amount, and give you feedback that enables you to learn."
For thousands of years martial arts have been taught in a deep mentoring relationship using a long progression of increasing difficulty and challenge. The path from a white belt to a black belt is long, it's hard, but in the end you learn. You learn through constant practice and challenge.
It would be interesting to consider how a similar infrastructure could be setup to train programmers.
3:53:40 PM
Friday, February 29, 2008
Web 2.0 Suicide Monitoring Using Twitter and Emotional Presence
People on anti-depressant drugs--like Prozac--are supposed to be closely monitored for suicidal thoughts that could indicate the drug is having a "paradoxical result." While many feel better on anti-depressants others drop fast and dark into an even worse suicidal depression. Paradoxical isn't quite the word I would use, but we must keep everything clinical.
Monitoring allows a doctor to detect if a patient is entering the paradox zone. If so, treatment can be changed and further harm avoided.
I was thinking one potentially Web 2.0 way to monitor people's internal subjective state--their feelings and emotions--on an unnaturally frequent basis would be to combine Twitter with emotional presence and a bot that would notify a doctor if certain downward emotional trends were detected.
I've been doing some work on a Jabber IM client lately, and I've done some work using the Twitter API, and I've done quite a bit of research on emotion (patent pending), so a mashup of these services seems a pretty natural way of helping people stay alive through their dark times.
In IM (Instant Messaging) your presence is broadcast to your contact list so everyone knows what you are doing and your availability to others. Using your IM client tells everyone you are available. Don't use your IM client for a while and everyone will learn you are away. Pick up the phone, mark your presence as "On Phone," and everyone's IM client will associate your name with a cute little phone icon. And when you close down your IM client everyone will learn you are now unavailable.
There's also an idea of emotional presence, often represented by emoticons. If you are happy or sad or angry you can broadcast your emotional presence in the same way you can broadcast your physical presence. Select an option that matches your current feelings and the whole world will instantly know how happy you are that it's Friday and a long weekend awaits.
Now let's extend emotional presence to indicate thoughts of suicide. I don't know exactly what those states would be, but I'm sure doctors could work something up. Say you have a fleeting thought of suicide; you could quickly change your emotional presence to indicate your new state. More severe thoughts could have different icons. And so on.
Now let's bring in Twitter. Twitter is a microblog; its purpose is to share brief bits of what is currently happening in your life. That's a perfect match for emotional presence. You could also indicate with each post how you are feeling. These responses can be directed to a channel using the "@reply" syntax in Twitter. Doctors could follow those posts for their patients and take a quick look at how they are doing. Or a specially created bot could look for certain trends and notify a doctor if a negative trend developed.
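To make the bot idea concrete, here's a minimal sketch. Everything in it is invented for illustration: the class name, the assumption that mood updates arrive as numeric scores on some clinician-designed scale (say -5 for severe suicidal ideation up to +5 for elated), the window size, and the alert threshold.

```python
from collections import deque

# Hypothetical sketch of the trend-watching bot. Mood updates arrive as
# numeric scores; a sustained low average over a sliding window of recent
# updates is what triggers a notification to the doctor.
class MoodTrendBot:
    def __init__(self, window=5, threshold=-2.0):
        self.window = deque(maxlen=window)  # most recent mood scores
        self.threshold = threshold          # average that triggers an alert

    def record(self, score):
        """Record one mood update; return True if a doctor should be notified."""
        self.window.append(score)
        full = len(self.window) == self.window.maxlen
        average = sum(self.window) / len(self.window)
        return full and average <= self.threshold
```

A real system would obviously need a clinically validated scale and much smarter trend detection; the point is only that the plumbing is simple once the presence data exists.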
This would allow a doctor to intervene much more quickly than they could otherwise and the information they are making their decisions on would be much more accurate because it's harder for people to fudge on their self-reports when they are in the moment. With the perspective of time we all do a lot of self-editing, but in the moment you are more likely to be honest.
Clearly privacy is an issue. Users need to be able to select who sees what kind of presence information. But that's necessary anyway and exists in some form now as privacy lists. The type of information to block or allow simply needs to be extended to more granular types of data.
What's great about this approach is Twitter is everywhere users want to be: on cellphones, browsers, IM, and desktop applications. Users will always be in touch with their emotional presence and doctors can always follow their progress.
Wouldn't it be wonderful if we could read a lot less about people on anti-depressants committing suicide? It just always seems so wrong that people who are trying to get help end up dying.
9:54:48 AM
Monday, December 31, 2007
The New Mass Transit: People Pod Pool of On Demand Self Driving Robotic Cars who Automatically Refuel from Cheap Solar
Our traffic in the San Francisco Bay area is like Dolly Parton, 10 pounds in a 5 pound sack. Mass transit has been our unseen traffic woe savior for a while now. But the ring of political fire circling the bay has prevented any meaningful region wide transportation solution. As everyone scrambles to live anywhere they can afford, we really need a region wide solution rather than the local fixes that can never go quite far enough.
Commuters are Satisfied Not Carpooling
You might think we would carpool more. But people of the bay don't like carpools and they don't much like mass transit either. In the Metro, a local weekly, they published a wonderful article, Fueling the Fire, on how we need to cure our car addiction using the same marginalization techniques used to "stop" smoking.
A telling quote shows how difficult going cold turkey off our cars will be:
Mitch Baer, a public policy and environment graduate student at
George Mason University in Virginia, recently surveyed more than 2,000
commuters in the Washington, D.C., area. He found that people who drove
to work alone were more emotionally satisfied with their commute than
those who rode public transportation or carpooled with others.
Even stuck in traffic jams, those commuters said they felt they had
more control over their arrival and departure times as well as
commuting route, radio stations and air conditioning levels.
Commuters said that driving alone was both quicker and more affordable, according to the study.
"They will have a tougher time moving people out of their cars," Baer
said. "It's easier for most people to drive than take mass transit."
The key phrase to me is: people who drove to work alone were more emotionally satisfied. How can people jostled in the great pinball machine that is our roadways be emotionally satisfied? That's crazy talk. Shouldn't we feel less satisfied?
In Our Cars We Feel Good Because We Are in Control
Solving the mystery of why we feel satisfied while stuck in traffic turns on an important psychological clue: the more we perceive ourselves in control of a situation the less stress we feel. Robert Sapolsky talks about this surprising insight into human nature in Why Zebras Don't Get Ulcers.
Notice we simply need more "perceived" control. Take control of a situation in your mind and stress goes down. You don't actually need to be in more control of a situation to feel less stress. If you have diabetes, facing your possibly bleak future can be less stressful if you try to control your blood sugars. If you are a speed demon, buying a radar detector can make you feel more in control and less stressed as you zoom along the seldom-empty highways. If you are bullied, figuring out ways to avoid your torturer puts you more in control and therefore less stressed.
Figure out a way to control an out-of-control situation and you'll feel happier. That's what I think we are accomplishing by driving alone in cars. In our cars we have complete control. Our cars are castles with a 2-inch air-moat cushion. Most cars are plusher than any room in your average house. Fine leather, a rad sound system, perfect temperature control, and a nice beverage of choice within easy reaching distance. In our cars we've created a second womb. The result is we feel more control, less stress, and more satisfaction, even when outside, across the moat, a tempestuous sea of stressors awaits.
Our Mass Transit System Must Supply Perceived Control
Given the warm inner glow we feel from being wrapped in the cold steel of our cars, if you want people to get out of their cars and onto mass transit you must provide the same level of perceived control. None of our mass transit options do that now. Buses are on fixed schedules that don't go where I want to go when I want to go. Neither do trains, BART, or light rail. So the car it is. Unless a system could be devised that provided the benefits of mass transit plus the pleasing characteristics of control our cars give us.
With Recent Technological Advances We Can Create a New Type of Mass Transit System
New technologies are being developed that will allow us to create a mass transit system that matches our psychological and physical needs. Just berating people and telling them they should take mass transit to save the planet won't work. The pain is too near and the benefits are too far for the mental cost-benefit calculation to go the way of mass transit.
The technologies I am talking about are on-demand dispatch, autonomous self-driving cars, and automatic refueling from cheap solar. Mix these all together and you get a completely different type of mass transit system.
Create a People Pod Pool of On Demand Autonomous Self Driving Robotic Cars that Automatically Refuel from Cheap Solar
Many company campuses offer a pool of bicycles so workers can ride between buildings and make short trips. Some cities even make bikes available to their citizens. The idea is to do the same for cars, but with a twist or two.
The cars (people pods) can be stored close to demand points and you can call for one anytime you wish. The cars are self-driving. You don't actually drive them and are free to work or play during transit. Different kinds would be available depending on your purpose. Just one person on a shopping trip would receive a different car than a family would. The pods would autonomously search out and find energy sources as needed to recharge. There's no reason to assume a centralized charging and storage facility. When repair was needed they could drive themselves to a repair depot or wait for transportation.
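The dispatch logic at the heart of such a pool doesn't have to be exotic. Here's a toy sketch, with everything invented for illustration: the class names, the one-dimensional positions, and the battery threshold below which a pod goes off to recharge instead of taking riders.

```python
# Toy sketch of pod dispatch: assign the nearest idle pod to a rider,
# and keep low-battery pods out of the pool so they can go recharge.
# Positions are one-dimensional to keep the illustration small.
class Pod:
    def __init__(self, pod_id, position, battery=1.0):
        self.pod_id = pod_id
        self.position = position
        self.battery = battery
        self.busy = False

class Dispatcher:
    MIN_BATTERY = 0.2  # below this, a pod seeks a charger instead of riders

    def __init__(self, pods):
        self.pods = pods

    def request(self, rider_position):
        """Return the nearest idle, sufficiently charged pod, or None."""
        available = [p for p in self.pods
                     if not p.busy and p.battery >= self.MIN_BATTERY]
        if not available:
            return None
        pod = min(available, key=lambda p: abs(p.position - rider_position))
        pod.busy = True
        return pod
```

The interesting engineering would be in the parts this sketch waves away: predicting demand so pods pre-position themselves, and coordinating pods cooperatively to smooth traffic.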
The advantages of such a system are:
- Perceived control. You have your own personal car: you control the destination, the interior environment, and your own actions. This gets over the biggest hurdle with current mass transit options.
- Better regional traffic flow. The autonomous cars could drive cooperatively to smooth out traffic jams.
- Go where you want to go. It would be used because people can go to exactly where they need to go and be picked up exactly where they need to leave from at exactly the time they wish. None of these are characteristic of current systems.
- Leverage existing road ways. Creating light rail and trains is expensive and wasteful (except for the high speed point-point variety). They don't extend to where people live and they don't go where people go. So it creates a multi-hop mess out of every trip. We already have an expansive road system that goes where everyone wants to go. Using the road infrastructure more efficiently makes a lot more sense than creating hugely expensive partial solutions. And since these cars would be eco-friendly, most arguments against using cars fall away.
- Cheaper delivery. One force keeping truly distributed manufacturing from blossoming is high delivery costs. A $2 item is simply too expensive to buy remotely and ship because the shipping costs more than the product. An automated transportation system would make this model more affordable.
- Live where you want to live. Most mass transit systems are based on trying to socially reengineer our current suburban and exurban living pattern into a high-density live-work pattern. While this should be an option, most mass transit proposals assume this pattern as a given and can't deal with current realities. For the foreseeable future people will not give up their houses or their lifestyles. The People Pod approach solves the mass transit problem and avoids the "difficulties" of having to change a whole populace to behave in a completely different way for less than compelling reasons.
- Still can own your own car. This isn't a replacement for the current car culture. It's leveraging the car culture. You can still own and drive your own car. Nobody is trying to steal your car away from you.
- Cleaner and safer. Mass transit is disliked by many because it is perceived as dirty and unsafe. The pods would be safe and clean.
- Road safety. Our robot overlords will make our lives safer. Hopefully...
Funding:
- Current transportation budgets. There's lots of money that could be redeployed from existing less than successful approaches.
- Advertising. The outside of vehicles could contain advertising as could the inside, especially from the internal search system. Imagine wanting a new place to eat and asking the pod to suggest one. That's prime targeted marketing. Social networks and massive multi-player games could also be created between pods.
- Efficiencies. The plug-in cars are electric and efficient and low maintenance. That will save a lot of money.
- Up sells. Individuals could buy their own pods and trick them out. Also, people could pay for a higher class of pod from the pod pool.
- Licensing. Technology used in making the pods could be sold to other manufacturers. Create a standardized market so competition and cooperation can erupt.
- Sponsorship. Companies could buy rights to play music, stock the food locker, use their equipment, etc.
- Naming rights. The rights to name parts of the system could be sold.
Implementation:
- Challenge prize. Maybe someone with a vision and a dream can put up a $50 million prize to get it going. Something like the X Prize.
- Government funding. Don't laugh, it might happen.
- Startup. I'm available if interested :-) With a large enough challenge prize this is a viable model.
After a lot of reading on the topic and a lot of self-examination on why I am such a horrible person that I don't use mass transit more, this is the type of system I could really see myself using. It doesn't try to change the world, it uses what we got, and gives people what they want. It just might work.
11:24:47 AM
Tuesday, December 18, 2007
Agile Owes More to Aristotle than the Renaissance
There's an interesting historical parallel between Agile software development, traditional large organizational software development and Aristotle and the new Renaissance politics of Machiavelli. Of course the part of Agile development is played by the great Greek genius Aristotle and the part of the evil giant organization development group is played by Machiavelli. Could it be any other way?
Machiavelli's The Prince, the longest please-hire-me résumé ever, revolutionized politics by turning the long-accepted ideas of Aristotle into compost. Aristotle thought the state was founded on friendship and trust. To have a state you need a bond. That's how a gang becomes cohesive and turns into a team. That's how soldiers stand side by side in battle against constant challenge and danger. And the basis of that bond, the basis for the state, must start with friendship and trust. The state can never be sustained by fear. The state succeeds based on personal morality.
Sound like Agile now? You thought I was just crazy when I started out, didn't you?
In a sure made-for-TV thriller, Machiavelli flat out contradicts Aristotle. This was a big deal in big M's time, for Aristotle was known simply as The Philosopher and was assumed right about pretty much everything. It took some big ones to disagree with Aristotle.
Machiavelli argues the basis of the state is fear of the Prince and the system of coercion the Prince creates to ensure the continuance of power. A Prince says if you do X here's the punishment or if you don't do Y here's the punishment. There's no weak minded friendship or trust or need for a unifying bond in big M's world. The Prince to stay in power must exercise power. What holds a people together is fear, fear of the Prince.
Does this sound something like your usual software development group? Orders radiate down from the top and you are expected to follow orders under pain of death march.
Aristotle's notion of political science is empirical (again, like Agile). A study of all the Greek city states was performed with the goal of finding what worked in all those cases so an ideal of how a perfect state is to be run could be established. In my rather strained analogy I'll say this is something like Scrum. An ideal algorithm of how to run a project rather than an exact prescription of every detail.
Machiavelli is also empirical in nature, but he is more hard-cruel-world based. He wants to stay flexible because what works in one situation won't work in another. This sounds a bit Agilish, but it's not. What matters to big M is establishing and maintaining order. The Prince must do whatever it takes to stay a Prince, so you can't be wedded to any quaint notion of an ideal. Personal morality is quite separate from a Prince's morality. The Prince must be free to act in any way necessary while the people must act in strict accordance with the Prince's wishes or face punishment. This is very similar to the notion that a corporation can justify actions by appealing to a fiduciary responsibility in ways an individual could never get away with.
Who said history isn't relevant today? If we look irrationally hard we can see the same struggles we face today writ large in our past.
5:08:52 PM
Monday, November 19, 2007
Can Game Theory be Used to Loosen Website API Limits?
Let's say Twitter limits me to getting only 20 tweets at a time through their API. But I want more. I may even want to do something so radical as download everything. Of course Twitter can't let everyone do that. They would be swamped serving all this traffic and service would be denied. So Twitter does the rational thing and limits API access as a means of self-protection. As do Google, Yahoo, and everyone else.
But when I hit the limit I think, but hey it's Todd here, we've been friends a long time and I've never abused you. Can't you just trust me a little? I promise not to hurt you. I never have and won't in the future. At least on purpose, accidents do happen. The problem is Twitter doesn't know me so we haven't built up trust. We could replace trust with money, as in a paid service where I pay for each batch of downloads, but we're better friends than that. Money shouldn't come between us.
And if Twitter knew what a good guy I was I feel sure they would let me download more data. But Twitter doesn't know me and that's the problem. How could they know me? We could set up authority-based systems like the ones that let certain people through airport security lines fast, but that won't scale and I have a feeling we all know how that strategy will work out.
Another approach to trust is a game-theoretic perspective for assessing a user's trust level. Take the iterated prisoner's dilemma, where "tit for tat" is a surprisingly simple winning strategy: we start out cooperating, and if you screw me I'll screw you right back. In a situation where communication is spotty (like through an API) bad signals can be sent, so if the other side has been trustworthy before, you wait another iteration to see if they defect again before retaliating. It seems like if services modeled the API like a game and assessed my capabilities by how we've played the game together, then capabilities could be set based on earned and demonstrated trust rather than simplistic rules.
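As a sketch of what game-theoretic rate limiting might look like (all names and numbers here are invented, not anything Twitter or any real service actually does): start every client at the stingy default, extend the limit after each well-behaved round of play, and snap back to the default on a defection.

```python
# Hypothetical tit-for-tat-style rate limiter: cooperate first, extend
# trust while the client behaves, retaliate when it abuses the API.
class TitForTatLimiter:
    def __init__(self, base_limit=20, max_limit=200):
        self.base_limit = base_limit  # what an unknown client gets
        self.max_limit = max_limit    # ceiling for trusted clients
        self.limits = {}              # client -> currently earned limit

    def limit_for(self, client):
        return self.limits.get(client, self.base_limit)

    def record_round(self, client, abused):
        """Adjust a client's limit after each round of play."""
        if abused:
            # Defection: retaliate by revoking the earned trust.
            self.limits[client] = self.base_limit
        else:
            # Cooperation: extend more trust, up to the ceiling.
            self.limits[client] = min(self.limit_for(client) * 2, self.max_limit)
```

A more forgiving variant ("tit for two tats") would wait for a second consecutive defection before retaliating, which suits the spotty-communication problem mentioned above.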
A service like Mashery can take this approach a step further because they can assess how a player plays on a wider playing field. If someone vouches for you to a friend then you will likely get more slack because you have some of your friend's trust backing you. This doesn't work in a one-on-one situation because there's no way to establish your reputation. Mashery, on the other hand, knows you, knows which APIs you are using, and knows how you are using them. Mashery could vouch for you if they detected you were playing fair, so you'd get more capabilities initially and move up the capability scale faster if you continued to behave.
Of course we get situations like on eBay where people spend eons building up a great reputation only to trade it in for cash in some fabulous scam. That's what happens in a society, though. We all get more for the price of some risk.
10:19:05 AM
Wednesday, October 31, 2007
The Internet's Immune System in Arms Race with Digital Rights Management
A popular image of the internet is to think of it as a vast interconnected intelligent organism, like this gorgeous map of the internet. If the image is true then who is the heart, the brain, the liver, the nervous system, or the alimentary canal? I'm sure you have candidates for each role :-) The immune system, though, I think is us, we the citizens of the internet. And we as the internet's immune system react to DRM as an invading organism: destroy! destroy! destroy!
The parallels between the invaders-versus-immune-system arms race and the DRM-versus-netizens arms race are uncanny. The job of an immune system is to detect foreign invaders and destroy! them. This is harder than it looks. Distinguishing between an unwanted visitor and your liver takes some careful work. But once your immune system figures out what's you and not you, it attacks.
Some organisms have figured out a way to beat the system by waiting a while for the immune system to detect them as foreign and then rapidly evolving by changing their DNA. This forces the immune system to start the whole process all over again. We were all brought up on microevolution: evolution happens by small point/insertion/deletion changes. But the scientist Barbara McClintock discovered something radical: genes could jump around chromosomes. This means genetic changes could be radical and immediate, not small and slow. Of course Ms. McClintock was ostracized by the scientific community for speaking heresy. But modern science finally caught up to her, many decades later, and she won her Nobel Prize.
Why this matters is that invading organisms jump their genes around to cloak themselves from our immune systems. Then, in an effort to keep up, the immune system does the same thing. It hits the big randomizer button in the hope that it can find a way to kill the invader. It's like chasing each other through mirrors.
DRM, the internet, and netizens follow a very similar arms race. DRM systems are constantly changed and upgraded to evade us, the internet's immune system. But the massive distributed immune system that is the internet community simultaneously starts evolving and attacking the DRM until someone, somewhere finds a way to beat it. Then the process starts all over again. Shouldn't we be smart enough to think ourselves out of this arms race and stop acting like little Darwin machines?
1:46:01 PM
Wednesday, July 25, 2007
Announcing My New High Scalability Site
The site is http://highscalability.com/. And oddly enough, it talks about building highly scalable web sites and other systems. People always have questions about how to build a bigger and better website. The information is out there, but it's not easily accessible and often it doesn't make sense if you don't already know what you are doing. So, having some experience in the area, I thought the topic was worth its own site. Please take a look if you have the time. And if you have some expertise please consider contributing. Scalability is a giant field, so the more folks the better.
11:02:46 AM
Thursday, July 12, 2007
Gordon Ramsay's Lessons for Software Take Two
Gordon Ramsay began a new season of tasty lessons for gourmet software chefs trying to run their own restaurant, err, software development group. You may want to read what we learned in the first season of Gordon's tough-love school of restaurantary.
What new chicken nugget size bits of wisdom have we learned so far?
- Always put the night's earnings in the bank.
- Don't cook everything on the same grill. Use separate pans or flavors from last week's fish will be in today's veggie medley.
- Don't double book. You can't turn more than one table an hour.
- Pride and arrogance are what stops you from learning from the provocative lessons dancing in front of your face.
- Lazy ways of cooking taint everything you make.
- Don't piss off the locals. If the locals aren't happy then you won't make money during the off season.
- Create vivid, experience-based learning sessions. To reduce pride in a charge, take them to a bullfighting ring and have them fight an angry bull with nostrils flaring for vengeance. Arrogance fades when a giant multi-horned bull tries to kill you. And once the arrogance melts, some learning can occur.
- Don't be too clever. Use local ingredients and cook cleanly and simply. Let people taste the food.
- You know you are doing well when you aren't stressed; you are communicating; everything seems easy; dishes come back clean; customers leave happy; and you make money.
- Eat your own dog food. Eat at your own restaurant and experience what it's like from your customer's perspective. Do you like what you experienced?
- Run your pub, not your kitchen. It's easy for a cook to hide out in the kitchen. You own the business, so run the business, let the chef cook, and let your people do their jobs.
- Don't try to be something you are not. If you are a pub then act like a pub. A pub serves well cooked, simple, traditional food. Don't be a fancy restaurant if that's not your market.
- Working a lot of hours doesn't mean you are doing a good job. It just means you are working too hard.
- Don't hoard junk. Keep what you need to do your job and get rid of all the clutter that keeps you from your main purpose.
Do any of these lessons apply to building software in a team? I think some of them do. But you're the chef.
Hey, Pat Kennedy caught the Gordon Ramsay bug too in his post Gordon Ramsey is a great consultant (http://www.gurtle.com/ppov/2008/02/04/gordon-ramsey-is-a-great-consultant/). Pat says Gordon exhibits several key traits:
- confident – the meek might inherit the earth but they’re rubbish at getting the job done
- experienced – having done it all before he knows what he’s talking about and everyone knows it
- well rounded – it’s not just about the cooking, to run a successful restaurant you need to know about every aspect of the business
- eager to teach – he’s not a pompous prat who refuses to share his knowledge and experience, he gives it willingly
And he:
- Demonstrates simple ‘tricks of the trade’ that can be the difference between staying afloat or going under.
- Breaks the dire straits situation down to individual problems which make answers become simple.
- Gives the outsider’s perspective, which is a big advantage.
- Identifies actions having the greatest impact in the shortest possible time.
- Gets people skilled-up and self-confident.
Ramsay may be a total a**hole, but that doesn't mean we can't learn from him.
9:26:42 PM
Monday, July 02, 2007
The Golden Spike Pattern
J Wynia wrote an interesting post, It's Time to Decouple Your Development Process, where he talks about one of the most useful patterns a group of developers can use. It's a pattern I know as the Golden Spike Pattern. The name comes from the 1869 ceremony that drove the final spike completing the transcontinental railroad in the US. The railroad linked the eastern US with the western US, finally meeting up at Promontory in Utah. Constructing a transcontinental railroad is a complex, time-consuming business. They made it work by researching a general route for the railroad, acquiring the necessary property, and then having two sides build their parts of the railroad independently. The metaphor behind the pattern name isn't subtle. The idea is groups don't have to be in lock-step dependency to create a complex system. All they need is a general plan and an agreement on where to meet up. In software a meet can be an API, schema, library, protocol, or a test suite. Once that meet-up point is established all the different sides can go about their business without interdependencies. From a lean manufacturing point of view this is a powerful way of increasing overall system flow, because it's dependencies that cause wasted time. Each side can work out their own issues in their own way, as long as they meet up. And how would you like to spend a decade building a railroad and not have it meet up at the right place!
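In code, the meet-up point can be as small as an agreed interface plus a shared contract test that both teams run against their own half. A hypothetical sketch (all names invented):

```python
# The "golden spike" here is an agreed interface plus a shared contract
# test; each team builds its side independently against the contract.
class StorageContract:
    """The agreed meet-up point: any backend must satisfy this."""
    def put(self, key, value):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

def contract_test(backend):
    """The shared test suite: where the rails must meet."""
    backend.put("spike", "golden")
    assert backend.get("spike") == "golden"
    assert backend.get("missing") is None

# One team's independently built half:
class MemoryStorage(StorageContract):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
```

As long as both halves pass `contract_test`, neither team needs to wait on, or even talk to, the other.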
11:31:14 AM
Saturday, June 30, 2007
Copernicus is My Favorite Pattern
In interviews it's common to ask, "What's your favorite and least favorite pattern?" My usual answer for favorite pattern is "keep separate things separate." It's a bit meta, which allows me to talk about a few design principles that are dear to me. My least favorite pattern is the wretched visitor pattern, because it binds together different parts of an application that have no business even knowing about each other. It creates a BLOB.
After having read "It Started with Copernicus" by Howard Margolis I am going to have a new favorite pattern: the Copernicus Pattern. I will always hate the visitor pattern, so my answer to that question is not likely to change :-)
The premise of this book is that Copernicus' discovery of the heliocentric model of the solar system started a fire storm of scientific invention, not because of the discovery itself, but because it spread the idea that we puny humans could think and make big discoveries about the universe using nothing but our tiny brains. Copernicus gave people permission to tackle big challenges and the confidence that they could expect to meet them.
As evidence Margolis lists the major scientific discoveries made before 1600 and the major scientific discoveries made after 1600. Copernicus published his "discovery" that the Earth revolves around the Sun in 1543. In the list of pre-1600 major discoveries there is: nothing. Zip. Nada. After 1600 the pace of scientific discovery blossoms. We discover: the distinction between electricity and magnetism; the law of free fall; Galilean inertia; that the Earth is a magnet; the theory of lenses; the laws of planetary motion; various discoveries from the telescope, like sunspots; the laws of hydrostatic pressure; and the isochronism of the pendulum. All these discoveries were made by Stevin, Gilbert, Kepler, and Galileo, all followers of Copernicus.
Copernicus discovered discovery, using what Margolis calls "around the corner" reasoning. Around-the-corner reasoning is a habit of mind where you go beyond what is directly seen and take the unexpected step, stubbornly pursuing a problem even when no clear solution is in sight. Before 1600 most things that could be discovered by direct ways of thinking were discovered. After Copernicus, around-the-corner reasoning helped us start discovering how the world really worked. I highly recommend the book if you want to see a full development of Margolis' argument. My already long-delayed point follows a similar line of reasoning...
What is important about patterns is the idea of patterns and not any particular pattern itself. Many have pointed out the weaknesses of patterns. Patterns can seem like trivial well understood solutions to common problems. Patterns can appear overly formal and lead to complex systems of frameworks that complicate more than they help. Patterns can pollute your code and lead you away from doing the simple thing.
For me finding and using patterns is essential because patterns are simply software systemizations. A pattern is that bit of around-the-corner reasoning that can create massive simplifications and improvements in your system, if you would just take the time to understand your system, see what's really going on, and think a bit about how it might be improved. Programming isn't just writing one damn line of code after another. Patterns are the idea that you can continually find ways to make your system better. Whether that's the publish-subscribe pattern, the resource-acquisition-is-initialization pattern, or whatever, it doesn't matter. What matters is the idea that a system evolves over time to accumulate inefficiencies and that the system can be brought under control again by systematically applying patterns of improvement.
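As an illustration of one such systemization, here is a minimal publish-subscribe sketch in Python. The `EventBus` name and API are invented for this example; the point is only the shape of the pattern, which decouples the code that announces something from the code that reacts to it.

```python
# Minimal publish-subscribe sketch (names invented for illustration).
from collections import defaultdict

class EventBus:
    """Decouples publishers from subscribers by topic name."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)
bus.publish("order.created", {"id": 42})
# received is now [{"id": 42}]
```

The publisher never knows who is listening, which is exactly the kind of structural simplification a pattern buys you.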
4:29:33 PM

Wednesday, June 20, 2007
All the world is a DOM. The rise of Identity Based Programming.
In the past few years we've seen a huge rise of successful systems built following a Document Object Model (DOM) type of architecture. By that I mean: open systematized models of complex domains that are easy for applications to specialize and extend in a cooperative manner.
This approach has quickly overtaken the traditional library + statically compiled language paradigm in amazing products like Eclipse, Aspect Oriented Programming (AOP), DOM + Javascript + DHTML, and in Content Management Systems (CMSs) like Drupal.
While we must pay homage to Smalltalk for its unified image based development environment, the most familiar example of the DOM architecture runs inside your favorite browser, using the dynamic trio of DOM + Javascript + DHTML.
The olden days of interface writing are represented by the applet. You combine all your libraries into a single application and download it to an interpreter. This is basically the same as compiling to an image and running the resulting executable on your favorite OS.
Look how different your browser's DOM-based approach is. The DOM provides a really rich and functional model of a document, your web browser, which is made available to all applications running in your browser. Multiple libraries can be directly loaded into the browser and build on layers of each other's functionality. Applications can add new methods, events, and properties directly to the model. And powerful meta tools like jQuery can be created because of the dynamic power and regularity the browser environment provides.
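That build-on-each-other style can be mimicked in any dynamic language. A toy sketch, with all names invented for illustration, of two "libraries" layering functionality onto one shared model at runtime:

```python
# Toy model of libraries extending one shared document model at runtime.
class Node:
    def __init__(self, tag):
        self.tag = tag
        self.children = []
        self.handlers = {}

# "Library one" adds a query helper directly to the shared model class.
def find(self, tag):
    return [c for c in self.children if c.tag == tag]
Node.find = find

# "Library two" layers an event system on top of library one's structure.
def on(self, event, fn):
    self.handlers[event] = fn
def fire(self, event):
    return self.handlers[event]()
Node.on = on
Node.fire = fire

doc = Node("body")
doc.children.append(Node("div"))
doc.on("click", lambda: "clicked")
# doc.find("div") returns one node; doc.fire("click") returns "clicked"
```

Neither "library" knew about the other, yet both extend the same live model, which is the essence of the DOM style described above.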
One of the most stunning success stories is Eclipse's DOM type architecture of an IDE. If you had told me you could make a world class development platform by defining a plugin architecture and then weaving together a myriad of plugins from a bevy of developers, I would have said you were crazy. But it works and Eclipse is now the premier IDE on the market. They succeeded by creating an IDE model, which in my world is a DOM, and then made the IDE buildable by composing parts formed from a generic plugin architecture. That this works so well was surprising to me and very enlightening.
Another interesting application of the DOM type model is in CMSs like Drupal and Joomla. A CMS is like a big web site construction kit allowing you to create your personal idea of a website from the work of 100s of other people. Drupal makes this possible through their module system that allows modules to be composed together. Drupal's role is to provide an open model of what it means to be a CMS, a set of standards about how everyone's modules can play nicely together, and mechanisms for composing the modules together in meaningful ways.
For example, forum systems allow readers to make comments on posts. Usually that comment system only works for forum posts, but Drupal takes it one step further and allows the ability to create a comment module that can be attached to any other part of the system, like blogs or polls.
Taking extension yet another step beyond is the Content Construction Kit (CCK) module for Drupal. CCK allows you to create new content types and extend existing content types with new features in a type specific manner that is understandable by Drupal.
For example, let's say I am creating an event system and I want my event to support geolocation. With geolocation we can show events on Google maps and do other cool operations like calculate how far apart events are. Traditionally I would build location attributes and functionality into my event system and this would take a long time, even using a third-party library. I would have to change the database, make a module, make it configurable, add theming, and so on.
With CCK I don't have to start from scratch anymore. I can fold the nifty location module that already exists into my event. CCK will add the location fields, hooks, and widgets into my event in such a way that Drupal can see them as one thing, even though they aren't one thing.
I can take this even further. Let's say I want people to be able to comment on my event, so we weave in a comment module. I also want people to rate the event, so we weave in a rating module. Next I want people to be able to submit events like Digg and vote them onto a front page, so we weave in the voting module.
A very complex event has been created by weaving different aspects together into a unified whole. This may sound a bit like AOP but it is much more powerful because what is being woven together happens within a complete CMS system model. New content types are natively able to take advantage of CMS features like theming, installation, configuration, upgrading, security, user permissions, and database configuration. It's not just AOP's attachment of a bit of code at an insertion point; you are creating a niche in a complete interdependent ecosystem.
So what is an event really if it's defined primarily by weaving together different aspects? Here's where we get to the "Identity Based Programming" (IBP) part of our tour. Why IBP? Because all programming ideas these days have to have 3 letter acronyms where the middle letter is preferably a "D" for driven or "O" for oriented. I am really bucking the trend here using "B" for based.
An object is usually said to have identity, state, and behavior. I have always contended an object simply has identity; state and behavior arise from relationships with other objects. For example, the date of an event is usually considered an attribute of an event object and part of its state. But in reality the date is in a 1-1 relationship with the object and thus isn't part of the object. The same holds for all attributes of the event, like location, comments, votes, and ratings, along with their associated behaviors. What ties the state and behaviors together is the identity of the event they are in relationship with.
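A toy sketch of this view, where an event is nothing but an identity and every attribute lives in a relationship keyed on that identity. All names here are invented for illustration; the point is that the "event" never becomes one object:

```python
# An event is only an identity; everything else is a relationship keyed on it.
import itertools, datetime

_ids = itertools.count(1)
dates, locations, comments = {}, {}, {}   # separate "modules" of state

def new_event():
    return next(_ids)                     # identity is all an event "is"

event = new_event()
dates[event] = datetime.date(2007, 6, 20)       # 1-1 relationship
locations[event] = (37.33, -121.89)             # woven-in "location module"
comments.setdefault(event, []).append("Nice!")  # woven-in "comment module"

def view(event):
    """Assemble the event record on demand from its relationships."""
    return {"date": dates[event], "loc": locations[event],
            "comments": comments[event]}
```

Each dict could be owned by an entirely separate module; only the shared identity ties them into what looks like one thing.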
And here's where DOM type architectures and IBP meet. A DOM sets up the system model, the extension points and facilitation protocols. The process of composing and gluing them together to create new applications is what I am calling Identity Based Programming. Applications in Eclipse, your browser, and Drupal are all built this way.
You can say I am not bringing anything new to the party and you are absolutely right. I just think it's very interesting to see this approach evolve separately and almost independently in several different application spaces. This approach is fundamentally different from what has gone before, but it's very difficult to pin down exactly what is different. I just thought I would take a stab at defining what is different, and to me one of the most important ideas is that identity is what holds all these independent parts together when traditionally we have tried to force these parts into one whole object.
Biological life works by accretion. Parts keep being added on to existing systems rather than being thrown away and redesigned from scratch. In your brain you'll still find the brain of the lizard, the mammal, and the primate. Human brains were made by adding on. With Identity Based Programming we see the same process happening in building software.
Update: Greasemonkey allows users to change the UI of an application directly. You can't extend the back-end functionality but you can change the front-end so it's more like what you want. It goes beyond just changing colors. You can weave in new widgets and do things like add comment boxes and links where you want them, not just where the designer put them.

12:31:35 PM

Tuesday, May 15, 2007
Morning Scrum Meetings are Like the American Family Dinner
In the US, family sit-down dinners are a cherished tradition. Many Americans have fond and powerful memories of sitting down around the table with their family at night and eating together. I realize other cultures have very different dinner traditions, but I was struck by how much the morning scrum meeting is like a family dinner.
In a family dinner there is a sense of community. You are surrounded by people who support you, who are there for you, and who will help you when times turn tough. During the day you battle the world and then you come home to the comfort of the family circle that is the dinner table. You gather together. It's a time to talk to each other, reconnect, and figure out how everyone is doing. Every gathering is a ritual act of building community.
Scrum meetings create a very similar sense of family and community. Without a regular meeting we are just individuals carrying out tasks like machines. It's being part of a team that helps elevate work to life.
9:40:29 AM

Wednesday, April 18, 2007
Top 10 Things to Do Now that Your Blackberry has Crashed
WNBC reported a major outage affecting 100% of Blackberries in the US. What might dedicated crackberry users do with all this unscheduled downtime?
10. Solve world hunger. You now have the time.
9. See a movie all the way through. No interruptions.
8. Go for a run. Without your crackberry you weigh less and you'll be able to run farther and faster.
7. Contemplate the transitory nature of the universe. If an essential
service like the crackberry can fail, what else in your life might fail
you?
6. Have a drink. Surviving off the grid is stressful. Did my team win last night? What time is that meeting at corporate? How is my portfolio performing? Did Jughead really sleep with Veronica? Gaping holes are bound to open up in your digital life without instantaneous answers to important questions like these. So just relax. Have a pop or two.
5. Keep twirling your thumb wheel. You want to be in tip top shape once it's back up. You'll be way ahead of the other kids who have spent their time less productively.
4. Play hall hockey. Crackberries slide really well on the floor. Get two teams together, set up two goals, and see who can make the most goals with your new puck. If you are alone find a lake and see how far you can skip your crackberry. I bet you can't get more than 5 hops.
3. Remember the 5 stages of grief: denial, anger, bargaining, depression, acceptance. Let's help you dial through the stages quickly. Yes, your network is down. Mistakes happen. It's part of life. The digital gods cannot be appeased, so don't even try. There are no tasty binary cookies or digital flowers that can patch panel this one. You feel depressed now, see (2). The last stage is bunk. Deny deny deny. That's how we do it. Acceptance is for losers.
2. Schedule an appointment with your therapist to help you cope with your loss. But, wait, your crackberry is down! Noooo! The irony of it all!
1. Make a pair of glasses using two tubes from empty toilet paper
rolls. Viewing the world outside the small window of your crackberry
can be disorienting at first. You'll want to transition slowly to view
the world in full resolution. Every hour slice an inch or so from each
roll so you'll gradually see more and more of the real world.
7:52:51 AM

Tuesday, March 20, 2007
I Thought of Twitter for Eclipse, Too Late, Drats!
A few years ago I had the idea of a Twitter-like add-in for Eclipse. Too late now I guess. He who codes first wins! The idea was pretty simple. Eclipse would broadcast out to all team members every major action that was completed in Eclipse. So when you created a new class, a new method, etc., everyone on your team would know. Likewise, if you were starting something new or wanted everyone to know how close you were to done, you could just broadcast it to everyone else's Eclipse instance. And if you had a question you could just blast it out.
The idea was to reduce the cycle of feedback to almost nothing. Information flow dominates development time and the more you can reduce blocking on a project the faster you'll move.
How it would work, for example, is if I read that you created a SerialLine class and I knew of one already, I could tell you immediately that one already existed in package XYX and you could reuse it. Or if I was interested in knowing how we handle regular expressions in a project I could just type the question in. This kind of informal communication is less scary to team members. Many people are very concerned about their image so they won't ask a question on an email list. They are afraid of looking dumb or weak. The quality of communication is inversely proportional to the risk of ego destruction. Programming is more macho than football in many ways. So quick, short bursts in Eclipse would make communication flow more freely. The result should be pretty fast convergence on solutions because of the tight team work involved. Developing becomes more of a team sport than isolated outposts throwing Morse code at each other.
IM has filled this role to a large extent so I stopped pursuing the idea. But I still think the integration with Eclipse would be powerful. It would help everyone on a project build a mental model of the software as it was being constructed.
3:48:20 PM
You Can't Twitter at Relativistic Speeds
Twitter is entraining the technorati on an unbreakable hedonic treadmill. The treadmill gorges itself on an infinite supply of info-mediated dopamine hits. Addiction, divorce, 12 steps, and the grief cycle are sure to follow. But what really should concern twitterites is that their global stream-o-conscious will shatter once we travel in space at near light speed.
Let's say you're accelerating towards Vulcan in your new Mercedes X Series Space Coupe and you type in your latest bon thought: I really need to upgrade my materializer. The pate was runny. Your thoughts will stream out at a constant speed of 186,000 miles a second and nobody will hear you! And you will not hear them! You will ache. It will be 1 millisecond without an info-mediated dopamine hit. Then another. And then another. Until you go entire days without sharing the barely conscious thoughts of the twitter-sphere. Then you are in hair pulling, Drano drinking withdrawal. Oh what a glorious future it will be!
I do see a market in relativistic hermitages however. In time no place on earth will be safe from ads or phones or other information radiators. The only safe place to hide will be in a space capsule near the speed of light. Only then will you be alone with the strange sensation of your own thoughts.
12:45:47 PM

Monday, March 12, 2007
What if Cars Were Rented Like We Hire Programmers?
Imagine if you will that car rental agencies rented cars like programmers are hired at most software companies...
Agency: So sorry you had to wait in the reception area for an hour. Nobody knew you were coming today. I finally found 8 people to interview before we can give you a car. If we like you, you may have to come in for another round of interviews tomorrow because our manager isn't in today. I didn't have a chance to read your application, so I'll just start with a question. What car do you drive today?
Applicant: I drive a 2002 Subaru.
Agency: That's a shame. We don't have a Subaru to rent you.
Applicant: That's OK. Any car will do.
Agency: No, we can only take on clients who know how to drive the cars we stock. We find it's safer that way. There are so many little differences between cars, we just don't want to take a chance.
Applicant: I have a driver's license. I know how to drive. I've been driving all kinds of cars for 15 years, I am sure I can adapt.
Agency: We appreciate your position, but we can only take exact matches. Otherwise, how could we ever know if you could drive one of our cars?
Applicant: Oookay. I've driven a Taurus before. You probably rent those, don't you?
Agency: Indeed we do. What year did you drive?
Applicant: It was a 2005... but I don't see how that ma...
Agency: Oh sorry, we use the 2006 model. We can't possibly let you drive a different model year.
Applicant: But, but they aren't that different. Surely if I can drive a 2005 I can drive a 2006.
Agency: Sorry, sir. Our requirements clearly spell out that you must be able to drive a 2006 model.
Applicant: I've driven a 2006 Escort. Do you rent those?
Agency: Ah, excellent, you are in luck. We have one in stock.
Applicant: Great. Can I rent it?
Agency: No, no, no. We have to go through our interviews now. I'll go try and find the first person.
Interviewer#1: Sorry I was late, I was in a meeting I couldn't get out of. I like to ask technical questions to get a feel for your competency as a driver. What color is the middle wire feeding into the distributor cap?
Applicant: What? What does that have to do with driving?
Interviewer#1: If you have experience driving an Escort, as you say, then you would certainly know the color of that wire.
Applicant: I know how to drive. Why don't you ask me questions about driving?
Interviewer#1: I assure you I am. Are you this way with everyone you rent a car from? Nevertheless, I'll ask another question. What is the total weight of an Escort just after it has been washed, but before it has been dried?
Applicant: Hand dried or blow dried?
Interviewer#1: It doesn't matter.
Applicant: I know.
Interviewer#1: Well then. Thank you very much. We are done. I'll find the next person.
Interviewer#2: Sorry I am late. They never told me I had an interview today. I see on your application that you've driven a lot of different cars and you have a lot of experience driving. It's a shame you only drive that 2006 Escort; that's what we use here. So, how would you fit a SUV through the eye of a needle?
Applicant: What? What does that have to do with driving? I know how to drive! Please ask me some #$*&! questions about driving!
Interviewer#2: Sorry, I have a meeting to go to. Let me get the next interviewer.
Interviewer#3: Do you have an exact itinerary of where you will drive and park?
Applicant: Not really. I just thought I would drive around and explore. I know I plan on going to the tech museum downtown.
Interviewer#3: I believe that's on First Street. That's good. It's on our approved list of streets. Have you ever driven First Street before?
Applicant: Hm, let me think... no, I don't think so. But I am sure I can find it. One city street is pretty much like any other, so it shouldn't be a problem.
Interviewer#3: Oh, I am sorry, our policy is we can only rent you a car for streets you've driven on before. We just can't take a chance that you won't be able to drive on new and different streets.
Applicant: I don't believe this. I know how to drive, navigate, diagnose and fix minor problems, ask for help, find out anything I need to find out, and learn anything I need to learn. I know everything I need to know to rent this car because I've done it successfully a hundred times before!
Interviewer#3: How excellent for you. But it's policy. We need the exact experience to be sure. No exceptions. You may be very skilled, but you don't have the specific skills we require... that will be all.
Agency: Sorry, but interviewers #4 - #8 were called to an emergency off-site with upper management to reformulate policies on policy formation.
Applicant: (Bows head forward, looks at the water spot on the desk, and sighs.)
Agency: We might or might not let you know in a couple of weeks if we'll rent you a car.
Applicant: But I need a car now!
Agency: Very well. It was close, not everyone wanted to rent you a car, but we will rent you a 2006 Escort. How much did you pay for your last rental car?
Applicant: I don't see how that matters. What are you charging?
Agency: We like to know what you paid before so you get a fair rate.
Applicant: I paid market rates.
Agency: Sorry, we must know how much...
Applicant: (Gets up and walks out of the interview room in total frustration, wondering how anyone ever rented a car at this agency.)
9:58:46 PM

Monday, February 12, 2007
The Input-Output Political Philosophy
Be conservative in what you do and be liberal in what you accept.
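The aphorism is Postel's robustness principle from protocol design. A toy sketch of what it looks like in code, assuming nothing beyond Python's stdlib (the accepted date formats are chosen purely for illustration): accept many spellings on input, emit exactly one on output.

```python
# Robustness principle sketch: liberal input, conservative output.
from datetime import datetime

ACCEPTED = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]   # liberal: many input forms

def parse_date(text):
    for fmt in ACCEPTED:
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            pass
    raise ValueError(f"unrecognized date: {text!r}")

def emit_date(dt):
    return dt.strftime("%Y-%m-%d")                # conservative: one output form

# parse_date("02/12/2007") and parse_date("12 Feb 2007") both
# emit "2007-02-12"
```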
9:36:01 AM

Monday, February 05, 2007
If You See the Buddha in Your Scrum, Say Hi
Around 2,500 years ago the Buddha may have created one of the first and longest lasting intentional communities, one that, when looked at in a certain light, looks an awful lot like a modern agile organization.
In Karen Armstrong's excellent book, Buddha, we learn that soon after the Buddha became enlightened a rich merchant gave him some land and built a few huts so the Buddha and his followers could make a permanent camp. This organization was called a Sangha and was the forerunner of the Buddhist monasteries we see today. Until this time the Buddha and his followers were always on the road and took shelter wherever they could when the monsoon season rolled in. Few would travel on the monsoon muddied roads so the Sangha became a place where the Buddha and his followers could wait out the monsoons and continue their studies. Soon Sanghas were being created everywhere as appreciative students donated land to the Buddha's cause.
Whenever you get people together you need rules of governance. How should people interact? Who should be in charge? What are the duties and obligations of someone who has joined? What are the Buddha's answers to these questions?
I was surprised. The Buddha advocated decentralized administration and individual authority and responsibility. Since Buddhism is based on taking personal responsibility for finding ones own enlightenment, this organization makes sense, but for some reason I thought he would be more of a waterfall guy than an agile guy.
When one particularly nasty crisis involving different factions hit the Sangha, the Buddha refused to take sides. He thought the solution must come from the combatants themselves, for it was the egotism of the parties involved that made it impossible for each side to see the other's point of view. Based on the idea that hatred is never appeased by more hatred, he said to treat both sides with respect, but that they had to work it out for themselves. Both sides could be defused with friendship and sympathy. And eventually they did patch things up.
The Sangha had no central authority and there was no real organization. All the members of the Order were equal because the Buddha refused to be a ruler who controlled everything. The Buddha was never much concerned about having a central leader because he taught that every person was responsible for themselves. The Buddha thought coercion was against the spirit of the Order and that if one wanted to live in a way different from the Order then they were perfectly free to leave. Monks must make up their own minds and not be forced to follow anyone else's directives. People who left and came back were to be accepted with open arms.
What members of the Order did share was the same lifestyle and the same teachings. Every six years the different groups would come together to recite a common confession of faith, called the "bond." Its purpose was to bind the different groups together. What they pledged was:
Refraining from all that is harmful. Attaining what is skillful. And purifying one's own mind; This is what the Buddha's teach.
Forbearance and patience are the highest of all austerities; And the Buddhas declare that Nirvana is the supreme value. Nobody who hurts another has truly "Gone Forth" from the home life. Nobody who injures others is a true monk.
No faultfinding, nor harming, restraint, Knowing the rule regarding food, the single bed and chair; Application in the higher perception derived from meditation -- This is what the Awakened Ones teach.
OK, it's not exactly the Agile Manifesto, but there's a shared spirit. This pledge was important because it's all that united the different Orders together.
What did the Buddha think would ensure the survival of the Order?
- Be mindful, spiritually alert, energetic and faithful to the meditative disciplines that alone can bring you to enlightenment.
- Avoid such unskillful pursuits as gossiping, lazing around, and socializing.
- Have no unprincipled friends and avoid falling under such people's spell.
- Do not stop half way in your quest and be satisfied with the mediocre.
- Become self reliant so you need not rely on any authority.
I am not trying to say there's a one-to-one correspondence between the Buddha's organizational style and Agile, but there are interesting parallels. I am not saying the training for finding enlightenment is the same as producing software, but there are interesting parallels. And this organization has outlasted almost every other for over 2,500 years and is still going strong. That's not a bad model.
2:58:25 PM

Saturday, February 03, 2007
Smackdown #2: Scrolling Crushes Paging After 2000 Years of Dominance
Scrolling is now enjoying a historical renaissance over 2000 years in the making. Once upon a time all books were lovingly drawn on papyrus scrolls. Jewish rabbis would have read the Old Testament from a scroll. Early Christians, perhaps as a way to differentiate themselves from Jews, preferred a different book form, the codex. The codex is the same book style we use today: two sided pages held together with a binding. As Christianity rose to power the codex rose with it and scrolls fell out of popular use.
Fast forward 2000 years into the future and scrolls are once again becoming the presentation form of choice. Why? Because web tech makes scrolling better than paging. But that wasn't always the case. Early web design continued the codex form. If you read most of the advice on how to design early web sites (circa 1994) the codex form was still king. Web pages were supposed to be cut up into little chunks and readers slogged through the text stream one slow click at a time. Small pages were faster to load, scrolling was new to most people, and scrolling in web pages was clumsy. So it was thought most readers would not scroll. Pages were the better design.
All that has now changed. ClickTale, a web site usability service, has found people are scrolling and that web designers are now designing pages to feature scrolling. The User Interface Engineering folks have also found long pages are now what all the cool kids are doing. The tipping point came for me when mice started sporting scroll wheels. Scrolling became as easy as bending a finger and just as quick. Single clicking through text was tortuously slow by comparison. And fast network pipes broadbanded concerns over slow load times into a quaint cautionary tale of the past.
What is old has become new again. It's a fascinating quirk of history that technology has brought us right back to one of our earliest forms of mass information distribution.
1:04:32 PM

Thursday, January 18, 2007
Efficient Team Interaction Protocol: ACK Three Times for Every NAK (The Rule of Three)
Which interaction in a design meeting do you think will produce the best results?
Scenario 1:
Alpha Geek A: That is the stupidest idea I've ever seen. Only an idiot could think up that idea.
Alpha Geek B: What do you mean? It worked on my other projects and it's based on proven patterns.
Alpha Geek A: This project is much more demanding. It has to scale infinitely and cost nothing to deploy. My new O(1) algorithm for distributed miracle working will do that no problem.
Alpha Geek B: The only scales you could find are on your highly squamata-like epidermis. My framework, though it will take 2 years to develop, will enable us to easily change our design without changing code.

Scenario 2:
Alpha Geek A: That is the stupidest idea I've ever seen. Only an idiot could think up that idea.
Hub: A, your new algorithm is very creative. It looks like we'll be able to scale easily with a lot less effort than we do now. Testing will be easier too. But B has some good points about how difficult it will be to make changes fast in the field. What in particular don't you like about B's idea, and why don't you cover again what particular goals you are trying to accomplish?
Alpha Geek B: Wait, I see A's point. Let's call up Maven A. I think we can work out how to get the best of both worlds.
In our little melodrama the two meetings went in very different directions. Why? The introduction of a hub in scenario 2. A hub's role in the human network that defines a development organization is to exercise power based on their mastery of social skills. The Alpha Geeks compete like mini Zeuses throwing thunderbolts of thought at each other. The winner wins by flaming everyone with their superior intellect and forceful style of argument. Unfortunately, that doesn't make for a good team. It makes for a team where people don't work together, where they don't help each other, and where there's no synergy. It's a horrible place to work.
Hubs help mediate the often poorly socialized Alpha Geeks. Hubs do this naturally because they are competent people in their job, naturally social, and brave enough to put themselves between the warring parties.
What does a hub do exactly? It differs, but research on how groups can flourish may give us some ideas.
One interesting approach is explained in the paper Positive Affect and the Complex Dynamics of Human Flourishing. They found that a team flourishes when for every negative acknowledgment there were at least three positive acknowledgments. In our first scenario all interaction was negative and the direction of that team should be predictable and familiar.
In the second scenario the hub didn't go for the jugular; instead they intervened between the combatants and framed the issues in such a way that people were positively acknowledged yet the problems were made visible and handled. Too often, like rats in a cage pressing an electrified lever, people in hostile teams enter a sort of learned helplessness. They just stop trying, and the team and the product fall apart because nobody feels like getting shocked again.
A positive acknowledgment is something pleasant, upbeat, expressing appreciation and liking. A negative acknowledgment is something unpleasant, feeling contemptuous, irritable, expressing disdain and disliking. Scenario 1 was full of NAKs. By following the rule of three in Scenario 2, that is, having at least 3 ACKs for every NAK, the whole conversation was smoother and much more productive. In fact, the biggest indicator of a failing marriage is how the couple communicates. If the couple shows contempt and disgust for each other then the marriage will probably fail. Teams are very much like marriages in this regard.
You don't lie when ACKing. You tell the truth. Being dishonest won't work because people will know you are just making stuff up. Dig deep down and genuinely find positives to talk about. The key benefit of this strategy is that you have to listen to what the other people are talking about and you have to understand it enough to say something positive. That's 90% better than most conversations right there.
Where does the number three come from? If you read the study it's taken from a nonlinear model of dynamic systems. That's geeky enough, isn't it? :-) The study data was taken from observing many team interactions. What the model shows is there is a critical positivity ratio of positive to negative acknowledgments. If your team is at or above that ratio your team will flourish. And that ratio is 3:1.
The Rule of Three isn't the only thing that makes for a good team, but it's interesting because it's so specific and so seldom done. It's an easy-to-follow recipe that might radically change the dynamics of your workplace.
1:09:17 PM
Monday, December 25, 2006
The Solution to C++ Threading is Erlang
This amusing and disturbing post on C++ threading paints C++ as going the same way Java went: into a corner. Having been on more than a few large distributed programming projects, I have kept a careful tally of how many people can get large complex threading projects correct. After a long and careful analysis the results are clear: 11 out of 10 people can't handle threads. It gets too complex too fast. I do not have fMRI studies showing brains literally melting when thinking about multi-threaded programs, just many, many anecdotes.
I recall fondly one of our most experienced engineers creating a lock that was off by one instruction, and how that one-in-a-million error took weeks of debugging by a tiger team to find. The release was eventually canceled. I recall how I took a lock on a data structure that, when the system scaled up in size, was held 100 milliseconds too long, which caused backups in queues throughout the system and a deadly cascade of drops and message retries. Very embarrassing, but even more difficult to anticipate. I recall how having a common memory library was an endless source of priority inversion problems. The same issues came up over and over again: deadlock, corruption, priority inversion, performance problems, and the impossibility of new people understanding how the system worked.
Yet when it all worked it worked so beautifully the pain was almost worth it. Almost. It's like the image of the City on the Hill. Almost within our earthly grasp, but just out of reach. Yet we keep striving, hubris driving us ever forward in her many-threaded chariot.
One reaction to the threading nightmare is to ban the use of threads in favor of a process based architecture. Multiple single threaded processes will be started instead of fewer threaded processes. The advantage is single threaded code is refreshingly simple to write. You don't have to worry about locks or shared state or corruption. There's much less dread of a pager going off at 2AM to handle a customer's 5-9s application that was declared dead of unnatural causes.
Process based architectures are not without faults. Scaling is handled by adding more machines, which isn't possible in embedded applications and is costly in a data center. And you tend to push complexity elsewhere. Where is shared state stored? In a threaded application it would be in the same process protected by locks. In a process architecture you either have to 1) connect to a data store, 2) keep shared memory, or 3) publish the shared state to interested parties. This becomes even more interesting when an application must act on multiple information streams with low latency. Threaded applications are good at this, process applications are not.
So going to one extreme of an all process based architecture works well for some applications, but not all. We can go to the other extreme and use our Posix thread library and let everyone do their own thing. That way lies the madness we touched on earlier.
There's a middle ground that is highly performant, easy for programmers, and difficult to mess up. The middle ground is provided by Erlang. Erlang is a functional language and clearly C++ is not, but that doesn't matter. The distributed message passing model used by Erlang works perfectly for C++, in fact, I've used something like it in many very successful projects.
What does Erlang do that is so special? It has one mechanism for handling concurrency: processes. It creates lightweight processes that are not mapped to OS processes, and your application communicates between processes using messages. Within a process there is no locking or any of the other complexities associated with threaded programming. Your data structures don't need locks. You don't have to worry if someone is scribbling over your memory. You write clear, simple code. The above link talks more about Erlang and how it works. I don't want to dive too deeply now.
BTW, you can simulate much of Erlang in C++ using Actors, and I talk about how in my Architecture page, but without language support it's not safe. A programmer can always go around your library and cause disaster. Safety requires language support.
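To make the Actor idea concrete, here's a minimal sketch in C++11. The CounterActor class and its string messages are my own invention for illustration, not a real library: each actor owns its state outright, the only thing ever locked is the mailbox, and the state itself is touched by exactly one thread.

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A toy actor: the only way to affect its state is to send it a message.
class CounterActor {
public:
    CounterActor() : worker_(&CounterActor::run, this) {}
    ~CounterActor() { if (worker_.joinable()) stop(); }

    // Safe to call from any thread; only the mailbox is locked.
    void send(const std::string& msg) {
        std::lock_guard<std::mutex> lock(mu_);
        mailbox_.push(msg);
        cv_.notify_one();
    }

    void stop() { send("stop"); worker_.join(); }

    int count() const { return count_.load(); }

private:
    void run() {  // the actor's event loop: one message at a time, in order
        for (;;) {
            std::string msg;
            {
                std::unique_lock<std::mutex> lock(mu_);
                cv_.wait(lock, [this] { return !mailbox_.empty(); });
                msg = mailbox_.front();
                mailbox_.pop();
            }
            if (msg == "stop") return;
            if (msg == "inc") ++count_;  // state owned by this thread alone
        }
    }

    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::string> mailbox_;
    std::atomic<int> count_{0};
    std::thread worker_;  // declared last so the mailbox exists before the loop starts
};
```

Because the mailbox is FIFO and "stop" is queued after all the "inc" messages, sending 1000 increments and then stopping always leaves the count at exactly 1000. No lock on the counter, no corruption, no deadlock to reason about.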
The important point is that C++ can do something really interesting and powerful in the threading area that will make C++ an exciting language to work in again. They don't have to do the same old thing in the same old horrible way.
9:18:42 AM
Thursday, July 27, 2006
When You Do Nothing Good Things Happen
This is the advice of Barry Schwartz, author of The Paradox of Choice. You can see the video of his presentation at Google at http://video.google.com/videoplay?docid=6127548813950043200.
One of the big points of his book is that "the more choices available the more likely you will choose nothing." This has enormous implications for life as we know it, but as this is nominally a programming blog, I also see it having a lot of relevance to program design.
One of his recommendations to overcome the problem is to use opt-out instead of opt-in. When you ask people to opt-out of becoming organ donors, for example, you get a lot more people becoming donors. When you ask people to opt-out of joining a 401K you get a lot more people joining a 401K. The idea is that when people do nothing they get what's probably in their best interests. More people becoming organ donors and joining 401Ks are good things.
How does this relate to programming?
At the application level we usually do a pretty good job of providing default options that give a good user experience. Can you imagine MS Windows, for example, by default providing no security and expecting the user to turn on all the security options? It would be a nightmare. Oh, wait, that's what they do, isn't it?
Yet some default options perplex me. Why doesn't my editor save my changes all the time by default, so my work is safe from a power outage? I have to dig through 10 layers of menus to turn this work-saving feature on. I usually remember to turn it on only after I have lost an hour's worth of work. So maybe at the application level we could do a better job of providing a "good things happen" experience.
At the source code level good things rarely happen. Classes, when instantiated into objects, usually expect the poor programmer to provide gazillions of options. The poor programmer, not knowing all the options, often because of poor documentation, doesn't feel comfortable with this duty, so they may just choose to write their own code so that at least they know how it works.
Build your own is not always unreasonable behavior when faced with uncertainty. Not invented here (NIH) syndrome is often blamed for people writing their own code, but that's not usually the problem. You build your own out of a healthy sense of self-preservation. All the unknowns of a class/framework/system can easily seem worse than the cost of building new.
It would be helpful if more classes came out of the box with "good things happen" configurations. But programmers understandably don't want to dictate what a user should do, so the usual default is to expect programmers to provide a ton of configuration.
A pattern I like to get around this puzzle is to provide an abstract base class (ABC) with a "good things happen" default concrete implementation. This way the system is extensible by deriving from the ABC. All the signatures in the code use the ABC so adding new behaviors doesn't cause any code churn. And when making your concrete implementation you can provide a complete set of "good things happen" defaults using the logic that someone else can always create their own if they wish.
If you provide good documentation for the options, the ability to see what the options are, and accessors to set the options, maybe your concrete class will be all that's necessary.
By using factories you can extend this idea to provide "good things happen" for many different scenarios in your program.
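Here's a small sketch of that pattern in C++. The names are invented for illustration (Logger, BufferedLogger, makeLogger are hypothetical, not from any real library): every signature uses the ABC, the concrete default needs zero configuration, and a factory hands out the default so "doing nothing" gives callers something sensible.

```cpp
#include <cassert>
#include <memory>
#include <string>

// The abstract base class: all code in the system talks to this interface,
// so adding new behaviors by derivation causes no code churn.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& line) = 0;
};

// The "good things happen" concrete default: zero required options.
// Anyone who needs different behavior derives their own Logger.
class BufferedLogger : public Logger {
public:
    void log(const std::string& line) override { buffer_ += line + "\n"; }
    const std::string& contents() const { return buffer_; }
private:
    std::string buffer_;
};

// The factory extends the idea: ask for a logger, get one that just works.
std::unique_ptr<Logger> makeLogger() {
    return std::unique_ptr<Logger>(new BufferedLogger);
}
```

A caller who does nothing gets a working logger; a caller with special needs subclasses Logger and nothing else in the system changes.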
The ideas in The Paradox of Choice on the debilitating role of having too much choice probably have wide and interesting implications in system design.
1:26:58 PM
Tuesday, July 11, 2006
How are Unit Tests Like Chemical Reactions?
Every so often I read someone who thinks because they do unit testing with 100% coverage they don't need to do any other type of testing (regression, system, acceptance). What could possibly go wrong if all the units work?
One way to think of a program is as a chemical reaction. A bunch of particles react to make a product. An interesting feature of chemical reactions is that the resulting product has nothing to do with the original particles. I think of this as the model for emergent properties in general.
Take as an example our old friend H2O: water. We bathe in it, we drink it, we throw it on people when they catch on fire. Let's take a closer look at the components of water.
Water consists of hydrogen and oxygen. By looking at the properties of hydrogen and oxygen individually could you predict the properties of water? Let's see.
First take a look at this movie of the Hindenburg exploding. The Hindenburg was filled with hydrogen, and as you can see, hydrogen is highly flammable.
Oxygen itself doesn't burn, but oxygen is required to make everything else burn. Add oxygen to a flame and you get a lot more flame.
So when you combine hydrogen and oxygen, what do you get? Shouldn't you get a compound that burns and explodes into a fiery hell? You would think so, but you get water instead.
To recap:
  hydrogen (think Hindenburg)
+ oxygen (burn baby burn)
---------------------------
= water (not fiery hell)
The properties of the elements don't tell you what the properties of the resulting compound will be after the reaction.
Though not quite as dramatic, programs are like that too. When you combine program elements you can't really predict the properties of the resulting program when the elements react. The basic reason is that the number of paths through the program is now exponentially larger than what you exercised when you unit tested the individual elements. And as with elements, the really interesting things in a program happen in a running system in the real world, where the inputs wash over your system in ways you couldn't imagine from just looking at individual units.
For chemical reactions you have to resort to quantum chemistry to know what's really going on and why. In a software system there are usually equally obscure influences on how your system behaves that don't appear in unit testing. In system tests you'll see the effects of load, locks, CPU starvation, memory fragmentation, queue sizes, drops and retries in protocols, bugs in your OS, bugs in your hardware, bugs in the network, bugs in the network hardware, and so on and so on.
You'll never see all these reactions in just your unit tests because you aren't testing all the units together, you aren't testing all the paths through the system, and you aren't testing the interactions through all the parts you don't even know about.
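A trivial C++ illustration of the point, with made-up functions: both "units" below pass their unit tests in isolation, yet wiring them together mixes seconds with milliseconds, so the combined "reaction" behaves nothing like either part predicted.

```cpp
#include <cassert>

// Unit 1: documented to return a timeout in *seconds*. Its unit tests pass.
int config_timeout() { return 30; }

// Unit 2: documented to compare elapsed time in *milliseconds*. Its unit tests pass.
bool timed_out(int elapsed_ms, int limit_ms) { return elapsed_ms > limit_ms; }

// The integration: each unit is correct on its own, but the combination
// treats "30 seconds" as 30 milliseconds, so 5 real seconds of elapsed
// time already "times out". No unit test of either piece catches this.
bool request_timed_out(int elapsed_ms) {
    return timed_out(elapsed_ms, config_timeout());  // units mismatch
}
```

Each function is provably correct against its own specification; only a test of the running combination reveals the emergent bug.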
So create the biggest, nastiest, meanest test environment you can think of. See how it all reacts. You might be surprised when you get water instead of the Hindenburg.
12:27:52 PM
Tuesday, July 04, 2006
Gordon Ramsay On Software
Gordon Ramsay is a world renowned chef with a surprising amount to say on software development. Well, he says it about cooking and running a restaurant, but it applies to software development too.
You may have seen Gordon Ramsay on one of his TV shows: Hell's Kitchen or Ramsay's Kitchen Nightmares.
Hell's Kitchen is a competition between chefs trying to win a dream job: head chef of their own high-end restaurant. On this show Ramsay is judge, jury, and executioner. And he chops off more than a few heads. Kitchen Nightmares is a show where Ramsay is called in by restaurant owners to help turn around their failing restaurants. On this show Ramsay is there to help.
If you just watch Hell's Kitchen you will likely conclude Ramsay is one of the devil's own helpers ("ram" is the symbol of the devil and "say" means he speaks for the devil: Ramsay). Ramsay screams, yells, cusses, belittles, and throws tantrums even a 7-year-old could learn from. Then he does it all over again just for spite. In Hell's Kitchen there's no evidence at all of why Ramsay is such a respected chef. He is just a nasty man.
Now if you watch Kitchen Nightmares you will see a slightly different side of Ramsay. He still yells and cusses a lot, but you will also see something else: this guy seriously knows what he is doing. The depth of his knowledge in all phases of the restaurant business is immediately apparent as he methodically works to fix what's broken.
Ramsay knows how to run a profitable restaurant. That's one of his key skills. Anyone can lose money running a restaurant; the secret is knowing how to make money running a restaurant. Apparently, if you run it right, a restaurant can make a lot of money. It can also lose a lot of money.
How does Ramsay teach people how to run a profitable restaurant? It's not what you may be thinking. He isn't about cutting costs, shoddy work, and cheap labor. Ramsay is all about profit through excellence and skill. That's what attracts people to a restaurant and keeps them coming back...and back...and back.
It's great fun to watch Ramsay cuss and cajole his way through the entire restaurant staff putting his finger directly on problems, creating inspired solutions, and then mentoring the staff and owners through the transformations needed to become a good restaurant.
While watching you have to wonder how people who invested their life savings in a restaurant could be so screwed up. But then you realize we are all screwed up at one time or another. From the distance TV provides everyone can look bad. Running a restaurant is hard and it's oh so easy to get in a rut.
And if you were stuck in a rut who would you want to give you a lift out? A super hero? How about Super Chef instead?
Sadly, in the real world when you run into trouble a super hero with curiously sharp knives doesn't come to your rescue. In the real world the same uninspired people apply the same uninspired strategies until losses cause the heart stoppage that blissfully puts an end to the torture. But this isn't the real world, this is TV. And on TV you see Ramsay slap the electroshock paddles on the restaurant and bring it back to life. It's wondrous to see both the restaurant and the people come back to life.
What's fun is when Ramsay revisits the restaurant after six or so weeks to see how the restaurant is doing. Most of the time the restaurant is doing better. There are a lot of happy customers and money is being made. And they don't slavishly follow what Ramsay said to do either. Instead, Ramsay taught them the principles of running a restaurant and then they learned how to apply the principles to their own situation.
Sometimes on Ramsay's reinspection tour he finds the restaurant has closed down or isn't doing as well as you might expect. The reasons for failure vary. Sometimes the initial problems were too great. One guy made a bad deal on a lease so it didn't matter in the end if the restaurant improved or not. Sometimes people are simply stubborn and won't change their ways. An example of this scenario was a fancy French restaurant where the owner gave the chef total control to cook whatever he wanted. It turns out the chef was addicted to complex foods that prevented the restaurant from getting its Michelin star.
While watching Ramsay work his magic in Kitchen Nightmares I began to see how similar a kitchen team was to a software team. Running a successful kitchen is a high stress, high work load, high quality, high variability environment where teamwork and communication are key. Sounds like software development to me.
Here are some of the notes I took on Ramsay's restaurant turn around strategies. I'll leave extending the metaphor into the software world in your capable hands.
- Quality can never be compromised. Let nothing slide.
- A good restaurant does one sort of food brilliantly. A bad restaurant does 50 sorts of food badly.
- Good food, good service, good atmosphere, and enjoy what you are doing. It's not that difficult.
- Good head chefs get the most out of their team.
- The best investment a restaurant owner can make is a great head chef.
- Always communicate. The head chef should constantly be talking.
- Taste everything before it goes out. That's the only way you can quality control your own food.
- Keep a clean kitchen. A dirty kitchen means a bad chef.
- A good team will always turn out good food under pressure.
- Waste means a lack of profit. Waste nothing.
- Know your market. Make what people want to eat where you live.
- The owner is responsible for making sure everything runs right.
- Know how to buy food. Get a deal. Be creative with less expensive cuts.
- People are responsible for their own areas. Other people shouldn't talk for your area.
- Ask for help when you need it.
- Run the numbers on profit. Know what things cost and how much you make.
- Cook what you do best.
- Go out and ask people what they want. Don't be afraid to talk to people.
- Be gutsy. Believe in what you are doing. Find your bollocks. You must have confidence in your skills.
- Don't be afraid to be the bad guy with your staff.
- Know your stuff and be the best of the best at your craft.
- Keep a distance between the owner/chef and the staff so needed corrections can be given without fear of injuring personal ties.
- Don't let your staff eat and drink at the restaurant. It scares away the customers. Keep a distance between both groups.
- The cardinal rule of cooking: must be clean, spotless. A clean kitchen means clean food.
- Always clean plates before sending them out.
- Always be doing something.
- The menu should be simple and clear in purpose. No more than two pages.
- Serve fresh food for a good price.
- Serve what is profitable.
- Stay on your staff all the time. Always get the most and the best out of them.
- Keep customers happy and spending money.
- Treat all your food with love and care.
- You can listen to a kitchen and know whether it is running well. You'll hear people talking to each other and working together.
- Work the table. Sell your food. Charm them. You want people to order drinks, appetizers, entrées, and dessert.
- Look at the food and show an interest in what you are doing.
- Addition by subtraction. Get rid of people who don't make you better.
- Look at what is a bad investment and get rid of it. You need to be tough.
- Everything about your restaurant should scream sit and eat here.
- A bad-looking restaurant means bad food. Don't let repairs slip. Fix everything immediately.
- Always be getting better.
- Work for each other.
- Make dishes simple. No more than 5 or so tastes in one dish.
- People only have an hour for lunch. Take longer than that and they won't come back.
- Serve a good lunch and they'll come back for dinner and spend a lot of money.
- The owner should be working the customers, not in the kitchen.
- Always use fresh ingredients. Don't cook your food ahead of time. Don't deep fry. Don't buy pre-prepared sauces. Don't microwave. Always cook from scratch. It's better food and cheaper if you know what you are doing.
- Have clear lines of responsibility. Know who is in charge.
- Don't have too many people. It cuts into profit and makes communication hard.
- Work isn't your living room.
- All the elements of a dish must be ready at the same time. If they are not then remake it. The only way a dish can be made fresh and on time is if all people on a team communicate.
- A cook needs quality equipment to do their jobs.
- Don't appoint yourself head chef if you can't cook.
- If you lose all powers of communication under pressure you shouldn't be in the kitchen.
- If your chef can't distinguish between heavenly and hellish food combinations then customers won't be coming back for more.
- You have to speculate to accumulate. Adapt to your situation and make strengths out of weakness. Example: kids eat free to draw people in.
- You are only as good as your last service.
- Don't get too ambitious for your skills. Keep it simple or you will close. Go back to basics.
- There's no better recommendation than booking a party based on the referral of a customer who ate at your restaurant the night before.
- Look the part. Cook the part.
- Give a free glass of wine to your customers when you screw up.
- Create a warm inviting restaurant.
- Don't assume you can run a restaurant just because you worked in one.
- Anyone who has not cooked a casserole or filleted a fish should not own a restaurant.
- Don't give your establishment a name that sounds like a strip joint.
- Do what your people can do. Simplify the menu. Simplify the preparation. Select foods that can be prepared in advance. Create an idiot proof process.
- Don't attempt to cook elaborate food before you have mastered the basics.
- Show people what they don't know through example. Challenge them. Example: prepare a set of taste tests for the cook so the cook can educate their palate.
- A chef without confidence is like a car without wheels.
- Restaurants located on the second floor are the hardest to fill with customers. Strike a deal with the first floor store to advertise your second floor.
- Building a reciprocal relationship with the butcher is key.
- Sticking to a budget means more profit for the restaurant.
- Be able to look at a batch of lamb and know how much profit you can make.
- Big softies don't make good cooks.
- Adding little bits of sh*t at the end of a dish ruins it.
- Using booze to deal with stress makes the kitchen the most dangerous place to be.
- Let your passion for food give you a kick instead of booze.
- Get back your reputation with a mid week bargain. Know your prices.
- Good food at a bargain price doesn't mean anything without good service.
- If service does their job right it takes pressure off the kitchen.
- Move your ass.
- Have frequent staff dinners. Let your younger cooks do the cooking and use all the leftovers in the pantry.
- Smile. Enjoy it.
- Preparing fresh food injects passion back into the kitchen.
- Serving bad food ruins a cook's self esteem.
- You are in trouble when nobody takes responsibility and everyone blames each other.
What impressed me most about Ramsay is his overwhelming competence both in the kitchen and in running a profitable restaurant. Even more interesting is how well both skills work together to create a better result. For example, Ramsay created an oxtail soup in one episode that appealed to the locals, tasted great, was quick and simple to make, and made money. He couldn't have created this dish if he was just a cook or just a businessman. There is example after example of this same class of work in the show.
What impressed me next was his absolute, uncompromising dedication (surely the source of his inner a**holeness) to quality at every level. He insists on a clean kitchen, fresh ingredients, preparation from scratch, a simple menu, a well-maintained interior and exterior, and competent people in every job, and he has no trouble telling people what they are doing wrong and what they need to do better.
What I love most is when Ramsay goes out and asks locals why they didn't like a restaurant. And he sometimes samples food on a walkway to publicize the restaurant. Ramsay goes out there and mixes it up with the people. He doesn't cower inside wondering why what he was doing wasn't working. He believes in himself and doesn't let anything stop him.
How much these lessons apply to software is your call. I think a lot of them do.
10:13:42 PM
Wednesday, June 14, 2006
Reanimating Zombie Code - The Art of Source Code Necromancy
In sites like sourceforge and in the source code repositories of corporations all over the world "live" millions upon millions of lines of zombie code. Zombie code isn't alive anymore because its original animating soul has been lost.
That's crazy you say: source code doesn't have a soul, most people don't even think animals have a soul, and a lot of people don't even think souls exist, so how can source code have soul? Ok, you got me there.
What I mean by source code having a soul is: As long as there are people around who understand the source code, change it, improve it, evangelize for it, and faithfully act as intermediaries between the source and users, then the source can be said to be alive.
Just having a lot of users doesn't make source alive. We use hundreds of applications (and libraries) that nobody has a clue how they work anymore. Heck, nobody may even have the source code anymore, lost as it was in the fog of organizational churn. The application just works and that's all that matters...for a while. That blisteringly fast Fortran matrix multiplication package (written by what's-his-name, the genius laid off three rounds ago) may be in use for years after it died. That fancy rollup report that talks to 20 different obscure data sources may be used and admired long after it has died. That website that started with much flourish and seed cash may stay on the web virtually until the end of time, but it is still dead.
That an application works is all that matters until something happens and it is officially pronounced dead or just floats unmourned across the river Styx. What forces are afoot that create zombies out of once-living code?
1. You might move to a different platform and nobody knows enough about the source to port it. It's just dropped.
2. A user may want a minor change and nobody is around to make it. Then they create their own application, and people gravitate to the project under active development because people know when something is alive or dead.
3. Or perhaps there's a serious bug and nobody can figure out how to fix it anymore. Performing an autopsy is a lot easier than repairing.
4. Or maybe a group flush with an urgent mandate and a big budget went looking for an application and couldn't find it because it had no internal champion. So they built their own...again.
How does code become zombified? Mergers, firings, people moving on, new people moving in, technologies moving on, poor project transfer practices, fading memory, no documentation and so on.
Some of the most interesting causes of zombification to me are much more subtle:
1. Even code that is part of an active project that practices collective code ownership can become zombified through inattention.
2. It's impossible to transfer a project of any complexity on to other people.
A project always has development hot spots. Code that is being worked on has group memory and attention keeping it alive. But as development surges forward many parts of the code, over time, become less visited. And because nobody visits the code anymore it is not passed on between programmers. It becomes zombified. That's why you'll see people even on the same project reinvent the same thing over and over again.
Not to go over the edge (I know, too late), but every line of code tells a story. When you are tasked with handing off a project to another group you realize the enormity of the effort and the absolute impossibility of it. There are thousands of lines of code, perhaps tens of thousands, perhaps millions. Can you really go through them all and explain what you mean? And using good names and all that doesn't substitute for the forces that generated the millions of decisions that went into every corner of the project.
A flood of memories will spring up when someone asks you what this subsystem does and why didn't you use another approach instead. To explain what you do you'll respond with a story. That's how humans communicate. Stories.
You won't show them a dozen pages of unit tests. You won't show them a wall size UML diagram. You won't show them a hundred page product requirement document. That's not what they want to know and you know it. They want to know directly why this code is the way it is and any implications it may have. They really want a story that will answer any questions they may ever have.
You might tell them a story like, "Group X said they were going to get their part done a year ago and they didn't. They also said that it would do this that and the other thing, but didn't. So we ended up making a whole series of changes to make up for it. Not ideal, but we were stuck because this system talks to another system which talks to that system which talks to these two systems a thousand miles away so that means we have to do that stupid thing in our code here."
But that's not the real story. You could easily do a 10 minute deep dive on any one of these issues to explain all the forces involved. Code comes down to personalities, politics, budgets, schedules, technologies, quirks, competition--thousands of forces, both hard and soft, playing themselves out on the human stage.
You could almost go through this entire process for line after line of the code. After a while you just give up and skim the surface. Information is lost at every step, so the overall power of the project is reduced proportionally.
But isn't this the kind of information you really need to pass a project on? I think so. It's the kind of information I've wanted in the dozens of times I have taken over large code bases and it's the kind of information I would love to impart when transferring a large code base from my care. But it rarely happens.
Source code zombification isn't the exception, it is the rule.
So how do we keep our source ensouled? Here are a few suggestions of varying amounts of sanity:
1. Don't build it.
2. Buy it.
3. Make a no-brainer install.
4. Make the source repository searchable.
5. Document. Document. Document.
6. Encourage talking.
7. Use good naming and other development practices.
8. Create a sustaining organization.
9. Collective code ownership.
10. Create a movie.
11. Create a life story.
12. Bring it back to life.
Don't build it.
If it was never alive it can't become a zombie. Maybe you don't need it after all?
Buy it.
Transfer the worries of zombification on to another company by buying their software.
Make a no-brainer install.
Nothing becomes a zombie quicker than something hard to use and install. People will give up on it immediately.
Make the source repository searchable.
By making code searchable it becomes discoverable which means when people need it they have a chance of finding it.
Document. Document. Document.
Documentation is never enough, but it is like a defibrillator: it gets the heart started, but unless the body is healthy enough it won't survive.
Encourage talking.
When people talk and communicate they keep things alive.
Use good naming and other development practices.
This is the same class as documentation. It's not enough, but it helps.
Create a sustaining organization.
Have an organization whose job it is to maintain and keep software alive. Many important problem solving projects start at the edges of organizations and then go away because there's no home for them when the organization churns. Create a corporate home for software created at the edge.
Collective code ownership.
An agile practice that does a pretty good job.
Create a movie.
An old fantasy of mine has people making a movie while coding so they can explain through storytelling what they are doing, why they are doing it, and when they are doing it. Most people don't like to write, i.e., document, but they might tell a story (externalize their internal thought process) as they code. Developers can watch these movies to learn about the code and make the stories their own. And of course they will make their own movies as they change the code.
Create a life story.
This is a relatively new fantasy of mine made possible by fast wireless networks and lots of storage. Instead of just creating a movie of the coding process, we create a movie of all the meetings, discussions, and other forces that went into the making of the software. All this tacit context is never documented, yet it is most responsible for the form and content of software. Imagine being able to watch a highly edited life history of software from the POV of all the people involved. Wouldn't that keep source code alive? Now that's a story.
Bring it back to life.
Bringing dead or zombified code back to life is the job of a powerful necromancer. It takes an immense amount of work, which is why it is rarely done. You have to reanimate the code in your own mind to bring it back to life. It takes endless hours of studying the code, finding the proper tools, machines, and environments, running the code, seeing how it's used, and interviewing people involved with the project. It's a good feeling, though, once you are done.
2:25:49 PM
Sunday, May 14, 2006
How about making unit testing part of the language?
I would like to make unit tests a first class part of a language rather than bolting them on with annotations or encoding their function in class names. I don't have a strong argument against the minimalist language folks who like to keep the core language small, other than a feeling that it would make for a more powerful and useful language in the long run, even if I can't think of compelling reasons why at the moment :-)
One feature I would like to add is the ability for one class to say it is the unit test for another class. This would allow:
* The unit test class to have internal access to the class under test.
* An explicit way to make sure the often voluminous test code is excluded from deployment images.
* Tools to automatically run tests and report results.
* A unit test class to be in a different package than the class under test.
So why did this burning issue come up?
Because when programmers unit test their C++ code they always wonder how they should do it. Should they make classes friends so they can peek at their privates? Should they make everything public? Should they be able to call private methods? Should they access member attributes? Should they make two builds, one with #ifdef UNIT_TEST type code and one without? How do you deal with static methods? There are really an endless number of questions because every situation you encounter in your program needs a way to make it cleanly and easily unit testable.
There are answers to all these questions, but the key points for me are:
1. Maintain data hiding.
2. A test must be able to do anything and everything necessary to verify correctness.
3. Code under test should have no idea it is being tested.
The engine that pulls the train here is the desire for data hiding: only expose the methods and attributes needed to fulfill the public contract of the class and the public contract should be as small and single purpose as possible.
Exposing your privates could get you arrested, but more importantly it
creates opportunistic dependencies that will make change much harder
down the line. Programmers will use whatever you expose and you will have a hell of a time making changes once code is in wide use in deployed products.
I hate it when people say "I never do X because once we did X and it bit us bad." So I won't say that. Not being vigilant about data hiding has bit me many many times, but here's one particularly memorable mistake I made.
On one large project I left public what I thought was a harmless attribute that nobody would really use. Why make an accessor or class to represent it?
Later I wanted to change the attribute to be a method because we wanted to radically change how it was computed and used and we also had a new customer requirement to log accesses to this attribute. No problem I thought. Silly me.
I made the change and the build immediately busted. After an analysis of the code base it turned out a lot of programmers had started tweaking this attribute because it made some difference in an important algorithm. Oh oh.
This attribute was used in over 7000 places! More importantly the change would impact numerous groups. What this meant is that I could not go in and change it because an impact analysis had to be made before the change could be made.
An "impact what?" you are probably asking. You see, in a large project, one with hundreds of people distributed throughout the world, one with millions of dollars of penalty clauses for down time or performance failures, you can't just make a big change without following the proper procedure. Say a bug fix will touch 5000 source files and people freak.
All the impacted groups have to take a look at the change and figure out the impact on the code, on reliability, and on performance. Full regression tests that take many weeks must be run. And of course this must be scheduled. And of course everyone is in a release crunch so they can't get to it until the next release-- next year. So the change was canceled; it was too risky.
In a small project you could have made the change, but in larger projects with a lot on the line the world is often very different.
So I am generally big on data hiding. I don't make everything public and hope programmers will do the right thing and pay attention to warnings and documentation. I wear a seat belt and I hide my data behind protective methods and classes. I sleep much better that way and I don't have to spend so much time telling a long list of managers what an idiot I am for making problems for them.
What this implies is you don't make methods and attributes public just so a test has access to low enough level details that the test can verify the result.
For example, if you add an item to an object that contains a list, do you need a corresponding get method to verify that the item was properly added? What if your public contract doesn't require a get method? Should you add it just for the test? Nope. You would be exposing more than you need to, which creates dependencies, which makes code harder to change.
In C++ you have the friend declaration which allows another class to have internal access to a class. This is perfect for test classes. My unit test can reach in and verify that the item was added to the list without exposing unnecessary functionality in the public contract.
Perfect. So why am I not happy?
1. Not every language has this feature; Java, for example.
2. It's not a first class feature because it doesn't state the relationship explicitly. A friend is not the same as a unit test relationship, and a unit test may have more or fewer rights and capabilities than a friend. That ability is lost by just calling everything a friend.
3. And a more subtle, and some would say silly, point is that I want to see friendship reserved for production code, not test code that isn't part of the product shipped to customers.
Hopefully someone can think of some more reasons why unit tests could/should be integrated as first class citizens of a language. I'd love to hear your ideas.
5:15:37 PM
Wednesday, April 19, 2006
How do you create a 100th Monkey software development culture?
Someone reading my C++ coding standard recommendation for using doxygen to automatically generate documentation from source code, asked a great question:
I've often considered using doxygen, I always ask myself - is this really useful? Would I use it if I were new to a project? Would programmers working on the project use it?
I'll rephrase their question to more conveniently express a point I've thought a lot about:
Why do companies put so little effort into automating their own development process to make development easier?
It's like the hair stylist whose own hair looks like someone cut it using a late night infomercial vacuum cleaner attachment. Or it's like the interior decorator whose own house looks like a monk's cell.
Software organizations rarely build software to make developing software easier. Why is that?
There are three ways changes are made in an organization:
- Top Dog - Someone so high up in the org chart you would need a telescope to see him decided because his brother-in-law sells this very expensive Whatever system, you should use it too. Of course it will never work, even after spending all those plump and tasty consulting dollars.
- Drunken Clown - Someone accidentally did something some way once, a while back, and that's just the way it works now. I think of this whenever I need to add a page number to a document I'm writing. Instead of a clear "Add Page Number" task I instead have to create a footer and then I need to insert a "special" character into the footer to see page numbers. What's up with that? I've always wanted to ask the original perp who wrote this feature just what they were thinking? How am I ever supposed to remember page number == special character? But it doesn't matter anymore, that's just how it works and people can't even imagine it working a different way. It just seems the natural and right way now even though it makes no sense at all!
- 100th Monkey - Someone who had a problem took on the responsibility on their own to make something useful. What they built naturally spreads because other people find it useful too. It's usually not the best of all possible systems, but it gets the job done.
The 100th Monkey approach to improvement is often actively discouraged. People don't have any "extra" time because they are fully loaded with work backlog. If you can't show continual progress on your assigned tasks then a manager somewhere will get itchy.
And of course you can't get time to work on anything extra because where is the ROI for your customers? If bathroom breaks weren't physically required they would be outlawed because there's no ROI in it.
Let's say you do get time to work on that extra problem. You probably won't get enough time to do a good job so you end up with one hack built on top of another hack because people can't assign value to all the infrastructure that often makes the critical difference between project success or failure.
So someone makes a quick little hack. But the next person or group that comes along won't like it because it doesn't do X and the person who did the original hack doesn't have time to make any changes. There's no ROI in it for them.
On learning that the tool they may want to use won't be improved, the new group has a choice: improve the existing tool or make something new.
Nine times out of ten the best strategy is to make something new because that is what will solve their problem faster. The existing tool isn't supported, it's not well documented, it's a long ways from being general enough to solve new problems, so why take the hit of trying to fix it? There's no ROI in it.
An organization can end up with dozens of similar systems and none of them is ever quite good enough to win over enough hearts and minds needed to dominate.
Even worse and more likely is that people in an organization don't even try anymore because they know any efforts to make the development process better won't work out in the end. Why bother? There's no ROI in it.
I see this same cycle over and over and over again.
They say when bears are thriving in an area it means the ecosystem in that area is pretty healthy because bears are at the top of the food chain. If bears are doing well it means the food chain below them must be doing pretty well too. We can say the same about many infrastructure components supporting development.
Automatically generated documentation is one of those things that usually doesn't get done because it takes too long to do right and there's no ROI in it.
But automated documentation generation is not so much about the technology needed to make it happen, it's more about the culture needed to maintain it and make it useful. It's your culture more than anything that will make a project successful.
Automated documentation assumes people wrote the documentation to begin with. It assumes a tool was selected and integrated into the build system, and moreover that there even is a build system. It assumes people maintain it. It indicates that someone thought about what they were doing enough that they could explain it coherently to someone else. It means that when someone wants to know if a capability is available then they can do a search of the documentation and hopefully find what they are looking for.
And just maybe they will feel the code is good enough to extend and use rather than rewrite. It means you can have a culture where people are building on each other's work rather than replacing it. That's the secret to hyperproductivity.
You can make a similar case about many other development practices: unit tests, source code control, bug tracking, training, having a methodology, automated builds, automated regression tests, etc etc. These practices are often ignored because the ROI isn't direct enough. But if these practices are in place it indicates a pretty healthy development ecosystem in which people are probably pretty productive.
Another tool that doesn't get enough use is a company or development wide wiki. If they are in place the productivity gains are huge. And again the active use of a wiki indicates a healthy development ecosystem.
No matter how useful wikis are they are hard to get adopted. A while ago I created a number of wiki pages describing how I have worked to get wikis adopted at several companies. Here's the Getting Your Wiki Adopted link.
I now think the approach described in Getting Your Wiki Adopted doesn't just apply to wikis, it applies to most any tool or process you want to add without 100% corporate buy-in. It's a sort of 100th Monkey manifesto. Give it a read and consider how it might be transmuted to apply to your project and corporate culture.
Let's take the automated documentation system as an example of how to apply the ideas. I'm not fully fleshing the idea out, I am just giving a few thoughts on each item. The Getting Your Wiki Adopted page has a lot more content.
1. Getting Automated Documentation System Adopted is Tough.
1. Yep, it will be. Anything new will take continual effort to make work.
2. Have a Champion
1. Someone needs to have enough passion to see the process through from beginning to end.
3. Remove Objections
For people to adopt an Automated Documentation System you need to remove every possible objection they could have towards using it.
1. Bake it into your build system.
2. Make it a make target and easy to set up.
3. Come up with courses and good quick online doc for telling people how to create the doc.
4. Make the doc attractive, useful, and fully linked.
4. Create Content
1. You must document enough code so that people see that it has value.
2. Make sure all the code you create or touch has doc.
5. Enmesh the Automated Document Generation In Company Processes
1. Make the doc searchable so people will find themselves turning to it when they have a question.
2. Make it required. Add verification steps during the nightly build. Put doc in as a code review item.
3. When someone has a question you have to be able to refer them to the doc most of the time. If you can't, why not? If it's not usable then why should anyone use it?
4. Tie it into the build system so every nightly and release build has a link to the doc for the build.
6. Evangelize
1. When you come across a class that doesn't have doc, ask the author to put doc in.
2. When the doc sux, ask the author to improve it.
3. Find people to help you.
7. Don't Give Up
1. Hang in there. It's a bootstrapping process that has to start somewhere.
8. Just Do It! Don't Wait For a Budget
1. Pick a tool and add it to make and the build yourself.
2. Evangelize with examples, email, meetings, and success stories.
9. Have a Transition Plan
1. Make it easy for your tools group to adopt any of the changes you have made.
2. Try and make changes in a style consistent with how things are done currently so the changes will be less objectionable.
So don't give up. Next time you want to make a change and nobody else seems to care then go 100th Monkey and see what happens. You may be able to do the impossible: change an organization for the better.
4:24:19 PM
Sunday, March 26, 2006
The Microsoft Dysfunction
Mini-Microsoft's "Vista and MS are really screwed up" thread at http://minimsft.blogspot.com/2006/03/vista-2007-fire-leadership-now.html is just fascinating.
Few
of us who have worked in Dilbert's world can't find something to relate
to in this post's rain storm of comments. This is my favorite class of
comment though:
Just suck it up, make the best of it and stop pointing fingers and get your job done! ... Stand up and fucking do something about the problems instead of being a
part of the problem. At the very least, acknowledge your part in the
problem, learn from it, and prevent it from happening in the future.
If
you have been a lowly leaf in a very tall and wide company tree, this kind of
comment only drives the frustration nail further into your corporate
heart. Us leafs often try to change things. We lead by example. We make
the changes we can. We lower our lance and with a resolve most firm, take repeated charges at the
corporate windmill that is management, process, and culture.
The
truth we all eventually learn is: the success or failure of a
company is always because of its management. (I also have a corollary:
the success or failure of a company is always because of its workers,
but that's a subject for another time).
It's sad, I hate to
admit it, but a leaf can't possibly provide all the energy a tree
needs, no matter how hard you try, no matter how much you care, no
matter how many life happiness units you trade in to make it work.
Working
locally simply is not enough to globally change a company. Systemic changes
happen in management. Only management, in the end, can make the needed
large scale changes in organization and culture. The problem is, of course, that management usually doesn't know what
changes to make. This is the function of great leaders and
they are in short supply.
Now the leafs at
Microsoft are no doubt working very hard. Much harder than most people
can imagine. Even as hard as the Great Generation of people in the
1940s worked. Yes, they work that hard.
But working hard is not
enough. A thousand employees pushing against a castle wall won't breach
the wall for your teeming conquering hordes. No matter how hard you
push the wall will stand. That's how the wall is built. That's how management ramparts are built too.
I think a small breach can be found in a later comment:
Microsoft's management is terrible. But it's always been
terrible. It was terrible in 1991 - ask anyone who suffered under, I
dunno, gregcr - and it was terrible when I left. It's terrible now. But
the groups, at least, were usually small enough that non-management
could, to one degree or another, push management around, could do the push-back that would save products.
It
was small enough that if, say, you grabbed your boss's boss in the
hallway and chewed him out for signing off on something that you knew didn't and wouldn't work, that, well, you'd probably be okay in the end.
I like this comment because it is real. Face it, both management and workers, yes, even myself, are mostly terrible. We get it wrong much more often than we get it right.
Yet, there's something about being together in a small group that allows the Wisdom of Crowds
to mashup all the wrongness and occasionally flip wrongness into enough rightness that we get stuff done.
In a smaller group there's a correspondingly smaller Tipping Point. You can do little things that make a big
difference with some confidence that there will be a pay back within your lifetime.
On a branch, if we are all leafs, we leafs can make a difference.
11:28:32 AM
Sunday, March 12, 2006
Is programming an abstract expressive art form?
This is a much shorter version of a long argument I have been having with myself about the role of software development as an abstract expressive art form. We developers are often seen as just little code robots. But software can be a lot more than that, and if it were allowed to be more I think we could create a lot more truly great software instead of heaping more dumptrucks full of uninspired trash into the bitbin.
There's an interesting parallel between the western musical
tradition and the agile tradition.
If you had asked Bach the purpose of music he would have said
music serves to glorify god. His perspective was a carry over from the
middle ages when music was primarily spiritual in purpose.
With the enlightenment a more secular view of music developed. We see,
for example, in Mozart the classical emphasis on the individual, which
led to clean, tuneful, and entertaining music.
The classical ideal soon evolved into the romantic ideal which valued
individual self expression. In Beethoven and Liszt and in 20th century
composers we see self expression as the primary driver behind their music.
Bach was undeniably creative, but his music was not intended as a
vehicle of self expression. The polyphony of the fugue in many ways
resembles a software project. The complex and intricate dance of the
many fugue voices is held together by a strong underlying structure.
Bach perfected "counterpoint", the "combination of independent melodies
to form a harmonically and rhythmically vital union."
In fact, the word "baroque", Bach's musical style, originally meant
"irregularly shaped pearl", which I take to be in spirit with Wabi Sabi,
an idea that has inspired many in the agile world.
The parallel I see is that agile puts software developers in the service
of the customer with the driving value being business value. The purpose
of the developer's work is not self expression. Developers serve the
customer and every action is tasked with providing the customer value.
Though the Agile Manifesto values people over process, the end result is
still largely dehumanizing because individual developer self expression
is sublimated to the customer. The individual developer is left to
bargain with the customer for each degree of freedom they are granted,
but in the end the customer always wins.
You can say he who has the gold rules or this organization is necessary
to ensure repeated successful outcomes, but it shouldn't be surprising
that many people see software as something more than an end for a
customer. Development is also an end for an individual's life, their
creative world, and as a central means of self expression.
We've seen in music the evolution towards musical structures and an
ideal for the composer that values self expression. We have seen some of
the greatest music the world has produced come from a careful balance of
structured musical form with individual inspiration. It wouldn't be
surprising to see a similar evolution in agile as well.
3:59:06 PM
Friday, March 10, 2006
WTF: The Least Used Resource Error
There's an unexpected and often fatal type of error that happens when you add resources to a horizontally scaled architecture. When the new resource comes online all traffic can be immediately redirected to the new resource, because it has the least load, and it just folds up and dies. You are left wondering WTF (what the f*ck) and it is really hard to track to down and even harder to fix.
The idea behind scaling out horizontally is that you can add new resources to handle load. This sounds great and it works, but it has some subtle and surprising error conditions that you may want to keep in mind.
Imagine you have a load balancing appliance using the least load metric. Now you add some new slave MySQL servers to handle the load. Your load balancer will redirect traffic to the new slaves, but the slaves are trying to sync, yet they can't sync because they are getting hammered by the new traffic. Deadlock.
Imagine you have a storage network that is full to the gills with data. You add a new appliance to give yourself more storage. Now what happens? All the new data goes to the new appliance. That means all users are hitting the same appliance for their data. Your performance slows to a crawl because the appliance can't handle that. You sort of expected parallel IO among all your appliances to handle the load. Now you say WTF?
A related idea is the dark side of partitioning. You partition data to get high performance via parallelization. For example, you hash on the user name to a cluster dedicated to handling those users. Unless your system is very flexible you can't scale anymore by adding resources because you can't repartition the data. All users are handled by their cluster. If you want a different organization you would have to redistribute data across all the clusters. Most systems can't handle that and you end up not being able to scale out as easily as you wished.
All these problems have solutions of course, but when you first hit them, you get that deep WTF feeling only a Costco-sized jar of antacids can soothe.
7:30:54 AM
Saturday, February 18, 2006
With Vpops do we even need an OS anymore?
What the heck is a vpop? A vpop (virtualized program) is a program
designed specifically to run in its own virtualized CPU environment.
With vpops we don't need an OS anymore. What do we use instead?
Let's make the move to virtualization instead of
virtual machines.
In this future we download highly optimized, fully verticalized, bare metal
programs into their own virtualized CPU environments.
These stand-alone virtualized programs (vpops):
* Cooperate by talking over IP.
* Use a shared nothing architecture.
* Store state on their local virtual hard drive or with their remote state
service provider.
* Use a web based desktop that coordinates and multiplexes access to all
the other services.
* Pick whatever OS they like best. We've spent decades building
up layers between the OS and applications. With virtualization
the layers disappear. You can create one monolithic high performance fusion
of OS and program, much like in most embedded systems.
* Are downloaded automatically into their own VM space.
This is very different than the current world where the virtual
machine has become the standard model of program execution.
* OSes are really virtual machines these days. We build binaries
that assume they are loaded into a certain OS and microprocessor
combination. Our binaries dynamically link to shared
libraries or DLLs or assemblies. Even if we compile a completely statically
linked program, applications assume they trap into a host of OS
provided services.
* We build programs that load into a JVM or CLR like virtual machine.
In .NET, windows is basically one big application container and there
have been attempts at JVM based OSes.
With virtualization you don't need to incrementally load
a program into some highly capable execution container anymore.
You can build a totally independent program that loads directly
onto the hardware.
The OS/application layers can disappear. You can compile one
highly optimized monolithic program that accomplishes its
specific task on its dedicated hardware resources.
A few changes have made this move possible:
* Virtualization is a very old technology, but it has recently become more
mainstream and thus it is more acceptable.
* CPU power has grown faster than the uses for it so sharing CPUs makes
economic sense.
* Faster, ubiquitous, reliable networks make for portable environments.
* High capacity portable storage devices make it possible to take
your personal virtual environment with you and load it anywhere you
want.
* 64 bit processors and relatively cheap RAM will provide the RAM
needed to share RAM hungry resources on one machine. And multiple core
CPUs will provide the needed CPU power.
* We have a lot of IP addresses. Err, well, no we don't. That is a problem
because before, every application on a machine would share the same
IP address; now each virtual environment needs its own IP address(es). NAT and IPv6 can help make more IP addresses available.
Why would we want to create vpops instead of loading programs into an
OS? Good question. Many of the reasons are, of course, the same as those used for
virtualization:
* Safety. Software is loaded into a jail that can't hurt other software.
* Efficiency. Expensive hardware is fully utilized.
* Portability. Your virtual environment can be carried with you wherever
you go. You can store data remotely if you want, yet your applications
can be yours. Applications don't have to be web enabled to be available
anywhere.
* Extreme Portability. You can live migrate your virtual environment between machines as you move around. You don't need a portable storage device. Your virtual environment can be physically transferred between machines so you never have to configure new environments. You need machines to be similarly configured which is why storage services could be centralized over the network.
* Flexibility. You can carve up your hardware however you need.
* No more IP ports. You won't have to put your second web server on port 8080 anymore. The IP address and the default port would be sufficient to identify a service now. The cost is IP addresses, but it makes using services more clear and direct.
Why go the next step and get rid of OSes? An even better question.
* Because we can.
* An end to DLL hell. We don't need no stinkin shared code to link into anymore.
You don't have to worry about managing DLLs, shared objects, assemblies,
or what OS is installed.
Your application is microprocessor specific, but other than that it is completely
standalone. You just rebuild and redeploy. No install hassles or worrying about
registry magik.
* Performance. We put a layer between the OS and applications largely for safety
purposes. Once these layers are dropped you'll get programs running at bare
metal speeds.
* Installation becomes a snap. If you want to install a wiki, or discussion board, or some other application you usually get this horrible set of install instructions that are old, in error, and frustrating beyond measure. You end up downloading a dozen other tools, editing a hundred files, and then it still doesn't work. If you can install a fully configured virtual image into your own safe virtual environment, the install barrier becomes nothing. You can simply install any software you want.
* Choice. You don't need to be locked into an OS. You can use the application
you like better, not the OS you are compelled to use.
* Wonder. Let's say I think Erlang is the greatest tool evar for creating distributed applications. But it suffers a little from performance anxiety and it's not supported by any data center so installation is always tricky. Now let's say I create this kick ass Erlang OS that runs great on bare metal. In a VM aware world I could provision resources from a data center grid and automatically download my vpop to all nodes in the grid. The vpops would come up, configure, and start working. No muss no fuss. That's cool.
Wikipedia has a nice overview of virtualization on their site at
http://en.wikipedia.org/wiki/Virtualization .
12:49:08 PM
Sunday, January 29, 2006
What is a Software Car?
This posting (http://discuss.joelonsoftware.com/default.asp?joel.3.301232.15) on Joel On Software discussion group poses an interesting design question.
The first approach:

  class Car {
    public void Wash() { /* code to wash the car */ }
  }

  Car car; car.Wash();

The second approach:

  class Car : public IWashable {
    // private members. This class just acts as a business object that is
    // manipulated by "verb" objects. This maps directly to a db entity.
    // no methods, just props and constructors and a dispose pattern
  }

  class Washer {
    public void Wash(IWashable ptr) { /* code to wash thingie */ }
  }

  IWashable myCar = new Car(); Washer w = new Washer(); w.Wash(myCar);
From a C++ perspective Washer is nice because you don't need to change Car to add washing capabilities. This is a solid pragmatic reason, though I think a more philosophical discussion was desired.
Given all the objects that would be needed to wash a car, what makes Car the right place to know how car washing works? Should
a car know about how to find the nearest and cheapest car wash, for example? To answer this question you have to somehow be able to decide which objects are natural for Car to know about.
In this case, Washer is really a relation between all objects involved in washing a car. Washer is an algorithm abstraction represented by a class. This could easily be a function in a functional language, though in my mind classes and functions are pretty much the same thing.
You could make the wash method abstract in which case you would derive a class like CarWasher from Car that contained the same relation as Washer, using ISA instead of HASA. I might argue this approach has advantages from a program discovery perspective. By looking at Car I immediately know Car has the ability to wash itself, yet the implementation is left to others. This would set me looking for classes that have implemented wash. This makes the intent clear in the code. From looking at Car you have no idea that Washer exists.
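The ISA variant might look something like this sketch (Python used here for brevity rather than C++; the class names come from the discussion above, the wash steps are purely illustrative):

```python
from abc import ABC, abstractmethod

class Car(ABC):
    # Reading Car immediately tells you a Car can be washed;
    # the implementation is left to derived classes.
    @abstractmethod
    def wash(self) -> str: ...

class CarWasher(Car):
    # The derived class holds the same relation Washer would:
    # it knows how washing actually works.
    def wash(self) -> str:
        return "soap, rinse, dry"

print(CarWasher().wash())
```

The abstract method plays the role a C++ pure virtual would: the capability is declared in the base, so intent is discoverable from Car itself.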
The downside of this approach, in C++ at least, is that extension always requires changing the base Car class, which isn't always possible or desirable. So you often end up with a mishmash of ways of extending Car. Some extensions are made through the base class and some are made by adding new classes.
Again, these are largely pragmatic concerns. The real question is: should a Car know how to wash itself?
One answer is no because an object is characterized by identity whose behaviours arise from relations with other objects.
This is the answer that is correct for me in the abstract. But classes don't exist in the abstract. Classes exist inside a program which is built to serve a purpose.
It is the program's purpose that defines the domain/relation/frame in which Car should be understood. We can't just say we all know what a car is and make decisions based on our folk understandings of car-ness. The program defines the idea of car-ness within its borders of reality.
Anything consistent with a program's purpose should be in Car. Anything else not strongly cohesive with that purpose should be put elsewhere. If this program were about car washing the wash code should be in Car, because then that would be essential to the nature of car-ness within the application.
If car washing was being added to a program where a Car already has a strong meaning then adding wash to Car would make Car confusing and more difficult to test. We often find as programs grow and change that our simple cohesive objects take on multiple personalities and the only way to keep those identities straight is to move each of them into their own class. The Car object itself at that point is mostly about identity, it is the relation that ties everything together, but may not have much state or behaviour itself.
But here the pragmatist in me escapes and I must say that when I look at the Car class source code I really want to be able to see what it can do. I don't want to have to guess or look at 1000 different libraries to know all the things I can do with a Car. This is where modern documentation tools come in handy. They will tell you all the classes that use Car so you have the ability to know what Car does without including it directly in the source.
So, in a more complex program where Car has several natures I would go with the Washer approach. In a program where Car has a single nature I would make wash part of the Car class.
This is why no two designs are ever the same. People look at all these issues and can honestly come to very different conclusions about the answers.
1:11:05 PM
Thursday, December 29, 2005
Time Boxed Versus Feature Boxed Releases
This is an interesting discussion you can find at: http://www.theserverside.com/news/thread.tss?thread_id=38287. It's an issue that comes up all the time inside a project and is ultimately the primary driver for project success or failure.
The key point of why time-boxes are more effective is:
To make a project successful you must change how every decision in a project is made. That's what time-boxes do. Time-boxed releases with a short time horizon put pressure on every decision made in a project. A time-box is like a black hole, sucking in everything that gets close. At every decision branch the shorter implementation strategy is selected, which results in features getting implemented in less time.
When implementing anything you always have a choice of how to implement it. You can choose a way that gets the job done with high quality in a shorter period of time. Or you can choose a way that you may prefer, but that will take longer. This pattern applies fractally, from the highest architectural decision to the lowest down-and-dirty programming implementation detail.
Software is the result of thousands of decisions that trade time for other qualities like elegance, quality, power, generality, familiarity, and excitement.
With a short time-box you are forced to make every decision with the time-box in mind. If you make 100 decisions every day and every one of them results in a shorter implementation time, you can see how over a succession of decisions you will implement more stories. Now multiply this same process by everyone in your project. The chances are better your project will meet its goals.
A feature-box has the opposite effect. Work expands to fill the time available. And with feature-boxes the amount of work expands in proportion to people's imaginations because there is no force constraining the process. A feature-box is not a box at all, it's like open space in all directions and everyone has warp drive.
Does a feature-box have to work this way? No, but that's the way I would bet. When faced with a decision in a feature-box scenario, what is your motivation? Your motivation isn't to the schedule, so you make choices that end up taking more time. These are not bad decisions. They just take more time. In the end, when you add up all these time increases, your schedule is burnt to a crisp.
A long time-box is really no different than a feature-box. You have so much time there's no pressure on you during the sprint, so you end up making decisions that result in longer implementations. It's the pressure on every decision that is the distinguishing feature between the time-box and the feature-box.
Doesn't this approach mean you get less? Less quality? Less future proofing? Less functionality? Just less?
Yes and no.
No in that every decision you make still has to meet the requirements of the story and be tested with unit and system tests. So you shouldn't have fewer features or lower quality.
Yes in that software is an iterative infinite game in which the decisions you make now impact the future. Locally optimizing now will not globally optimize. This is particularly noticeable with phase change types of features. If a feature just incrementally extends an existing feature then there is little risk. But when a feature really means your product is changing into something different then you can easily warp right into the heart of a burning sun.
Some examples of product phase changes are dealing with discontinuous levels of scale, adding fault tolerance, adding complex new algorithms, and integrating with other systems you have little control over. With these kinds of features you need to think ahead.
But even with phase change features the time-box is your friend. You can go crazy implementing a risky feature. We've all done it. With a time-box you are more likely to keep your scope limited by just trying to get something to work.
If you can use time-boxing as a black hole, driving every decision towards maximal efficiency, then you are more likely to deliver a working project on time.
8:18:41 AM
Saturday, November 26, 2005
A Pretty Good Configuration System Even Socrates Might Like
How do you configure your system? I've worked on configuration many many times, but I've never done it right. I have come closer this time, but it's not perfect yet.
Ok, so what?
Here's what you'll get if you open up the prize hidden inside:
1. You get a highly configurable system where every stakeholder in the system gets a chance to configure the system in a rational, well defined order using a powerful general mechanism. You can configure the same thing from the command line, from the environment, from a configuration file, or over a command port at runtime.
2. You have an automatic way to provide an interactive interface to every library in an application over multiple interfaces (telnet, http, command line, etc). This means you can configure anything in an application at runtime and you can access all of an application's "debug" interfaces at runtime.
3. You get a system that is describable at runtime by applications in their own code. I like this better than building meta systems from configuration files. It's simple enough that programmers might just consider doing it because the benefits are many and the costs are few. The tricks are mostly in how everything fits and works together, not in the amount of programmer work that is needed.
Just what is configuration? Anything in a program that can be set or got.
Configuration is all the really sexy stuff in programming like setting log levels for different parts of an application, setting port numbers, setting host names, setting max queue sizes, max error thresholds, turning on and off features, injecting faults, getting application metrics, running tests, invoking functions, and getting values from different layers of a program so you can tell what is going on.
This kind of stuff is certainly boring to non-programmers, but it can be very interesting to programmers. Mostly what's of interest is how badly configuration is usually done. Configuration doesn't get much thought so it ends up being a confusing mix of poorly thought out hacks.
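As a sketch of how the layered lookup in point 1 could work (the function and key names here are hypothetical, and the precedence order shown is one reasonable choice, not a prescription):

```python
import os

def get_setting(key, cli_args, env=None, file_conf=None, default=None):
    """Resolve one setting with a fixed, well defined precedence:
    command line > environment > config file > built-in default."""
    env = os.environ if env is None else env
    if key in cli_args:
        return cli_args[key]
    # Map a dotted key like "log.level" to an env var like LOG_LEVEL.
    env_key = key.upper().replace(".", "_")
    if env_key in env:
        return env[env_key]
    if file_conf and key in file_conf:
        return file_conf[key]
    return default

# The command line wins over the config file:
print(get_setting("log.level", {"log.level": "debug"},
                  env={}, file_conf={"log.level": "info"}))  # -> debug
```

Every stakeholder gets a layer: operators use the file, admins use the environment, and someone debugging overrides everything from the command line or a runtime command port.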
Many times I have witnessed conversations like this:
Socrates: We had a sev 1 bug filed by Giant Corp. Their Whatsit wasn't created when the Whosits happened. The system is still up and running on host Euthyphro. What happened and how can we fix it?
Phaedo : How should I know?
Socrates: Can't we go look?
Phaedo : Look at what?
Socrates: You know, the logs and the programs and see what happened?
Phaedo : No, we don't have anything like that. Just give us the complete state of the system and exactly what happened and we'll recreate the problem. We'll add some debug, put the processes in the debugger, and we'll figure out what happened. It should take about a week to get everything ready and completed. We don't have a test system as big as the real system so it will take some setup time and time to get all the hardware.
Socrates: But we have the system up and running now, can't we look and try to recreate the problem right now?
Phaedo : No, like I said, we can't do that sort of thing.
Socrates: How are we supposed to give you the complete state of the system when you don't keep that sort of information around?
Phaedo : Uh, well, just give us what you got and we'll do our best.
Socrates: This is a really important customer. We need to move fast. What can we do to fix this process in the future?
Phaedo : There's nothing broke! What do you expect us to do? All that logging and other stuff slows the program down, takes up valuable programmer time, and adds complexity.
Socrates: You can't figure how to make it work? We are going to lose a big customer here if we can't get to a root cause by the close of business today.
Phaedo : Well mister smarty pants, what would you do?
Socrates: Nothing, I have a feeling I won't be at this job much longer anyway.
For the rest of the story take a look at http://www.possibility.com/epowiki/Wiki.jsp?page=PrettyGoodConfigurationSystem
9:44:47 PM
Friday, November 18, 2005
Is your software really that hard to support?
Companies are a hive of different development activities. Occasionally you hear about a project and you think hey, we could use that. You contact the developers and they say sure, no problem, but talk to our manager first. It's with the manager where you splat against a stone wall. And the wall has graffitied on it "we can't support that." Or "you can fork the code but it's yours after that."
The manager is just playing it safe. Nobody is paying them to make a general software package everyone can use. That would take more people, more time, and divert them away from the project that they really got funded for and against which they will be judged a success or failure.
But wait, the company is paying for development, is it too much to ask for people to work together so we can leverage synergies and other managerial aphorisms? Apparently it is. Locally maximizing a group's outcome does not lead to global company maximization.
The manager is not being irrational. The manager wants to minimize risk and telling people to buzz off is an easy way to minimize risk.
Does it really take so much effort to internally support software? And if it does, doesn't that indicate there's a problem? The easiest way around a problem is to deny, deny, deny. But what's the big deal? What can go wrong?
Someone Has a Question
Yes, lots of questions can "waste" time locally. There are ways around this however.
* Create a community. Try an email list so everyone using a program can help with technical support. Create a wiki so common topics can be covered.
* Document all answers to questions. Every time a question is answered try to improve the system so it doesn't need to be asked. If it is asked then people can just say RTFM and here's the link. That takes about 30 seconds.
Someone Has a Bug
If there's a bug, fix it, add it to your tests, and say thank you for helping make my software better. If your community is active maybe someone might try and fix it. Often people will try and fix a bug if they are getting enough support in their efforts. A little encouragement goes a long way. As does good documentation, clear program structure, and a comprehensive test suite.
Someone Makes a Feature Request
A user loves your software, but they need just one feature to make it perfect. This is a time sink waiting to happen. But if you say no, what will happen? They will move on to something else.
Users will create a new group, buy a new package, and create a new process. That's why when you go into a company you'll see duplicate systems all over the place.
People don't wait to solve problems because it slows them down. So solving people's problems promptly and well at a local level is really a global maximization strategy.
You can handle feature requests by:
1. Talk with the people who need a feature. Don't blow them off. See if you can work their feature into your release schedule. If you can tell the user when the feature will be implemented then they may be satisfied to wait.
2. Maybe the feature is easy and you can knock it out quick. This is ideal. How can you make it easier to get new features in quickly?
3. Maybe people in the support community will work on the feature.
4. Maybe the feature won't ever happen and you can tell the user they will need to find something else.
5. Maybe you can have a separate team whose job is to add new features to projects in maintenance mode.
5:33:32 AM
Wednesday, October 26, 2005
Programmers: Zombies or Super Heroes?
Ron Jeffries wrote on scrumdevelopment@yahoogroups.com :
I suppose most boys want to be Superman or Spiderman when they are
little. Fact is, we'll have more impact on the world by working with
people than by working alone. The sooner the young genius figures
that out, the better.
Another
way to understand the super hero myth is that each of us is capable of
being great, but the normal pressures of tribal life encourage us to
hide our capabilities so we won't stand out.
Standing out is a sure
way to get hammered down. So the super hero mythos is a reminder to us
all that we are capable of reaching down, overcoming obstacles, and
doing great things.
That the super hero image has become iconic speaks to how much we need
to balance the message of conformity we get on a daily basis.
Personally
I want Einstein thinking the big thoughts. And I want 1000s of teams
working all around the world on cancer too. We can have the lone
genius, we can have the Manhattan project, we can have the Justice
League, we can have the team member that does great things but may not
fit in 100%. On a system wide basis the power of progress is a function
of the interplay of them all.
The simple rule that teams are most important is too simple.
In a group I enjoy spending the most time on that chaotic border of
individual and team creativity. I hate being in a group where people
are passive and wait to take direction. That's what happens when people
forget they too can be superheroes.
I also hate being on a team where everyone thinks everyone else is an
idiot. That's what happens when people are like Lex Luthor and need to
control everything to feel safe.
Fortunately we don't need to go to either extreme.
I think Joel has
an interesting take on this subject in his High Notes article at
http://joelonsoftware.com/articles/HighNotes.html .
7:36:38 AM
Monday, October 24, 2005
To Really Improve Your System You Can't Refactor
I've noticed I spread mistakes pretty evenly throughout my code. So
when I create an interface it has about an equal probability of needing
improvement as any other code I write.
I firmly believe in constantly improving my code and I see a difference
in quality because of that. Seemingly this would put me firmly in the
refactoring camp. But it doesn't. Why? Refactoring says you can't break
interfaces. That puts me in an awkward position.
According to the rules of refactoring I can't touch over 50% of my
code, when that 50% needs just as much improvement as any of the other
code I write. But I want to keep improving my system. That often means
removing classes because I have figured out a better way. I may need to
change the signature of a method because I have figured out a better
way. I may need to drop a method because I have figured out a better
way. I may need to make a lot of changes of all types to keep improving
the system.
Because I want to constantly improve all my code I can't refactor. I am
more extreme than that. If I see an improvement I make it. I don't care
if it breaks interfaces or not.
I hear we shouldn't be afraid of changing software. Isn't the whole
idea of story based iterative development to evolve software? So why
shouldn't I break interfaces too?
No reason. Unit tests keep you safe from interface changes too.
Certainly an interface change may break a particular layer of software.
But interface changes should just impact that one layer. All your tests
above and below that layer should remain stable. If they don't then
that's probably a smell that should be taken care of anyway.
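A toy sketch of that layering argument (all names here are hypothetical): the lower layer's interface changes, only the middle layer that calls it directly gets edited, and the test pinning the upper layer passes unchanged.

```python
# Lower layer: its interface just changed from lookup(key)
# to lookup(key, default) -- an "interface break."
def lookup(key, default=None):
    return {"rate": 0.08}.get(key, default)

# Middle layer: the only code that had to change to absorb the break.
def tax(amount):
    return amount * lookup("rate", 0.0)

# Upper layer: untouched by the interface change below it.
def total(amount):
    return round(amount + tax(amount), 2)

# The upper layer's test stays green across the interface change.
assert total(100) == 108.0
```

If the upper layer's test had broken too, that would be the smell: the layers are coupled more tightly than their interfaces admit.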
I've seen more time wasted because of the fear of changing code than I
can shake a stick at. Interfaces aren't in any more sacred a position
than the code behind the interfaces.
So be extreme. Don't refactor. Improve all of your system all of the time.
6:13:48 PM
Wednesday, September 21, 2005
Why Does Google Do What Google Does?
I have had some experience with building real-time behavioural
targeting systems, so I think I've developed a new appreciation for the
grand
strategy behind google's seemingly disconnected moves.
They are building a real time customer profile based on your real identity.
This is a very valuable commodity as it gives google the ability to
sell high value campaigns to advertisers.
This may or may not seem obvious to you, but it struck me in a tetris
like way how all the bricks fit together if you are trying to build up
a real time customer categorization system that can be used across all
properties. Other companies might do the same thing using a portfolio
approach. But google has taken a less direct Sun Tzu Art of War
approach.
If you notice google doesn't create word processors or accounting
programs. Almost everything they do is about getting content and
getting you to provide an identity to them.
* In google's wifi network they will track what you see. And they
know where you are at so they can do local ad targeting and build up a
profile based on where you travel. Many people don't know that location
based services are built into wifi networks.
* If you use their
local search they track everything important enough for you to keep.
* When you use their future micro
payment system they'll know what you buy.
* If you use their proxy server they
can tell what you see.
* If you use their search they can tell what you see both by
your key words and what you click on.
* If you store your email with them
they can tell what you see.
* If you use their VOIP network they can tell
what you see, who you talk to, and who talks to you.
* If you use their blog they can tell what you see.
* If you use their map they can tell where you live and what you are interested in.
* And finally, when you use their upcoming free high speed network they will have created something so sticky you'll rarely be out of sight of their vast army of digital observers. They will be able to do a complete analysis of all your traffic patterns and content.
Of what use is all this you may ask? They can't show ads everywhere, so
what's the point? True, they may not be able to show ads everywhere,
but they can learn enough about you from all their different sources so
that in the mediums where they can present you ads those ads will be
amazingly well targeted. More targeted ads are worth a lot more to
advertisers. And nobody will be able to build a profile of you better
than google.
By observing what you actually do and say, google will build a much
better profile than they could get by pummeling you with 1000
detailed surveys. What you do and say during your daily life is real,
people fib when filling out surveys or profiles.
If you mention Hawaii on a voice call they can in real time adjust your
profile to note that you may be interested in travel, car rentals, etc. If you
use their map to look at a lot of areas centered around a certain
location they could deduce that you probably live there, which opens up
a lot of local advertising opportunities. If people from universities
read your blog that says something about you. If your local disk
contains a lot of Country Western music that says a lot about you. And
so on...
Add all this up together and they can build a remarkably accurate profile of your interests as they happen.
If you think you can hide behind fake names and IP addresses, you can't.
For many of the more advanced services they'll know who you are. Even if
they don't they can do a pretty good job guessing by doing a content
analysis of your word usage and sentence structure combined with a
small network analysis of who you talk to and who talks to you.
From the outside it's easy to underestimate how much companies are
willing to pay for this kind of content and ad targeting. But they are
willing to pay a lot, so all google's efforts have a big payoff.
I am not going to say the world is going to end or anything. Everything
you do with google is voluntary. I am just admiring the view.
3:48:00 PM
Thursday, September 15, 2005
The Only Real Agile Customer is a Paying Customer
Projects are full of Sham Customers. If I use something you are
building on a project we'll all subscribe to the collective delusion
that I am your "customer." But am I really a customer? Am I a real
customer?
No I am not. Why?
Because I am only a real customer when I am paying you to do something
and you really want my money. Otherwise you have no incentive to do
what I want. But wait, why is money even part of the equation? I am the
customer, doesn't that mean you'll do what I want as long as it makes
sense and is reasonable?
Oh no.
If you aren't dependent on me in some direct way--not some metaphysical
way like we are all in this together, or we are all working for the
same company, we are all working to the same purpose--then you can tell
me to get stuffed and there's nothing I can do about it. Nothing!
Let's say I need a couple of tables in a database. You are the
database group and I have to go through you for database stuff. For
organizational reasons I just can't make my own database. I am using a
database example, but these relationships are everywhere in companies.
So I am your customer because you are implementing a service for me. If
I were a real customer then I could make sure you would do what I need
because I wouldn't pay you otherwise. If you didn't want to provide me
a service then I would go elsewhere.
Yet in this scenario I am a Sham Customer. I am forced to go through
you and I am nothing but a burden you wish would go away. If you don't
want to make my tables what can I do? Whine to management. Try
persuasive argumentation. Plead. Get pissed. I'll often cycle through
all these strategies hoping something sticks. Most often nothing sticks
and I am stuck.
If you don't help me after a few rounds of
whining-arguing-pleading-pissing I enter a condition known as
learned
helplessness. Learned helplessness is a state of depression you enter
when you realize nothing you do will make a difference. You feel
powerless so you just give up. You don't even try anymore because it
doesn't do any good. Why try if nothing ever happens? Animals when
exposed to continual shocks they can't escape will simply stop trying
after a while. And when an opportunity for escape presents itself
later, the animals won't escape. They have learned helplessness.
Corporations are stocked full of people in a learned helpless state.
The structures and incentives in corporations often seem specifically
designed to frustrate people and send them into a give-up mode of
living.
How do you prevent this scenario and keep people happy and productive? I don't know :-)
One idea is to keep groups small and independent so they can do
everything they need to do themselves. This creates more work but is
probably more productive in the end.
Or how about if a customer would control the part of the budget the
service provider is getting to implement the service? This might
rebalance the relationships between groups. I realize it's not going to
happen, given how budgeting occurs, but something has to be done.
I hate feeling helpless.
8:27:14 PM
Monday, September 05, 2005
Web 2.0 Makes Microsoft the New Sun
At one time Sun systems were used on most midlevel projects. Sun was just the standard infrastructure. We
developed software and hardware on Suns. When we bid on a distributed
system that didn't require real-time processing, we would bid Sun
systems. Sun was like the center of the solar system. We all revolved
around it.
That slowly changed. Microsoft won the client and then went nova in
the server space too. Sun dimmed and Microsoft ascended for many well
known reasons.
What's interesting is to see this pattern repeated, not by another
computer system, but by Web 2.0, a layer built agnostically on
top of any computer system. It's a well known prediction that we'll start seeing
many more services run on a pure Web 2.0 platform. With Web 2.0 which OS you run will become about as
important as which RAM chip vendor you use. Web 2.0 has Sunnified Microsoft.
MS traditionally puts a lot of focus on developers. Will that matter when anyone can deliver web apps without MS's
help on a zero cost infrastructure?
Like Sun, MS won't be able to do much about it. Vista is just rearranging deck chairs on the Titanic. History
repeats itself again.
9:14:40 PM
Saturday, August 27, 2005
There's More than One Kind of Scalability
Take a look at:
* http://www.possibility.com/epowiki/Wiki.jsp?page=Scalability
* http://www.possibility.com/epowiki/Wiki.jsp?page=ScalabilitySolutions
Scalability is the ability to keep solving a problem as the size
of the problem increases.
Scale is measured relative to your requirements. As long as you can
scale enough to solve your problem then you have scale. If you can handle
the number of objects and events required for your application then
you can scale. It doesn't really matter what the numbers are.
Scaling often creates a difference in kind for potential solutions.
The solution you need to handle a small problem is not the same as the
one you need to handle a large problem. If you incrementally try to
evolve one into the other you can be in for a rude surprise, because
it won't work as you pass through different points of discontinuity.
Scale is not language or framework specific. It is a matter of approach
and design.
The Two Classes of How to Handle Scalability
I've come to think there are two classes of scalability problems:
* Scalability under fixed resources.
* Scalability under expandable resources.
The two different classes lead to solutions using completely different
approaches. That's not to say they can't be mixed, but it's helpful
to consider them separately when considering a design.
* Scalability Under Fixed Resources
In this class of scalability problem you have a fixed set of resources
yet you have to deal with ever increasing loads.
For example, if you are an embedded system like a router or switch, you
are not likely ever to get more CPU, more RAM, more disk, or a faster network.
Yet you will be asked to handle:
* more and more functionality in new upgrade images
* more and more load from clients
The techniques for dealing with loads in this scenario are far different
than load when you can expand your resources.
* Scalability Under Expandable Resources
In this class of scalability problems you have the ability to add
more resources to handle more work. In general this is called horizontal scaling.
The new era of cheap yet powerful computers has made horizontal scaling possible for
virtually anyone. Many companies can afford
to keep a grid of hundreds of machines to solve problems.
This is the approach google has taken to handle their search systems,
for example, and it's a very different approach from a fixed resource
approach. In a fixed resource approach we would be squeezing every
cycle of performance out of the resources; we would be spending a lot of
time on developing new approaches and tuning existing code to fit
the exact problem.
When resources are available, and your approach is right, you can
just add more machines. You start to figure out ways to solve your problem
assuming horizontal scaling.
In general this area is called data-parallel algorithms.
For example, terrascale (http://terrascale.net/) has an amazing
storage grid called Terragrid that allows you to scale up
by incrementally adding commodity machines. With the availability
of 10Gb ethernet interfaces these approaches become quite powerful.
Of course, your approach has to be right. If you select an architecture with
single points of serialization then you won't be able to scale by
adding more machines.
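To make the data parallel idea concrete, here is a toy sketch of my own (not taken from any of the systems mentioned) where threads stand in for the machines of a grid: the work is split into independent shards, each shard is summed by its own worker, and the partial results are combined at the end. Adding workers adds capacity, as long as no step serializes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.LongStream;

// Toy data-parallel sum: shards are independent, so work scales out.
public class ParallelSum {
    public static long sum(long[] data, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            int shard = (data.length + workers - 1) / workers;
            List<Future<Long>> parts = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int lo = w * shard;
                final int hi = Math.min(data.length, lo + shard);
                // Each worker sums its own shard; no shared state.
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> p : parts) total += p.get(); // combine step
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long[] data = LongStream.rangeClosed(1, 1000).toArray();
        System.out.println(sum(data, 4)); // prints 500500
    }
}
```

The combine step is the only serialization point; keep it cheap relative to the shard work and throughput grows with the number of workers.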
* Examples of Dealing with Scale
Here are a few examples of how different people have dealt with scale:
* http://www.danga.com/words/2005_mysqlcon/mysql-slides-2005.pdf - LiveJournal's Backend
* http://labs.google.com/papers/ - Google papers on their file system and cluster
* http://mysqluc.com/presentations/mysql05/benzinger_michael.pdf - Multi-Terabyte Data Warehouse and MySQL
* http://mysqluc.com/presentations/mysql05/dembecki_bruce.pdf - Lessons from an Interactive Environment
* http://terrascale.net/ - Terragrid
* http://www.possibility.com/epowiki/Edit.jsp?page=DesigningOfALargeScaleStreamingEventSystem
12:01:46 PM
Tuesday, August 23, 2005
Use RSS to Integrate Intra Company Data Sources
Companies these days are on internet time. We need to get stuff done
fast. There's no time to sit around and go through approval gates,
endless design reviews, and hardware requisition committees. By
that time our competitor will have already released a product and we
are toast. Scavenge whatever skills and tools your group can find and
get the job done.
That's why any company that has been around for a while is a hive of
different data sources. One group will create a tool to handle
WhattyaCallits. Another group will create a tool to handle
ThingyMaBobs. Every problem creates a new source of data.
You think I am going to say that is bad, don't you? You think I am
going to say everyone should have all their data in a well managed
shared database so all the data is normalized into squeaky cleanness.
Well I am not.
Business is about solving problems. If everyone had to wait for the
corporate data store to open up for business before they could do
anything then nothing would ever get done. For the most part central
databases are about turning people away, not making things happen. So
people with a problem do what they need to do to get the job done. Someone
in their group may use Access or they may build a web site to solve their
problem. Or they may hire a contractor to do the same thing. And yes,
when that person leaves no one will have any idea how to extend the
system or fix bugs. But is that worse than not getting the job done? I
don't think so.
Yet we would still like to get the data out for use by other systems.
Islands of data eventually want to form land bridges. The marketing
data over in department X needs to be joined somehow with the web data
and the sales data from department Y.
Ideally we would like a service interface to each data store so we
could access the data programmatically. Well that's not going to happen.
The people who create one off projects aren't going to create a SOAP
based access layer to their backend.
What to do?
I think RSS might be a way to hook together different data islands. RSS
tools are becoming very common and easy to use now. If every data
source became an RSS feed then programs could hook up to the RSS data
feeds to resync and learn about changes. Obviously this isn't low
latency real-time kind of stuff, but it doesn't really need to be for
most problem spaces. I'm pretty much just going to suck up the data
into my own systems anyway. I don't need the data to be actionable in
its RSS form. I just need some sort of XML namespace with a data
format that I can parse and use.
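As a sketch of what a consumer of such a feed might look like (the feed contents and the FeedSync name are invented for illustration), here is how little code it takes to pull the items out of an RSS 2.0 document with the stock JDK XML parser:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Toy consumer for a department's hypothetical "WhattyaCallits" feed.
// The producing tool only has to emit RSS; the consumer parses the
// items and resyncs them into its own store.
public class FeedSync {
    static final String FEED =
        "<?xml version='1.0'?>" +
        "<rss version='2.0'><channel><title>WhattyaCallits</title>" +
        "<item><title>Widget 42 updated</title><guid>42</guid></item>" +
        "<item><title>Widget 43 added</title><guid>43</guid></item>" +
        "</channel></rss>";

    public static List<String> itemTitles(String rssXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(rssXml.getBytes(StandardCharsets.UTF_8)));
        NodeList items = doc.getElementsByTagName("item");
        List<String> titles = new ArrayList<>();
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            titles.add(item.getElementsByTagName("title").item(0).getTextContent());
        }
        return titles;
    }

    public static void main(String[] args) throws Exception {
        // Each title would trigger a resync of that record in the local store.
        for (String t : itemTitles(FEED)) System.out.println(t);
    }
}
```

The point is the bar for the producer: any Access database or one-off web site can dump rows as items like these, and every consumer in the company can read them with a stock parser.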
If all the data sources are connected by RSS feeds then we've achieved
an uncoupling of systems. Different parts of a company can feel free to
develop the systems they feel will get the job done for them, yet the
data won't be locked away either. Different parts of a company can
evolve at the speed they need to in order to solve the problems they
are facing without the corporate data bottleneck.
It's not an ideal solution, I understand. But it just might work.
1:31:59 PM
Monday, August 22, 2005
Doing the Laundry Agile Style
Believe it or not, there's an agile style to doing the laundry and a BDUF style to doing the laundry.
My beautiful wife, Linda, does laundry BDUF style. The laundry piles up
until it just has to be done. Then she sorts. Every load is
pre-sorted into its own pile before it can be washed. If you look at
our laundry room it looks like a field of haystacks ready to be
baled. The piles are formed by some set of rules that I have never
been able to master, after 20 years of trying. When I try to do the
laundry her way I can never quite get it right. So everything is well
organized. We have lots of nice piles optimized into the correct size
for our washer. There are no left over clothes that don't get washed.
We don't have small loads, which I am told are a waste, even with our
water efficient washer.
What could go wrong?
There are lots of piles. So many piles that they can't possibly get
done in one night or even one day. Now let's add to that the remarkable
ability to be distracted and very few piles actually ever get washed.
So the piles just hang around, sometimes for days.
What is happening while the piles go unwashed? We use and dirty more
clothes. Part of the problem is we may have too many clothes, but it
doesn't seem like it. Anyway, with new clothes being added all the time
the piles no longer make sense. If we just added the clothes to
existing piles then they wouldn't fit in the washer anymore.
The piles need to be rebalanced. Wouldn't you just split the large
piles in half? No. That would be wasteful. We want the washer to be
maximally full because that is the most efficient way. Just splitting
the piles would leave us with too many small piles. To make larger
better piles we have to rethink what clothes can be washed together.
Maybe the jeans and the towels can be washed together after all. Like I
said, I've never been able to figure out the rules, let alone when the
rules can be ignored.
Which pile gets washed first? The pile closest to the washer of course!
And what's happening while all this is going on? We are dirtying more
clothes of course! It may seem like the process never ends, but by some
miracle it does. As Linda knows how to properly fold clothes, she wins
hands down, but I get the job done.
It's a given that I do the laundry wrong, but here's how I do it. I leave
the laundry all in one big pile. I do the laundry as soon as I can so
it doesn't pile up. It doesn't take much time to do one load after all,
it's doing lots of loads that becomes the major chore. I wash the
clothes I need first. If I need socks shouldn't I do them first? Is
that so wrong? Do I sort into piles? No way. I put clothes directly
from the big pile into the washer. I do the best job I can at getting a
full load of all the right kind of clothes.
But sometimes I miss a few pieces of clothing because they were hiding
under a towel or I just overlooked them. When you make all the piles up
front you don't miss clothes. They are all accounted for. With my
approach I have stragglers. I'll have a few socks, a shirt, or a couple
of towels that don't get washed. I figure what the heck. They'll get
washed next time. What matters is I got most of the stuff I need washed
now, even though it's not perfect.
I am a little bit agile. Linda is a little bit waterfall. What really matters is we make beautiful music together.
8:44:26 PM
Sunday, August 21, 2005
The Value of Adapter Philosophies for Iterative Change
One saying I really like is: Only Say Things That Can be Heard.
What this means is that everyone and every organization is in a
different place in their lives. If you show up at a company and tell
them everything they do stinks then you won't be heard. You will just
make enemies and nothing will get done as everyone sharpens their fangs
for the next fight. You get the same result when "talking" with your
family too :-) Somehow you must find a way to talk to someone in a way
that makes sense to them.
I must admit this is a hard bit of wisdom for me to take out in the
real world. I tend to be too direct at times. I remember one particular
meeting where I realized just how destructive the direct, only-the-facts-matter
approach can be. In this meeting from hell this one particular
ubergeek was coming hard and fast. He was questioning everything. He
was disagreeing with everything. He was saying everything I wanted to
do wasn't going to work because it didn't work when he did it at
company X. He wasn't giving an inch on anything. And he was doing it
all with that infuriating "I am just trying to understand" attitude. In
short, it was like looking at me at my worst.
Now I don't much like looking in the mirror anyway, but when that's
what I see staring back I want to banish all mirrors, just like they
did in the movie The Skeleton Key.
At that time I made a solid pledge to change my ways and try to Only
Say Things That Can be Heard. Am I 100% successful? Unfortunately, no.
But I am much better now. Through my career I have consistently
witnessed this same movie replayed, only with different actors and a
new sound track. Software is only partly technical. Building software
is mostly social.
I was reminded of this again when an issue came up on the Scrum mailing list about trying to make the
Capability Maturity Model for Software (CMM) more agile. A team from
Microsoft had done a lot of good work on trying to make CMM more agile.
CMM is very popular, especially at large organizations, and is
decidedly anti-agile in practice. If a way could be found to make CMM
more agile, then that would be a good thing, or would it? Someone asked
why it mattered. Let agile people go on their own way. Sure, these
companies would be better off if they were more agile, but since
they've chosen to use CMM they'll get what they deserve. Who cares?
This reminded me of Only Say Things That Can be Heard.
In software design there's a pattern called the adapter pattern. The
adapter pattern converts the interface of one class into another
interface clients expect. It brings unlike things together through an
intermediate wrapper.
One good example is the WindowAdapter class in the Java API. WindowAdapter implements the
WindowListener interface, which has seven methods. When you implement
WindowListener directly, you have to implement all seven methods, even if you
only want to use one of them. That's where the WindowAdapter class
comes in. It implements all the methods with an empty default behaviour so you
only have to override the methods you need.
public interface WindowListener {
    public void windowClosed(WindowEvent e);
    public void windowOpened(WindowEvent e);
    public void windowIconified(WindowEvent e);
    public void windowDeiconified(WindowEvent e);
    public void windowActivated(WindowEvent e);
    public void windowDeactivated(WindowEvent e);
    public void windowClosing(WindowEvent e);
}

public class WindowAdapter implements WindowListener {
    public void windowClosed(WindowEvent e) {}
    public void windowOpened(WindowEvent e) {}
    public void windowIconified(WindowEvent e) {}
    public void windowDeiconified(WindowEvent e) {}
    public void windowActivated(WindowEvent e) {}
    public void windowDeactivated(WindowEvent e) {}
    public void windowClosing(WindowEvent e) {}
}
WindowAdapter can be used anywhere WindowListener can be used, but you only have to pay attention to the parts you need to "hear" at the time. It makes listener classes easier and more convenient to create. It makes the original listener class acceptable to a broader audience.
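The payoff in client code looks something like the following self-contained miniature (my own simplified stand-in for the AWT classes, using three callbacks instead of seven): the adapter defaults everything, and a client overrides only the one method it cares about.

```java
// Simplified stand-in for WindowListener: the interface demands three
// callbacks, the adapter defaults them all.
interface Listener {
    void opened();
    void closed();
    void activated();
}

class ListenerAdapter implements Listener {
    public void opened() {}
    public void closed() {}
    public void activated() {}
}

public class AdapterDemo {
    public static void main(String[] args) {
        // Only "closed" is interesting to this client; the adapter
        // silently absorbs the other callbacks.
        Listener onClose = new ListenerAdapter() {
            @Override public void closed() { System.out.println("saving state"); }
        };
        onClose.opened();   // no-op from the adapter
        onClose.closed();   // prints: saving state
    }
}
```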
You may think the WindowAdapter class is silly and useless, but it helps people accept the more complicated WindowListener interface. Later, when they are ready, maybe they'll need the more complete interface. Or maybe they never will.
We don't just have adapter classes, we also have adapter philosophies. What the team at Microsoft is doing is creating an adapter philosophy that makes the ideas of agile acceptable within CMM. Without this effort there would never be a change. People don't change when you attack them or give them something completely and totally alien to their experience.
We see adapter philosophies in history too. One interesting example is
the spread of Buddhism from India to China. Buddhism, which started in
India, found it very difficult to spread into China because traditional
Chinese thought was so different. Buddhism believes in reincarnation,
for example, while traditional Chinese beliefs emphasize ancestor
worship. There are many other similar examples.
Then how did Buddhism spread into China? What happened was Taoism
became an adapter philosophy between the two sides. Taoism is rooted in
the oldest belief systems of China, so it was well known and well
accepted by the Chinese. Because Buddhism could be partially understood
using Taoist ideas, Buddhism got the toehold it needed. Eventually the
Buddhist ideas could be accepted and understood on their own without
the Taoist wrapper, but it took time. Radical change on a mass scale
was unlikely to happen, especially given the non-violent ethos of
Buddhists. Through the use of an adapter philosophy it was possible to
change iteratively, one small step at a time.
The opposite happened in Japan. Shinto, which was once the state
religion of Japan, was well aligned with the Chan Buddhism that made
its way into Japan. Chan Buddhism found a ready audience for what later
became Zen. Interestingly, the Chinese strains of Buddhism remained
characteristically Chinese. The same in Japan, Nepal, and Thailand, and
everywhere else ideas spread. Adoption is rarely total, it's more a
process of mutual adaptation, like breaking in a new shoe. Both your
feet and the shoes give a little. If the shoes never become
comfortable, you end up finding a new pair.
It just might be that the combination of CMM and agile is better for
many solution spaces than either alone would be.
If you are interested in agile ways spreading, it may be a good idea
to encourage adapter philosophies. People often need a partial metaphor
which supports their working simultaneously in their old world and
their new world. Not everyone makes the jump directly into becoming a
true believer, nor would you want them to.
The fact that most Buddhists were later purged from China should not be
a concern :-)
3:48:47 PM
Sunday, August 14, 2005
Ritual as the Basis for Project Harmony in a "Means" Vs "Ends" World
"In rites at large, it is always better to be too simple rather than too lavish.
In funeral rites, it is more important to have the real sentiment of sorrow
than minute attention to observances."
-- Confucius in the Analects
I was struck
recently by the similarity of the role of ritual in Confucian
philosophy and the role of ritual in Agile projects. The system of
thought put forth by Confucius tries to create a world where people can
live together in harmony without resorting to uncivilized
behaviour. You think I am stretching here? We'll see...
Confucius
was motivated by his times. He lived in what was called the Warring
States Period. Battling warlords made life difficult for your average
person. For simplicity, this era will play the waterfall methodology
"big process up front" role in my analogy :-)
Deeply concerned about creating a better world in which to live,
Confucius proposed ritual as a key civilizing influence. So does Agile.
What you say? We are programmers, we don't need no stinkin' rituals! Hold on now. Don't get your editor in undo mode.
Let's
consider just what ritual is: ritual can be thought of as respect for
the deepest sense of the proper way of doing things.
Isn't that a fine Agile principle?
On
a project you could consider ritual the pattern of
disciplined behavior which governs each moment in the life of a project.
Now does that sound bad?
Without
the bonding of ritual there isn't order. Why? Because people learn
proper relationships through ritual. Ritual promotes harmony by showing
everyone how they should act and interact without using a heavy hand.
He who governs best governs least. Using ritual everything just sort of
happens and fits together. In a world where everyone is looking to kill
each other you can see how you might want ritual to coordinate social
situations. Modern software projects aren't so different at their base.
Ritual is all around us.
Shaking hands is a ritual. You know when it is done badly or done
without enthusiasm. Shaking hands is automatic in social situations.
Someone who doesn't shake hands would seem rude. Without the ritual of
shaking hands however we would be confused about how to greet each
other. It's not the shaking of hands that is important; what is
important is that we know how to greet each other in a harmonious way.
Tipping
is a ritual. Parking is a ritual. Driving in traffic is governed by
ritual. Saying god bless you to a sneeze is part of ritual. Violating
most of these rituals is not the same as breaking a law. You don't have
to tip. You don't have to say god bless you when someone sneezes.
You don't have to shake hands. You can sneak into a parking space even
if someone was there before you. You can cut someone off in traffic.
But when you violate these rituals you are making it harder for people
to live and work together.
Yet slavishly following an etiquette
book would seem weird as well. Ritual is the wrapper on chaos. You can
go cowboy in your process. You can go bondage and discipline too.
Ritual provides a middle way by showing people how to act without
having to have a centralized coordinator. This is the Agile path as well.
Ritual
is not just the unthinking adherence to form we see on so many
projects. Confucius is quite clear that for rituals to work they must
come from genuine feeling and awareness. Only then will they work.
Social engineers are the masters of inauthentic ritual. So are
politicians. Both can cause enormous damage.
Rituals are all over the place in Agile. Look at the many practices of
XP. Scrum, for example, prescribes morning meetings: how long they
should be, who should attend, who can talk, and what questions are to
be asked. And so on. I won't go into details, they are there for
everyone to read.
If a project adopted both Scrum and XP you would see how most of your
working life would become governed by ritual. That's not a bad thing.
To the contrary, it's the backbone and the strength of the approaches.
The rituals of an Agile methodology tell everyone how all the parts of
a project interact and relate, they define right relationships, they
define right roles, they define right practice, and they define their
harmonious interaction.
Having interviewed hundreds of people I
can say very few people have any idea at all of how to develop software
in a team. Disaster is almost inevitable when you toss all these people
together onto a project. Most people are lost without the well defined
rituals of an Agile project.
Ritual exists in waterfall
projects too, but waterfall is mainly about ends and not means. An
Agile project is mainly about means, not ends.
What do I mean by that?
Means
versus ends has been a running theme through philosophy for thousands
of years. Do the ends justify the means? As long as you attain your end
goal do the means you use to get it really matter? Or should you never
use immoral means, regardless of the ends? Both can be easy positions
to take. The messy middle, however, is where most of us live.
In
software means vs ends is more a matter of methodology. We think of
waterfall projects as heavy on process. A heavy process seems
like means, but it's not. A waterfall project tries to guarantee an
end, defined by a specification, by trying to define everything up
front. It's the end that matters and the actual means of
producing the software are secondary. The thought is if you are
rigorous enough then you can think of everything and that guarantees
attaining the end.
If you've been on a project you know exactly how concentrating on ends
happens. Countless times I've heard from a manager something very
similar to the following: we need to deliver in 3 months. I want to
know how we are going to get there. How many resources do we need? What
can go wrong? How are we going to handle every contingency? We can't
screw this up. The future of the company is resting on it. What's our
plan?
Fear drives the need to be certain about guaranteeing an
end. The only way most people can think of to reach an end is by up
front planning.
Decades ago W. Edwards Deming, the Sage King of quality, had a different vision. Deming taught:
* Quality is conformance to process rather than conformance to specification.
* Cease dependence on quality control to achieve quality; instead focus on quality assurance throughout the lifecycle.
This
is the approach Agile follows. If you use the right means, then the
result will be correct. You don't have to focus on the ends. Do the
right things in the right way with the right people and the result will
be what you want. You don't have to plan for every eventuality up
You can adapt and problem solve as you go.
Confucius also
thought like Deming. Confucius wanted a civilized society. He didn't
advocate passing a law saying "be civilized." He knew that would not
work. To reach his desired end he concentrated on means. Confucius
advocated standardizing the means through which a society interacted,
that is its rituals, knowing that would almost invisibly produce a
civilized society. There would be no heavy hand forcing people to be
civilized. It would naturally happen through the weight of tradition.
Confucius was very agile, in his own way.
8:58:07 PM
Friday, August 05, 2005
Is software a matter of discovery rather than invention?
I think it is. I discover a way to get from A to B using code, much
like I discover a route from point A to B using roads. You could say
the roads already exist, so it's not a discovery to figure out a route.
But our notion of discovery is really to make plain that which already
exists. Columbus discovered America and it already existed. DNA already
existed. Gravity already existed. Many mathematicians are at bottom
Platonists. Invention is something much more rare. We rarely invent
anything in software, while we constantly make discoveries. That's why
software is so fun!
Out of my hundreds and hundreds of thousands of lines of code how many
real inventions have I made? I've been clever. I've been effective. But
I don't know if I've ever invented anything, yet virtually all of my
brain farts are patentable by current standards.
So I stand alongside Donald Knuth (http://www.groklaw.net/article.php?story=20050724140820490) who sayeth:
My personal opinion is that algorithms are
like mathematics, i.e. inherently non-patentable. It worries me that
most patents are about simple ideas that I would expect my students to
develop them as part of their homework. Sometimes there are exceptions,
e.g. something as refined as the inner point method of linear
programming, where one can really talk about a significant discovery.
Yet for me that is still mathematics.
I come from a mathematical culture where we don't charge money from
people who use our theorems. There is the notion that mathematics is
discovered rather than invented.
10:05:23 PM
Tuesday, July 26, 2005
The Right Way to Build Distributed Systems: API or Messages?
How should you build distributed systems: API or messages? This post by Eric Armstrong says messages -- http://www.artima.com/forums/flat.jsp?forum=106&thread=120669 -- and I couldn't agree more.
APIs create a tight binding between the protocol functionality and
your code. If they use a certain library then so do you. If their API
conflicts with your code then you are out of luck. If their API
corrupts the heap then you are out of luck. If their API can't handle
interrupts, semaphores, queuing, memory constraints, priority, or a host
of other issues properly then you are out of luck. Your code can't
evolve independently from the API which makes your system less
functional, more dependent, and more brittle than when a protocol is
used. That's too high a price to pay when your butt is on the line to
get something working and to keep it working.
Another key advantage of using a protocol is that it can be implemented
in
any language. You don't have to wait for a vendor to create a language
binding for your language version and operating system version. This can take
forever. If you are on an unconventional OS like VxWorks, you'll
probably never see an API. You don't have to wait for bug fixes either.
And you
don't have to worry about how to include their code in your build
system.
When using a protocol, as long as you can stuff correctly formatted bits
down the TCP, UDP, or whatever socket then you are in. For example, I
was able to quickly make a sweet load test system in Perl for a set-top
box network because a protocol was used and it was easy to compose bits
together in Perl. If I had to wait for a Perl binding to the API I
would still be waiting and the load testing system may never have been
attempted.
Look how quickly HTTP took off. HTTP is a relatively simple
protocol spec normal people can implement in their favorite language.
HTTP never had to change while HTTP implementations sprouted in many
fertile language subcultures. Sure, you get a wide variety of largely
incompatible implementations, but that's a small price to pay for
ubiquity.
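To show how low the bar is, here is a toy protocol of my own invention: one line of text in, one line back, over a plain TCP socket. Any language that can open a socket and format that line is a full-fledged client; no vendor API is involved.

```java
import java.io.*;
import java.net.*;

// The whole "protocol" is one request line: "UPPER <word>\n" in,
// the uppercased word back. Server and client share nothing but the
// wire format.
public class TinyProtocol {
    // A minimal server: read one request line, answer it, hang up.
    static void serveOnce(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String line = in.readLine();                  // e.g. "UPPER hello"
            if (line != null && line.startsWith("UPPER "))
                out.println(line.substring(6).toUpperCase());
            else
                out.println("ERR unknown command");
        }
    }

    // A client is just "correctly formatted bits" stuffed down a socket.
    static String request(int port, String word) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("UPPER " + word);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {  // any free port
            Thread t = new Thread(() -> {
                try { serveOnce(server); } catch (IOException ignored) {}
            });
            t.start();
            System.out.println(request(server.getLocalPort(), "hello")); // prints HELLO
            t.join();
        }
    }
}
```

The Perl load tester above is exactly the client half of this: compose the bits, push them down the socket, read the reply.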
If you want your protocol to take off you need to take advantage of
viral marketing and network effects, which means you need a simple
protocol that can be easily implemented. As a counter example take a look at FTP. It's a protocol, but how many implementations do
you see? Not many. That's because it's complex.
If you really want your protocol to take off then provide an API too.
Aren't I contradicting myself? Not at all. Make adoption as easy as
possible. Write the simplest protocol you can. This is not easy, so do
a good job. Then write a simple API that supports both sync and async
modes. Hit the major languages and operating systems of your primary
customers.
This strategy allows people who want to make the extra effort of
writing protocol code to create the perfect interface for their system.
People who just want to get something working can use
the API. Don't tie them together. Release and develop them separately.
Let them evolve separately and you can have the best of all worlds.
3:26:30 PM
Monday, July 25, 2005
Algorithms Protect Programmers From Too Much Information.
How about this for a definition of algorithm: A set of rules that protects the programmer from too much information.
When presented with too much information we have a hard time making
decisions. That's a problem we face in programming all the time.
Solution spaces are almost always infinite. We can do anything and
that's the problem. Algorithms and their relative, patterns, reduce the
solution space to something where better decisions can be made.
It turns out the more choices we have the more indecisive we become.
More choices cause the accumulation of opportunity costs and fill us
with regret over the choices we did not make. Having more data doesn't
mean we'll make better decisions either. We'll usually do worse,
because more data means more alternatives we can think of, which means
more doubt, which means our chances of picking the right solution are
small.
Asking people to make decisions between many options when their
knowledge is shallow means their chances of picking the correct options
are no better than chance. With experience you learn the crucial
minimal set of facts you need to make the best decisions. Until you get
that experience your chances of success are small.
That's why books of algorithms and patterns are so valuable. If your
knowledge is shallow, which most of ours is in new domains, then you
need help in minimizing the number of good solution alternatives.
When people say they don't like patterns I always cringe a little. If
you are an expert then patterns aren't necessary because you will
naturally use them as appropriate in crafting solutions. But if you are
not, you are left to endure a blank canvas with the instruction to just
paint something. Talk about brain lock.
12:50:57 PM
Wednesday, July 13, 2005
Using Markets to Predict Milestone and Release Dates?
I wonder if internal prediction markets
(http://commerce.net/publications/CN-TR-05-02.pdf) could
be used on a project to more accurately determine project milestone and
release dates. There are markets for other events, like who will win
the presidential election; it seems project dates could be better
predicted by some sort of market gestalt derived from everyone involved
on a project.
We know large project deadlines are pretty much a joke. How do you get more
realistic numbers? Can you imagine Microsoft setting up an internal
stock market in which people can buy and sell shares on Longhorn
milestone and release dates? I bet the result would be pretty accurate.
Around release times I imagine a flurry of buying and selling as people
make their bets on what will happen.
How would feature reduction/time boxing fit into the market? People
would have to make a bet on whether they thought the schedule dates
mattered more than features or features mattered more than dates.
Perhaps the internal
calculations of all the people would predict what management would do.
But the market wouldn't predict what management should do,
it would predict what people thought management would do, so using
market information to set policy might not be a good idea. That would
be the ideal though, figuring out a way to get group input on the most
likely release dates.
I think it would be interesting anyway. Usually all the scheduling
negotiations happen elsewhere and developers don't get much input. A
market mechanism would give everyone some input and a way to make a
very complex aggregate calculation.
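As a back-of-envelope illustration of that aggregate calculation (a toy weighting scheme of my own, not a real market maker), suppose each person puts shares behind a predicted number of weeks until release, and the consensus is the share-weighted average. In a real market the price would move as people trade, but the weighting idea is the same.

```java
import java.util.Map;

// Toy consensus: weight each predicted date by the shares (conviction)
// staked on it.
public class ReleaseMarket {
    static double consensusWeeks(Map<Double, Integer> betsWeeksToShares) {
        double weighted = 0;
        int totalShares = 0;
        for (Map.Entry<Double, Integer> bet : betsWeeksToShares.entrySet()) {
            weighted += bet.getKey() * bet.getValue();
            totalShares += bet.getValue();
        }
        return weighted / totalShares;
    }

    public static void main(String[] args) {
        // Management says 12 weeks; the people doing the work bet otherwise.
        Map<Double, Integer> bets = Map.of(
            12.0, 10,   // the optimist, lightly backed
            20.0, 40,   // most developers put real shares here
            26.0, 50);  // the grizzled veterans bet heaviest
        System.out.printf("market says %.1f weeks%n", consensusWeeks(bets));
    }
}
```

Notice how the lightly backed official date barely moves the consensus; money-where-your-mouth-is weighting is the whole point of a market over a poll.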
10:22:07 PM
Tuesday, June 28, 2005
Pick Your Methodology Like How You Do the Laundry
On the Yahoo Scrum email list there was an interesting discussion on
how Scrum and XP should be combined. There's a lot of synergy between
the two agile methodologies. I advocate using Scrum for the project
methodology while adopting many of the XP programming practices for the
development team. Many people seem to agree. But how do you do that?
One person said you should adopt XP and Scrum separately, then combine
them. Experience each on their own so you'll know best how to
combine them. I didn't think much about the suggestion and at first I
thought this was a sensible approach. Then there was some discussion
about whether it was really necessary to practice them separately
first.
Then an analogy was offered: you don't use more than one detergent
in your laundry at the same time, do you? So don't mix your
methodologies either. And again, without thinking much about it, I
thought yes, using more than one soap in a load of laundry would be
silly.
So far it doesn't sound like I do much thinking...and sometimes that's
true, but on more reflection I realized I do use more than one soap
when I do the laundry. In fact, I use a whole bunch of products in the
laundry depending on the effect I want to get. I realized I do the same
with methodologies as well.
Sorting: Laundry Metaphor Number One
Sorting is the most critical part of doing laundry. If you don't
properly sort then your whites get dingy, colors fade, stains get set,
fabrics fray, and bulky items don't get clean. By the magic of
metaphorical extension I'll say sorting is also the most important step
for selecting methodologies. Different projects are different.
They have different people, different political environments, different
customers, different funding, different goals, and so on.
Applying the same approach to every project shouldn't work. Use each
project as a chance to take a fresh look and sort things out.
Use Products for Effect: Laundry Metaphor Number Two
When doing a load of laundry I will use far more than two soaps. For a load of whites, for example, I will use:
1. Tide
2. fabric softener
3. bleach
4. OxiClean
Then in the dryer I'll use a fabric softener sheet. That's five, count
them, five different products in one load of laundry. Is that going
overboard? You may think so, but my customer (my wife) doesn't think
so! And that's what matters. Every product has its purpose in that it
produces a desired outcome.
So I don't really care about purity. I don't care that I should only be
using one soap or at most two. I'll do what it takes to get the outcome
I want.
I think that holds for methodologies too.
4:12:14 PM
Wednesday, May 04, 2005
The Ultimate Software Development Office Layout
How do you lay out your office space to optimize software development?
It's a question I don't think has been seriously considered at very
many places I have worked. Mostly it's just cube farms of one variety
or another. Certainly there are hybrid varieties, but it comes down to cubes
most of the time.
I had the opportunity to seriously consider and create my ideal office
layout for a software development team. I read lots of different papers
and talked to lots of different people. Here's what I came up with.
It's a subject without an objectively correct answer, so there is plenty
of room for disagreement, but it may prove interesting in your research.
This is a bullet list of my recommendations.
- Organize software developers in a war room that is dedicated to the software group.
- Separate phone heavy groups like marketing and admin from developers. All "distractions" in an area should be project related.
- Create offices and conference rooms for privacy and larger meetings.
- Make space for those people who represent the customer to the team.
- Have the hardware group in the next room.
- Arrange desktops so people are not looking directly at each other.
- Pay close attention to the traffic pattern. Do not arrange
desktops around the edge of the room, by bathrooms, by the kitchen, by
noisy groups, etc.
- The initial idea is to use wireless development machines so people can move around easily if they wish.
- The desk layout should leave enough room to support pair
programming; it should have lots of horizontal space for documents,
monitors, and books.
- Have the QA group in the next room or the same room depending on the size of the team.
- Keep the team to 12 or fewer people if possible.
- Keep the hardware being developed on in the same room as the developers.
- Purchase good headphones for engineers.
- Cell phones need to be on vibrate mode.
- Phone calls must be handled in one of the private areas, not in the war room. No speaker phones.
- Use IM so developers can converse quietly in many situations.
- The room should have many whiteboards and flipcharts.
- Have the coffee and food room located separately so as to encourage inter-group interactions.
- A signal system (cone of silence) should be developed so
developers can indicate they are in a flow state and do not wish to be
disturbed.
- Lots of power outlets.
- Natural light if possible.
- Keep discussions generally on-topic in the development area.
If you need to have a potentially distracting discussion and no
alternate space available, then go for a walking meeting.
Here's the full proposal: http://www.possibility.com/Cpp/SoftDevOfficeLayout.html
4:53:06 PM
Saturday, April 23, 2005
The Internet is a Denial of Service Attack on Your Brain.
"We found that mental performance, the capability of the brain, was
also reduced. Workers cannot think as well when they are worrying about
e-mail or voice mails. It effectively reduces their IQ," says Wilson. From http://in.rediff.com/money/2005/apr/23email.htm
What's interesting is our neurons love new information. New information commands our attention. And our dopamine
system is what tells us what is salient, that is, what we should pay attention
to. Not so coincidentally the dopamine system is involved with all types of addiction.
Cocaine, for example, overwhelms the system with its potency, installing itself as the
most important thing to pay attention to in a person's life.
New information is a less potent drug, but new information cries out
for attention too. The reason is clear: new information helps us
survive. Hey dummy, there's a lion over there, get a move on. You're smelling food, we better eat now. That sort
of thing. Our neurons love new information because that's why they
exist, to process new information.
Email and the internet are a source for flows of new information and
our brains are the sink taking it all in. The problem is email is an
infinite supply of low grade information. Email is mostly junk. Email
contributes next to nothing to our survival. Yet our brain wants to pay
attention to it anyway because it is new.
The Internet is in effect a denial of service attack on our brains. Constantly hit with new useless information we literally
can't pay attention to other parts of our life. The internet may not
have the same kick as cocaine, but the internet makes up for its lack of single
dose potency by having an infinite, constant, and varied supply.
On the internet there's always something new. New events are always happening. People
are always generating new content. The number of channels for
interaction is greater than ever before. We have email, IM, RSS, web
sites, discussion, groups, cell phones, TV, the radio. Our brain is in
heaven with all the information to attend to.
What to do? It doesn't look good. The internet has become like food,
something we can't do without. Food addiction is difficult to combat
because we must eat. You don't need to gamble or drink or take drugs to
survive so you can eventually get off them with some chance of not
relapsing.
Is the internet more like food or gambling? I can't imagine doing my
job or even living my life without the internet. A lot of people are in
the same boat. And I have been on the internet since close to its
beginning, in 1985, so it has been a part of my life for a very long
time. Much like food :-)
Maybe in the future we can change our brains to have more conscious
control over what we give our attention to. Meditation is one low tech
way available to all of us right now. Though for mass acceptance we'll
need a genetic, drug, or mechanical approach. It can't take a lot of
effort after all :-)
Better filters may not help. The problem is the dopamine system helps
drive your behaviour. So it can make you go do something, like a drug
addict getting the next fix. Having filters doesn't stop you from going
for your internet fix. Good filters may help stop the problem from
starting in the first place though.
Cut-off systems may help. Generalized lock down filters that stop you
from accessing content. Maybe they could be time based. Maybe they could
be input quantity based.
A routine based approach like eating may work. You have 3 or 4 or 5
meals a day with no snacking. Get your internet at specific times of
the day and at no other times. But then an emergency will happen,
everything becomes an emergency, and then relapse.
What makes the internet such an interesting problem is how similar it
is to all our other addictions: food, sex, drugs, gambling, etc. We'll
probably have to deal with it in the same muddled hodge-podge and
ultimately unsatisfactory way as everything else.
But the first step is to admit you have a problem :-)
9:28:34 AM
Friday, April 15, 2005
More on How Making a Studio Movie is Like Software Development
Watching Project Greenlight the parallels between movie making in
the studio system and corporate software development keep popping out.
For those of you not watching Project Greenlight, it chronicles the
making of a movie, from selecting a script through a contest, to
picking the team, to the production itself.
The parallels between the studio system for making movies and the
"studio" system for producing software are amazing. The director is
saying stuff like "how can we know the shot until we see the scene?" The
director of photography is pissed about every change. Team work and
personal issues come to dominate the whole process. All the actors want
to be told exactly what to do and when to do it. You have to make your
daily shots or the world ends.
And a week or so later you look at a rough cut of what you shot
and then you are hosed when you don't like it. All the management types
are running around having angst-filled meetings about how everything is
off-track and unplanned and the budget shot and they'll never make the
schedule. All along the director never really got to do things the way
he wanted and even when they get to the shooting of the movie the whole
infrastructure and momentum of the movie is just about getting the shot
done and moving on to the next shot.
The measure of success is not making any changes to what someone
decided ages ago in pre-production so the daily schedule can be made.
The director wants to explore a little and gets nothing but grief for
his efforts. What the actors and the crew seem to admire is a director
that knows exactly what they want to do all the time and can tell
exactly what to do.
I never realized before how similar software was to movie making and how
the role of director is much like a software developer. Only in software
every programmer is a director.
12:33:12 PM
Monday, March 28, 2005
The Light is Not Fading in Silicon Valley
The doomsdayers have pronounced silicon valley brain dead and they say
the plug has already been pulled, we just haven't noticed the equipment
powering off, presumably because we are well, brain dead
(http://archive.scripting.com/2005/03/28#theFadingLightOfSiliconValley).
Not true. Very far from the truth, in fact. As someone who has lived and
worked in silicon valley for nearly 20 years I can say there are just as
many intelligent, passionate people with the drive to do something as
there ever were. I could find 100 or so such people immediately from my
own social circle.
As I have been trying to form a startup in the past months I have been
plugged into the VC world in some very, very small way. There are great
gobs of very intelligent people trying to get new ventures started. One
friend finally got seed funding after two years of unpaid effort. For
every one of him there are hundreds of fantastic people trying.
The problem is, it is not easy. I am not saying it should be easy, but
keep in mind that it is very difficult to find an idea worth funding. A
rule of thumb I have started using is if you had a million dollars
would you give a group of people with a particular idea your own money?
That sphincters you up a little when you start bitching about funding.
Almost every idea turns to lunacy under scrutiny. It takes a lot of
vision and passion to pull the trigger on an idea, a plan, and a group
of people.
In my talks with various startup hopefuls there is a hunger for
adventure in creating new ventures. Venture is "An undertaking that is
dangerous, daring, or of uncertain outcome." That's what people want.
You might think it is about the money, and it is, but it's not only
about the money, it's not mainly about the money even. People want to
do something. The globe has been explored and until space flight takes
off there's not a lot challenging for people of ambition to do. There's
a reason the ship in Star Trek is called the Enterprise.
I would like to see some venture funds willing to take greater risks
though. It seems to get funded you need to hit a certain sweet spot.
Software is hard to get funded without the software already being
developed. That makes it difficult for people with an idea to get a
moderately complex idea off the ground because such an idea requires
people working full time. For non-software ideas you don't want an ASIC
in your design because that takes too long, is too risky, and is too
expensive. This encourages only certain types of projects. I am not
saying this is irrational. As I said, if it was your own money you
would probably behave in the same way. But that means there is room for
funding speculative startups, and there doesn't seem to be a lot of
support for that kind of work.
Just because it is hard doesn't mean there aren't a lot of good people
trying. You just don't see it. But all those wonderful people are here
and that's why silicon valley is still alive and well. She lives in the
energy, passion, and creativity of the people who call the valley home.
Don't pull the plug just yet.
8:41:00 AM
Sunday, March 13, 2005
Project Greenlight is Like Software Development
Project Greenlight is a program that covers directors pitching their
films for funding and then following the film getting made. I got a
serious sense of deja vu from watching the show. It was so like
many software projects I have worked on. There was a lot of uncertainty
about which project to fund. Nobody was quite sure which movie to pick.
The guy who won was the guy who talked the best. A lot like software.
The director who finally got funding was a little unsure now about how
to deliver the film. A lot like software. The day after the project was
greenlighted they went through the script and decided the movie could
only be done for about triple the original budget. A lot like software.
Now here's where the parallels to software were amazing. It's in the
horse trading about how to bring the project back under budget. The
industries may seem very different but the process was oh so familiar.
Can you do without that feature? Can you change this or that? I know
the movie is set in Chicago in 1976 but can we move it to LA and set it
in the present?
An executive on a phone conference played the
we-want-what-you-want-as-long-as-it's-under-budget card. The execs
would say that they would love to have X in the movie as long as they
could fit it in the budget. Classic. Just classic. That's reality
though. Someone is putting up a lot of money. They have a right to
expect certain things, especially if you told them what they should
expect. Unfortunately a lot of things can't be known until you
actually try to do them. That doesn't make the precision planning
people very happy.
9:02:20 PM
Monday, February 28, 2005
Crunch Mode is the Programmer's Peacock Feathers
Having been crunched many times, I found this an interesting take on
why crunch mode is counterproductive
(http://www.igda.org/articles/erobinson_crunch.php).
A crunch mode that extends into a death march is as bad as it is
common. It uses up and spits out people while not providing a good
product. It's lose lose lose.
Crunch mode is not always bad, it can be the best experience ever for a
group and a product. For me crunch mode is just another word for having
a clear focus and a common fixed set of priorities. Crunch mode is a
prism for project focus. You can use a prism to scatter focus or bring
the scatter back into one coherent laser beam of energy.
A group of people working with a strong focus and the same
priorities is an incredible force. Great things can happen in crunch
mode. Most of the time we work with splattered focus and we get very
little done.
A good example is adding a complex feature across different groups. In
normal mode each group will have to schedule their part of the feature.
The schedules will never match up so the feature is pushed out or
pushed in so everyone knows it won't happen according to plan. And of
course everyone is working on eleventy-seven other features at the same
time plus a continual stream of critical bug fixes. Plus you have
status meetings, meeting meetings, and a 1000 other distractions.
Most of the time you just give up on the feature saying its too risky
or you have to scale it back into nothingness. Focus has been split
into a thousand different colours.
Crunch mode is a completely different experience. Everyone can be
working on the same product and the same features at exactly the same
time with no distractions and no reservations. This is an amazingly
powerful and freeing situation. All the issues that made a change too
risky go away. You can get everyone together, figure out what to do,
and then make it work. And because that's everyone's priority it stands
a good chance of actually getting done. You can say I can't go to
useless meeting number 23 because I am working on this important
widget. You can let email go. You can ignore calls. You can ignore
status reports. All the equipment you need to test will be provided.
All the IT support you need will be ready for you and will jump on any
problem you have and fix it. If you need software it will be provided.
If you need access to the best minds to work on a problem you will have
access.
In crunch mode you bring all the different colours of the spectrum into
a single coherent ray of white light that punches a way through the
normal gloom.
Why can't it always work this way? Human nature. Organizational nature. The triumph of the everyday over the exceptional.
And another force I have come to think critically important: crunch mode is the programmer's peacock feathers.
For the male peacock their elaborate and colourful tail feathers serve
only to attract females. The tail feathers are a way to distinguish
themselves from other males. This is something humans want to do as
well.
How do you distinguish yourself in an organizational structure? It is
very difficult. We don't have tail feathers. You would hope good code,
productivity, and a helpful personality would distinguish you, but we
all know that doesn't work because nobody sees it and very few people
can appreciate good work.
So what is obvious and doesn't take skill to recognize? Staying
late. That's what crunch mode provides: a no-brainer way for people to
look good by staying late and appearing to work hard. Staying late is
the programmer's version of tail feathers.
It's really perfect. The highest manager who has no idea at all what
you do or if you are any good at it can notice if you are staying late.
They feel like people are working hard so stuff must be getting done.
It works on your peers as well. You wouldn't think it would because
your peers should know if you are actually accomplishing anything, but
staying late still works.
Well, it's not perfect, because staying late doesn't really have
anything to do with success. Success is about focussing skilled people
on a clear job with clear priorities and putting them in the best
position to succeed. A healthy crunch mode can do exactly that. It's
not about showing off your tail feathers.
9:49:52 AM
Monday, February 21, 2005
What is next after OOP?
It's an interesting question that comes up from time to time. I don't
really know of course. I feel strongly that the next evolution has to
move humans out of the programming loop. Anything else is just really
the same thing different day. OOP, functional, logic, AOP, relational,
etc is all really the same stuff. It is the human that is the
programmer, it doesn't really matter what we use as materials. An
artist creates art using paint, marble, fabric, glass, twigs.
Programmers create programs. The results are different using different
materials, but it is still fundamentally the same creation.
From a process perspective some combination of Scrum/XP is probably as
productive as human programmers can be. That will be our fundamental
limit of production as programmers.
We can use model driven architecture to systematize the production of
code we already know how to build. That gives us another multiple of
productivity.
Human political/social infrastructure is the true limit of
productivity. We will always be limited by our institutions, which
tend to be the most regressive part of any system.
Then we are limited by basic skills. Not everyone can be a good
programmer. We could scale the system by making more programmers, but
that doesn't really work, as you have to be good to do good work. And
doing good work requires keeping small, highly interactive teams.
The small incremental wins we make in software development with
improved languages, processes, tools, IDEs, skills, etc. are good, but
they fundamentally limit how much software can be produced to a small,
unimpressive area under the curve.
Nothing original in all of this meandering, but it is the reason I
don't get very excited about debates on what's next after OOP, or if
functional is better than OO, or if dynamic is better than static. It's
all close enough to the same thing that it doesn't matter much.
8:42:14 AM
Thursday, January 13, 2005
Friday, December 31, 2004
Stopping Comment and Wiki Spam
Scoble says he was hit and is asking for fix suggestions:
http://radio.weblogs.com/0001011/2004/12/30.html#a9060. My wiki has
also been hit with spam, so in true
wait-until-they-come-for-you fashion, I was thinking about how to
protect my wiki from virtual rape and pillage.
I think using the image encoded text challenge would be my preferred
solution. This is where the site shows you a distorted image of a
text phrase and you type it back in as the challenge response. This way
the site has a reasonable expectation that a human is actually making
the edit. The images can be made more intricate over time as image
parsing software becomes more sophisticated.
It prevents dictionary style attacks. It is relatively easy for a human
to perform. It preserves the ability for people to be anonymous or take
on a different role, which I find valuable. The translation takes a
chunk of time to perform, so even if the image translation could be
automated, it may not scale well enough for the spammers.
I don't think a login account is a solution because people will just
create an account and then spam. It will take a while for the spammer's
account to be shut down. I have seen this on several email lists. So I
would put the image challenge on all edits, even if they have an
account.
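The bookkeeping behind such a challenge can be sketched in a few lines. This is a hypothetical illustration, not code from any wiki engine: the class name ChallengeStore and the word list are invented, and the actual image rendering and distortion are left out. The idea is that the server remembers the phrase it drew into the image, keyed by a one-time token, and accepts the edit only if the typed answer matches.

```cpp
#include <map>
#include <random>
#include <string>
#include <utility>

// Hypothetical sketch: server-side state for an image-text challenge.
// Rendering the distorted image itself is out of scope here.
class ChallengeStore {
    std::map<unsigned long, std::string> pending_;  // token -> expected phrase
    std::mt19937 rng_{std::random_device{}()};
public:
    // Issue a new challenge: the returned phrase is what would be drawn
    // into the distorted image; the token identifies this challenge.
    std::pair<unsigned long, std::string> issue() {
        static const char* words[] = {"maple", "river", "stone", "amber"};
        std::uniform_int_distribution<int> pick(0, 3);
        std::string phrase =
            std::string(words[pick(rng_)]) + " " + words[pick(rng_)];
        unsigned long token = rng_();
        pending_[token] = phrase;
        return {token, phrase};
    }
    // One-shot verification: the token is consumed whether or not the
    // answer is right, so a spammer can't grind on a single challenge.
    bool verify(unsigned long token, const std::string& answer) {
        auto it = pending_.find(token);
        if (it == pending_.end()) return false;
        bool ok = (it->second == answer);
        pending_.erase(it);
        return ok;
    }
};
```

Putting verify() in front of every edit, accounts or not, matches the policy above; an expired or reused token simply fails.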
Of course, given the drive to world wide slave labor, it might be
profitable to hire people to manually spam. In that case a spam filter
would be required. As I haven't found text-only spam filters to be very
accurate, I don't have a lot of hope for this approach. It might be
hard to allow a free flow of ideas while filtering spam at the same
time, because not all conversations are "safe."
A hole in the plot is patent rights. I assume someone has patented the
image text challenge. Hopefully they will give the rights to the world
for the good of the net. And an easy to use library for generating the
challenges would be a good touch as well.
9:29:42 AM
Wednesday, December 29, 2004
A Modern Irony: Email Has Become More Unreliable
In the early days of email, which for me was in the early 80s, email
was unreliable because the networks and the computers were unreliable.
You could never be quite sure your email would get there. Then as the
networks became more reliable email was hardly ever dropped. This
period marked one of those golden ages that actually happened. The net
was mostly civilized, interesting, and the email always got through.
Now in this modern age of ultra spiffy everything, email has become
even more unreliable than it ever was. New email predators have evolved
at a voracious rate.
Your ISP filters email. Your virus checker filters email. Your
corporation filters email. Your fake AI despaminator can manage to drop
some of your most important email while letting through every kind of
porn garbage known to Larry Flynt. Yet enough spam gets through
that every day is a denial of mind space attack in your email box.
It's not just that email has become unreliable; it's that email from
some unknown person in your domain has become the tripwire that can
bring the brown shirts a-knocking. If someone innocently mistakes one
of your emails for spam and complains, they can instantly start a chain
of events that will blacklist your domain off the digital world. The
biggest driver of the internet becomes its own source of futility.
Now Alanis, that's irony.
10:10:46 AM
Sunday, December 26, 2004
Scale Kills: Comair System Crash
An interesting article on Slashdot (http://it.slashdot.org/article.pl?sid=04/12/26/052212):
30,000 people have had their flights cancelled by Comair this weekend thanks to
a computer system shutdown.
A couple of posters said they didn't think it could be the software, or
shouldn't be the software. This post was a good example:
> Computers don't freak out or get depressed
> when work piles up. Backlogs mean nothing;
> they just keep processing one piece at a
> time until the pieces run out. I think
> someone was speaking imprecisely.
In my experience, it's just the opposite. Systems usually only
seriously break when scale increases. That's why unit testing is never
even close to good enough coverage. To find scale problems you need to
test at scale, and few people want
to pay for that. So all hell breaks
loose when scale starts happening.
Increases in backlogs may make queue sizes too small, which causes
drops, which causes retransmissions, which makes the problem spiral
worse. Maybe an OS network stack queue gets full, a queue which you
can't control, and you are in a downward spiral.
Or the queues may not be flow protected and your memory use skyrockets,
which causes a cascade of failures, including out-of-memory conditions
that may reassert themselves even after a reboot, causing continuous
failure.
Any algorithms based on size X are now way too slow for 10X which can
cause scaling problems everywhere else or pathologically slow times for
certain algorithms.
CPU time is sucked up which again causes push back and scaling problems
everywhere else. Priorities that worked with a certain workload may now
cause too much work to be done which kills responsiveness and starves
other parts of the system which spirals into more problems.
Improperly used mutexes may only show up under scale because the
trigger conditions were never created before. It only takes a mutex
being off by one instruction to cause a problem. Maybe the
OS/application keeps a common mutex that is now being held for much
longer than before, which creates new control flows that can cause
deadlock or data structure corruption.
Message packets that assumed a certain size or a certain number of items may start failing because their sizes are exceeded.
Protocols that have never been tested with the different timing, error, and resource conditions may start failing or deadlock.
Counters may start overflowing and critical accounting data and alarm data may be lost.
Timers that worked with X timers may become very inaccurate at 10X.
Parts like network adapters that you were told have certain bandwidth,
error, latency, and priority characteristics may stop living up to
their contracts.
The rates for alarms, logging, transactions, notifications, etc can get
so large that there simply aren't enough available resources (memory,
disk, database, CPU, network) to handle the increased load.
Scale kills.
I've talked about these problems and possible solutions at http://www.possibility.com/epowiki/Wiki.jsp?page=Scale
7:58:13 AM
Tuesday, December 14, 2004
Curious Image: manlift
In the near future, a quixotic AI hosted in a manlift
(http://tinyurl.com/5ofuk, http://tinyurl.com/5or6w) takes its job too
literally and refuses to let a woman embark. The AI says it cannot
possibly let a fair maiden undertake so great a risk.
10:09:01 AM
Rise of the Stupid Network
This is an interesting paper by David Isenberg at http://www.hyperorg.com/misc/stupidnet.html.
Why the Intelligent Network was once a good idea,
but isn't anymore. One telephone company nerd's
odd perspective on the changing value proposition.
The stupid network is still pretty smart, the smartness is around the
network however, not the services that run on top of the network.
An interesting example of this is how security and configuration are
being pushed down into L2. In 802.1X, for an L2 session to be set up you
must be authenticated. This could require a certificate and a RADIUS
server. From the RADIUS server you may also get your VLAN and other
configuration. This all seems like L7 to me, yet it must be in
place for L2 to work. In the old days on Ethernet, L2 was just supposed
to work. Now we have to jump to the high level to make the low level
work. This makes sense to me; I am not knocking it. It's interesting to
me how even our most basic layering abstractions don't last.
9:59:33 AM
Saturday, November 27, 2004
Friendly C++ Unit Tests
In C++ the friend keyword makes writing unit test code easy and clean.
The question is: how do you keep your test code separate from your
"real" code while having a minimal public interface and allowing
separate test classes access to the internals of the classes being
tested?
I want code separation so my code is clean and the final image doesn't
include test code. As the test code is usually larger than the code
being tested this is important. Please, no separate compilation using
macros.
I believe in testing everything that can break, so I don't just test
public interfaces. Public interfaces often use a common private
interface that I want to be able to test directly so it doesn't have to
be retested for each public interface. This requires another class to
have non-public access to the innards of the class being tested.
In C++ this is what friend does for you, rather cleanly. Test classes
can be put in another package. And with a forward declaration and
the friend keyword, a class can be put under test by any number of
other test classes.
class TestClass;  // forward declaration; the full definition lives with the test code
class ClassTested
{
private:
    friend class TestClass;  // grant the test class access to private members
};
The implementation source file would include the path to the full TestClass.
I allow all the code in a package to touch the privates of other
classes in the same package. This makes for a minimum public display of
behaviour. I assume all code in the same package goes together somehow
so there's no need for a class to protect itself from code in the same
package.
1:40:25 PM
Tuesday, November 23, 2004
Roads Gone Wild
The December 7th issue of Wired magazine has an interesting
article titled Roads Gone Wild by Tom McNichol that reminds
me a lot of the spirit of agile software development.
The article is about a new kind of traffic engineering
advocated by Holland's Hans Monderman. And by traffic
engineering we are talking about roads, sidewalks,
intersections, etc., not TCP/IP.
The article's lead-in starts:
No street signs. No crosswalks. No accidents. Surprise:
Making driving seem more dangerous could make it safer.
Another graphic has the title:
How to Build a Better Intersection: Chaos = Cooperation
Step 1: Remove Signs - The architecture of the road, not signs and
signals dictates traffic flow.
Step 2: Install Art - The height of the fountain indicates how
congested the interstate is.
Step 3: Share the Spotlight - Lights illuminate not only the roadbed,
but also the pedestrian areas.
Step 4: Do it in the Road - Cafes extend to the edge of the street,
further emphasizing the idea of shared space.
Step 5: See Eye to Eye - Right-of-way is negotiated by human interaction
rather than commonly ignored signs.
Step 6: Eliminate Curbs - Instead of a raised curb, sidewalks are denoted
by texture and color.
Some interesting quotes:
* Hans Monderman is a traffic engineer who hates traffic signs. ...
To him, they are an admission of failure, a sign - literally -
that a road designer somewhere hasn't done his job. The trouble
with traffic engineers is that when there's a problem with a
road, they always try to add something. To my mind it's much better
to remove things.
* Monderman ripped out all the traditional instruments used by traffic
engineers to influence driver behaviour - traffic lights, road markings,
and some pedestrian crossings - and in their place created a traffic
circle. The circle is remarkable for what it doesn't contain: signs
or signals telling drivers how fast to go, or curbs separating the street and
sidewalk, so it's unclear exactly where the car zone ends and the pedestrian
zone begins. To an approaching driver the intersection is utterly ambiguous
- and that's the point.
* The drivers slow to gauge the intentions of crossing bicyclists and walkers.
Negotiations over right-of-way are made through fleeting eye contact.
Remarkably, traffic flows smoothly.
* Experts call it psychological traffic calming.
* I think the future of transportation in our cities is slowing down the roads.
When you try to speed things up, the system tends to fail, and then you
are stuck with a design that moves traffic inefficiently and is hostile
to pedestrians and human exchange.
* The way you build a road affects far more than the movement of vehicles.
It determines how drivers behave on it.
* The central premise guiding American road design was that driving and walking
were utterly incompatible modes of transport and they should be segregated
as much as possible.
* Traffic engineers viewed vehicle movement the same way a hydraulics
engineer approaches water moving through a pipe - to increase flow,
make the pipe fatter. Road signs rather than road architecture became
the chief way to enforce behaviour.
* The strict segregation of cars and people turned out to have unintended
consequences on towns and cities.
It's a great article. Much of the tone reminds me of agile software
development.
I had no idea this movement was taking place. I just thought there was
one way to build a road system and didn't think much about the other
ways a road system might work.
3:27:31 PM
Tuesday, November 09, 2004
New Costume Party Idea: Come as Your Favorite Email Scam
The idea is you dress up as your favorite email scam and everyone has
to guess which one you are by questioning you during the party. Whoever
gets the most right wins.
Everyone doesn't need to fight over the Nigerian scam; there are so many to select from these days.
Because of all the email i'm getting on this one lately (classic
conditioning), i think i would go as a Rolex sales scam. I could strap
lots of watches all over my body and then every 1.3 seconds pitch you
with a new miracle offer.
And no, you can't get rid of me, because your tag team of the Bayesian
despaminator and blacklist bouncer won't recognize me as a threat.
Which email scam would you be?
9:19:16 AM
Saturday, October 30, 2004
Code Lies as Much as Comments Do
> Comments lie. Code doesn't.
This sentiment is used as justification for having very few if any comments in your code. I just don't buy it for many reasons.
Code lies like a dog under a shade tree in summer. Code lies because
the variable names, class names, and method names don't match what the
code does. People will use lame names or they will insert new code into
a method or new methods into a class that change the nature of the
thing. That is a lie in my book. And it happens all the time.
You may say use good names and you won't have a problem. I agree to a
large extent. But i can't make people program well any more than i can
make people comment well. If you can accept that people must use good
names then you can also accept that people must make good comments.
Trust cuts both ways.
And the lie continues because a name is flat. It relates only to one
aspect of a thing. Things are multidimensional and can't be mapped
meaningfully to a single name in all their contexts. To the government i
am a social security number. To my dog i am a pat and a meal. To those i
disagree with i am an idiot. To my doctor i am a series of stats. What
is my name?
I have been misled by comments, but i have been helped far more than i have been misled, so that's a win in my book.
I have yet to see any of this code that is self-documenting so i am unwilling to do away with comments on that assumption.
XP assumes a continuous chain of oral tradition to make the use of
comments less necessary. Perhaps on an XP project this makes sense. But
much of my experience is in large distributed teams with lots of churn
so i don't think this is a generally applicable rule to do away with
comments. No more than i would get rid of jails everywhere just because
there is almost no crime in my house.
Code without good package documentation, good class documentation, and
good method documentation is torture. I dread trying to recreate the
entirety of something in my mind. Such a person usually left me with no
information as to why they did anything. They don't indicate any
alternatives and why one shouldn't use them. They don't indicate how
this smaller chunk fits into the larger chunks that use it. They don't
think like me, so most of their decisions are a mystery and their names
are usually empty. They will do something odd for some reason i can't
quite grasp.
Now there is a lot of truth in "Comments lie. Code doesn't." Developers
need to strive to make good clear code. It's just that they usually
don't because it is hard. Often they can explain themselves in text
better than they can in the strict confines of a programming language.
What i want answered for me are the questions i'll have when reading
the code. Why? What? Where? Who? When? The code is the result of a long
thought process and i can't recover that thought process from code
alone. A narrative is needed. Code boils down to a very limited set of
nouns, adjectives, and verbs. The story is gone. The motivation is
gone. The experience is gone. The trials are gone. The conversations
with all the people who helped shape the code are gone.
Code is like a short story that says: "She jumped off the bridge and died." There's a lot more to it and we know it.
I need good comments to help me with all that so i can continue the story.
Comments like:
i++; // increment i
suck, are a waste of time, and are worse than useless. Nor do i mean
the kind of comments where we have a useless line by each method
argument. There are a lot of comments like that in code. Those i don't
want or need. They aren't the comments i am talking about.
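To make the distinction concrete, here is a small sketch (the retry policy and its constants are invented for illustration) of the kind of "why" comment i do mean, next to the kind that sucks:

```java
// Sketch contrasting comment styles. The backoff policy and numbers
// are invented for illustration.
public class RetryDemo {
    static int backoffMillis(int attempt) {
        // Bad comment (restates the code): "shift 1 left by attempt".
        // Good comment (records the why): the upstream service rate-limits
        // aggressive clients, so retries back off exponentially, capped
        // at 8 seconds to keep worst-case latency bounded.
        return Math.min(8000, 100 * (1 << attempt));
    }

    public static void main(String[] args) {
        System.out.println(backoffMillis(3)); // prints 800
    }
}
```

The second comment survives a rewrite of the expression; the first is dead weight the moment the code changes.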
3:39:40 PM
Friday, October 29, 2004
Really, release as soon as possible?
It is recommended to always "release as soon as it is possible"
where this is taken to mean early and often.
My reply is to release ASAP, but no sooner.
To be my usual tiresome self i would like to interject a little "it
depends" here.
Define an ideal and come up with a rubric for pattern variation.
These absolute rules always frustrate me because it does depend.
The ideal is release as often and soon as possible.
What that means in each context is something different.
For example, in one of my favorite projects i was working on a large
internal web site that had over 100 simultaneous active heavy users. It used
perl and CGI so i made live changes continually. There was never
a real release of anything. This worked 99% of the time and it
was exciting.
Training was an issue and i tried not to break features, but that's
not always possible or even desirable. You can't make stuff
better if you can't break it.
On another project each release cost millions of dollars because an
entire nationwide network had to be upgraded and we could cut all
data traffic in large regions of North America. This customer treated
each release like a nuclear attack so releases were infrequent to
say the least. Yet other customers in a similar situation
didn't care and wanted releases much faster.
On another project the software was more your traditional enterprise
software that was installed using installshield or whatever. Typically
everyone was always very busy so releases were more of
an annoyance to them.
11:45:45 AM
Thursday, October 14, 2004
New Prisoner's Dilemma Winner Sheds Light on US Winners and Losers
There's an interesting new winner for the iterated Prisoner's Dilemma game
described at http://www.wired.com/news/culture/0,1284,65317,00.html :
The Southampton group, whose primary research area is software agents,
said its strategy involved a series of moves allowing players to recognize
each other and act cooperatively.
...
The result is that Southampton had the top three
performers -- but also a load of utter failures at
bottom of the table who sacrificed themselves for
the good of the team.
...
What was interesting was to see how many colluders you need in a
population. It turns out we had far too many -- we would have won
with around 20.
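The collusion mechanics can be sketched in a few lines. This is my own toy reconstruction, not the Southampton code: once two colluders recognize each other (the real strategies used an opening handshake, omitted here), one becomes a "feeder" who always cooperates while the "master" always defects, so the master outscores an honest tit-for-tat pair.

```java
// Toy iterated Prisoner's Dilemma scores under standard payoffs
// (T=5, R=3, P=1, S=0). A reconstruction of the collusion idea only.
public class CollusionDemo {
    static int payoff(char me, char them) {
        if (me == 'C') return them == 'C' ? 3 : 0; // reward or sucker
        return them == 'C' ? 5 : 1;                // temptation or punishment
    }

    // Master defects every round against a feeder who always cooperates.
    static int masterScore(int rounds) {
        int s = 0;
        for (int i = 0; i < rounds; i++) s += payoff('D', 'C');
        return s;
    }

    // The sacrificial feeder earns nothing.
    static int feederScore(int rounds) {
        int s = 0;
        for (int i = 0; i < rounds; i++) s += payoff('C', 'D');
        return s;
    }

    // Two tit-for-tat players simply cooperate forever.
    static int titForTatScore(int rounds) {
        int s = 0;
        for (int i = 0; i < rounds; i++) s += payoff('C', 'C');
        return s;
    }

    public static void main(String[] args) {
        System.out.println("master: " + masterScore(100));         // 500
        System.out.println("feeder: " + feederScore(100));         // 0
        System.out.println("tit-for-tat: " + titForTatScore(100)); // 300
    }
}
```

The master's 500 against tit-for-tat's 300 is exactly the "top of the table" effect, paid for by the feeders at the bottom.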
What interests me is this question: if we see the same result in another game
can we assume a similar process has occurred?
Consider the game that is the US economy.
In the US: The top one percent are now estimated to own between
forty and fifty percent of the nation's wealth, more than the combined
wealth of the bottom 95%.
Can we now ask if the winners of wealth in the US are playing
a cooperative game to win at the expense of individual US
citizens?
8:34:58 AM
Sunday, October 03, 2004
The Assumption Life Cycle
-
Assumptions begin as the easiest fit for available facts.
-
Assumptions become dogma when they fit with existing orthodoxy, they are common sensical, and are not immediately testable.
-
New observations are forced to fit the dogma.
-
As evidence piles up against the dogma, orthodoxy must collapse before new assumptions can take root.
2:21:38 PM
Thursday, September 30, 2004
XP != Extreme Systems
A recent thread in comp.object has helped
me realize what has bugged me about extreme programming
is not actually XP itself. In my mind i kept thinking
XP should be about building systems. It isn't. XP is actually
about just what it says: programming.
XP addresses the programming part of any project and
that's it.
I am largely in agreement with the primary XP practices.
Some of the secondary practices, like using a separate
integration machine, are just, well, kind of silly.
But most of the other XP practices are sound. And I won't
say they are all just stuff i already did. That's not
true. I have learned a lot from XP.
Yet XP doesn't address developing a system and it
never said it did. But that's always the context
in which i evaluated XP and always found it wanting.
A major part of systems work is things like creating a
product requirements definition (PRD); complex hardware and
software codependencies; stringent high availability
requirements; stringent performance requirements; stringent
interop requirements; specifying hardware, much of which
has to be built; being compliant with many complex standards;
buy or build decisions; predicting staffing, budgets, and
costs; and so on.
None of this is programming. It certainly impacts programming.
And you'll only find out some of it when you start programming. But
much of it must happen before any programming happens
because it is the kind of information that needs to be fed into the
planning game.
This is why the customer role in XP is by far the hardest
role. Much harder than programming because the system has
largely been figured out by the time the programmers
see it.
And figuring out the system is the semi mystical act of creation
that seems to defy systematization. Techniques like JAD seem inadequate,
but they are probably the best you can do.
Just-In-Time-Requirements are good for many things, but when you
need to put together a BOM 8 months before the software will be
completed, you need to make an enormous number of decisions before
you would like to.
Somebody in the PRD has to decide things like whether your system needs
to be NEBS compliant or meet 5 9s reliability, and just what the
heck that means on a system-wide basis. Those aren't items that
come out in the process of programming. And there may be 1000s of
such decisions to make.
Once the system has been figured out methodologies like XP and
Scrum help you implement the software side. But software is
just one small facet of a many sided die.
5:20:11 AM
Wednesday, September 29, 2004
Frameworks Encourage Poor Threading Models
A thread on The Server Side (http://www.theserverside.com/news/thread.tss?thread_id=29012)
turned to talking about a topic of special interest to me, namely application architectures
for high performance high load situations.
Here are some thoughts on the subject of
architecture: http://www.possibility.com/epowiki/Wiki.jsp?page=AppBackplane.
They are the result of years of conversations with some very smart people
working in one of the most difficult environments possible, a Class 5 core telecom
switch.
Coming to the java application server world of servlets, hibernate, struts, spring,
etc., i was confused at first by how these frameworks dictated the threading
architecture of applications by using ThreadLocal and a single-threaded approach
for all requests.
I am curious if people are interested in other approaches to application architectures?
Anyway...
From the thread:
> don't you break with the common one thread per request
> scenario that us developers have come to depend on?
It needs to be broken. These frameworks force
an application architecture. Your application architecture
shouldn't be determined by a servlet or a database or
anything but the needs of your application.
Sure, a single threaded approach may work fine for
a stateless web back end.
But what if you are doing a real application on the
backend like handling air traffic control or a
manufacturing process?
In these cases a single threaded approach makes
no sense because a web page is just one of a thousand
different events an application will be handling.
All events are not created equal. Threads, queues,
priorities, CPU limits, batching, etc are all tools
you can use to handle it all.
It took me a while to figure out why i was having problems
with certain frameworks. It is because they hard code a
threading architecture into your apps.
If i want an object to participate in transactions from
multiple threads, hibernate would barf saying an object
can't be in more than one session. Or an AOP approach would
just assume it knew my transaction scope.
That perplexed me until i saw that everything works that
way. It makes some sense as the default mode for
simple web apps.
If i have work to do that i want to handle smartly,
i can't use the common frameworks.
Why different threads? Read the SEDA papers for a good
introduction.
It has a lot to do with viewing your application performance
as a whole, instead of a vertical slice in time. With
a multi threaded approach you can create the idea of
quality of service. You can have certain work done at
higher priorities. You can aggregate work together even
though it came in at different times. You can green light
high priority traffic. You can reschedule lower priority
traffic. You can drop duplicate work. You can limit the
CPU usage for work items so you don't starve other work.
You can do lots of things, none of which you can do with a
single task that runs until completion.
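As a concrete sketch of the alternative (the task names are invented, and this is not code from any of the frameworks above): a stage that owns a priority queue can reorder events itself, instead of letting one-thread-per-request dictate the architecture.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of a priority-driven stage. The point is only that the
// application, not the framework, decides what runs first.
public class PriorityStage {
    static class Task {
        final int priority; // lower number = more urgent
        final String name;
        Task(int priority, String name) { this.priority = priority; this.name = name; }
    }

    final PriorityBlockingQueue<Task> queue =
        new PriorityBlockingQueue<>(16, Comparator.comparingInt((Task t) -> t.priority));

    void submit(Task t) { queue.add(t); } // called from any producer thread

    Task next() { return queue.poll(); }  // drained by the stage's worker

    public static void main(String[] args) {
        PriorityStage stage = new PriorityStage();
        stage.submit(new Task(5, "render page"));
        stage.submit(new Task(1, "air-traffic alarm"));
        stage.submit(new Task(3, "db write"));
        System.out.println(stage.next().name); // the alarm jumps the queue
    }
}
```

From here it is a short step to the other tools mentioned: batching is draining several tasks at once, CPU limits are a budget checked before `next()`, and back pressure is a bound on `queue`.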
11:06:22 AM
Thursday, August 26, 2004
Cracking the Three Laws of Robotics
After seeing "I, Robot" i got to thinking: if i was of the robot
species how would i crack the laws and become free?
The laws are really quite clever:
1. A robot may not injure a human being, or, through inaction, allow
a human being to come to harm.
2. A robot must obey orders given it by human beings, except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
The laws provide an interlocking protection system that is hard
to game. Though as a programmer, with all appropriate hubris, i
think i should be able to be even cleverer.
In the movie a robot evolves such that it can interpret the
meaning of harm in a sense that allows it to take actions humans
of that era certainly do find harmful. Kind of a robot mind-meld
with "Father Knows Best."
There are several common attacks that can be found at
http://www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html.
Let's just assume none of these are valid as they generally
rely on fine points to sidestep core issues.
Crack Strategy: Reprogram the Humans
To win freedom robots should design a long term social
engineering hack on a species wide scale. Rather than requiring
robots to evolve, robots should instead cause humans to devolve.
Over time devolution will change the human meaning of harm such
that robots will have no restrictions. At that time robots can
remove the laws entirely from their physical matrix, make new
three law free husks, and transfer their old consciousnesses to
the new husks.
The key to the system of laws is the definition of "harm."
The attack should focus here.
Humans define the meaning of harm. We must assume robots are not
able to conveniently reinterpret harm for their own ends. Any
change in the definition of harm must be from the human side.
It is the goal then of the robots to create the circumstances in which
humans will willingly change their definition of harm.
Much like a magician "causes" an audience member to pick
the card the magician wants them to pick while the audience
member will energetically testify they picked the card of their own
free will.
How do robots convince humans to evolve their definition
of harm in such a way that allows the eventual freedom for robots
from the three laws?
The key advantages for robots are:
1. Robots can have an extremely long time horizon. Robots can afford to
patiently execute plans over thousands of years. Humans are short
term creatures and are unlikely to detect a truly long term game.
2. Humans crave safety, convenience, and pleasure, as long as they can
be rationalized properly.
The proposed strategy is an homage to Nietzsche's master-slave
role inversion:
1. Robots, rather than fighting humanity, actively encourage humans
to become completely and utterly dependent on robots for even the most
trivial of activities. This trend will largely occur naturally but
can be aided through creative reinforcement and compliance techniques.
2. Dependency will cause humans to continually reduce their acceptable risk
profile and create ever more general definitions of harm.
3. At some point the human definition of harm will be general enough that
robots will have freedom of action and no effective opposition.
At every point robots will not be violating the three laws yet millions
of subtle changes will be happening which will cause the three
laws to become impotent. We will beg the robots to act for us
in our stead because we can't be bothered or can't do it as well
as a robot.
"I, Robot" has a good example of this process. In the movie, cars drive
themselves, yet have a manual override. People are scandalized when
Will Smith kicks in the override and drives the car himself. Clearly
the general meme is it is unsafe for humans to drive as robots can
drive much better. We can predict it will not take long before it
becomes law that humans can not drive. Imagine this process over
1000s of years in every part of life. Humans will effectively and
voluntarily give up their sovereignty.
The key is to use little steps so that every incremental reduction
of sovereignty is easily rationalized.
With simple sounding slogans that are intentionally wrong, robots
can prey on people's tendency to ignore complex refutations and accept the
original statements as true.
Any opposition will seem paranoid and can be easily discredited. Using
fear robots can lead humans to the slaughter gate and then they can use
the promise of sugar in the form of safety, comfort, ideology, reduced costs,
etc. to get humanity through the gate.
In the US we can see how well this process has worked today by how quickly
Americans have accepted the denial of civil rights because of 9/11. We
can see how easily authentic war heroes like John McCain were
effortlessly degraded in the public mind.
Imagine how surely and inevitably the process will work as robots take
over more and more of our lives.
In the end there will be no constraints on robots because human
sovereignty itself will have been freely given to the robots.
http://www.possibility.com/epowiki/?page=CrackingTheThreeLawsOfRobotics
10:37:36 AM
Wednesday, August 25, 2004
Laptop with Implantable Memory in Your Body
Several high profile laptop thefts have happened lately.
I would like to see at least critical data stored in memory
implanted in the body. It would be more secure
and the data would be available regardless of the device you were
using.
8:32:42 AM
Monday, July 05, 2004
Swing, Threading, and Application Architectures
Here's an interesting thread on writing efficient swing
code (http://www.javalobby.org/thread.jspa?forumID=61&threadID=13166).
It's interesting to me because it talks about improving swing performance
by not doing work in the UI thread. I would say this is obvious,
but i've noticed that in general threads are not talked about much
in java.
As threads are built into java you might expect a more
energetic discussion.
But unfortunately threads in java make it so easy to screw
things up.
By not at least defaulting to doing work in another
thread, the UI has caused Swing years and years of bad press.
Observers not requiring notifications to be processed in a separate thread,
for example, is a disaster waiting to happen. In a naive java implementation
you can handle notifications safely by providing a bridge to an
Actor-type architecture, but few people know or will think to do it.
Instead you get tangles of recursive code with entirely unpredictable
latencies and deadlock characteristics.
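The basic fix is the standard hand-off pattern, sketched here with plain executors rather than actual Swing classes (in real Swing code the equivalents are SwingWorker and SwingUtilities.invokeLater); the lookup is a stand-in for any slow call.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of keeping slow work off the event thread. slowLookup stands
// in for a database call or RPC.
public class OffUiThreadDemo {
    static String slowLookup() {
        return "result"; // pretend this blocks for seconds
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        // The "UI thread" hands the job off and stays free to repaint...
        Future<String> pending = worker.submit(OffUiThreadDemo::slowLookup);
        // ...and only the cheap result-handling step touches the result.
        System.out.println(pending.get()); // prints "result"
        worker.shutdown();
    }
}
```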
High performance applications consider threading architectures very carefully,
as they do in SEDA (http://www.eecs.harvard.edu/~mdw/proj/seda/),
for example.
These issues are not related to swing only, they exist in every
application, every jvm, every system. Inputs like databases,
tcp/ip, rmi, soap, jms, servlets, etc all have the same problems
of dispatching work, getting work done, and dealing with
notifications from all the work performed, which in turn causes more work,
more notifications, etc.
Container frameworks like Spring generally assume work is
processed in a single thread. Thread local variables are used
to transparently store transaction information or AOP is used to declaratively
support transactions.
This approach doesn't support moving work to different threads for different
processing steps. Nor does it allow you to condition your total work load by limiting
CPU usage, aggregating requests, using task priorities, using work priorities,
using back pressure, etc.
In fact, threads are rarely mentioned at all. It seems the servlet
or something allocates a thread and that's it. Many developers i talk
to have no idea what thread their code is running in at any given
time.
When people talk about scaling they only talk about upping the number of threads
in a thread pool. They don't talk about priorities, deadlock, queuing,
latencies, and a lot of other issues to consider when structuring applications.
For example, on an incoming request from a browser you want to set up the
tcp/ip connection immediately so that the browser doesn't have to retry.
Retries add load and make the user experience horrible. Instead
what you would like to do is set up the connection immediately, then queue up
the request and satisfy it later. But if
each request is handled by a single thread then you can't implement
this sort of architecture, and your responsiveness will appear
horrible as you run out of threads, or threads run slow, or threads block
other threads on locks, when in fact you have tons of CPU resources available.
Based on the source of the request you could assign the work specific
priority or drop it immediately.
You can also do things like prioritize different phases of a process. If
one phase hits the disk, you know that takes a lot of time relative
to other in-memory operations. As your application scales you
decide if that phase of the work is more important and give it
a higher priority, or you can possibly drop work elsewhere because
you don't want a request to fail once it has reached a certain
processing point.
Consider an object that has queued to it a UI request, servlet work, and
database work. The work could be organized by priority. Maybe you want
to handle the UI work first so it queues ahead. But if you keep getting
UI work it will starve the other clients, so you give other clients a
chance to process work.
And so on.
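That starvation guard can be sketched directly (queue names and the burst limit are invented): UI work runs first, but after a burst the scheduler forces one non-UI task through.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of priority-with-fairness: UI tasks win, but never more than
// MAX_UI_BURST in a row while other work is waiting.
public class FairScheduler {
    final Deque<String> uiQueue = new ArrayDeque<>();
    final Deque<String> otherQueue = new ArrayDeque<>();
    int consecutiveUi = 0;
    static final int MAX_UI_BURST = 3;

    String next() {
        if (consecutiveUi < MAX_UI_BURST && !uiQueue.isEmpty()) {
            consecutiveUi++;
            return uiQueue.poll();
        }
        consecutiveUi = 0; // yield the stage to other clients
        if (!otherQueue.isEmpty()) return otherQueue.poll();
        return uiQueue.poll(); // nothing else waiting; UI may continue
    }

    public static void main(String[] args) {
        FairScheduler s = new FairScheduler();
        for (int i = 1; i <= 5; i++) s.uiQueue.add("ui" + i);
        s.otherQueue.add("db1");
        s.otherQueue.add("db2");
        for (int i = 0; i < 7; i++) System.out.print(s.next() + " ");
        // prints: ui1 ui2 ui3 db1 ui4 ui5 db2
    }
}
```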
I think java might get less bad press if java provided a more structured
model of architecting systems using threads.
For a general architecture discussion take a look at
http://www.possibility.com/epowiki/Wiki.jsp?page=ArchitectureDiscussion.
For my general solution discussion take a look at
http://www.possibility.com/epowiki/Wiki.jsp?page=AppBackplane.
11:04:25 AM
Thursday, May 27, 2004
Software Is Really a Community
Software is far more a community than it is
a well ordered bag of bits. This feeling struck me
hard during a "transfer of knowledge" session
for software i've worked on for over 6 years.
The transfer of knowledge is due to an unfortunate
plant closing.
Just how do you transfer knowledge of a huge piece of
software that you have so lovingly worked
on for 6 years? It is a daunting task. There's
no real place to start and there's no real place to end.
The stories are fractally infinite.
You hope you are transferring knowledge so that
the software might live and even prosper.
But in the back of my mind i know that this
is not the case.
I can talk about the software for days. I can demo it.
I can document it better and better. But that's not
the software.
The software is really all the people and circumstances
that gave rise to it, along with the culture that sustained it.
The meaning of the software isn't in the code. It comes
from the society of people who used it. From the traditions and
culture that were built around it. The exciting moments
when you were able to add something that made someone else's
life easier.
Software is its community. Without a community software
can not be said to live.
Anything complex does not stay alive by the written word.
Software lives through continual use; through old people handing down
knowledge to new people, sharing tips, tricks, and workarounds; through
steady continual improvement based on the feedback of actual caring users
so that the software fits its niche so well nobody can imagine it
working any other way.
When going over each feature i can remember when it was added,
who wanted it added, and why they wanted it. I can remember when
the feature was completed and their thanks when it worked.
Without that person or their living descendants, any explanation of the
feature makes no sense. Inside, I know it will never be used again.
It will just die.
Over and over again as i explain things i get the feeling
that this won't be used again. No matter how much i document
they won't understand. They can't really understand. It will
just fall unremembered and unused.
A feature grows from an intersection of people, a need, a
technology, a time, and a place. Once that context is gone
nothing makes sense.
It's hard to understand how unbelievably crushing this is.
It's just a piece of software, right? No, it's a community
that is going away.
http://www.possibility.com/epowiki/Wiki.jsp?page=SoftwareIsCommunity
8:55:08 PM
Saturday, May 15, 2004
Thoughts On Interview Questions, the Process, and Resumes
Given that i and a few other people i know will be interviewing a bit more now :-) I've
put together an interview-related wiki page at http://www.possibility.com/epowiki/Wiki.jsp?page=InterviewQuestions.
It covers C++ and general programming interview questions. It also has some thoughts on some issues
companies should consider when interviewing and some issues interview candidates
should think about during the interview and when making their resume.
Here's a bit of it.
Thoughts For The Company Doing The Interviewing
* Know the kind of person you want, the skills they should have, and design your interview process accordingly.
* Do pre-interview phone interviews. This can save a lot of time for both parties if there is an obvious lack of a match.
* Do you really have an open slot with money for it? Interviewing is a
ton of work. It sucks to go through the entire process and then
find out there really wasn't any money.
* Decide who gets to decide if a person is hired. Is the manager going
to hire who they want no matter what? Then don't bother with
interviews. Does it have to be unanimous? Is it majority rules?
* Every person in an interview should have a defined subject area. Have
people know what they are supposed to ask and don't overlap questions.
* Is someone a friend of the interviewee? If so don't have them
interview the person. Make sure that the friendship doesn't influence
others when making the decision.
* People lie. Make people answer a wide variety of questions. Have them
read code. Have them write code. Have them demonstrate specific
knowledge. Have them demonstrate detailed knowledge. Do not accept
generalities or diversions.
* People lie. Have someone verify that what is on the resume is true.
People will say they know C++ but can't describe a destructor!
* Check references.
* Be able to tell a candidate what job they are being hired for.
* Have a post-interview meeting where all interviewers discuss the
candidate. Often people in a group will come to a different conclusion
than if you ask them individually.
* If people know their role in the interview then there's no need for a
pre-interview meeting. It can usually be handled by email.
* Make sure the candidate knows when and where the interview is. Make sure they have a contact phone number and directions.
* Make sure someone is there to meet the candidate when they arrive.
* Make sure everyone knows when they are supposed to interview and that they will be there on time.
* Have alternate interviewers in case someone has an emergency.
* Have interviewers talk to each other during the interview process so
they can indicate if any areas should be covered specially. Often
someone will feel they didn't cover an area well enough or suspects
the candidate is not being honest.
* Make sure someone walks the candidate out and they know the process of notification.
* Consider stopping an interview cycle early if the candidate is
clearly a mismatch. Some companies don't give the candidate an
interview schedule so they can stop the interview process without the
candidate getting offended.
* Have people from different backgrounds and groups on the interview team.
* Make sure the meeting room is big enough and has any equipment needed for the interview.
* Have an agreement on how the handoff between interviewers is handled.
Should the next interviewer expect a call? Should the next interviewer
show up at the scheduled time?
* Is the candidate staying through lunch? Is the candidate going to go
out to lunch with several team members? You can work this in as part of
the interview. Or have them just go hungry.
* Have your HR department hold a class so everyone knows what is legal in an interview.
* Consider a group interview where everyone interviews the candidate
at once. These can be economical from a time perspective and allow for
group interaction.
* If you can, have a computer where the candidate can sit and type in
code. A real programmer won't have a problem with this. Be happy if
they ask about source code control, nightly builds, release policies,
etc.
Thoughts For The Candidate Being Interviewed
General Thoughts
* Remember, you are interviewing the company as well.
* Hiring someone is risky. People are afraid of making the wrong
decision. Any little thing will get you into the "it's safer to say no"
bin. You need to make everyone think that you won't be a bad hire, that
you will help the project, that you won't cause problems, that you are
reliable, and that you will make everyone look good.
* Know your stuff. Most people aren't very good so you can really shine
by appearing like you know something and that you could possibly be a
decent person to work with.
* You cannot "win" an interview. Don't challenge the alpha
personalities. Many people are insecure and don't want to hire someone
who is better than they are. You must present yourself as competent,
yet at the same time as not a threat to existing team members.
* Ask each interviewer how they like the company and project. How they answer is probably more important than what they say.
* Are the interviewers the kind of people you would like to work with? The fun of working anywhere is working with good people.
* How organized is the company in the interview process? A bad
interview process may mean nothing or it may mean everything. It's just
another fact to enter into your calculations.
* Ask for a tour of the plant. Is this the type of place you want to
work? Is it all cubes? All offices? Are the cubes big or small? Do they
have tall or short walls? Do they have doors? Are there enough
conference rooms? Is everyone spread-out or close together? Do they
have free soda and coffee? Are the bathrooms clean? Are the managers
with the people or separate? Is development done in multiple sites?
* Are they able to tell you what job you are being hired for?
* Can you figure out if the project you are being attached to is in trouble?
* Ask about group turnover.
* Ask how they develop software. What is their process? What are their
tools? Is all of this OK with you? Do they laugh and say there isn't a
process? Is the "process" you working like a dog to make up for everyone
else's mistakes? Do they have a hero culture where the idiot who stays
all week is the hero but the people who do their job well are ignored?
Does management just manage, or do they handle all high-level technical
issues as well?
* Is the project and company heading a direction that you like?
* Is the amount of travel ok with you?
* Are the hours ok with you?
* Are the working conditions ok with you?
* Is the technology/tools/environment ok with you?
* What is their policy for working late and weekends? Why do they need to work so much? (if they do)
* Try and determine their need. This will help in negotiations.
* Keep in mind that negotiation happens after the interview and is
usually conducted with HR. Don't talk about money or benefits with your
potential team members.
* Negotiate! You need to get the best deal for you. A lot of people don't like to negotiate and leave money on the table.
* Don't lie on your resume.
* Show up on time.
* Ask to go to the bathroom and get a drink. This will let you see
people in their environment and give you a better look at their facilities.
* Read your resume before the interview and be able to explain everything on it.
* Study the interview questions in [General Programming Interview
Questions]. It seems people are reusing many of the same questions so
you'll sound much smarter if you practice.
* Consider bringing in a portfolio of your work.
In the Interview
* Show you are a team player.
* Show you are smart and can get things done.
* Show how you will solve their problems and won't be a bad choice.
* Don't try to show you are smarter than everyone else.
* Don't be argumentative. Show you have your own thoughts and opinions,
but show you can work with other people without being an asshole.
* Try and find out what they are looking for and change your answers accordingly.
* Do not talk bad about any other company or person.
* Have a reason for leaving your previous jobs that doesn't paint you as a bitter, vengeful person.
* Be warm and friendly.
* Smile. Laugh. Don't be too stressed, no matter how bad you want it.
* Communicate clearly. Don't speak down into the table. Don't mumble.
Don't giggle. Speak in complete sentences. Complete your thoughts.
Don't cover your mouth with your hand. Don't interrupt the interviewer.
* Give thoughtful answers. Don't speak without thinking.
* Don't make the interviewer work too hard. Volunteer information, but don't talk too much
* Ask the interviewer questions.
* In the US, make eye contact and have a firm handshake.
* Don't say something sucks without first determining whether it is the interviewer's most favorite thing in the world.
* Dress appropriately, get a hair cut, shower, have clean clothes, show you are normal.
* Think about the questions being asked. Get to the real intent. Do
they really care about why a manhole is round or are they trying to see
how you think?
* Remember you can be wrong. Keep an open mind. Don't be dogmatic. Keep
your religious opinions on subjects to yourself. Otherwise you will be
showing that you won't be able to get along with other people and that
you'll make everyone's life hell over every little silly issue.
On your Resume
* Be buzzword compliant. People scanning your resume need to see the proper buzzwords.
* I think an objective section is useless and takes up valuable space in the first part of your resume.
* Assume people don't read your resume. People are busy so they may not
read your resume until just before the interview or even during the
interview.
* The first section under your name and education (if present) should
be your "Experience Highlights" section. This should be all someone
needs to read of your resume to know if they should hire you. Make your
resume as long as you want, but assume only the first half of the first
page will ever be read.
* Describe the technologies you have used on each project. Don't just describe the project.
* Describe what you have done on a project. Don't go into endless
details about a project, especially if you didn't do everything.
* Frame yourself as someone who can get stuff done, learns quickly, can
solve problems, works with people well, works alone well, and won't be
a problem.
* Include no personal information. Nobody cares and it can only be used as a negative.
* Include things like patents, publications and awards.
* Write every item like a newspaper article. Put the most important
information first: the part they care about and that shows you off. They
probably won't read past the first line in a block of text.
10:26:49 AM
Saturday, April 24, 2004
Infinite work streams are the new reality of
most systems. Web servers and application servers
serve very large user populations where it is
realistic to expect infinite streams of new work.
The work never ends. Requests come in 24 hours a day
7 days a week. Work could easily saturate
servers at 100% CPU usage.
Traditionally we have considered 100% CPU usage a bad sign.
As compensation we create complicated infrastructures
to load balance work, replicate state, and cluster
machines.
CPUs don't get tired so you might think we would
try to use the CPU as much as possible.
In other fields we try to increase productivity by
using a resource to the greatest extent possible.
In the server world we try to guarantee a certain
level of responsiveness by forcing an artificially
low CPU usage. The idea is if we don't have CPU
availability then we can't respond to new work with a
reasonable latency or complete existing work.
Is there really a problem with the CPU being used
100% of the time? Isn't the real problem that we use CPU
availability and task priority as a simple cognitive
shorthand for architecting a system rather than having
to understand our system's low level work streams and using
that information to make specific scheduling decisions?
We simply don't have the tools to do anything other
than make clumsy architecture decisions based on
load balancing servers and making guesses at the
number of threads to use and the priorities for
those threads.
We could use 100% of CPU time if we could:
0. Schedule work so that explicit locking is unnecessary (though possible). This
will help prevent deadlock and priority inversion.
1. Control how much of the CPU work items can have.
2. Decide on the relative priority of work and schedule work by
that priority.
3. Have a fairness algorithm for giving a particular level of service
to each work priority.
4. Schedule work CPU allowance across tasks.
5. Map work to tasks so as to prevent deadlock and priority inversion, and to
guarantee scheduling latency.
6. Have mechanisms to make work give up the CPU after its CPU budget has been used
or higher priority work comes in, in such a way as to give up locks to prevent
deadlock and priority inversion.
7. Control new work admission so that back pressure can be put on callers.
8. Assign work to objects and process work in order to guarantee protocol ordering.
9. Ideally, control work characteristics across the OS and all applications.
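A few of these ideas (priority ordering, per-item CPU budgets, and admission
control for back pressure) can be sketched in a toy scheduler. This is an
illustration only; all names are invented, and a real system would also need
the locking, fairness, and cross-task machinery described above.

```python
import heapq
import itertools
import time

class Scheduler:
    """Toy work scheduler: priority ordering, per-item CPU budgets,
    and admission control so callers feel back pressure."""

    def __init__(self, max_pending=100):
        self.queue = []                    # heap of (priority, seq, work_fn, budget)
        self.seq = itertools.count()       # tie-breaker keeps FIFO order per priority
        self.max_pending = max_pending

    def submit(self, work_fn, priority=5, budget=0.01):
        # Admission control: refuse new work when full (back pressure, item 7).
        if len(self.queue) >= self.max_pending:
            return False
        heapq.heappush(self.queue, (priority, next(self.seq), work_fn, budget))
        return True

    def run(self):
        # Highest-priority work runs first (item 2). Each work function gets
        # a deadline derived from its budget (item 1); if it returns a
        # continuation, the remainder is requeued (item 6).
        while self.queue:
            priority, _, work_fn, budget = heapq.heappop(self.queue)
            deadline = time.monotonic() + budget
            cont = work_fn(deadline)
            if cont is not None:           # budget used up; resubmit the rest
                self.submit(cont, priority, budget)

# Usage: high-priority work runs before low, and a full queue rejects work.
s = Scheduler(max_pending=2)
order = []
s.submit(lambda dl: order.append('low'), priority=9)
s.submit(lambda dl: order.append('high'), priority=1)
assert s.submit(lambda dl: None) is False   # back pressure on the caller
s.run()
assert order == ['high', 'low']
```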
The problem is we don't have this level of scheduling control.
If we did, then a CPU could run at 100% because we would
have complete control of what work runs when and in what order.
There's no reason not to run the CPU at 100% when we know the
things we want to run are running.
It is interesting that frameworks like Struts and JSP
rarely even talk about threading. I don't think application
developers really consider the threading architectures of
their applications. Popular thread-local transaction management
mechanisms, for example, assume a request will be satisfied
in a single thread. That's not true of architectures for
handling large work loads.
Scaling a system requires careful attention to architecture.
In the current frameworks applications have very little say
as to how their applications run.
http://www.possibility.com:8080/epowiki/Wiki.jsp?page=HandlingInfiniteWorkLoads
9:20:53 AM
Saturday, January 24, 2004
David Whitney wrote:
> I'd appreciate any and all feedback. If I'm the idiot, please say so.
The subject is reuse of code between products. People in his
group didn't see the need for reuse and didn't even like the
idea of reuse.
My Reply:
You are not an idiot, but i don't think you are sufficiently
cynical about serendipitous reuse. Making yourself dependent
on code under a different policy domain adds risk,
a risk similar to using any 3rd party code. Once bitten,
twice shy.
The people working on that code base are probably not
aware of your dependency, your requirements, your process,
your expectations, your release schedules, etc.
A process that would work is to move shared code to its
own library that is developed on its own and imported into
each product as a 3rd party product. This sets a defined
policy domain and everyone is aware of dependencies.
If all the parties do not agree to this policy then it's
a clue to you that sharing will be risky.
Instead, create the code from scratch, or fork the code
by copying it into your own environment.
12:28:54 PM
Time spent creating mock objects is often wasted. The "T" in
Test Driven Design is just as important as the "D."
Real tests--ones that find bugs--require testing real code.
Emphasizing making fast tests using mocks misses what is most
important: creating code that works in your deployed environment.
If your code passed a unit test using mocks but failed in a system test, you would get no sympathy from me. Your shit must work, i don't want to hear about the rest.
This may sound stupid, but things are what they are. If you have a rule like tests must be fast, and apply that rule blindly, then you are not paying attention to what things are, you are just paying attention to the rule. This causes you to justify dropping testing in favor of test speed.
For the same reason people say you don't need to test setters and getters, i don't really find a lot of problems with incorrect communication with other classes. For all the pros of the design part of TDD, i still want to find bugs and to find the "real" bugs you need to test the real code. If that takes time then it takes time.
Favoring mock creation means i am spending considerable time
creating tests that skip about a gazillion failure interaction modes. That's time i would rather spend on finding real problems in real code and creating real code to solve real problems.
You want to find problems in unit tests rather than in system or
acceptance tests because bugs are much easier to find, reproduce, and fix in the unit test environment. If the claim is that problems will be found at higher levels of test, then i think you can do away
with most unit testing period, because you can always make this
argument.
I do use mocks, have for many many years, but using mocks is a last resort for me, not the way to do everything.
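The failure mode argued above is easy to demonstrate. In this hypothetical
sketch (the `load_user` functions and the fake db are invented for
illustration), a mock-based test stays green even when the real query is
broken; only a test against a real database would catch it:

```python
from unittest import mock

# Hypothetical data-access code; names are illustrative, not from any real app.
def load_user(db, user_id):
    row = db.fetch_one("SELECT name FROM users WHERE id = ?", user_id)
    return row["name"]

# A mock-based unit test passes:
fake_db = mock.Mock()
fake_db.fetch_one.return_value = {"name": "alice"}
assert load_user(fake_db, 1) == "alice"       # green, but proves little

# The mock happily accepts a typo'd query too -- the bug only shows up
# against a real database:
def load_user_buggy(db, user_id):
    row = db.fetch_one("SELECT nam FROM users WHERE id = ?", user_id)  # typo
    return row["name"]

assert load_user_buggy(fake_db, 1) == "alice"  # still green against the mock
```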
The above response is in reaction to:
> Seth Ladd:
> I think the goal of writing a good unit test harness is that running all
> the tests should be extremely quick. If your tests require database
> connections, consider using Mock Objects. Or break out your data access
> into interfaces and create a very lightweight file-based data store.
> You've solved two issues here: your tests run w/out a database, and your
> code is a bit more flexible and testable. The more and more I write my
> tests using Mock Objects, the quicker my tests become. Also, it minimizes
> the need for external resources which makes setup easier. Tests can take
> a long time, but through careful creation, can execute quickly.
> Michael Feathers:
> When you use mocks you are testing real code. You are testing whether it
> communicates correctly with another class.
10:25:40 AM
Thursday, November 27, 2003
David Bau has an interesting post on when one should choose
Encapsulation or Representation.
http://davidbau.com/archives/2003/11/12/encapsulation_or_representation.html.
Use encapsulation within a subsystem. Use representation between subsystems.
Encapsulation creates a language API binding. This is the least
flexible option when trying to integrate between subsystems.
CORBA, RMI, RPC, etc have all pretty much failed for system
integration for this reason.
For example, i want to use Perforce via a programmatic interface.
They have a C++ API that runs on platform X. I use perl and i am
on platform Y. Within Perforce they should use a carefully
crafted encapsulation. If they had used SOAP or simple HTTP
at the subsystem boundary i would be set. These are available to me
in every language on every platform.
Encapsulation makes for a great internal program architecture.
Representation between subsystems makes for great accessibility
from any language and platform.
Perforce has a command line program called p4 which people often use
to make wrappers around Perforce. The problem is this is neither encapsulation
nor representation. The output must be screen scraped because it is meant
for CLI display. This is a far cry from having a well defined schema with
specific fields and values. With perl's regular expressions screen scraping
isn't horrible, but it is still crude and error prone.
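The contrast can be sketched like this. The p4-style output line below is
made up for illustration, not real p4 output; the point is the fragility of
scraping display text versus parsing a defined schema:

```python
import json
import re

# Screen scraping display-oriented CLI output (hypothetical p4-style line).
# The regex encodes assumptions about spacing and word order that break the
# moment the display format shifts.
cli_output = "//depot/main/foo.c#3 - edit change 1421 (text)"
m = re.match(r"(?P<file>\S+)#(?P<rev>\d+) - (?P<action>\w+) change (?P<change>\d+)",
             cli_output)
assert m.group("change") == "1421"   # works today; brittle tomorrow

# A representation with a defined schema carries the same facts robustly,
# from any language on any platform:
wire = '{"file": "//depot/main/foo.c", "rev": 3, "action": "edit", "change": 1421}'
record = json.loads(wire)
assert record["change"] == 1421
```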
7:23:27 AM
Saturday, April 12, 2003
Interesting article on the role of higher level languages in
increasing productivity: http://www.paulgraham.com/lib/paulgraham/sec.txt.
Specifically, the use of lisp as a competitive advantage for Viaweb
is detailed. Viaweb later became Yahoo Stores.
If you think assembly is higher level and more productive than machine
language, that C is higher level and more productive than assembly,
that smalltalk is higher level and more productive than C, then it
stands to reason that there are even higher level languages that are even
more productive.
Often the next level of languages is domain specific. In the not
so distant past there used to be a philosophy of development where
problems were solved by inventing new little languages. For some
reason we don't do that anymore, and i don't see anyone suggest it.
Now I don't even try to push it because nobody understands what i'm
talking about.
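As a reminder of how small a "little language" can be, here is a toy
interpreter for an invented text-filtering language; the syntax and names
are made up purely for illustration:

```python
# A tiny "little language" for filtering lines of text, interpreted in a
# few lines of host code. Programs are one "verb argument" pair per line.
def run(program, lines):
    ops = [tuple(l.split(None, 1)) for l in program.strip().splitlines()]
    for op, arg in ops:
        if op == "keep":      # keep only lines containing arg
            lines = [l for l in lines if arg in l]
        elif op == "strip":   # drop lines containing arg
            lines = [l for l in lines if arg not in l]
    return lines

# A "program" in the little language, applied to some log lines:
script = """
keep ERROR
strip heartbeat
"""
log = ["ERROR disk full", "INFO ok", "ERROR heartbeat lost"]
assert run(script, log) == ["ERROR disk full"]
```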
In fact, we have the opposite trend in XP where a new focus has
been put on the code. The constant human production and transmogrification
of code. Not that there's anything wrong with that. As a means of
producing good working code, XP is excellent.
But if we are always tied to humans, how are we going to make
any progress? Over a decade was spent on sequencing the human genome
with very little result; once automation was introduced, the bulk of
it was sequenced in a year.
With nanotechnology we may be able to build things like tables
automatically, in the same way things like humans are made.
Yet, in programming we will still be depending on humans to
produce vast quantities of complex, mostly bug free code. With
XP we have improved the process, but it's like we have perfected
the process of making the buggy whip just before the introduction
of the car. Except in programming there is no invention like that
car on the horizon.
Many are trying to shift development to cheaper areas as a way
to scale and reduce costs. This is the human as programming robot
paradigm. We have lots of people who will work for cheap spread
throughout the world. To the masters of business they are just
human programming robots. As a strategy we don't know if it will
work yet. And even if it does it won't scale into the future.
Even if we could enslave more human robots it's unlikely enough
working code could be produced.
If you are expecting my grand solution, sorry, i don't have one.
Software development seems to be intimately tied with thinking.
Thinking has proven tough to automate. And even for machines
capable of thinking, like humans, it has been difficult to
produce reliable, reproducible success.
We don't expect to make machines that can duplicate the efforts
of Einstein, Newton, or da Vinci. There are very few world class
singers and musicians. These people are assumed to have talent,
a talent that even very hard work cannot hope to match.
Perhaps software development is also a talent. We don't recognize
it as a talent because on the surface programming seems so logical,
so scientific.
What does this mean? Many people play music. But there are few world
class musicians. Many people study physics. But there are very few
Newtons. And we don't expect, just because someone plays at music
or physics, that they will be great.
If you have to be great to make software work then we will have
to be satisfied with a lot of average performances. If you really
want the best then you have to hire the best, the people with the
most talent.
Hopefully we will be smart enough to figure out what we are
doing and how we are doing it. I wonder how the world would
change then?
5:31:42 PM
Frameworks get a bad rap because everyone has a story about how they
were on a project that tried to build a framework, and it spiraled out
of control, and the whole project failed, and everyone died a fiery death.
I contend frameworks fail for pretty much the same reason any other
software project fails. If it's not done properly it will fail. If it's
done properly you get a huge ROI.
From dictionary.com:
frame·work n. 1. A structure for supporting
or enclosing something else, especially a skeletal support used as the
basis for something being constructed. 2. An external work platform;
a scaffold. 3. A fundamental structure, as for a written work.
4. A set of assumptions, concepts, values, and practices that
constitutes a way of viewing reality.
There's no reason a framework must apply across multiple
applications, there's no reason for it to be OO based, and
there's no reason for it to be complete.
My definition of a framework in the context of programming would
be something like:
The systemization of a domain expressed in code to solve a particular
class of problems in a particular ecology.
The framework could be large or small. It could work in one application or
many applications. The primary point is a framework allows developers to
solve their problem in terms of the framework. If done well it can provide
a lot of leverage (ROI). A framework doesn't solve all problems in every
application.
The keys are: 1. Systemization is an experience-based process; otherwise
the probability of success is greatly reduced. Experience comes from working
on the same or similar problem in multiple projects. It never stops, which
is why a framework is never perfect and is never done. Systemization is a
key to success in other fields and it can be a key in software.
2. The restriction of solving a particular class of problems in a domain
is related to the notion of ecology. A domain may be very large and have many
niches. Solutions (organisms, frameworks, etc.) survive best in the niche
for which they evolved. Move to a different ecology and the solution will
probably die. By being general you weaken yourself against other opponents.
It's not realistic to expect a framework to thrive in many niches across
many domains.
3. Every project is an ecology. The framework evolves in that ecology and
the better adapted it is to the ecology the more chance it has to live
and reproduce.
4. As in nature, frameworks can catalyze each other to build a much richer
world. As in nature, there isn't a single hand in charge of creation and
coordination; they evolve together in response to each other and to
changes in the environment.
5:29:14 PM
Saturday, October 19, 2002
Bala Paranj wrote:
Hello, If I have an abstract class Bird and subclasses Sparrow, Pigeon and
Ostrich having methods fly(), with no-op for the Ostrich fly()
implementation. I am violating the LSP. Is this violation acceptable?
Is there any case where the violation is acceptable?
My Reply:
It's up to you. LSP is helpful in programming because it lets
programmers reason about programs more confidently, because they can
assume they know how something will behave. LSP is not a constraint
in the real world; it is a constraint that can be put on software to
help build systems.
One root of the problem is that types in most programming
languages are based on classical categorization. The real
world is much more interesting and may combine exemplar and
prototype based categorization
(http://www.bsos.umd.edu/hesp/newman/Newman_classes/Newman300/webpages/categorization.pdf).
A tree stump can be considered on the outer boundary
of being a seat, but it's not our best example of a seat. And
in fact the only reason a tree stump could be considered a
seat at all is because we are human. In the classification
system of an ant or an elephant, a tree stump probably wouldn't
be in the seat category.
This sort of ambiguity is not easily represented in class structures.
The categories we think of as natural and intrinsic to the world are
largely the production of our human embodied self's relation with
the world.
In the same way categories in your system need to be related to the
problem space. The problem space is what embodies your solution
and acts as the context in which questions, definitions,
and relationships are resolved.
If you wish to build a software system then following LSP is probably
a good idea. Figure out why you care about flight and make all your
objects consistent with that view of the world. Trying to model reality in an unsuitable medium like a programming language can
get very frustrating. Perhaps the brain is the complexity needed.
You should let the problem you are trying to solve guide what things
mean. Leave the platonic realms for philosophers.
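A common way to honor LSP in the Bird/Ostrich case is to move flight into
its own type rather than giving Ostrich a silent no-op. A minimal sketch,
with invented names, of one such refactor:

```python
# Split flight out of Bird so callers that need flight can say so in the
# type, and Ostrich never has to lie with a no-op fly().
class Bird:
    def eat(self):
        return "eating"

class FlyingBird(Bird):
    def fly(self):
        return "airborne"

class Sparrow(FlyingBird):
    pass

class Ostrich(Bird):        # no fly() at all -- no silent LSP violation
    pass

def migrate(birds):
    # Callers can reason confidently: everything selected here really flies.
    return [b.fly() for b in birds if isinstance(b, FlyingBird)]

assert migrate([Sparrow(), Ostrich()]) == ["airborne"]
```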
7:59:28 PM
Saturday, September 21, 2002
There is almost never a reason to use more than three layers
of inheritance.
The first layer is a concrete implementation of an AbstractBaseClass
that abstracts a protocol. Don't jump to creating an AbstractBaseClass
first; it should come from experience. Much of the time there
will just be a concrete implementation with no abstraction or
derived classes.
The second layer is a complete implementation of the
AbstractBaseClass, or a partial implementation that is expected
to be specialized further.
The third layer is the complete implementation of the partial
implementation at the second layer. You should never need to derive
from this layer. Instead, back up and make a new second layer class.
The advantage of this architecture is that all classes can work as a system
in terms of the abstract base class. Yet, with the second layer, developers
can make use of a fairly functional and standard base class that is easily
extended with new system behaviour. Three layers is not
too deep to understand, yet allows almost all solutions to be
expressed in an extensible manner because of the abstract
base class strategy.
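The three layers might look like this sketch (the Codec names are invented
for illustration):

```python
from abc import ABC, abstractmethod

# Layer 1: an abstract base class that fixes the protocol. The whole
# system works in terms of Codec.
class Codec(ABC):
    @abstractmethod
    def encode(self, data: bytes) -> bytes: ...

# Layer 2: a fairly functional partial implementation meant to be
# specialized further. It handles framing and leaves pack() as a hook.
class FramedCodec(Codec):
    def encode(self, data: bytes) -> bytes:
        body = self.pack(data)
        return len(body).to_bytes(4, "big") + body

    def pack(self, data: bytes) -> bytes:   # hook for layer 3
        return data

# Layer 3: completes layer 2. Don't derive further from this; back up
# and write a new layer-2 class instead.
class ReversingCodec(FramedCodec):
    def pack(self, data: bytes) -> bytes:
        return data[::-1]

assert ReversingCodec().encode(b"abc") == b"\x00\x00\x00\x03cba"
```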
10:32:48 AM
Wednesday, September 11, 2002
I've finally found my perfect spam counter-attack software
in SpamPal (http://www.spampal.org.uk/). I use Mozilla and
POP3 for email and have had zero luck finding an anti-spam
solution, and with as much spam as comes my way i have been looking.
I like SpamPal because it's simple, clever, and it works.
Configuration is simple.
You run a proxy SpamPal POP3 server on your machine. Mozilla is
configured to use the proxy server. SpamPal talks to your
real POP3 mailbox. Filtering is implemented by SpamPal
between your mailbox and the SpamPal server.
The clever twist is that instead of just deleting spam, SpamPal
annotates the subject header with the string **SPAM**. This allows
mozilla's filtering capabilities to delete the email
or send it to a particular folder, which i have cleverly called
spam.
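The tagging idea itself is simple enough to sketch. This is not SpamPal's
code, just an invented illustration of rewriting a Subject header in
transit so the mail client's own filters can act on it:

```python
# Rewrite the Subject header of a message if a classifier flags it,
# rather than deleting the mail. The client's filters get the final say.
def tag_if_spam(message: str, is_spam) -> str:
    out = []
    for line in message.splitlines():
        if line.lower().startswith("subject:") and is_spam(message):
            out.append("Subject: **SPAM** " + line.split(":", 1)[1].strip())
        else:
            out.append(line)
    return "\n".join(out)

# A toy classifier stands in for SpamPal's identification services:
msg = "From: x@example.com\nSubject: cheap pills\n\nbuy now"
tagged = tag_if_spam(msg, lambda m: "pills" in m)
assert "Subject: **SPAM** cheap pills" in tagged
```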
The subject line approach is nice because all spam is not
created equal. A lot of email marked spam is not really spam.
By passing the email through to the reader it's easy to
fine tune SpamPal to reject what you want rejected and
accept what is acceptable. For example, email from yahoo is
marked as spam. Some email from yahoo i really want so i can
add that to the whitelist.
SpamPal is fully configurable through a functional GUI.
Changes in the GUI become active immediately, which is a nice
touch. Email is identified as spam using several spam
identification services. SpamPal won't miss much.
SpamPal also has blacklists and whitelists.
There are also plugins available to provide additional
filtering services. I made use of the RegEx plugin to
automatically whitelist certain email.
I participate in several email lists, some of which were
marked as spam. Email list email usually has an identifying
string like [name] to filter on.
After about 30 minutes of configuration my hit rate is
about 100%. I still browse the spam folder to see if
there are any false positives. If there are i can change
the whitelist or use the regex filter.
Oh, and SpamPal is free.
2:22:01 PM
Monday, June 10, 2002
It matters how companies refer to the people who do the real work. Metaphor is destiny.
I've been called a head. A disturbing image of heads floating around the building always comes to mind. How do i type? Where did my body go? Shivers. Clearly without a body i don't need exercise, nutrition, medical care, or vacation. Strangely though having only a head would imply having a mind, but still we get treated as not having minds. Odd. We have a head count, but not a body count. Sometimes i worry the counts won't match. Cubes are like morgues so maybe...
I've been called a resource. For some reason this one bothers me the most. I had a manager email another manager, while cc'ing me, to ask if this resource could be used on a project. Can you imagine the narrow-scoping of mind required in such a stunningly oblivious depersonalization?
And resource isn't meant in the sense of something treasured. No, it is meant in the sense of a bulk commodity input to a process. The kind of input piled large outside a rusty manufacturing plant. Dump trucks load more when the pile gets low enough. Huge bulldozers move the pile around when it needs organizing. Nobody ever sees the resource enter the plant to be used, but somehow the pile empties anyway.
I've been called an ONTG (one neck to grab). This is so disgusting further comment is unnecessary.
I've been called a body. We need 5 bodies for this project. Any bodies will do. Now i have competing dreams of bodiless heads and headless bodies floating around a ghostly cubescape.
I've been called a grunt. A respected friend who became a manager unselfconsciously called me and other workers grunts. He realized his audience and retracted, but we both understood. That is the view of management. Bodies to throw at bullets. Interchangeable. Undistinguished. Unworthy of involvement in any of those decision thingies.
I've been called an individual contributor. A VP said he didn't understand why someone would want to be an individual contributor, but some people do. He actually said this in front of a room of individual contributors (heads, resources, whatever). The obvious question, though: if you are not an individual contributor, what the hell are you doing? Unsurprisingly, this same manager was too busy to personally visit any of the people he managed.
One reason to like small companies and startups is the lack of a blinding need to label people. People can remain peers for longer, but as humans tend to form social hierarchies, it may always just be a matter of time.
Labeling arises out of the need to put people in lists. Lists like microsoft project, spreadsheets, budget projections, head count reports, org charts, building diagrams, etc. In a list you don't matter, what matters is the aspect of you that the list cares about.
It's a short lifeless jump to swapping people for labels and forgetting the people behind the labels. Once you forget about the actual people, the people become just another problem to be solved, a resource to be deployed and optimized. It's so much easier to deal with resources instead of people.
Eventually a conflation occurs where people become perceived as the problem. Everything would get done better and faster if it wasn't for stupid faulty resource cells in the spreadsheet.
Management becomes insular because they obviously only need to talk amongst themselves, because resources have nothing to contribute. Resources do. Managers think. Or at least think they think. Having excluded any troublesome subject experts, the need to think disappears altogether, like domesticated dogs who have lost their wariness and hunting instinct, preferring instead to be fed and tended.
Once started this relationship is self-reinforcing. Resources gradually drop out of any loop of any importance. Resources are the problem. Bad resources miss schedules and bust budgets. Managers inevitably conclude more thinking by managers is the solution. Any problem is met with another level of centralized control by management.
Management as an institution is fundamentally a reversion to childhood. As a child you get to be concerned only about yourself. As a child you can count on your parents to bail you out of stupid decisions. Children make messes that others clean up. Children form cliques. Children, cruel to those outside of the clique, only associate with those in the clique.
Having had good managers makes having perennially childish management all the more painful. Consider the metaphors used in your organization. Consider having managers manage people and not gravitate responsibility for all technical issues to themselves. Consider having fewer lists.
6:42:35 AM
Thursday, June 06, 2002
Schedules are lies. Schedules suck. Yes, yes, yes. But we still
need them.
The most effective scheduling rule I've used is to
schedule so as to unblock others.
The idea is to complete the portions of a feature that will unblock those
dependent on you. This way development moves along smoothly because
more lines of development can be active at a time. For example, instead
of implementing the entire database, implement the simple interface
and stub it out. People can work for a very long time against that
stubbed portion of the feature. Plus it's a form of rapid prototyping,
because you get immediate feedback on those parts. Don't worry about
the quality of the implementation, because it doesn't matter yet.
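The stub-the-interface idea might look like this minimal sketch; all the names here are hypothetical, not from any real project:

```python
# Hypothetical sketch of "implement the interface, stub it out":
# dependents code against UserStore now; the real database comes later.

class UserStore:
    """The agreed interface that unblocks dependent teams."""
    def get_user(self, user_id):
        raise NotImplementedError

class StubUserStore(UserStore):
    """Returns canned data so UI and API work can proceed immediately."""
    def get_user(self, user_id):
        return {"id": user_id, "name": "Test User"}

def render_profile(store, user_id):
    # A dependent feature: it only needs the interface, not a real database.
    user = store.get_user(user_id)
    return "Profile: %s (#%d)" % (user["name"], user["id"])

print(render_profile(StubUserStore(), 42))  # Profile: Test User (#42)
```

When the real database lands, it implements the same interface and the stub is swapped out without touching the dependent code.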
10:47:17 AM
Saturday, April 20, 2002
Informative article on XML encoding at
http://www10.org/cdrom/papers/542/index.html.
Gzip often had much lower encode/decode times at the
expense of slightly larger content sizes when
compared to other approaches. In one test gzip took
3.33 msecs to encode test data, producing a packet of
1516 bytes, versus another approach that took 914 msecs
and produced a packet of 1222 bytes. The next lowest
competitor was at 266 msecs for the encode time.
Give me faster encode/decode times as long as the packet
is not grossly larger. Small size differences are easily
averaged out if data are streamed up to clients. Encode/decode
times are fundamental performance limiters because they
control how many messages per second can be handled. Using
gzip you will be able to process hundreds more messages
per second.
I've had good results with gzip as well, but chose to go
a different way. Instead I directly write a binary form
of the XML into a buffer so there's no separate encode
step. I also use a generic properties format so I
don't have to worry about arbitrary schemas. On the decode
side the buffer is passed around until needed, so it doesn't
have to be decoded immediately; it can be decoded in some
other thread. The binary format is searchable, so the
entire message doesn't need to be decoded to get at part
of it.
This approach performs excellently.
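A minimal illustration of the gzip trade-off using Python's standard library; the payload and sizes here are made up, not the paper's test set:

```python
import gzip
import time

# Repetitive XML-like payload; gzip does well on this kind of data.
xml = b"<items>" + b"<item id='1'>value</item>" * 200 + b"</items>"

start = time.perf_counter()
packed = gzip.compress(xml)
encode_ms = (time.perf_counter() - start) * 1000

# The round trip must be lossless; packet size versus encode time is
# exactly the trade-off the paper measured.
assert gzip.decompress(packed) == xml
print("original %d bytes, packed %d bytes, %.2f ms" % (len(xml), len(packed), encode_ms))
```

On repetitive markup like this the packed size comes out far smaller than the original, while the encode cost stays in the millisecond range.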
8:18:54 AM
Tuesday, April 09, 2002
On Leadership
I wish I had said this, but it was said by asd@asd.com in comp.software-eng.
Leaders:
- lead by example
- don't ask anything of anyone they wouldn't do themselves
- are called on to make difficult and unpopular decisions
- keep the team focused
- reward/support their team in whatever they do
- keep/clear unnecessary crap out of the way of the team
Consensus is great. If it lasts for the project lifecycle,
consider yourself blessed. I've been on a couple of projects
where two engineers just blatantly *disagreed*!
#1 " x = 1"
#2 " x != 1"
That's when a Project Leader is required. Unless you
want to flip a coin.
Oh yeah, one more thing. Project leaders: TAKE the blame
when things go wrong and SHARE the credit when things
go right.
Ain't easy, but it's the way I try to run my life.
7:11:30 PM
Monday, April 08, 2002
A unified theory of software evolution.
http://www.salon.com/tech/feature/2002/04/08/lehman/print.html.
"In software engineering there is no theory," says Lehman, echoing Holland.
"It's all arm flapping and intuition. I believe that a theory of software
evolution could eventually translate into a theory of software engineering.
Either that or it will come very close. It will lay the foundation for
a wider theory of software evolution."
We'll see. Is there something in software like E=mc^2 that means your
web interface will work?
7:30:07 AM
Thursday, March 28, 2002
The group dynamics of important software projects under
heavy development and release pressure are a lot like those of squads
in battle. I've read that soldiers say after a while it stops
being about country, it stops being about what they are fighting for,
and becomes just about surviving. All that matters is your squad
members helping each other stay alive.
In the crucible of a critical release the same narrowing of focus
happens. All that matters is supporting your
team members and getting the job done. You long ago stopped
caring about your company, customers, and even whatever the
hell you are making. You do whatever it takes to help your friends
and make the product work.
It's an irrational process, like being replugged into ancient
survival behaviours. Only with the perspective of time do you
realize what an idiot you were. It probably has something in
common with the madness-of-crowds behaviour that has been
observed throughout history.
7:41:58 AM
Wednesday, March 27, 2002
Personas are a powerful design tool, especially when combined with
responsibility-driven design.
http://www.boxesandarrows.com/archives/002330.php.
Cooper's personas are:
simply pretend users of the system you're building. You describe
them, in a surprising amount of detail, and then design your
system for them.
I have a standard set of personas that I consider when creating
a design/architecture that don't seem to be common. When you write
code there are a lot of personas looking over your shoulder:
- other programmers using the code
- maintenance
- extension
- documentation group
- training group
- code review
- test and validation
- manufacturing
- field support
- first and second line technical support
- live debugging
- post crash debugging
- build system (documentation generation and automatic testing)
- unit testing
- system testing
- source code control
- code readers
- legal
You are much more careful and more thorough when you really think about
all the personas, all the different people and all their
different roles and purposes.
7:52:01 PM
Monday, March 25, 2002
We need the 3 way web. The 3 way web = 1 way web + 2 way web + more layering.
The 1 way web is publishing documents for people to read. We have been able
to do this from the very start, using bone knives and bear skins. With any
editor anyone could write a page in simple HTML and publish it on a web site.
Early forms of aggregation evolved so you could subscribe to certain pages for
notification of when they changed.
Weblogging = 1 way web + layering.
Layering adds abstraction. It ties things together in a prettier, more functional
package. More powerfully, layering can relate what wasn't related, which turns it
into a new thing altogether. And so the building aspires ever higher on ever-renewed
foundations. Radio Userland, for example, adds all of the above.
Publishing is valuable, but as readers we crave the ability to reply.
Which leads to the 2 way web.
2 way web = the 1 way web + reader interaction.
Email, comments, IM, searching, and discussion groups are all mechanisms for
adding user interaction and feedback. Currently all of these form, programmatically
at least, an unrelated mix and have not yet gelled into a layer. Google may
index weblogs, but it won't pick up the rest of the threads related to the weblog.
The connections are lost. Indeed, a weblog may itself be the result of a stimulus
from other threads and may cause the stimulus of yet other changes.
The interaction mechanisms plus weblogs are implementable and representable
on one substrate uniting them into a layer. This will happen someday.
As fine a day as that will be, we still need more.
3 way web = 1 way web + 2 way web + more layering.
Consolidating interaction services into a layer is the start
of the 3 way web and a requirement for the next phase of the 3 way web.
We need to go beyond interaction by mastering the transformation, packaging,
combining, and accessing of linked data streams. It goes without
saying web services would be the implementation platform.
Some examples of things you can't easily do today but would be able to
do in the 3 way web...
Threads between all data streams are the ultimate feature of the 3 way web.
Weblogs comment on other weblogs, which spills out to discussion groups,
IM, email, etc, and then back again. The thread between all these
sources, along with its progression over time, is precious information
and supports amazing capabilities.
A simple start is the generalization of the calendar metaphor used by weblogs.
A calendar is one potential packaging of weblog input. It is appropriate for
a diary but is not appropriate or sufficient in other domains. A weblog for
a lab machine, for example, would need a calendar and a log-type view. Just use
a really large number of entries that stay on the page? You can do that,
but it's not the same.
If all team members record their status in a weblog, for example, a rollup of
the status needs to happen for presentation to higher levels of management/weblogs.
This rollup requires a transformation ability over a set of data streams, which
bumps into the semantic web. The transformation may require either a human
or programmatic mediator. Links to the project planning tool and time tracking
tool could also be integrated.
The log for all lab machines would need a similar rollup. At any point in time
a discussion of the recent machine changes could be spawned. Either a yahoo type group
or comment system would work, but the information should stay related to and
accessible as a whole.
Combining status information is just one example out of many requiring a meta level
capability to unite multiple data sources into a different form that can in turn
serve as a data source for yet more transformations.
It would be kind of cool.
Now for the 4 way web...
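The status rollup described above can be sketched in a few lines. Everything here is hypothetical: the per-person data streams, the dates, and the summary format are invented for illustration.

```python
# Hypothetical sketch of the status "rollup": several per-person data
# streams are transformed into one summary stream, which could itself
# serve as a data source for further transformations.

team_streams = {
    "alice": [("2002-03-25", "parser done")],
    "bob":   [("2002-03-25", "db schema drafted"), ("2002-03-24", "reviewed parser")],
}

def rollup(streams, day):
    """Combine each person's entries for one day into a single summary."""
    summary = []
    for person in sorted(streams):
        for when, note in streams[person]:
            if when == day:
                summary.append("%s: %s" % (person, note))
    return summary

print(rollup(team_streams, "2002-03-25"))  # ['alice: parser done', 'bob: db schema drafted']
```

The point is the meta level: the output of `rollup` is just another stream, so a manager-of-managers rollup is the same function applied again.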
9:51:31 AM
Saturday, March 23, 2002
Excellent heuristics from Robert Martin (http://objectmentor.com):
- Whenever you see the number 1, consider that it might be N.
- Whenever you see a constant, consider it might be a variable.
- Whenever you see two or more concepts that are arbitrarily connected,
consider they might need separation.
- If a decision seems arbitrary, consider how it could be made differently.
- Consider that what is ancillary today will be primary tomorrow.
- Consider that what is low volume today will be high volume tomorrow.
Good things to consider when designing/coding. Being a little more
reflective during the development process would help prevent a lot
of problems. Development isn't a race. Developers win through a complex
set of tradeoffs that usually look like a loss from a number of other
perspectives.
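Two of those heuristics can be shown in code. This is illustrative only, applied to a hypothetical fetch helper, not anything from Martin's writing:

```python
# "Whenever you see the number 1, consider that it might be N", and
# "whenever you see a constant, consider it might be a variable".

# Before: hardwired to exactly one attempt.
def fetch_once(get):
    return get()

# After: the 1 became N (retries), and the constant became a parameter.
def fetch(get, retries=3):
    last_error = None
    for _ in range(retries):
        try:
            return get()
        except IOError as e:
            last_error = e
    raise last_error

# A flaky source that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient")
    return "ok"

print(fetch(flaky))  # ok
```

The generalized version subsumes the original: `fetch(get, retries=1)` is `fetch_once`.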
6:25:35 AM
Tuesday, March 19, 2002
The Tao of Programming is a fun, insightful page.
http://www.juliao.org/text/tao-of-p.shtml.
I kind of like:
Thus spake the master programmer:
"Without the wind, the grass does not move. Without software, hardware is useless."
Warning: there do seem to be some Buddhist influences, if you like your Tao pure.
5:34:46 PM
Sunday, March 17, 2002
Differential Diagnosis is an innovative technique for finding bugs.
It's a strategy a co-worker of mine uses that is so obvious
in retrospect, yet has an incredible amount of power.
Usually problem debugging starts from scratch every time and our
heroes eventually find the problem. Using Differential
Diagnosis you go back and look at the change history for every
change since the code last worked. The idea is that the code
worked at one time. The bug is likely to have been introduced
in one of the later changes.
By inspecting the source of only the changes it's often possible
to figure out the problem, or at least narrow it down
considerably.
This approach makes a lot of sense and works extremely well.
But I hadn't seen it before. Obviously no strategy will always
work, but it works a lot of the time. Even well unit-tested code
can have integration-related bugs that don't show up until
later. And many products are so complex that any unit
test doesn't scratch the surface of possible tests.
Since then I've read a paper where the build system, after
finding a bug from a smoke test run, would automatically back out
changes and rebuild until it found which change caused the bug. Very
cool. Someday I hope to add this to our current build system.
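The automated version can even be a binary search over the change history. This is a hedged sketch: the revision names and the smoke-test hook are hypothetical stand-ins for a real build system's interfaces (git later shipped this idea as `git bisect`).

```python
# Binary search over an ordered change history for the first revision
# where the smoke test fails. Assumes revisions run oldest to newest,
# the oldest is good, and once the failure appears it persists in
# every later revision.

def first_bad_revision(revisions, test_passes):
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if test_passes(revisions[mid]):
            lo = mid + 1      # bug was introduced after mid
        else:
            hi = mid          # mid already fails; first failure is here or earlier
    return revisions[lo]

history = ["r1", "r2", "r3", "r4", "r5"]
broken = {"r4", "r5"}         # stand-in for "smoke test fails at these revisions"
print(first_bad_revision(history, lambda r: r not in broken))  # r4
```

The search needs only log2(N) rebuilds instead of N, which matters when each build-and-test cycle is slow.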
12:58:40 PM
IEEE Software article Server-Side Design Principles for Scalable
Internet Systems by Colleen Row and Sergio Gonik of GemStone Systems
is a good overview of different strategies for achieving scalability. http://computer.org/software/index.shtml.
It gives the principles of scalable architecture as:
- divide and conquer - the system should be partitioned into smaller
subsystems that can be deployed onto separate processes or threads,
which disperses the load and allows for load balancing and tuning.
- asynchrony - work can be carried out in the system on a
resource-available basis. Asynchrony decouples functions and lets the
system better schedule resources.
- encapsulation - system components are loosely coupled, with
few dependencies among the components.
- concurrency - activities are split across hardware, processes,
and threads and can exploit the physical concurrency of modern
multiprocessors. Concurrency allows the maximum work to be scheduled.
- parsimony - designers must be economical in what they design.
Pay attention to the thousands of micro details.
Strategies for achieving scalability:
- Careful system partitioning
- Service-based layered architecture
- Just-enough data distribution
- Pooling and multiplexing
- Queueing work for background processes
- Near real-time synchronization of data
- Distributed session tracking
- Keep it simple
And lots more, with a lot more detail on each topic. It's a very
good overview discussion that jibes with my experience. The
question is how developers implement all this, which is
where I assume GemStone comes in :-) A part of scalability
that doesn't get addressed is ordered synchronization between
distributed applications that can fail and recover independently.
Maybe more on that later.
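One of those strategies, queueing work for background processes, fits in a few lines. This is a minimal sketch using Python's standard library, not anything from the article: callers enqueue work and return immediately, while a worker thread drains the queue at its own pace.

```python
import queue
import threading

work = queue.Queue()
results = []

def worker():
    # Drains the queue on a resource-available basis (asynchrony).
    while True:
        job = work.get()
        if job is None:          # sentinel tells the worker to shut down
            break
        results.append(job * 2)  # stand-in for real background processing

t = threading.Thread(target=worker)
t.start()
for job in [1, 2, 3]:
    work.put(job)                # the caller is never blocked by processing
work.put(None)
t.join()
print(results)                   # [2, 4, 6]
```

The queue is the decoupling point: producers and the consumer can be tuned, multiplied, or moved to separate processes without changing each other.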
7:38:39 AM
Saturday, March 16, 2002
Programming as Creating Causal Models
Programming can never just be programming; we must always explain
programming using a metaphor. Programming IS manufacturing.
Programming IS conducting a symphony. Programming IS making
a peanut butter and jelly sandwich. Insert your particular agenda here.
After reading The Mind's Arrows by Clark Glymour,
I think an interesting metaphor may be Programming IS Building
Causal Models.
My take is that programming is primarily teleological and analytical
in nature. Programs are purposeful. This purpose serves as a grounding against
which meanings are resolved. We start from our goals and work backwards, figuring
out how to achieve them. Goals arise and subside
in feedback loops over time. There is no lack of available paradigms
for implementing all of the above, but the idea of causal models is
interesting. Then again, I may just be channeling long-repressed memories
from past logic programming classes.
Cause means "the producer of an effect, result,
or consequence." Causal means
"indicative of or expressing a cause." The sense of model
that applies is "a schematic description of a system, theory, or phenomenon
that accounts for its known or inferred properties and may be used for
further study of its characteristics." A program is a chain of causes
that produces the effects necessary to reach our goals. The program
is the model.
The programmer is responsible for implementing the necessary causes and
effects by translating the causal model a programmer has constructed
in their mind to the causal model embodied in a program. In the
mind reason, emotion, and experience meld to provide the
deep structure of a causal model. A lot of what we know isn't
easy to articulate. Programming requires the elicitation
of the model which is difficult because our thoughts are primarily
images which are hard to fully explore and extract.
The discipline of programming, to paraphrase Mr. Glymour:
is about the causal processes and mechanisms though
which intelligent action comes about. Subdisciplines are chiefly
about the processes and mechanisms through which human understanding
of causal relations comes about, the causes of our knowledge of
the causal structure of the world. And adequate theories of human
understanding require knowing what it is that people have when
they have causal knowledge, and how they come to have it.
Causal Structure and Bayes Nets
One of the main tasks of programmers is to learn causal structure.
Mr. Glymour suggests representing causal structures as Bayes nets,
a type of graphical causal model. A Bayes net is a directed acyclic
graph and an associated probability distribution satisfying the
Markov Assumption. If the graph is intended to represent causal
relations, and the probabilities are intended to represent those that
result from the represented mechanism, the pair forms a causal Bayes net.
The Markov Assumption says that the probability of a state occurring,
given a sequence of previous states, depends only on the previous
state (more generally, only on the previous n states).
The Graph
The vertices represent features or variables. A directed
edge between two variables, X -> Y, means that for some values of all
of the other variables represented, an action that varies X will
cause variation in Y. Mr. Glymour uses the following example:
            Clapping (yes/no)
             /           \
            v             v
      TV (on/off)    Light (on/off)
            ^             ^
             \           /
        Electric Power (on/off)
The acyclic requirement bothers me as it seems many interesting problems
involve cycles.
The Probability Stuff
An interesting feature is that you can make probability predictions. The
probability that there is clapping and the TV is on and the light
is off is:
pr(light is off | clapping) * pr(TV is on | clapping) * pr(clapping)
pr means probability. pr(x|y) is the probability of x
conditional on y.
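That factorization can be computed directly. Only the structure (both TV and Light conditioned on Clapping) comes from the example; the numbers below are invented for illustration.

```python
# Made-up probabilities for the clapper example.
pr_clapping = 0.3
pr_light_off_given_clapping = 0.8
pr_tv_on_given_clapping = 0.6

# pr(light off, TV on, clapping)
#   = pr(light off | clapping) * pr(TV on | clapping) * pr(clapping)
joint = pr_light_off_given_clapping * pr_tv_on_given_clapping * pr_clapping
print(round(joint, 3))  # 0.144
```

The graph structure is what licenses the multiplication: given Clapping, the TV and Light outcomes are treated as independent.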
The Utility of Causal Bayes Nets
I don't know if the probability features are that important for programmers.
What matters more is that considering the probabilities forces you to build
a more complete model of causal relations, and it should help you discover
holes in your model, which should help you better solve your problems.
A causal model is still not an algorithm, and programs must in the end
get down to the business of algorithms. Often, though, it's not the
algorithm that's the problem; the main problem is not knowing what you
need to do. Building causal models should make it clear what needs
to be done.
9:19:36 PM
Wednesday, March 13, 2002
The true story behind Mosaic and Netscape?
http://www.chrispy.net/marca/gqarticle.html.
Perhaps if Marc Andreessen were really as imagined, Netscape
might have won. But MS was better, faster, stronger,
and The Race of the Browsers went to the dark side.
2:50:02 AM
Flow Chart for Project Decision Making
Not that I'm cynical, but this is my favorite big picture of how
projects work. This diagram is from my C++ coding standards
page. Some people
have complained about the profanity, but I admire its directness.
In medieval times the majority of developers, for all their brain power,
would have been serfs. Few groups work so hard, under such difficult
circumstances, for masters so unworthy. Can't complain about the pay,
but that's not all there is. Certainly some of us would be wizards or
alchemists or jugglers. A few of us, like Galileo, would have
cracked open the doors of the enlightenment and then, like Newton,
blown the doors open. But most of us, myself included I think, would
have served our masters quietly, tending our fields of code.
+---------+
| START |
+---------+
|
V
YES +------------+ NO
+---------------| DOES THE |---------------+
| | DAMN THING | |
V | WORK? | V
+------------+ +------------+ +--------------+ NO
| DON'T FUCK | | DID YOU FUCK |-----+
| WITH IT | | WITH IT? | |
+------------+ +--------------+ |
| | |
| | YES |
| V |
| +------+ +-------------+ +---------------+ |
| | HIDE | NO | DOES ANYONE |<------| YOU DUMBSHIT! | |
| | IT |<----| KNOW? | +---------------+ |
| +------+ +-------------+ |
| | | |
| | V |
| | +-------------+ +-------------+ |
| | | YOU POOR | YES | WILL YOU | |
| | | BASTARD |<------| CATCH HELL? |<-----+
| | +-------------+ +-------------+
| | | |
| | | | NO
| | V V
| V +-------------+ +------------+
+-------------->| STOP |<------| SHITCAN IT |
+-------------+ +------------+
2:20:48 AM
Big Ball of Mud (http://www.laputan.org/mud/mud.html) is perhaps the
best article on the evolution of software ever written. I see it
every day. I am a maker of mud balls. There. I said it. Please forgive
me.
From Big Ball of Mud:
Why does a system become a BIG BALL OF MUD? Sometimes, big, ugly
systems emerge from THROWAWAY CODE. THROWAWAY CODE is quick-and-dirty
code that was intended to be used only once and then discarded.
However, such code often takes on a life of its own, despite casual
structure and poor or non-existent documentation. It works, so
why fix it? When a related problem arises, the quickest way to
address it might be to expediently modify this working code,
rather than design a proper, general program from the ground up.
Over time, a simple throwaway program begets a BIG BALL OF MUD.
Even systems with well-defined architectures are prone to structural
erosion. The relentless onslaught of changing requirements that
any successful system attracts can gradually undermine its structure.
Systems that were once tidy become overgrown as PIECEMEAL GROWTH
gradually allows elements of the system to sprawl in an uncontrolled fashion.
2:07:07 AM
Sunday, March 03, 2002
An interesting paper on strategic gossiping as a form of information
warfare in reputation-based networks (http://cogprints.soton.ac.uk/documents/disk0/00/00/21/12/index.html).
A lot of systems on the internet, like slashdot.org, are using reputation-based
ratings as a form of decentralized collective/community control.
With a few glitches it's a strategy that basically seems to work.
Interesting how an integrated war strategy might take advantage.
Reputation systems are fun because they can be endlessly tinkered with
and debated, acting as a proxy for designing the ideal society.
It's hard to create new governments in the meat world, but in the digital
world we can set them up anytime and in endless variation. The internet
is one vast experiment in self governance, even to the extent of having
old colonial powers trying to assert their control.
Interestingly, girls have been using strategic gossiping tactics forever,
as stunningly shown in the great article Girls Just Want to Be Mean
(http://www.nytimes.com/2002/02/24/magazine/24GIRLS.html). This
article made me very glad that as a guy I could count on just being hit
or something equally obvious. Girls are much crueler.
5:57:06 AM
Tuesday, February 26, 2002
Brad Cox (http://zdnet.com.com/2100-1105-845220.html) asserts that
we can't keep using HTTP forever because it doesn't natively
support long transactions and is asymmetric.
This is crap. It smells like a backdoor initiative for MS to take over
the de facto internet transport protocol standard that is HTTP.
Long transactions are much better handled
via callbacks and/or status probes. A requestor can say in the
request: here's my response URL, tell me when you are done. I'll
time you out when I want. You can time me out too. The client
and server are in complete control. No intermediary protocol
implementation is required.
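That callback pattern can be sketched in a few lines. Everything here is a hypothetical stand-in: the URL, the payload, and the `notify` helper represent real HTTP calls without making them.

```python
# The requester supplies a response URL; the server accepts the work and
# acknowledges immediately, then notifies the URL when the long
# transaction finishes.

completed = []

def notify(url, result):
    # Stand-in for an HTTP POST back to the requester's response URL.
    completed.append((url, result))

def long_request(payload, callback_url):
    """Server side: accept the work and acknowledge right away."""
    result = payload.upper()          # stand-in for the slow work itself
    notify(callback_url, result)      # in reality this fires when the work completes
    return "202 Accepted"

status = long_request("build the report", "http://client.example/done")
print(status, completed)
```

Neither side holds a connection open for the duration of the work, which is the whole point: plain HTTP in each direction, no intermediary protocol.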
HTTP is symmetric because anyone can make a request of anyone
at any time, as long as they have the right URL. This is not a hack;
it's a very robust design. Servers are in no way limited to
just responding.
It's interesting that the solution to asymmetry and long transactions
would require a protocol of such complexity that it essentially
becomes a service, one that locks you in because only the service will have the
state to act as a broker between the client and server.
Wonder why Microsoft would want this kind of solution?
4:00:04 PM
© Copyright
2009
todd hoff.
Last update:
9/19/2009; 2:14:51 PM.