I am increasingly conscious of my state of mind during debugging. If I’m not watching out for it, I’ll just dive in for the quick fix. Perhaps the problem is an off-by-one error that can be easily assessed and repaired. Or perhaps it’s more complex. If I’m not watching myself, I’ll treat the compiler like a <really slow> code status indicator. If it compiles, well, you’re doing pretty good then, aren’t you. Trouble is, the code has two target audiences: it has to compile in your own brain just as much as it has to compile on the computer.
So, say you are 80% sure that you have an off-by-one error, and you make the adjustment and recompile. With those kinds of odds, you probably will see that error go away. Trouble is, an off-by-one often has deeper roots, and if those aren’t fixed in your brain, they will lead to additional errors. For example, say you haven’t completely decided whether to use the [0] element of an array, or whether it’s more important to have all your loops and MAX_SIZE declarations end at one nice pretty number [40] as opposed to [39]. Say you have initialization problems that throw things off by one without your noticing. Why would a programmer choose not to go into deeper waters?
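Before getting to that, here is a minimal C sketch of the mixed-convention trap I mean; MAX_SIZE and the scores array are invented for illustration, not taken from any real program.

```c
#include <stdio.h>

/* Hypothetical names: MAX_SIZE and scores exist only for this example. */
#define MAX_SIZE 40

int scores[MAX_SIZE];

int main(void)
{
    /* The "pretty number" convention compiles cleanly but is wrong:
     *
     *     for (int i = 1; i <= MAX_SIZE; i++)
     *         scores[i] = 0;
     *
     * scores[40] is one past the end of the array, and scores[0] is
     * never touched. The compiler says nothing either way.
     */

    /* The fix that sticks is settling the convention in your head:
       C arrays start at [0], so the valid indices are 0 .. MAX_SIZE - 1. */
    for (int i = 0; i < MAX_SIZE; i++) {
        scores[i] = 0;
    }

    printf("initialized %d elements\n", MAX_SIZE);
    return 0;
}
```

Patching the loop bound makes the symptom go away; deciding once and for all which convention the code follows is what keeps the next off-by-one from appearing somewhere else.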
The prevailing aesthetic in programming is a functionality test: if it works, don’t fix it. This is reinforced by deadlines and by culture. Deeper waters are where you go if you have time, as an option. Wouldn’t it be a rare thing if a dev lead sent back perfectly working code and said “put this code in more lines; not everyone will understand the recursion.”
10:53:42 AM