|A QA Guy's Radio Weblog
Thoughts from Dave Liebreich
Tuesday, January 28, 2003
Heading on out
I've accepted a job in Colorado, and will be driving out there next week. 1:48:57 PM
Sunday, January 12, 2003
Saturday, December 28, 2002
Between holidays and job search, I've not had much time nor inclination to blog. I have been writing, but not stuff to post in public.
I have started to help with a sourceforge project, and I'm reading the QSM series, so I'm keeping busy.
Here's to a new year, complete with maybe one or two posts. 10:17:05 AM
Monday, November 18, 2002
Back, and looking elsewhere
The AYE Conference was a blast. I met a lot of good people, co-presented a BoF session with Jerry Weinberg (ask me about it :-), and I'm now ready to turn all this newly-gained knowledge loose on some lucky company :-)
I'm expanding my job search to include the Pacific Northwest, Colorado, and Atlanta. (If you know of jobs or contacts there for software testing, please send me mail.) 10:08:56 AM
Monday, November 11, 2002
I've updated my home page - please email me your impressions and/or suggestions.
Thursday, October 24, 2002
Seen on comp.software.testing, from someone who is not impressed with a particular certification course:
If one of my staff uses the term 'psycholomatic complexity', I'll sack them on the spot.
I like it better than 'cyclomatic complexity', don't you?
Friday, October 18, 2002
A New Disclaimer
Seen on a newsletter posting, paraphrased by me, original author unknown:
All opinions are mine and are not necessarily shared by my employer, but they should be.
Wednesday, October 16, 2002
I'm going to the AYE Conference in Phoenix this year, on my own dime. Whether I find myself eventually engaged as a QA/Test Architect or as a manager, I feel the techniques I can learn and the contacts I can make at this conference are well worth my time and money. 6:20:06 PM
Tuesday, October 08, 2002
Another Code Coverage Measurement
Have you caused the system under test to emit every possible error message? 10:08:44 PM
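One lightweight way to track that question is to treat every error message as a coverage target. Here's a sketch in Python, assuming you can enumerate the product's full message set (say, from a message catalog or resource file) and log the messages actually observed during testing; the message IDs below are invented for illustration:

```python
# Sketch: error-message coverage. Compares the set of messages the product
# can emit against the set actually seen in testing, and reports both the
# covered fraction and the messages never triggered.
def message_coverage(all_messages, seen_messages):
    """Return (fraction covered, sorted list of messages never emitted)."""
    targets = set(all_messages)
    missed = targets - set(seen_messages)
    return (len(targets) - len(missed)) / len(targets), sorted(missed)

covered, missed = message_coverage(
    ["E001 disk full", "E002 bad password", "E003 timeout"],
    ["E002 bad password"],
)
# covered is 1/3; "E001 disk full" and "E003 timeout" were never triggered.
```

The hard part, of course, is getting a trustworthy list of every possible message in the first place.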
Saturday, October 05, 2002
I'll be moving some posts around today - please stay tuned. 9:30:49 AM
Saturday, September 21, 2002
Back on the Street
So to speak. I'm back to looking for gainful employment. Hopefully I will have time during this job search to finish a number of articles I've been not working on for a while. 7:17:11 PM
Tuesday, September 10, 2002
Kathy Iberle, paraphrasing a paper by Mary Shaw:
The Romans built lots of bridges without even having the number zero to work with - they had collected a set of successful patterns for bridges and when a situation came up that didn't fit one of the patterns, they put in a ferry instead.
Sunday, September 08, 2002
Well, I thought I'd get time to work on my project...
Not this weekend either. I'm off to Albuquerque (is that how you spell it??) to attend the very last Problem Solving Leadership workshop.
More when I get back... [ronpih I guess...]
I made it up to #9 on the waiting list. Sigh. Have fun, Ron. 9:00:42 PM
Tuesday, August 27, 2002
When writing for a publication, don't write for the other authors. I've discouraged myself many times from writing an article because I felt the other authors already knew the topic, maybe better than I did.
Now I think of how I would explain the issue to someone in a different field. 10:56:49 PM
Monday, August 26, 2002
I've been called for jury duty this week, so posts and work are on hold. Posts, of course, won't notice the difference :-) 10:49:46 PM
Sunday, August 25, 2002
UI Test Automation Under Windows
Today I started working on an MSAA (Microsoft Active Accessibility)-based .NET component to drive Windows UI with the aim of using it to do automated testing (among other things). After playing around with a few different approaches I have settled on this one:
It turns out that the needs of people making accessibility aids and the needs of people trying to implement automated testing tools are quite similar so this technology can be used to implement automated UI testing on Windows.
If you want to explore these two UI technologies there are tools that you can use to browse them. On the Win32 side you can use Spy++ (this tool ships with Visual Studio). The MSAA hierarchy can be looked at using the MSAA tools (available here).
More on my UI navigation approach next time... [ronpih I guess...]
Cool. 7:25:56 AM
Friday, August 16, 2002
A Great Image
What do you call it when you take a fundamentally weak and/or unstable architecture, and shore it up to fix each flaw (aka bug) that is discovered? Where some of the fixes push the core over in other areas, creating new bugs?
Jello, with scaffolding.
:-) 1:53:08 PM
Tuesday, August 13, 2002
My wife says I should write a book about my experiences leading test groups. The title she suggested? "Improvisational Management"
Gotta love her. 11:37:23 PM
Thursday, August 08, 2002
Oh yeah? Well, then I'll test THIS!
Attended a handoff meeting today, where a project was formally moved from Research to Development.
I asked for ongoing guidance on the target market, basically stating that I saw large areas of the problem space that would likely require major changes to the proposed solution architecture. You know, the whole "find the really really ugly bugs as early as possible" thing.
These, um, er, uh, "trade-offs" in architecture design would be acceptable if we are not planning to sell into the bad-bug areas. And, of course, if our decisions along those lines do not change.
And let's be honest. I have yet to meet a successful tester/qa person who does not take just a little too much pleasure when the team starts to squirm when they hear how you are going to test. 10:30:22 PM
Monday, August 05, 2002
Box? What box?
Today, a co-worker complimented my group for creative testing and finding important but not necessarily obvious bugs in a recent release candidate. He said that they really "thought outside of the box."
My reply? "That's because I didn't give them one to start with." Working with a group of junior folks can be fun. 9:33:55 PM
Sunday, July 28, 2002
Start of a Thought?
I need to flesh this out more, but . . .
Are there some basic incompatibilities between testing and managing? As a tester, you go right for the weakest areas, trying to determine defects and faults.
As a manager, you find out what each person is capable of, and build on strengths. You most certainly do not keep poking at the weak spots of your staff.
How much of a problem is this dichotomy? 3:24:15 PM
Thursday, July 25, 2002
Sunday, July 21, 2002
Well, maybe "lied" is too strong a word.
If Linda is on the ball, she'll tell me that the customer for our software (COTS, embedded systems, OEM stuff, etc) is the product manager. He or she is the middleman, or reseller, who is paying us to develop the software in the belief that he or she can sell it to others and recoup that cost. Therefore, the PM's requirements are the parameters that should drive testing.
In some companies, the PM role is split across sales, marketing, and engineering. In other companies, the role is more formalized. 8:57:01 PM
In response to this StickyMinds article, I wrote
While I believe your analysis holds true for software developed in-house or on-contract for a business customer, I don't think it's a good approach for other testing situations.
Yes, testing against requirements (and testing the requirements) are part of customer-acceptance testing (or customer-acceptance testing by proxy, as would be the case for COTS, some OEM deals, and some embedded systems). But there is more to the testing (or QC) part of our jobs that relates directly to the process improvement (QA) part.
Testing is a measurement activity. We should strive to know as much as possible, in quantifiable ways, about the software under test. These numbers can then be used for risk assessment and management (a project-wide activity), and as feedback for process improvement (more in line with QA).
So *I* would want to hear you saying, "The cost of development of this product includes the cost of determining a risk profile, as well as the cost for us to measure our performance so that we can improve our efficiency on future projects." For projects with only one customer, the risk profile takes the form of the assurance you mentioned; for projects with many customers, it can take the form of a sales and profit projection (is it good enough that we can sell it at that price, and not lose our shirts on maintenance and support?).
I hope it is not rejected by the editors. 1:51:03 PM
Saturday, July 20, 2002
Nothing wrong with eating dessert first
So, the test plan is a living document, but you have to have the basics of a test plan complete before you start in on test cases, right?
The only document you need first is the test strategy. And that doc can read:
1. perform a number of tests, and write them up as test cases
2. figure out ways to group test cases together
3. write up the test plan based on how the test cases can be grouped.
I wouldn't go too long without a test plan, as it provides an overview that can be reviewed for holes, duplication of efforts, etc. 11:00:21 PM
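The three-step strategy above can even be sketched mechanically: step 1 produces ad-hoc test write-ups tagged with a feature area, step 2 groups them, and the plan outline (step 3) falls out of the groups. A minimal Python sketch, with all case names invented for illustration:

```python
from collections import defaultdict

# Hypothetical write-ups from step 1: (case name, feature area) pairs.
cases = [
    ("login rejects empty password", "auth"),
    ("login locks account after repeated failures", "auth"),
    ("report totals match invoice totals", "reporting"),
]

def group_cases(cases):
    """Step 2: group (name, feature) write-ups into a plan skeleton."""
    plan = defaultdict(list)
    for name, feature in cases:
        plan[feature].append(name)
    return dict(plan)

# Step 3: each key becomes a test plan section, each value its case list.
```

The point isn't the trivial code - it's that the plan's structure is discovered from the tests, not imposed before them.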
Testing, the Teflon Position?
Do many folks go into testing because they won't be held accountable for the quality of the product? After all, they didn't make the bugs . . .
How many testing organizations take away this safety net and hold testers accountable for their actual work?
I hope the organization I build remains firmly in the latter category. 9:29:29 AM
Sunday, July 14, 2002
Screenshots During Windows Install?
Anyone know of any utility to take screenshots during windows installation (other than VMWare:-)?
We've hit a number of error dialogs during foreign language installs - my current plan is to buy a digital camera, take a picture of the screen, then send the image file to a sales office for translation.
Please send me email if you have a better idea. 4:15:44 PM
At the Movies
Saw Minority Report last night. Enjoyed it very much. Did not even once think about how I would go about testing that user interface. 4:10:15 PM
Saturday, July 13, 2002
The company-wide goal that my group will support is "We know what we are sending out." In other words, when we send software out the door, we know our risk. We know our exposure. We know how likely it is for each area to have an undiscovered bug, and how severe those bugs might be.
We may be disappointed when a bug is found in the field, but we won't be surprised.
That's the goal, anyway. 12:15:58 PM
Tuesday, July 09, 2002
Lunch with a Celebrity
Today, I had lunch with Elisabeth Hendrickson, well-known author and testing consultant. I very much enjoyed our conversation, which covered a number of testing topics.
One of my favorite moments, though, was when she was taking a tour of our test lab, and commented that this was the first place she had seen where we needed to maximize the number of monitors per system - quite the reverse of the optimization most test labs strive for. No wonder I've been having such a tough time investigating benches. 7:37:21 PM
Wednesday, July 03, 2002
A Busy Week
My first week at the new job has been quite the whirlwind - I feel like my brain has been sandblasted clean as I struggled to record all my thoughts as I interviewed various folks.
No new insights into QA - just more evidence to support the basic tenets.
At least I have the next four days off :-) 9:33:28 PM
Sunday, June 30, 2002
Benefits of Developers Running Through the Test Cases
Here are two possible benefits of having developers execute the test cases while the tester watches:
1. The developer learns more about the actual testing process, and may end up being easier to work with.
2. The tester may be exposed to power-user techniques, and become a more efficient user of the system. 9:24:06 AM
It Takes Longer to Test When There Are Defects
Well, duh! But I'm thinking more along the lines of presenting time/effort estimates of the testing process to project management.
If it takes 10 minutes to run a test case from scratch to completion, and there are 6 test cases, then it will take 1 hour to run all the test cases.
If there are no defects found.
Let's say that it takes 20 minutes to write up a bug, and between 5-20 minutes to research it. Severe bugs take 2x time (on average), and trivial bugs take .5x time.
So, using these made-up numbers, each trivial bug found adds 20 minutes to the total test time, each average bug adds 40 minutes, and each severe bug adds 80 minutes. 9:19:44 AM
Monday, June 24, 2002
Can QA and Development be Friends?
So, you are out for a night on the town with a bunch of friends, and it turns out that you have a piece of spinach caught in your teeth. Who is the better friend: The one that does not tell you about it; the one that announces it to the whole world ("Dude! Gross! You got like Spanish Moss on your incisors!"); or the one that pulls you aside and discreetly informs you about the situation?
The relationship between QA and Development can be like that. 8:13:50 PM
Thursday, June 20, 2002
Back in the Saddle
I accepted a job offer today as head of QA at a small company in the East Bay (SF Bay Area). My start date is 7/1.
Yay! 6:06:01 PM
Monday, June 10, 2002
Assembly-Line Testing Misses Bugs
"Iggy Pop" posted the following comment on an article about companies now being receptive to better quality processes:
... We are not 'breaking' the software as we used to. It's more of 'make sure basic things are intact'. I don't remember last time I spent a week trying to break something. ... The 'art' of testing is disappearing quickly. There are no gurus of testing who spend days coming up with sick tests that would make developers lose sleep at night. I had one director tell me that he was going to shove a broom up you-know-what during the development for all the agony I gave to his staff. But after the release, great reviews, and pats on the back he got, he told me that he wanted me on every project. Almost a decade later I write install scripts, setup nightly builds, write automation and CM on top of testing. Sure those are important, but nobody asks me to break things anymore. (In fact, people ask me not to break stuff.) WHERE'S THE FUN?
This is a good point. We've gotten better at doing the mundane, day-to-day, basic functional testing tasks, and even at measuring load capacity and some of the simpler advanced testing techniques. But if our years of experience in testing products gives us only the ability to do the simple tests really fast and really well, then we're missing out on something.
Then again, maybe the solution is better communication inside the testing world. We could all do the same type of testing, and when catastrophic failure occurs, we can all update our testing methodology (after much hand-wringing and spirited discussion, of course).
p.s. The article is good, too. 9:30:51 AM
Sunday, June 09, 2002
Here are two secrets to success - guard them well.
1 - If you are in tech support, especially in IT, learn to use all types of keyboards and all types of mice, in all types of configurations. Customers will think you are more competent if you don't reconfigure their workspace when you help them with a problem. This is especially true of people who have a left-handed mouse or a super-ergonomic folding keyboard thing.
(Corollary - If you can use more than one type of chorded keyboard, then people will think you are a genius)
2 - How to survive parenthood: Inguinal Canals. Learn what they are, and how to use them. 6:49:03 PM
Saturday, June 01, 2002
So here's a question: what would it take for a software product marketing team to advertise the fact that it was tested by its test team? In other words, how could a test team add enough value to a product that it was one of the major reasons to buy that product (and that reason was so compelling and understandable that it would make a difference in sales if customers knew about it)? [ronpih I guess...]
The answer is "Find customers who will pay for it."
Testing is a service, not a feature. It is a way to measure risk regarding the performance of the product, but it is not the only way to measure such risk.
At one end of the spectrum, a company could release a product, having done little or no testing on it. They then could fix bugs found in the field, and keep releasing new versions with the bug fixes.
Eventually, the measured risk of undiscovered, serious bugs in the product would be very low, due to the extensive field use and fix cycles completed.
But the company probably could not charge very much for the product at first. And by the time the product stabilized and the confidence level was high enough, other companies that chose to do more in-house testing would likely have new features ready, developed during the time saved by not having so many fix-release cycles.
At the other end of the spectrum . . .
Medical devices (and software for medical devices) must undergo regulated testing.
Telecom equipment, especially stuff that gets mounted in the central offices, must meet BellCore standards for testing (and development methodologies, and documentation, and a bunch of other stuff).
In both these cases, the immediate customer was willing to pay for the extra testing, because the combination of money saved by avoiding the cost of failures, and the amount of the up-front cost they were able to pass on down to the ultimate customers, more than covered the cost of the testing.
Testing provides information about risk - it does not in and of itself make the product better. If your customers are willing (or allowed, by the market) to pay (in time and money) for reduced risk, then you can do more testing. 2:35:01 PM
Wednesday, May 29, 2002
Variations on a Theme
Johanna Rothman, in a StickyMinds article about project risk, writes:
Explain that testers are the best risk identifiers the project manager has—that testers illuminate product and project risk with testing.
Tuesday, May 28, 2002
Too Funny Not to Share
I love my aggregator!
George Bernard Shaw. "A government that robs Peter to pay Paul can always depend upon the support of Paul." [Quotes of the Day]
More Good Thoughts
A first move from testing to QA - teach other folks about testing.
Stefan Steurs, on the swtest-discuss list:
I have also given up trying to convince managers (some of them with very little hair indeed, you could say they have more hair than brains), it's not the right audience. I go for the bottom line, the developers. They need the help, the tools, the coaching, the explanation, the attention. Developers are the largest audience so my investment in persuading people will have a larger effect when I do it at this level. Developers are at least in 50% of the cases that I am encountering very willing to start using a test framework. They are willing to learn about test scripting, test data preparation, test reporting, because they realise it is going to serve them well. There is about 1 in 4 that doesn't seem to care and never will but I guess that Darwinism, as Boris calls it, will take care of them. 11:46:42 PM
It's something I wasn't hired to do. I was hired to do system testing and to lead a team of independent testers but a couple of years ago I started realising it was such a waste of effort if all the testing I did would never lead to a cure. It's easy to deal with the symptoms. Getting the developers to properly unit, integration, or component test is the cure to long and expensive system testing cycles. Of course that fight is not over yet, more can and should be done.
Management is getting more and more supportive because they see results. Sometimes you have to steal some time to get these results but that's something they will forgive you.
Hooray! Another Software Testing Blog
ronpih I guess..., by Ronald Pihlgren. I'm always interested in reading about other people's experiences building and maintaining a test harness. 12:57:20 PM
Monday, May 27, 2002
I'm going through Structure and Interpretation of Computer Programs, just because. My experience with lisp-like languages has been modifying existing elisp to do stuff I want in emacs, and I thought it might be interesting to see if I've been missing anything.
So far, I have learned that my higher math muscle is out of shape :-)
I installed scheme48 (and common lisp, from fink) on my iBook, and am doing most of my work from within emacsonaqua. So far, so good. 10:13:12 PM
Sunday, May 26, 2002
More QA Thoughts
Here are some snippets from a letter I wrote this morning:
I have a slightly different point of view. QA provides information on the state of the product, in terms of risks. This information is used by the product management team to determine where to go next in the overall development process. Acceptance testing should be done by ops or the customer (though it can be done by QA if it is spec'd out by ops or the customer, and QA can help them write the spec). Regulatory testing is like
QA should not be the gatekeeper - that leads to high turnover in qa
What I do is figure out how to run the test effort. I can identify the areas of risk, and staff appropriately. I can measure the ongoing effort and identify problem areas. I can predict how long it will take to test something. I can write killer documentation of the process. I can figure out how to improve the testing process, both on-the-fly and after-the-fact. And I can share with senior managers the benefits and tradeoffs of particular testing approaches ;-) Finally, I do it all while having fun, and making sure the testing team is having fun. 9:16:13 AM
All that is not easy, and I could make a long list of specific techniques that I use to do it so that it's not seat-of-the-pants guesstimation.
Most senior managers don't know what they want from a qa effort. I treat them as my customers, helping them to decide what works best.
Friday, May 24, 2002
A Great Practice
Rex Black, on the swtest-discuss list, writes:
Sharing of test cases, tools, and data between programmers, testers, technical support, technical pubs, and other groups involved in pre-release testing to reduce redundancy and rework in the testing process.
This also promotes the idea that everyone is working to a common goal - namely, ship good product. 12:57:21 PM
Thursday, May 23, 2002
Waiting for the Question
I'm just waiting for someone to ask me (during an interview) what I can bring to "the position" that others can not.
Answer: A 12-foot-long stuffed blue centipede that makes obnoxious noises when you squeeze its head, and glow-in-the-dark skull-bead curtains.
Quality is easier when you are having fun, after all . . . 11:22:09 AM
Failure-Free, not necessarily Defect-Free
Over on the swtest-discuss list, Gerold Keefer says:
a more fruitful concept for discussing product quality than "defect free" is "failure free" in the sense that if the product is used as specified the user will experience "zero failures" or, technically correct, close to "zero failures".
in this sense i would not mind people setting the "zero failure" flag and this could be a benefit for
I like it. 10:52:41 AM
Thursday, May 16, 2002
I'd Vote for Them
In today's SF Chron, TIC writes:
Touring San Francisco with Mayor Willie Brown for an episode of "Extra" broadcast Wednesday night, John Leguizamo suggested himself as an ideal vice presidential candidate on a ticket with Brown for president.
Tuesday, May 14, 2002
My mornings (after breakfast) often start with
Over the hills, and far away ... and
Hey, now. Hey, wow!
Laptop in lap, going through the aggregator news items, keeping half an eye on the 1-year-old.
Exact timing is not important, thanks to TiVo.
This is the type of thing I will miss when I return to working full-time. Sigh. 10:07:03 AM
Sunday, May 12, 2002
Always Read the Whole Article
Ross Collard wrote a 3-part series on Speeding the Software Delivery Process
In the middle of part 2, he wrote the following:
Provide incentives for on-time completion of testing. Even if the test project leader has to change the budget, it is a good idea to provide incentives for timely work, such as a test retrospective review meeting on Kauai or at least a jolly good free lunch for the test team.
Refresh the testers. Improve test productivity by periodically sending the testers to expensive spas, to recharge their batteries and to show you really love them.
Better yet, go yourself and leave them to finish the testing. This shows them the rewards that can accrue when they are successful and get promoted to your elevated rank in the organization.
Can I be in charge over there? I could really use a weekend in Calistoga, or at the Sonoma Mission Inn right now . . . 12:55:50 PM
Thursday, May 02, 2002
About George Orwell
In Orwell's own mind there was an inextricable connection between language and truth, a conviction that by using plain and unambiguous terminology one could forbid oneself the comfort of certain falsehoods and delusions. Every time you hear a piece of psychobabble or propaganda -- "People's princess," say, or "collateral damage," or "peace initiative" -- it is good to have a well-thumbed collection of his essays nearby. His main enemy in discourse was euphemism, just as his main enemy in practice was the abuse of power, and (more important) the slavish willingness of people to submit to it. [LA Weekly, via Arts and Letters Daily] 11:10:20 PM
I love Perl. Here's one solution to an interview question I was just asked to solve:
Create a subroutine, in the language of your choice, that "compresses" the string "aaaabcccdd" to "4ab3c2d". The goal is an abstraction that could take any input of ascii characters.
So I wrote:
$string_to_be_converted =~ s/((.)\2+)/length($1) . $2/egs;
Then I wrote the (relatively) long, drawn-out algorithm of checking each character against the previous one while walking through the string. But the one-liner still makes me grin. 2:26:21 PM
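For comparison, here is a sketch of that longer walk-through approach - in Python rather than Perl, since it's just the explicit version of what the one-liner does:

```python
# Walk the string, tracking the current run of identical characters, and
# emit a count only for runs of two or more - matching the regex, whose
# ((.)\2+) pattern leaves lone characters untouched.
def compress(s):
    if not s:
        return ""
    out, run_char, run_len = [], s[0], 1
    for ch in s[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_len}{run_char}" if run_len > 1 else run_char)
            run_char, run_len = ch, 1
    out.append(f"{run_len}{run_char}" if run_len > 1 else run_char)
    return "".join(out)

# compress("aaaabcccdd") → "4ab3c2d"
```

Ten lines versus one. The one-liner still makes me grin.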
Wednesday, May 01, 2002
A Really Tough Interview Question
I wonder how useful it would be to ask tester candidates to draw something during the interview.
Drawing as a knowledge management tool?. 2:37:55 PM
Drawing As Process. The benefits of drawing with an attitude of learning how to see, rather than learning how to draw move the mind from the nebulous impression of what a thing looks like to our becoming aware of how it fits together, how it works or how it expresses emotion. The multiplicity of the parts, no matter how minimally represented, become lucid and to make communication possible. [xBlog: Visual thinking linking | XPLANE]
Here's an unexplored area for knowledge management tools. "Learning how to see" is a central dimension of capturing, generating, and applying knowledge. That would make drawing a natural part of anyone's KM toolkit. [McGee's Musings]