Updated: 1/22/2004; 8:06:13 PM.
ronpih I guess...
Your guess is as good as mine...
        

Thursday, August 29, 2002

The weekend is approaching and it's almost time for me to continue working on my home project (UI test automation component).  For now I've named the component "aaf" for no particular reason other than it's only three characters and easy to type.  Now that I can find UI objects with it the next step is to be able to drive UI objects.

The first capability that I want is the ability to programmatically send keystrokes to a particular window.  You might think this is easy, but if you want to write robust UI test code, a whole can of worms opens up.

I guess the first thing I should get out of the way is discussing my opinion on test robustness, verifications, and the misuse of the Sleep() function.  Ignore the following if you are just casually interested in UI test automation, or if you don't care whether your tests will work reliably no matter what platform you run them on.

<StrongPersonalOpinion>

In order to create robust automated UI tests YOU HAVE TO VERIFY EVERYTHING.  The ideal test never fails due to a test problem and can reliably diagnose problems it encounters in the program you are testing.  If your test continues on without verifying its assumptions, you never know where it will finally fail if those assumptions turn out to be false.  To avoid this, YOU HAVE TO VERIFY EVERYTHING.  If your tests need to run in a lab on all kinds of different computers, on all kinds of different platforms, and under all kinds of different platform conditions, and you don't want to spend your life debugging hard-to-find problems that may be different in every environment, then YOU HAVE TO VERIFY EVERYTHING.

Many have been lured by the siren song of capture/playback UI testing.  This sounds particularly enticing to people who know they have to do testing but have never actually done it.  What could be better?  "I don't even need skilled testers!"  "I'll just get someone with time on their hands, turn on the recorder, and have them run through some scenarios.  When I need to rerun the test I'll just play back the recorded UI actions."  These are the people who, when you talk to them a year later, will tell you that UI test automation just can't be done.

The problem with capture/playback is that there is no verification of assumptions.  The program under test is bombarded with a series of UI actions that worked once under very specific conditions.  If any conditions change, the test doesn't know about it.  It will just blindly throw commands at the UI whether they do what they are supposed to or not.  This might work if the program never changed and you only ever tested it on one specific platform, but if the program you are testing and the environment you are testing it in never change, why would you automate testing it?  Automated testing is far more expensive (even using capture/playback) than manual testing.

"Verify everything" implies a rule you need to live by if you want robust test automation: no blind Sleep()s.  By a "blind Sleep()" I mean a call to the Sleep() function (or whatever function your system provides that just stops execution for a specified time interval and then continues) instead of verifying something that tells you the UI action you executed had its desired effect.  For example, you invoke a menu command to bring up a dialog and then start sending keystrokes to the dialog.  When you run this function on the 2.2 GHz machine you develop your tests on, you notice that the keystrokes start going out before the dialog is up, so you add a 500-millisecond delay and everything works fine.  Now you deploy your test to the automation lab, where it runs on a 600 MHz machine, and on that machine it takes the dialog 2 seconds to come up.  The test results come out and you are off to the lab for a debugging session.

The ONLY acceptable use for a sleep is to give the program under test some time to reach the point where you can check that your UI action had the expected result:

bool MyResult = false;
DoMyUIAction();
// Poll for up to 20 seconds, checking every 500 ms whether the action took effect.
for (int i = 0; i < 20000; i += 500)
{
    if (MyUIActionWorkedAndIVerifiedIt())
    {
        MyResult = true;
        break;
    }
    Sleep(500);
}
if (MyResult == false)
{
    ReportErrorAndFail();
}
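A loop like this is worth factoring into a reusable helper so every verification in a test suite gets the same timeout behavior.  Here's a minimal sketch of that idea; it uses standard C++ std::chrono/std::this_thread in place of the Win32 Sleep() so it is portable, and the function name, default timeout, and poll interval are just illustrative choices, not part of my component:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll a verification predicate until it reports success or a timeout expires.
// Returns true if the condition was verified within the timeout, false otherwise.
bool WaitForCondition(const std::function<bool()>& condition,
                      std::chrono::milliseconds timeout = std::chrono::milliseconds(20000),
                      std::chrono::milliseconds pollInterval = std::chrono::milliseconds(500))
{
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (true)
    {
        if (condition())
            return true;   // verified: the UI action had its desired effect
        if (std::chrono::steady_clock::now() >= deadline)
            return false;  // timed out: the caller should report an error and fail
        std::this_thread::sleep_for(pollInterval);
    }
}
```

With a helper like this, the earlier example collapses to: DoMyUIAction(); if (!WaitForCondition(MyUIActionWorkedAndIVerifiedIt)) ReportErrorAndFail();  The sleep is still there, but it is always tied to a verification, never blind.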

True story:  Once I was working on an automated test (one that sent keystrokes to the application I was testing) in the Visual Studio source editor.  My test just blindly sent keystrokes without doing any verifications.  While I was trying out the test, the keystrokes it was sending got out of sync with what the test was supposed to do, and it ended up activating a link in the Visual Studio online help for sending an email to Microsoft product support.  The email client on my machine popped up and my test started sending keystrokes into the email message.  Before I could stop it, my test sent the hot key for "send", and I had sent an email message to PSS containing the text I thought my test would send to the program I was testing...

</StrongPersonalOpinion>

OK, enough for tonight.  This weekend I'll try to get my component sending keystrokes.

 


11:02:34 PM

© Copyright 2004 Ronald Pihlgren.
 

Note: This is a personal weblog. All opinions expressed here are mine alone.

