For the record, it's usually a better idea to meet about changing requirements before you start testing, not a week before the code freezes, so that the testing you've done for weeks isn't completely useless. However, I have not been at this job for nothing, or at least, I have learned how to be lazy even in a job where that is quite hard to do; I wrote the tests and used the test failures to prove the requirements were wrong. Also, to be fair, I'm the one who insisted on the requirement changes, so I can't complain too much when I deliberately went after this one like it insulted my mother.
(I will admit delicious' timing hitting in the middle of this set off a combination of "DID YOU HAVE THIS TESTED?" professional scorn and a weird kind of relief and hilarity. It's only in the last year that the state belatedly stopped eyeing program testing for the next major budget cut--the vendor who creates and maintains our code and databases has a team of testers, so why do double work?--and then got a crash course in why having a team that approximates actual user conditions and client conditions with real-life scenarios is, like, maybe not a bad idea. So there is amusement value that in this, at least, the state is somewhat ahead of some private companies in realizing that a program people are supposed to use should be tested by the people who will actually use it, in actual user conditions.)
Weirdly, it's also made me somewhat more forgiving of some kinds of user issues online when I think I can guess what happened during testing (assuming there was testing). The thing is, program testing sounds a lot less structured and predictable than it actually is for the most part, but that's only obvious once you've either been trained in it or done it for a while; from the outside, it looks like a mess of random tests without rhyme or reason. Testing of new changes can be something like that, though even that falls into some general patterns, but in some ways, the most important and most powerful part of testing is Regression -- that is, taking a list of basic, well-known, completely familiar, utterly boring scenarios and mindlessly walking through them because you've done them for years and holy God they are boring. What makes them useful is that they aren't, as you'd think, just tests that cover the most common things a user will do. They also include some fairly straightforward but rather obscure things (an example would be a client denied TANF for time limits after receiving TANF for 12 months, then moving to a county where that piece of policy isn't enforced due to the economy).
This is not so common an occurrence that even a third of caseworkers will ever use it. But it's a real-life scenario that could happen, and even more, it's one that hits a huge variety of places in the program that interact, to see how they work together. Even more important than that, it tests a lot of common interactions all at once that would otherwise take several tests to cover. If it fails, that's a good sign that general system stability is not so great. And where it fails can tell you exactly what part of the system is going wrong. Some, though, are what I call Failure Tests; they're often somewhat complicated, sometimes long, and require that all the separate programs send information to each other correctly. These are usually end-to-end: you start as a client applying for benefits and end as a caseworker approving or denying a case; the space between is infinite. What makes these important--really important--is that in regular non-Regression testing, we don't often do end-to-ends because honestly, there's simply no time.
(For example: an end-to-end for me takes about three to four hours from start to finish, on average. There are three to thirty tests per program change; there are between a hundredish and three hundredish program changes in a release; right now we're doing roughly eight to twelve releases a year; none of this includes environment downtime or network collapse or Oracle being--itself. I did the math once, then just remembered last October-November, when most of our non-Regression tests had to be end-to-end; we were overtiming every day and Saturdays for weeks, because in an end-to-end, any failure at all, anywhere, means going right back to the very beginning and starting again.)
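(If you want to see why the math is terrifying, here's a rough back-of-envelope sketch using the approximate ranges above -- the exact figures are mine only as ranges, so everything below is an estimate, not real workload data:)

```python
# Back-of-envelope testing-load math, using the rough ranges from the
# paragraph above. All numbers are approximations, not actual metrics.

hours_per_end_to_end = 3.5        # an end-to-end averages 3-4 hours
tests_per_change = (3, 30)        # tests per program change
changes_per_release = (100, 300)  # program changes per release
releases_per_year = (8, 12)       # current release cadence

# Total tests per year, low and high ends of the ranges
low = tests_per_change[0] * changes_per_release[0] * releases_per_year[0]
high = tests_per_change[1] * changes_per_release[1] * releases_per_year[1]

print(f"Tests per year: {low:,} to {high:,}")

# If even a tenth of those had to be end-to-ends, the hours alone:
print(f"Hours if 10% were end-to-end: "
      f"{low * 0.1 * hours_per_end_to_end:,.0f} to "
      f"{high * 0.1 * hours_per_end_to_end:,.0f}")
```

(And that's before any failed run sends you back to the beginning to start the whole end-to-end over.)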
I can't actually tell if I'm good at my job, but I'm pretty sure I'm competent at it, at least judging by how much I'm given to do (or, you know, not a lot of people to do the work could also be a factor). But I do like that more personal interaction with a few of the developers--their willingness to respond quickly, to listen, and to watch a demonstration of a problem before making a proclamation on it (surprisingly rare)--has been far more effective than I ever imagined in getting a working program.
I admit it probably didn't hurt that one day one of them called me and, after a brief skirmish, I admitted I was wrong and, while withdrawing, told him in a fit of relief that I loved being proven wrong; it lowered my workload and stress level tremendously.
A little background about the developer known as M.
M was one of my October/November defect victims when I was filing several a day on the online application for Food Stamps, Medicaid, TANF, and other programs that was released last year. That was the release where I was line-editing the website for formatting, spelling, spacing, and style breaks and turning in defects that looked like novels; if anyone was reading here during this time last year, you might remember my LJ equivalent of rage blackouts. I do not think he liked me. I know for a fact that seeing my name in the defect list was causing Pavlovian reactions of terror and dread no matter which unit got the defect, even ones unrelated to them. I was ruthless and exhausted, and remember up above where I said we didn't run end-to-ends very often outside of Regression, except that one time? This was that one time. We each had somewhere between sixteen and twenty end-to-end tests, and I had to rerun each of them from the beginning three to ten times because they failed so badly. Many people had more than that.
Count a few months from that (when memories had not faded but the heat of rage had lessened), and I filed a fairly benign defect on the program for a new release. Within hours I had back a painfully polite, detailed explanation of the current and future changes to the program that required this, his general hope I'd find that acceptable, and a rather grimly resigned note that he hoped I'd have as much fun tearing apart the future build, which would be a redux of October/November with massive program changes. I showed it to my IV, who bit her lip and grinned. "They're scared of you."
Like I said, I do know for a fact they got twitchy seeing my name; apparently, this was common knowledge. What I didn't know was that he was sincere; he was an artiste and he wanted this program to be perfect. Apparently, the general feeling was that if it made it through me, it was a made program. I don't know if there is a moral lesson in there, but I was kind of appalled at myself for kind of liking him after getting that email, which sounded like someone typing with clenched teeth and not-so-fond remembrances of certain defects where I embraced sarcasm, without subtlety and at length, but bound and determined to be polite and fair.
Over the months from that release to now, he got into the unheard-of habit of wandering down with a minion developer or two, grabbing a chair, and talking about the state of the program and current defects, with subtle inquiries into how it was going and how we were doing; this expanded into discussions of future releases and, in general, doing his damnedest to be the most friendly, open, cheerful developer ever when faced with the sworn enemy of development, the user (or rather, us, proxies of the user and defenders of their ability to use a program). And then came the day he plopped down between our cubicles and asked, a little wistful, if we had any problems. We hadn't filed any defects in a few days, and it was disconcerting everyone.
He's the only developer I email now before I file defects: at first, to give him a heads-up and offer a demonstration so he could see the problem; more lately, to simply ask about something I found that could be a defect and see if there's a reason I don't know about that explains it. He's also the only developer I'll hold off filing a defect for, and that's because he's also the only developer who has ever been willing to say "I don't know" and then go find out, or direct me to someone who can explain.
Most recently, as part of the drama of the requirements thing I mentioned up top: after testing began and several defects were filed on SSP, the next morning he and a couple of his developers were in my cubicle, all armed with that list of requirements, grimly asking me for my copy because obviously the one they were given was completely wrong.
It was much later that I realized he'd come down that day because he thought the only logical explanation was that they had been working off the wrong requirements when they wrote the code; he took it as a matter of course that the scripts I wrote to test his code were right.
I pushed to get that requirement change through in a meeting that was supposed to be about a very minor issue with one of my tests; a meeting that should have taken ten minutes and that I instead spent an hour dissecting the requirements, explaining how half of them needed to be changed and one of them didn't need to be there at all. I was allowed to listen in on a later meeting where the people who wrote them agreed to every change I asked for. I did it because they were badly written and badly thought out and it needed to be done, and even now, I'm not satisfied with the results. None of this would have happened if I hadn't written tests that I knew would fail, tests I used to prove what was wrong and how to fix it.
But I also did it because in the notebook where I keep the list of requirements, I also keep the CRD, which is a mockup and screenshots of how the site will look with those changes, with every requirement documented by screenshot. For every requirement they fulfilled, the developers had to write out a logic chain, because the requirements were so badly written that it took them weeks to work out what each was supposed to be and how to implement it. It was one of the best-documented and most accurately represented program changes I have ever seen, and they wrote it without anyone explaining what it even meant; they had no idea of the policy, and no one told them where to look. They did it after sending emails to people who never bothered to answer their questions about what they were supposed to be doing. They got it right according to the BRD they were given--which was missing two requirements that, one week into testing, they had to go back and figure out a way to recode into the program. Worse, I did know the policy, and I couldn't tell them why those requirements were there.
I didn't like myself for the tests I wrote that resulted in defects, but M never so much as considered that I could be wrong; I can't tell him that, actually, I was. Our entire department in the agency is going to be audited, and everything we do will be examined: every test will be checked, every defect needs documentation--and every developer's code mistakes and fixes. M didn't make a single mistake, but the requirements were so badly written that the only thing he had were those logic chains, and I couldn't be sure the auditors would understand them; I understood them perfectly because I had to do the same thing for those tests. A single passed test doesn't prove the requirements were met; a single failed one doesn't prove they weren't. But half of them failing? That will do it. Getting the rewrite protects the testers if we're audited; when I passed the tests the second time around, we cited exactly what requirement we used and how we interpreted it. It protects the users from what was going to be a mess of weirdness that would have confused everyone.
It also protects the developers, because the new rewrites confirm that what they coded was exactly right. Except for those two requirements that, in months of development and coding, they weren't given and didn't have.
I couldn't get rid of both of them, but I got rid of one. It was a bad requirement, yes, and I would have tried to get rid of it in any case because it was a potential invasion of privacy; but in all honesty, part of it was M going through my requirement list in my cubicle, because when my test said his code was wrong, he believed it over his own eyes and his own documentation--despite the fact that it contradicted what he was given, and that his design had been approved by the same people who approved those requirements. In the first week of testing, on top of everything else, they had to redesign functionality on the site on the fly.
I wouldn't have done anything differently even if M hadn't been hit by it; on the other hand, if M had been at that meeting, he might have recognized the tester he first met a year ago, when she line-edited an entire website page by page, turned in defects three times a day, and refused to withdraw a single one. Then, like now, it was doing the right thing. In this case, the right thing included making sure not only that clients were protected and their privacy upheld, but that if the developers are audited, every test I wrote that passed and all the documentation clearly confirm that M's coding was right.
In retrospect, they might have agreed to the changes even if I hadn't spent an hour going line by line through why they were wrong. They probably didn't need a line edit to make the changes; I needed to do it, so they'd have to admit exactly what was wrong. It was petty and it was deliberate, and I'm not sorry at all.
I'm okay with this.
Posted at Dreamwidth: http://seperis.dreamwidth.org/106061.html. | You can reply here or there.