Rex Black

  • One development engineer told me that he “felt very depressed” after reading the risk analysis documents my test staff had prepared. I find it more depressing, however, to ignore the possibility of failure during development, only to live with the reality of preventable failure after release.
  • The non sequitur interview involves asking some question that seems to be analogous or related to testing. The classic example is, “Tell me how you would test a salt shaker.” I find this silly. I call this a non sequitur interview style because it does not follow that, just because someone can craft a clever tale about testing simple real-world objects, they can test complex software and systems. You are hiring a test professional, not a raconteur.
  • I sometimes hire people who I think will grow into a job, but I never hire people I think are trying to con their way into a job.
  • People, regardless of where they are, are not widgets. They want responsibility. They want respect. They want to be valued. They want us to be interested in developing their skills and their careers, even if they work for a third-party contractor. They want feedback.
  • The most dangerous kind of wrong is the kind of wrong that sounds reasonable, as I've said elsewhere in this book.
  • The actual coverage uses the same weighting, but counts a test only if it has been run and resulted in a Pass, Warn, or Fail. Blocked, skipped, and queued tests don't count toward actual coverage.
    (A short sketch of this weighted-coverage bookkeeping follows the quote list below.)
  • In my experience, this is the type of situation that arises when too little component and integration testing was done prior to starting system test. A few large functionality and test-blocking bugs come up right away, preventing further progress. Once these bugs are resolved, a whole host of serious bugs affecting finer-grained areas of functionality and behavior will become visible.
  • In addition, while it seems to tell us something about quality, it is a great myth of testing that the percentage of passed and failed test cases is a reliable surrogate metric for product quality.
  • That is, we measure something we can measure—or can more easily measure—to shed light on something we could not measure (at least not easily) and which is the actual area of interest.
  • A clean, reproducible bug report is indisputable. In many cases, though, the price of certainty is too high. If you've ever had to perform a reliability test of software or hardware, you know how time-consuming such demonstrations of statistical confidence can be. If you tried to investigate every bug to the level of certainty, you'd never find half of them. It's important to keep in mind the need to progress. Testing computers, like all engineering, is not a search for truth—that's science. Engineering is about producing things, about making useful objects. Often as not, close enough must be good enough.
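
A minimal sketch of the weighted-coverage bookkeeping described in the coverage quote above. It assumes each test carries a numeric weight and one of the six statuses the quote names; the TestCase structure, the weights, and the sample suite are illustrative assumptions, not the book's own code.

    # A minimal sketch, assuming per-test numeric weights and the six
    # statuses named in the quote. TestCase and the sample suite are
    # illustrative assumptions, not the book's code.
    from dataclasses import dataclass

    COUNTED = {"Pass", "Warn", "Fail"}              # count toward actual coverage
    NOT_COUNTED = {"Blocked", "Skipped", "Queued"}  # planned coverage only

    @dataclass
    class TestCase:
        name: str
        weight: float  # hypothetical per-test weighting
        status: str

    def planned_coverage(tests):
        # Planned coverage: total weight of every test, run or not.
        return sum(t.weight for t in tests)

    def actual_coverage(tests):
        # Actual coverage: weight of tests run to a Pass, Warn, or Fail.
        return sum(t.weight for t in tests if t.status in COUNTED)

    suite = [
        TestCase("login", 3.0, "Pass"),
        TestCase("checkout", 5.0, "Fail"),
        TestCase("reports", 2.0, "Blocked"),  # planned but not achieved
        TestCase("export", 1.0, "Queued"),    # planned but not achieved
    ]
    assert all(t.status in COUNTED | NOT_COUNTED for t in suite)
    print(f"{actual_coverage(suite) / planned_coverage(suite):.0%}")  # 73%

Blocked, skipped, and queued tests inflate planned coverage while leaving actual coverage unchanged, which is exactly the gap the quote draws attention to.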