Wednesday, May 28, 2008

Peer Reviews

"Peer reviews are a waste of time", "Peer reviews don't help", "We don't have time to peer review the", peer reviews are evil, blah, blah blah! I've heard this for years and all I can say to this is "Bologna!" A second or third pair of eyes on something is useful. Haven't you ever had a problem while debugging code and said "Hey Bob, can you take a look at this" and then either through your explanation or something Bob says you have an "a-ha" moment can get past the problem? A peer review is like trying to get to the "a-ha" moment before you realize you need to.

Statistics have shown that peer reviews do help if they're done right (see the book "Best Kept Secrets of Peer Code Review" by Jason Cohen).

First, some good reasons to do peer reviews. The best reason is simple economics: defects found early cost less to fix. A second pair of eyes can help. Another reason is that less experienced programmers can learn from more experienced programmers during peer reviews. In fact, just about everybody can learn something from other people's code.

Second, to shoot down some myths. People have a lot of reasons not to do peer reviews. Things like "my code is already good enough." To this I have to ask, have you ever had any problems with your code? If you answered no, then you are better than any of the rock star programmers I've heard about. Peer reviews are there to get rid of problems before they get out.

How about "We don't have time for peer reviews"? Then how much time do you have to fix problems later? And how much will that cost?
Another myth is that peer reviews don't help. They don't help if you don't do them, and they don't help if you don't really review the code. Even if you find only a few defects, the review is worth doing. My Mom and Dad used to say, "Anything worth doing is worth doing right."

There are always excuses for not doing peer reviews, but no good reasons.
People have ego problems with peer reviews, too. There's the "Big Brother" effect, where programmers feel like the peer review is there to monitor their every move. That is not how it should be; a review should be about removing defects, not about monitoring people.

Earlier I said peer reviews help if they're done right. That means the code is really reviewed: the reviewers read over the code, look for potential problems, make sure it meets the requirements, and make sure it is understandable. If this is done, defects are found and fixed. If the code is just glanced over, fewer defects are found, problems are found later, and they cost more to fix.

To close, there are some truisms about peer reviews:


1) Hard code has more defects – the more complex the code, the more potential defects.
2) More time yields more defects – the more time spent reviewing, the more defects found.
3) It's all about the code – review the code, not the programmer.
4) The more defects the better – defects found early are cheaper to fix.

The bottom line is a better product and that's what peer reviews should be all about. Happy reviewing!

Saturday, May 24, 2008

Technical Debt

I was browsing around the web and found an interesting concept, Technical Debt. It is a concept brought up by Ward Cunningham at OOPSLA '92, and it is being taken up by the Agile people under Agile Project Management. I think it's a good concept for test engineers to take into account, too. The Agile material and blogs by Steve McConnell (Technical Debt and Technical Debt Decision Making) discuss it from a software standpoint. But for test engineers, it can be more than just technical debt in the software. The articles don't go into much detail on exactly what technical debt is.

Here is what I think. For software, some issues that would create technical debt include:

- Hacked-together code or code with a lot of shortcuts
- Spaghetti code
- Code that is complex or hard to follow
- Incomplete or inadequate error checking
- Code to be implemented later
- TBDs or incomplete requirements
- Poor or incomplete design
- Anything "owed" to the software that is deferred

I was contemplating technical debt for the rest of test, and hardware can most certainly have technical debt, too. Again, it would be anything owed to the hardware. Some of this would be:

- Incomplete schematics
- Unspecified connectors
- Incomplete grounding or shielding
- Incomplete wire lists
- Incomplete wire specifications where they are needed
- Again, anything “owed” to the hardware

The mechanical portions could have technical debt as well. It would be things like:

- Incompatible or incomplete layout
- Unspecified mechanical connections
- Other items that are needed but left unspecified

(I'm not as much of an expert on the hardware or mechanical aspects of test, but I'm sure a lot of the hardware people can add to these lists.)

The blogs by Steve McConnell on technical debt equate it to financial debt because they share a lot of the same issues. Technical debts, like financial debts, have to be taken care of at some point or they will bite you in the rear end. If they are not, the hardware and mechanical technical debt can be disastrous at integration time or when test operators actually use the equipment. The software technical debts can come back at any time, like an unpaid bank loan. Some software debts may bite you at integration time, or worse, validation time, or even worse, while the equipment is in use on a production line. To keep it in financial terms, that would be like a bankruptcy. You could get past the technical bankruptcy, but your reputation could be ruined for a while.

Also, hardware technical debts can cause the software to take on technical debt of its own if a hardware debt has to be fixed in software. It's like the hardware defaulting on a technical loan co-signed by the software. This seems to happen quite often.

Overall, when technical debt is taken on, it needs to be taken into account for future releases. If there is too much, it can become a burden on the future of your tests. If it's not taken care of, it could bring everything crashing down, potentially during validation or production.

The scary thing is, if you don't realize you're taking on technical debt, you have bigger problems and your people need to be trained or replaced.

Tuesday, May 20, 2008

NI Week - I'll get there some how!

It's all but official: I'm NOT going to NI Week through my company this year. Due to shrinking training budgets and a "one person can train everyone else" philosophy, the company is thinking of only sending one person down to collect information and then have him pass it on to the rest of us. [sarcastic voice] I know how well that always works. [/sarcastic voice]

The reality of it is that I AM going to go on my own, if nothing else on the "Exhibition Only" pass. I live in the Dallas, TX area, which is about 3 hours (2 1/2 hours the way I drive) away. I'm planning on leaving early one morning with a gallon of coffee in hand, making it there for the opening address that day, seeing the exhibits, then coming back that evening. Simple... right?

I may get a cheap motel and stay one night but I have a while to figure that out.

The bottom line is that I will be at NI Week one way or another, even if just for one day. This is something I'm passionate about! It is an experience that everyone should have. I was fortunate enough to go last year and I'm thankful for that. If I can find the money, I'm going to register for some of the other options, you know, things like food or the sessions for one day. However, as a single Dad with a son in college and a daughter about to go to college (UT Austin, near NI), I can't justify much more than the gas for the trip on my own right now.

I would like to blog some more on NI, but since I've been pulled off to write processes I haven't had a lot of testing to write about. (In 2009, I'm going to make sure that's not the case!)

I will blog about my road trip and my experience at NI Week.

Here's the link to register for NI Week; sign up early. June 1st is the deadline for the Early Bird Special.

Friday, May 16, 2008

The most important tool - Good People

I've talked about tools that can help people be more efficient and do a better job. But a good software engineer/programmer will beat all the poor programmers with good tools, hands down, every time.

A person who has had software classes on "how" to program and what makes a good program, not just coding, is much better than the guy who has had one or two classes and thinks he's a programmer. NOTE: I do want to say that there are many really good programmers who are self-taught; they understand the what, the how, and the intricacies of the how. These are the code ninjas (without getting into the pirate vs. ninja discussion). These are the guys you should really find, but I don't know if they would be willing to work in test.

Failing at finding the code ninjas out there and convincing them test software is the place to be, you need good software people who love to operate the hardware. I don't agree with programming tests at interviews, but I do believe test software people need the skills and knowledge of programming in order to do the job right, even if they are doing both the software and hardware parts of the job. Programming skills should be a requirement for doing test software.

Another aspect of the people portion of the job is "can we all get along". Engineers are notorious for personality quirks, and major problems can happen if a team's quirks don't mesh. In this world of diversity we're all supposed to be accepting of each other. But sometimes it just doesn't work, especially if there's a time crunch or a big technical hiccup. So, even with diversity, sometimes teams don't work due to personality conflicts. If there is a toxic personality on the team, it can be just as bad as having all bad engineers on the team. (When I hear "diversity" I always think of the Dilbert cartoon where someone says "The longer I work here, Di Verse it gets"; say it out loud and you'll get it.)

The point I want to make sure I get across is that People are really and truly the most important asset, and are more important than all the tools in the world.

Sunday, May 11, 2008

NI Week is Coming!!

NI Week is getting close!

NIWeek 2008 – August 5, 2008


I went to last year's NI Week, and it was incredible!! It was a very valuable experience. I would love to go to this year's, but typically the company I work for won't let the same person go to something like this two years in a row. They tend to want to send different people every year. I'm going to try to convince them I should go; I guess we'll see how it goes in the next few weeks.

Below is a link to the preliminary program for this year. By the way, a quote from me is on page 27.



View the NIWeek 2008 Preliminary Program

Testing using simulations

I've written a couple of times on unit testing; however, I've heard the old test engineer's mantra of "our software is different, you can't unit test it!" I say bull! I will agree that test equipment software is different, but if you say "you can't unit test it" then you're lazy, or you just don't know much about software.

The main reason TE software is different is that a lot of it runs against hardware; it calls hardware drivers to read information from the hardware and to control it. It does take work, but it's testable. In situations where you don't have hardware to test with, can't induce all the errors you need to check, or just want a good software product, you need to unit test.

Common failures that happen along the "Go" path are typically checked because they come up during regular development. But not all faults are checked, and the off-nominal paths are not easily exercised with standard UUTs on a test set.

Some tools for more comprehensive testing or fault testing would be:

  • IVI drivers' simulation mode

  • Other simulations (e.g., DAQmx with simulated devices in MAX)

  • Inserting error data

  • Code-analysis tools such as KlocWork


The easiest of these methods is a code-analysis tool such as KlocWork. That's because, after the tool is set up, you just run it and let it tell you about the potential (or certain) failures. I'm just starting to learn about KlocWork, so I don't know all the specifics at the moment. It appears to be able to capture a lot of the logic and path problems. It goes beyond LINT's ability to verify standards and does checks along execution paths. I'm not sure how it works if you use LabWindows/CVI functions, or if that's a non-problem.
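To give a feel for what that kind of tool looks for (this is my own made-up example, not something out of KlocWork's documentation), here is the sort of path problem a static checker can flag that LINT-style standards checking usually misses:

    #include <stdlib.h>

    /* Hypothetical example: a path-sensitive checker can warn that 'buf' is
       dereferenced on the path where malloc() returns NULL, and that 'status'
       is returned uninitialized when count <= 0 and the loop never runs. */
    int FillSamples(int count)
    {
        int status;                        /* uninitialized on one path */
        double *buf = malloc(count * sizeof(double));
        int i;

        for (i = 0; i < count; i++)
        {
            buf[i] = 0.0;                  /* possible NULL dereference */
            status = 0;
        }

        free(buf);
        return status;                     /* possible use of an uninitialized value */
    }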

Another way of testing is using simulation. Some easily available ways to do this are the simulation features in some IVI drivers, tools such as Agilent's virtual rack, or DAQmx with NI's Measurement & Automation Explorer (MAX). I haven't used the virtual rack, but it sounds like it will simulate instruments as if you were actually running tests against a rack of instruments. I don't know about its setup or operation, but the presentation I saw made it look like it could be useful. (Links: Virtual Rack, IVI drivers)

The simulation typically built into IVI drivers allows instruments to be run in simulation mode. However, a more powerful simulation tool, at least for some NI instruments, is Measurement & Automation Explorer. For NI's DAQmx instruments, you can set up a simulated device and the test software operates as usual. The simulated device can be set up to return various values, as needed. The main problem with this is that if an instrument is simulated using MAX, you have to go back into MAX to check it. In other words, someone could take out a card and simulate it, and the software wouldn't know.
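As a rough sketch of how transparent this is (the device name "SimDev1" is just whatever you called the simulated device in MAX, and I've left out error checking), the same DAQmx read code runs unchanged against a real or simulated card:

    #include <NIDAQmx.h>
    #include <stdio.h>

    /* Minimal sketch: "SimDev1/ai0" can be a channel on a real card or on a
       device created in MAX as a simulated device -- the code can't tell. */
    int main(void)
    {
        TaskHandle task = 0;
        float64 reading = 0.0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "SimDev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxStartTask(task);
        DAQmxReadAnalogScalarF64(task, 10.0, &reading, NULL);
        printf("Measured %f V\n", reading);
        DAQmxClearTask(task);
        return 0;
    }

That transparency is both the strength and the danger: the software behaves as usual, which is exactly why you have to go back to MAX to know whether you were talking to real hardware.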

One brute-force method of simulation is to comment out the call to an instrument driver and just set the return variable to a value. Without some software discipline, this can be dangerous. If one of these is left in the code, the test won't return a valid answer. If this method is employed, either a comment tag (I usually use //JAV) should be put in where the substitution is, or a compiler directive should be used, like an #ifdef. I typically use this during developmental testing, but I always make sure that, before the end of the day, all of these are out so that I don't forget about them the next day.
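Here's a quick sketch of the compiler-directive flavor of that approach (the driver function and macro names are made up for illustration):

    /* Brute-force simulation sketch: when SIMULATE_DMM is defined, the real
       driver call is compiled out and a canned value is substituted instead. */
    double ReadVoltage(void)
    {
        double measuredVolts = 0.0;

    #ifdef SIMULATE_DMM
        measuredVolts = 4.98;              /* //JAV simulated reading -- remove before release */
    #else
        DMM_Measure(&measuredVolts);       /* hypothetical real instrument driver call */
    #endif

        return measuredVolts;
    }

The #ifdef version is safer than plain commenting because one build setting turns all of the substitutions off at once, instead of hoping you found every //JAV tag.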

My main point is to use tools that are readily available to make sure you put out the best product possible.

Thursday, May 1, 2008

Code and Unit Testing

Last time I talked about Unit Testing or external types of test. Now some thoughts on internal testing.

IDEs (Integrated Development Environments), using basic debugging techniques (i.e., single-stepping, viewing variables, breakpoints, etc.), are the front line of internal testing. They give developers insight into what's going on in a program. A program is run, it doesn't work, you debug. But there are other, more powerful ways of doing debugging internal to a module that aren't as time consuming.


First, there is an ASSERT statement (macro) in most C/C++ environments. In NI's CVI toolbox it is the DoAssert function (include toolbox.h to use it). DoAssert is used to help find problems during development that don't happen very often. Basically, a condition is passed in (example: i > 1), and if the condition evaluates to TRUE, execution continues. If it evaluates to FALSE, module information is printed and execution stops. It prints out the module name, a number such as the line number (remember, __LINE__ is the current line number in the program), and a message, typically with some debug information.
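For the plain C version of the same idea (this is the standard assert macro from assert.h, not CVI's DoAssert, but the behavior is analogous), a minimal example would be:

    #include <assert.h>

    /* If the condition is false, the macro prints the expression, file, and
       line (it uses __FILE__ and __LINE__ internally) and stops the program. */
    double ComputeGain(double input, double output)
    {
        assert(input > 0.0);   /* condition we never expect to fail in the field */
        return output / input;
    }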

I use ASSERT or DoAssert (CVI) to check for out-of-tolerance conditions that happen once in a blue moon (a good ol' southern saying that means "not very often").

Another way of doing internal testing is logging. Some people only use logging as a last resort after problems are found, but I advocate putting in logging as you're coding, placing log statements at potential problem spots. Compiler directives (#ifdef and #endif) can control whether the logging is executed or not.

I have some logging routines lying around that I always use. They use the va_list/vprintf style of functions, so logging looks just like a printf statement in C. A LogOpen function is called at the start to open the log file, and a LogClose function is called at the end to stop the logging. It does I/O during execution by flushing the buffer every time LogData is called, but that can be controlled with compiler directives; the flush can be turned off if you don't want the I/O delays.
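To show what I mean (this is only a rough approximation of routines like mine, not the actual code), the skeleton looks something like this:

    #include <stdarg.h>
    #include <stdio.h>

    static FILE *gLogFile = NULL;

    void LogOpen(const char *path)  { gLogFile = fopen(path, "w"); }
    void LogClose(void)             { if (gLogFile) { fclose(gLogFile); gLogFile = NULL; } }

    /* printf-style logging; LOG_FLUSH controls whether every call forces the
       I/O, so you can trade crash-proof logs for fewer I/O delays. */
    void LogData(const char *fmt, ...)
    {
        va_list args;

        if (gLogFile == NULL)
            return;
        va_start(args, fmt);
        vfprintf(gLogFile, fmt, args);
        va_end(args);
    #ifdef LOG_FLUSH
        fflush(gLogFile);
    #endif
    }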

And, as I started with, there are always the standard debugging tools. These are just a couple of ways to do debugging internal to the module. I just want people to step out of their little box and think "How can I make my code better?" and "What can I do to keep from giving incorrect results or errors?"