New Guide “Acceptance Test Engineering” or “How to decide if your software is ready”

The Patterns & Practices team has been pumping out a lot of great content lately. So much that I haven’t been able to keep up with it all. What I like about their new model is that they publish their works in draft or beta form to CodePlex.com so that they can gather community feedback.

The latest guide from them is titled “Acceptance Test Engineering”, and can be downloaded from here. Please read it and provide some feedback, any feedback, to the team. P&P is a great example of how some groups at Microsoft are embracing the community and leveraging its input to make their products better.

I think this guide will be of interest to lead developers, consulting managers, project managers, test leads, and architects.

This guide focuses on the skills around acceptance testing and how to put them into practice. The first section covers the thinking behind acceptance tests and why we need help defining ‘done’ on our projects.

The second section reads like a patterns book. It is chock full of effective practices. Best part? They explain the scenarios in which each practice would and would not be a good idea. For example, they cover a process known as ‘Test Last’. I have heard TDD developers call this TED, which stands for Test Eventually Development. The guide covers the practice and is clear about its pros (are there any?) and its major cons. It also describes when you are likely to see it, such as when a supplier is not able or willing to provide incremental builds throughout the process. The big limitation they outline is:

“Significant shortcomings may be found too late to do anything about them in the current product release.”

On page 13 they talk about how they tried to define acceptance testing by developing a model and then executing tests against that model to see whether it was a good model to use in the book. The great thing is, the first models failed their tests. I guess doing that is like writing a VB compiler in VB.

When I was in consulting, our project teams spent some of their time on ‘Story Tests.’ The idea was to do some testing at the story level. We felt that testing purely at a code or functional level left us blind to whether the software was achieving its desired goals. We tested across the system, following the story of a user. This is covered in the guide on page 170, but they call it “Soap Opera Testing.” I LOVE that title! I wish I had thought to use it all those years ago.
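
To make that idea a bit more concrete, here is a minimal sketch of what a story-level test might look like in code. The OrderService class and the scenario are hypothetical stand-ins, not taken from the guide; the point is that the test follows one user’s story end to end rather than exercising a single function in isolation.

```python
# A minimal sketch of a story-level acceptance test (hypothetical example).
# OrderService is a toy, in-memory stand-in for a real system under test.

class OrderService:
    """Toy system under test: an in-memory order workflow."""

    def __init__(self):
        self.orders = {}
        self.next_id = 1

    def place_order(self, customer, items):
        order_id = self.next_id
        self.next_id += 1
        self.orders[order_id] = {"customer": customer, "items": items, "status": "placed"}
        return order_id

    def ship_order(self, order_id):
        self.orders[order_id]["status"] = "shipped"

    def order_status(self, order_id):
        return self.orders[order_id]["status"]


def test_customer_orders_and_receives_shipment():
    # Story: a customer places an order and later sees it marked as shipped.
    service = OrderService()

    # Given a customer with items to buy...
    order_id = service.place_order("pat@example.com", ["widget", "gizmo"])

    # ...when the warehouse ships the order...
    service.ship_order(order_id)

    # ...then the customer sees the order as shipped.
    assert service.order_status(order_id) == "shipped"


if __name__ == "__main__":
    test_customer_orders_and_receives_shipment()
    print("story test passed")
```

A unit test would stop at checking place_order in isolation; a story test like this one walks through the user’s narrative across the system, which is the distinction the guide draws with its Soap Opera Testing pattern.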

The third section contains a series of examples. I find examples to be lacking in a lot of guides that try to teach you a new practice. I think real samples and scenarios are important in learning when to use, and when NOT to use, a practice or skill.

The sample story around Soap Opera Testing is truly a soap opera. It involves a CEO, credit card fraud, identity theft, an ex-wife, and a girlfriend.

You should check out the guide, provide feedback, and consider folding the patterns it covers into your process.
