Requirements Metadata Attributes – Part 2

September 8, 2010

To recap from part one, the essential attributes (aka metadata) to be associated with a requirement include Requirement ID, Source ID, Requirement Text, Date, Effectivity and Verification Method.

Beyond these essential attributes, there are some that I have found to be highly desirable, at least in some situations.  These attributes include (a sketch of a combined requirement record follows the list):

  • Requirement Short Text – An abbreviated version of the requirement text can be useful when tracing the requirement, either within the requirement set or into the design and implementation.
  • Type – A code identifying the type of requirement.  A common type set is that identified by the acronym FURPS+.
    • Functional – includes things like feature set, capabilities and security
    • Usability – includes things like human factors, aesthetics, consistency and documentation
    • Reliability – includes things like mean time to failure, recoverability from failure, predictability, accuracy and availability
    • Performance – includes things like speed, efficiency, throughput and response time
    • Supportability – includes things like testability, extensibility, adaptability, maintainability, configurability, installability and localizability
    • The plus covers constraints, such as physical, interface, and implementation requirements
  • Risk – This attribute allows you to identify the relative risk level of each requirement.  Requirements identified as high-risk should be evaluated for inclusion in the technical performance measures.
  • Verification Level – The level at which the requirement will be verified.  Some requirements may be evaluated at the component or sub-system level in order to reduce the amount of time and effort in full system evaluation.
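
Pulling the essential attributes from part one together with the ones above, a requirement record might look something like the following Python sketch.  Everything here is illustrative: the field names, the enum values and the defaults are my own assumptions, not a prescribed schema.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class ReqType(Enum):
        """FURPS+ categories."""
        FUNCTIONAL = "F"
        USABILITY = "U"
        RELIABILITY = "R"
        PERFORMANCE = "P"
        SUPPORTABILITY = "S"
        PLUS = "+"                # constraints: physical, interface, implementation

    @dataclass
    class Requirement:
        req_id: str               # Requirement ID (essential)
        source_id: str            # Source ID (essential)
        text: str                 # Requirement Text (essential)
        entered: date             # Date (essential)
        effectivity: str          # Effectivity (essential)
        verification_method: str  # Verification Method (essential)
        short_text: str = ""      # abbreviated text used when tracing
        req_type: ReqType = ReqType.FUNCTIONAL
        risk: str = "low"         # low / medium / high
        verification_level: str = "system"  # component / sub-system / system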

There is an additional set of attributes, or metadata, that might be employed to support metrics collection and analysis.

  • Funded – An indication that the requirement was a funded change.  It may also be advisable to track the reason for an unfunded change.
  • Deleted – Requirements that are deleted are one indication of volatility.  While some volatility is expected and may mean increased understanding of the problem, too much volatility may indicate the project is in trouble.
  • Modified – A Boolean value that is another indication of volatility (the sketch after this list turns the Deleted and Modified flags into a simple measure).
  • Verification Issue – A general category used to track verification problems and concerns.
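
As a sketch of how the volatility flags might feed a metric, the function below computes the fraction of requirements that were modified or deleted.  The dictionary shape and the 20% threshold are invented for illustration, not recommended values.

    def volatility(requirements):
        """Fraction of the requirement set that was modified or deleted."""
        if not requirements:
            return 0.0
        churned = sum(1 for r in requirements
                      if r.get("deleted") or r.get("modified"))
        return churned / len(requirements)

    reqs = [
        {"id": "R1", "modified": True},
        {"id": "R2", "deleted": True},
        {"id": "R3"},
        {"id": "R4"},
    ]
    if volatility(reqs) > 0.20:  # hypothetical trouble threshold
        print("High requirements volatility - worth investigating")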

Not all of these requirements metadata elements will apply to your particular system development situation.  On the other hand, each of them should be considered for inclusion as you work through the requirements development process.  Part three of this series will conclude with some occasionally useful attributes that you may wish to consider.



Change Case Analysis

August 21, 2009

A while back, another architect mentioned the topic of change cases to me when we were discussing ways to evaluate software architectures.  That was the first time I had ever heard the term, and I assumed it was a new technique.  I filed it away as a topic to research later.  Recently I was asked to provide some training on the topic of architecture evaluation and I decided to do that postponed research.

The two years I spent as a Guide at About.com taught me a lot about internet searches, so I was surprised when I was unable to turn up much in the way of relevant material (if you need to change text from upper to lower case or vice versa, I can tell you where to look).  The only truly relevant information was on Scott Ambler’s Agile Modeling site – http://www.agilemodeling.com/artifacts/changeCase.htm.  The most significant thing there was a link to the book Designing Hard Software by Douglas W. Bennett (http://www.amazon.com/exec/obidos/ASIN/0133046192), an out-of-print 1997 book.  The only reasonable thing for me to do was to buy one of the used copies available on Amazon.

Bennett does a good job of identifying categories of change cases.  Even if some of the specific examples seem a little outdated, the concepts translate quite readily into problems faced by systems under development today.  What’s missing is any indication of what the content of a change case should be, and how change cases should then be used in evaluating the architecture and design of a software system.  Scott Ambler offers the following as a change case template (a sketch of the template as a record appears after the list):

  • Name (One sentence describing the change)
  • Identifier (A unique identifier for this change case, e.g. CC17)
  • Description (A couple of sentences or a paragraph describing the basic idea)
  • Likelihood (Rating of how likely the change case is to occur, e.g. High, Medium, Low, or a percentage)
  • Timeframe (Indication of when the change is likely to occur)
  • Potential Impact (Indication of how drastic the impact will be if the change occurs)
  • Source (Indication of where the requirement came from)
  • Notes
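
To show how the template might be put to work, here is a sketch of a change case as a record, along with a crude likelihood-times-impact score for ranking change cases.  The numeric scale and the scoring rule are my own assumptions; Ambler’s template does not prescribe them.

    from dataclasses import dataclass

    LEVEL = {"Low": 1, "Medium": 2, "High": 3}

    @dataclass
    class ChangeCase:
        identifier: str        # e.g. CC17
        name: str              # one sentence describing the change
        description: str       # a paragraph on the basic idea
        likelihood: str        # High / Medium / Low
        timeframe: str         # when the change is likely to occur
        potential_impact: str  # High / Medium / Low
        source: str            # where the requirement came from
        notes: str = ""

        def exposure(self):
            """Crude ranking score: likelihood times impact."""
            return LEVEL[self.likelihood] * LEVEL[self.potential_impact]

    cc = ChangeCase("CC17", "Support a second payment provider",
                    "The business may add a provider within two years.",
                    "Medium", "1-2 years", "High", "Product owner interview")
    print(cc.identifier, cc.exposure())  # prints: CC17 6

Ranking change cases by a score like this would at least suggest which ones deserve the deeper impact analysis discussed below.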

That is a start on an answer to the first question – what is the content of a change case?  But that content list raises a question of its own – how do we decide on the potential impact?   Also, we still need to consider how we use this information.  Is there a structured format for evaluating the impact of a change case?

One possibility is a variation on Robustness Analysis.  This technique comes from some of Ivar Jacobson’s early work, as elaborated by Doug Rosenberg in the ICONIX process.  In addition to the books that include this topic (http://iconixprocess.com/books/), there’s an article on the ICONIX web site (http://iconixprocess.com/iconix-process/analysis-and-preliminary-design/robustness-analysis/) and one on the Dr. Dobb’s web site (http://www.ddj.com/architect/184414712).  Robustness Analysis is applied to use cases in order to verify their completeness.  A variation on this technique might help to identify which objects/components will need additional responsibilities, and which new components might need to be added where it doesn’t make sense to change or extend an existing component.
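
As a very rough sketch of that variation, assume the robustness diagram has been captured as a graph of boundary, control and entity objects.  Tracing outward from the object a change case first touches yields a candidate list of components that would pick up new responsibilities; the diagram contents here are invented for illustration.

    from collections import deque

    # Invented robustness-diagram fragment: object -> collaborators.
    diagram = {
        "OrderForm (boundary)": ["PlaceOrder (control)"],
        "PlaceOrder (control)": ["Order (entity)", "Payment (entity)"],
        "Order (entity)": [],
        "Payment (entity)": [],
    }

    def impacted(start):
        """Breadth-first trace of every object reachable from the change point."""
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node not in seen:
                seen.add(node)
                queue.extend(diagram.get(node, []))
        return seen

    # A change case that alters payment handling first hits the control object.
    print(sorted(impacted("PlaceOrder (control)")))

Where the trace lands on an object that cannot sensibly absorb the change, that is a hint that a new component is needed.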

A less structured approach might be to emulate the analysis of growth scenarios in the SEI’s Architecture Tradeoff Analysis Method (ATAM) (http://www.sei.cmu.edu/architecture/ata_method.html).  SEI evaluators often talk about pulling a thread to drill down wherever necessary to determine whether the architecture will support the growth.  This approach works well for the experienced evaluators at the SEI, but may be quite challenging for someone without their level of experience.

My plan is to experiment with these techniques.  If you have any suggestions for an alternative method, or for ways to apply these two techniques, please feel free to comment.