
Saturday, December 21, 2019

So, you've got to write an RFP


Spoiler alert! This posting may be dry to the taste...

Ever been asked to write an RFP (request for proposal)?
It may not be as easy as you think.

My metric is about 2-3 hours per finished page, exclusive of specifications. Specs are normally just imported from an engineering, marketing, or product team. So, it could take you the best part of a week to put an RFP in place.

My outline is given in the SlideShare presentation below.

Beyond the outline, here are a few things to think about.

Source identification: Sources, per se, are not part of the RFP, but sources are its audience. And the RFP is written for a specific audience, so sources certainly influence the RFP.

Source identification, or better yet: source identification and validation (vetting), is both a science and an art. The science part is an objective list of source attributes; the art part is judgment, admittedly not objective.

Among sources, there's sole source (the only one in the world who can do it) or selected source (the only one in the world you want to do it), but more often an RFP regulates competition among multiple sources.

Award criteria: How are you going to decide who wins?
  • Lowest cost, technically acceptable (pass/fail) is the easiest and most objective. Just open the envelope of all those with passing technical grades and take the lowest cost -- no questions, no fuss.
  • Best value is not the easiest and not entirely objective, but it might get you the optimal solution. Rather than pass/fail, you get to consider various innovations and quality factors, various management possibilities, what the risk is to you, schedule, and, of course, cost.

    Best value may not be lowest cost. That's always controversial, since cost is the one value proposition everyone understands. No one wants to spend more than the value is worth.

    The flexibility of best value (or, glass half empty: lack of objectivity) comes with its own risk: the award, if in the public sector, is subject to protest about how the best value was determined. (A weighted-scoring sketch follows this list.)
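To make the best-value idea concrete, here's a minimal scoring sketch in Python. The criteria, weights, and cost-scoring rule are hypothetical illustrations, not a prescribed evaluation scheme; a real RFP would publish its own factors and their relative importance.

```python
# Hypothetical best-value scoring sketch; criteria, weights, and scores are illustrative only.
WEIGHTS = {"technical": 0.35, "management": 0.15, "risk": 0.15, "schedule": 0.10, "cost": 0.25}

def cost_score(price: float, lowest_price: float) -> float:
    """Score cost relative to the lowest offer: the lowest bid gets 100."""
    return 100.0 * lowest_price / price

def best_value(proposals: dict) -> str:
    """Return the name of the proposal with the highest weighted score."""
    lowest = min(p["price"] for p in proposals.values())
    totals = {}
    for name, p in proposals.items():
        scores = {**p["scores"], "cost": cost_score(p["price"], lowest)}
        totals[name] = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return max(totals, key=totals.get)

offers = {
    "A": {"price": 900_000, "scores": {"technical": 85, "management": 80, "risk": 70, "schedule": 75}},
    "B": {"price": 1_100_000, "scores": {"technical": 95, "management": 90, "risk": 85, "schedule": 80}},
}
print(best_value(offers))  # "B": best value even though "A" is the lowest-cost offer
```

Run it and offer B can win on best value even though A would win a lowest-cost, technically acceptable competition.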
Risk transfer: what technical, functional, and business risks are you going to transfer to the contractor, and are you prepared to pay for that transfer? In effect, you're buying insurance from the contractor.

All other risks are yours to keep; you can be self-insured, or pass them off to an insurer.

Risk methods: Here are some risk methods you can put in the RFP:
  • Contract incentives, penalties, cost sharing, and other contract cost-schedule-scope controls: all are forms of risk management practices.
  • Liquidated damages: you get paid for your business losses if the contractor screws up
  • Indemnity: your contractor isolates you from liabilities if something goes wrong
  • Arbitration: you agree to forgo some of your legal rights for a simpler resolution of disputes
Statement of work (SoW): This is the part most PMs know something about, so a lot of words aren't needed. It may be the first thing an interested contractor reads.
The SoW answers this question: what is it you want the contractor to do? A top-level narrative, story, WBS, or vision is usually included.

The SoW is where you can say how agile the contractor can be in interpreting the vision. Think about how anchored you are to your story.

Typically, unless you are pretty confident you are right, you don't tell the contractor how to do the job, but only what job needs to be done.

Specifications: How's and what's for compliance: You can speak to how something is to be measured to prove compliance, or to what is to be measured to verify compliance with the specification.

Requirements: And last but not least, the book of requirements is tantamount to the infamous Requirements backlog.

Give these ideas about requirements some thought (a short backlog-audit sketch follows the list):
  • Are they objective, that is, is there a metric for DONE?
  • Are they unambiguous? ... almost never is the answer here. Some interpretation required. 
  • Are they complete? ... almost certainly never is the answer here. But can you admit the requirements are incomplete? In some cultures and contexts, there can be no such admission. Or, you may be arrogant about your ability to obtain completeness. My take: arrogant people take risks and make mistakes... the more arrogance, the larger the risks... etc.

    But where you can admit "not complete", then presumably the contractor can fill out the backlog with new or modified requirements as information is developed.
  • Are they valid? Can you accept that some requirements will be abandoned before the project ends because they've been shown to be invalid, inappropriate, or OBE (overtaken by events)?
  • Are they timely? Can you buy into the idea that some requirements are best left to later? You really don't have enough information at the beginning. There will be decision junctions, with probabilistic branching, which you simply can't control in advance.
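Here's the minimal sketch of what a backlog audit against these questions might look like; the record fields and the sample requirements are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative requirement record; the fields are invented, not a standard.
@dataclass
class Requirement:
    rid: str
    text: str
    done_metric: Optional[str] = None  # objective measure of DONE, if any
    valid: bool = True                 # still believed valid (not OBE)
    deferred: bool = False             # deliberately left for later

def audit(backlog):
    """Flag requirements that fail the objectivity/validity/timeliness questions."""
    return {
        "no metric for DONE": [r.rid for r in backlog if not r.done_metric],
        "invalid or OBE": [r.rid for r in backlog if not r.valid],
        "deferred for later": [r.rid for r in backlog if r.deferred],
    }

backlog = [
    Requirement("R-1", "Report loads in under 2 seconds", done_metric="load time <= 2 s"),
    Requirement("R-2", "System shall be user friendly"),       # no objective DONE metric
    Requirement("R-3", "Support legacy export", valid=False),  # overtaken by events
    Requirement("R-4", "Archive policy TBD", deferred=True),   # deliberately timed for later
]
print(audit(backlog))
```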





Friday, May 24, 2019

Feasibility and risk assessment guide



A posting from Matthew Squair put me onto a nice one-page PDF template for requirements feasibility and risk assessment.

The chart links compliance, achievability, and technical consequences.

Sidebar:
You could, with just a little effort, build an interlinking matrix, something like a QFD (quality function deployment) house of quality:
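A minimal sketch of that interlinking idea, assuming the usual QFD relationship strengths (0/1/3/9) and invented requirements and technical responses:

```python
# Hypothetical QFD-style relationship matrix: rows are requirements ("whats"),
# columns are technical responses ("hows"); cell values 0/1/3/9 are the usual
# none/weak/moderate/strong relationship strengths.
whats = {"fail-safe shutdown": 5, "fast response": 3, "low maintenance": 2}   # importance 1-5
hows = ["watchdog timer", "redundant sensor", "self-test routine"]

relationship = {
    "fail-safe shutdown": {"watchdog timer": 9, "redundant sensor": 3, "self-test routine": 3},
    "fast response":      {"watchdog timer": 3, "redundant sensor": 0, "self-test routine": 1},
    "low maintenance":    {"watchdog timer": 0, "redundant sensor": 1, "self-test routine": 9},
}

# Weighted score for each "how": sum over the "whats" of importance x relationship strength.
scores = {h: sum(imp * relationship[w][h] for w, imp in whats.items()) for h in hows}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # watchdog timer ranks first here
```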


One note:
If you don't recognize the shorthand scientific notation for numbers, like 1.0E-3, this is read as 1 with an exponent of -3, meaning 1/1000, or 0.001. Similarly, 1.0E+3 is read as 1000.

Frankly, I would ignore the number rankings. You can make up your own, or have none.

The important content is in the categories and the category definition.





Thursday, January 31, 2019

I've got to write an RFP



Ever been asked to write an RFP (request for proposal)? It may not be as easy as you think. My metric is about 2-3 hours per finished page, exclusive of specifications. Specs are normally just imported. So, it could take you the best part of a week to put an RFP in place.

My outline is given in the SlideShare presentation below.

Beyond the outline, here are a few things to think about.

Source identification: Sources, per se, are not part of the RFP, but sources are its audience. And the RFP is written for a specific audience, so sources certainly influence the RFP.

Source identification, or better yet, source identification and validation (vetting), is both a science and an art. The science part is an objective list of source attributes; the art part is judgment, admittedly not objective.

Of course, there's sole source (the only one in the world who can do it) or selected source (the only one in the world you want as your contractor), but more often an RFP regulates competition among multiple sources.

Award criteria: How are you going to decide who wins?
  • Lowest cost, technically acceptable (pass/fail) is the easiest and most objective. Just open the envelope of all those with passing technical grades and take the lowest cost -- no questions, no fuss.
  • Best value is not the easiest and not entirely objective, but it might get you the optimal solution. Rather than pass/fail, you get to consider various innovations and quality factors, various management possibilities, what the risk is to you, schedule, and, of course, cost.

    Best value may not be lowest cost. That's always controversial, since cost is the one value proposition everyone understands. No one wants to spend more than the value is worth.

    The flexibility of best value (or, glass half empty: lack of objectivity) comes with its own risk: the award, if in the public sector, is subject to protest about how the best value was determined.
Risk transfer: what technical, functional, and business risks are you going to transfer to the contractor via the RFP, and are you prepared to pay for that transfer? In effect, you're buying insurance from the contractor. All other risks are yours to keep; you can be self-insured, or pass them off to an insurer.

Here are some risk methods you can put in the RFP:
  • Contract incentives, penalties, cost sharing, and other contract cost-schedule-scope controls: all are forms of risk management practices.
  • Liquidated damages: you get paid for your business losses if the contractor screws up
  • Indemnity: your contractor isolates you from liabilities if something goes wrong
  • Arbitration: you agree to forgo some of your legal rights for a simpler resolution of disputes
Statement of work (SoW): This is the part most PMs know something about, so a lot of words aren't needed. It may be the first thing an interested contractor reads. The SoW answers this question: what is it you want the contractor to do? A top-level narrative, story, WBS, or vision is usually included.

The SoW is where you can say how agile the contractor can be in interpreting the vision. Think about how anchored you are to your story.

Typically, unless you are pretty confident you are right, you don't tell the contractor how to do the job, but only what job needs to be done.

Specifications: How's and what's: Specs are where you speak to measures of "how", insofar as you tell the contractor how something is to be measured; or to measures of "what", insofar as you tell the contractor what the metrics are and what their limits are.

Requirements: And last but not least, the infamous requirements backlog or matrix. Requirements are usually made a part of the SoW as detailed "what's", or they can be a specification unto themselves.

Requirements are where problems surface on almost every project:
  • Are they objective, that is, is there a metric for DONE?
  • Are they unambiguous? ... almost never is the answer here. Some interpretation required. 
  • Are they complete? ... almost certainly never is the answer here. But can you admit the requirements are incomplete? In some cultures and contexts, there can be no such admission. Or, you may be arrogant about your ability to obtain completeness. My take: arrogant people take risks and make mistakes... the more arrogance, the larger the risks... etc.

    But where you can admit "not complete", then presumably the contractor can fill out the backlog with new or modified requirements as information is developed.
  • Are they valid? Can you accept that some requirements will be abandoned before the project ends because they've been shown to be invalid, inappropriate, or OBE (overtaken by events)?
  • Are they timely? Can you buy into the idea that some requirements are best left to later? You really don't have enough information at the beginning. There will be decision junctions, with probabilistic branching, which you simply can't control in advance.




Wednesday, July 4, 2018

Getting timely


And so this headline pops up: "Time split to the nanosecond ... ".

It's only a foot
Per se, nanoseconds are not particularly new or unique to projects, and most in a PMO know it's a pretty short time: about the time required for light to travel a foot (11.8 inches, but let's not be too picky on this one).

However, we now learn that worldwide computer networks need to be time-synchronized to a nanosecond of accuracy. That is actually a tall order, given network latency, media differences in propagation, and so forth.

Did I mention requirements?
And where did this requirement come from? (Being PMO types, we always ask for the requirement first. If it's just smoke and mirrors, we can write this one off and go on to the next big thing.)

No less than Wall Street (actually, Times Square in NYC, where the NASDAQ is).
Shocking -- shocking! -- as it seems: It's all about the money!

It's a scheduling problem
And, it's about schedule (something we understand), and specifically sequencing tasks within schedules and setting up the right dependencies (Hey! This stuff is right downtown for PMOs).

When you're responsible for executing financial trades, timed to the nanosecond, in the correct order -- and order counts when you are looking at market volatility at nanosecond rates -- you had better get it right. Or else!

So, break out your critical path analysis and your PDM charts, and get to work on the NASDAQ. There's money to be made!
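For flavor, here's a minimal sketch of sequencing invented trade events by nanosecond timestamps and checking a simple precedence rule (a cancel must follow its order), in the spirit of a PDM dependency check:

```python
import time

# Invented trade events carrying nanosecond timestamps; a cancel depends on its order.
now = time.time_ns()
events = [
    {"id": "ORD-1", "t_ns": now + 120, "depends_on": None},
    {"id": "CXL-1", "t_ns": now + 95,  "depends_on": "ORD-1"},  # "earlier" due to clock skew
    {"id": "ORD-2", "t_ns": now + 50,  "depends_on": None},
]

# Sort by timestamp, then check precedence -- the PDM-style dependency test.
processed = set()
for e in sorted(events, key=lambda e: e["t_ns"]):
    if e["depends_on"] and e["depends_on"] not in processed:
        print(f"sequence violation: {e['id']} would execute before {e['depends_on']}")
    processed.add(e["id"])
```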




Monday, November 7, 2016

The verification game, or gaming verification


At Critical Uncertainties I read a post that I hope is meant in the best of humor, but actually it might be quite serious.

Here's the setup:
  • Customer states requirement
  • Customer states requirement verification protocol
  • Project team implements protocol
  • But wait ... protocol is only statistically applicable
And here's what Critical Uncertainties writes:
".... one realises [SIC] that requirements are 'operationally' defined by their associated method of verification. ..... Now if you're in luck ..... you propose adopting a statistical proof (because it's such a tough requirement and/or there's variability in the process, weasel weasel) of compliance based on the median of a sample of tests.
Using the median is important as it's more resistant to outlier values, which is what we want to obfuscate (obviously).
As the method of verification defines the requirement all of a sudden you've taken the customer's deterministic requirement and turned it into a weaker probabilistic one."

This last thing is the key and merits repeating:
"As the method of verification defines the requirement all of a sudden you've taken the customer's deterministic requirement and turned it into a weaker probabilistic one."
OMG! Did you pull a fast one on the customer, or did you simply introduce the customer to the realism of the verification protocol?
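Here's a minimal sketch of the point, with invented numbers: a median-based protocol can "pass" even when individual trials bust the deterministic requirement.

```python
from statistics import median

LIMIT_MS = 100.0                                     # hypothetical requirement: response <= 100 ms
trials = [72, 81, 85, 90, 94, 97, 98, 99, 250, 400]  # invented test results, in ms

print("median:", median(trials))          # 95.5 -> a median-based protocol "passes"
print("worst case:", max(trials))         # 400  -> the deterministic requirement is busted
print("fraction compliant:", sum(t <= LIMIT_MS for t in trials) / len(trials))  # 0.8
```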



Wednesday, February 17, 2016

Did I mention requirements?



This we learn from a recent posting at herdingcats:
"We must be crystal clear here. Requirements may emerge, but the needed capabilities at the needed time are a critical success factor for any project, no matter the domain. As Yogi reminds us. If you don't know where you're going, you might not get there."

Of course, the posting goes on to cite some sources, among which I find myself, for which I am gratified to be included:

[1] The Requirements Engineering Handbook, Ralph R. Young, Artech House, 2004.
[2] Requirements Engineering: A Good Practice Guide, Ian Sommerville and Pete Sawyer, John Wiley & Sons, 1997.
[3] Succeeding with Agile: Software Development Using Scrum, Mike Cohn, Addison-Wesley, 2010.
[4] Project Management the Agile Way: Making it Work in the Enterprise, 2nd Edition, John C. Goodpasture, J. Ross, 2015.
[5] Agile Estimating and Planning, Mike Cohn, Prentice Hall, 2006.
[6] Agile Project Management for Government, Brian Wernham, Maitland & Strong, 2012.

Want more on agile? The second edition is available now.




Saturday, September 19, 2015

Ah, those requirements!


We may not know much about the requirements; our backlog may be slight; but we do know this about that:
1st requirements paradox:
  • Requirements must be stable for reliable results
  • However, the requirements always change
2nd requirements paradox:
  • We don't want requirements to change
  • However, because "requirements change" is a known risk, we try to provoke requirements change as early as possible

So writes Niels Malotaux in an interesting 16-page paper describing his version of Agile, titled "Evolutionary Project Management Methods: how to deliver quality on time in software development and system engineering projects".

Here's what Malotaux calls "Magic Words":

  • Focus: Developers tend to be easily distracted by many important or interesting things. Some things may even really be important, however, not at this moment.

  • Priority: Defining priorities and only working on the highest priorities guides us to doing the most important things first.

  • Synchronise [sic]: Every project interfaces with the world outside the project. Active synchronisation [sic] is needed to make sure that planned dates can be kept.

  • Why: This word forces us to define the reason why we should do something, allowing us to check whether it is the right thing to do.

  • Dates are sacred: In most projects, dates are fluid. Sacred dates means that if you agree on a date, you stick to your word.

  • Done: To make estimation, planning and tracking possible, we must finish tasks completely. Not 100% finished is not done. This is to overcome the "If 90% is done we continue with the other 90%" syndrome.

  • Bug, debug: A bug is a small creature, autonomously creeping into your product, causing trouble, and you cannot do anything about it. Wrong. People make mistakes and thus cause defects. The words bug and debug are dirty words and should be erased from our dictionary.



Thursday, March 12, 2015

V and V in the Agile domain


Here's some draft text from chapter 5 of my upcoming 2nd edition of "Project Management the Agile Way"

Traditional V&V: the way it is
Traditional projects rely on validation and verification (V&V) for end-to-end auditing of requirements:
  • Validation: After structured analysis, and before any significant investment in design, the requirements 'deck' is validated for completeness and accuracy. If there are priorities expressed within the deck, these priorities are validated, since priorities are influenced by the dynamics of circumstance and context.
     
  • Verification: After integration testing, the deck is verified to ensure that every validated requirement was developed and integrated into the deliverable baseline; or that changed/deleted requirements were handled as intended.
Agile V&V: the way to do it
 
Agile projects are less amenable to the conventional V&V processes because of the dynamic and less stationary nature of requirements. Nonetheless, the spirit of V&V is a useful and effective concept, given the danger of misplacing or misstating requirements:
 
  • Validation: After the business case is set, some structured analysis can occur on the top level requirements. Typically, such analysis is an Iteration-0 activity. As in the traditional project, and before any significant investment in design, the requirements ‘deck’ is validated for completeness and accuracy insofar as the business case defines top level requirements.
  • If there are priorities expressed within these business case requirements, these priorities are also validated since priorities are influenced by the dynamics of circumstance and context
  • Conversational requirements are also validated, typically after the project backlog or iteration backlog is updated. However, individual conversations often don't have sufficient context for effective validation. Thus, some judgment must be applied. Multiple conversations are aggregated into a larger scope and validated for completeness, accuracy, and priority.
  • Verification: After integration testing, the deliverable functionality is verified to ensure that every validated conversation was developed and integrated into the deliverable baseline, or that changed/deleted conversations were handled as intended (see the sketch after this list).
  • During development, we can expect some consolidation of stories, and we can expect some use (or reuse) of common functionality. Thus, we are not suggesting that Agile is to maintain a fully traceable identity from the time a conversation is moved into the design and development queue to the time integration testing is completed. However, the spirit of the conversation should be there in some form. It's to those conversational forms that verification is directed.
  • In some organizations, verification is seen as just a part of integration testing; the last thing you do before signing off on a completed test.
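Here's a minimal sketch of the verification check described above, treating it as set arithmetic over invented conversation identifiers:

```python
# Invented identifiers: validated conversations (stories) versus what integration
# testing finds in the deliverable baseline, plus dispositions for changes/deletions.
validated = {"C-101", "C-102", "C-103", "C-104", "C-105"}
in_baseline = {"C-101", "C-103", "C-104", "C-200"}        # C-200 consolidated from C-102
dispositioned = {"C-102": "consolidated into C-200", "C-105": "deleted by change request"}

unaccounted = validated - in_baseline - set(dispositioned)   # should be empty at closeout
unexpected = in_baseline - validated - {"C-200"}             # functionality with no validated source

print("unaccounted conversations:", unaccounted or "none")
print("unexpected functionality:", unexpected or "none")
```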
 




Sunday, December 21, 2014

Require then acquire


Ashton Carter, the soon-to-be U.S. Secretary of Defense, has an essay in Foreign Affairs -- January/February 2014 -- in which he describes measures of the last few years to insert agility into the weapons procurement process.

We're not talking software in the Agile sense -- but some of those ideas scaled up (way up!) are applicable. Carter is talking about timeliness: being able to deliver in a timeframe that which is effective for the mission. But, of course, agilists know what that entails -- see the Agile Manifesto for guidance.

Carter goes on:
The usual process of writing “requirements,” an exhaustive process to determine what the military needs based on an analysis of new technology and future threats, would not suffice in Afghanistan and Iraq. That is because the system known inside the Pentagon as “require then acquire” demands complete information: nothing can be purchased until everything is known
Ooops! All of us who've studied these things and have had real experience in wartime acquisition know that "require then acquire" is not going to get you there fast enough.

Solution: State the mission in functional system terms; provide the timeline. Then give latitude to make it work, and then delegate to the lowest possible level to increase velocity. After all, velocity and management layers are an oxymoron -- to have one requires that you ditch the other.

Or, as Robert Gates, former US Defense Secretary said:
The troops are at war; the Pentagon is not

Thus was born the ...  "Warfighter SIG, which became the Pentagon’s central body for senior officials to weigh solutions to battlefield problems, locate the necessary resources to pay for them, and make the right acquisitions."

And, guess what they discovered: In urgent situations, the Pentagon will have to settle for an imperfect solution that nonetheless fills a gap.

The lesson learned is the same lesson learned and re-learned: To get it done, the day-to-day bureaucracy -- or the matrix system in projects -- always marches too slowly to the wrong drumbeat. Thus, special organizations, SWAT Teams, Task Forces, etc are stood up to get the job done, sweeping aside the protections of due process, taking risks, and driving for payoff.

Would that it worked that way and produced results without a war to drive it!



Monday, December 15, 2014

Cascading risks


Everyone who's done risk management failure analysis for a while on large-scale systems has likely done it the "reductionist" way, with failure mode analysis, decomposition or reduction into trees of interconnected components, and the like. All good stuff, to be sure, and reduction methods fit the model of subsystems with defined interfaces or, in the software domain, APIs (application programming interfaces).

Now comes a posting from Matthew Squair with a keen observation: Rather than a hierarchy of subsystems that is the architecture of large scale systems, we are more likely to see subsystems as networks with perhaps multiple interconnecting nodes. Now, the static models used in reductionist methods may not predict the most important failures.
"You see the way of human inventiveness is to see the potential of things and race ahead to realise them, even though our actual understanding lags behind. As it was with the development of steam engines, so too with the development of increasingly interdependent and critical networks. Understanding it seems is always playing catchup with ability ...

A fundamental property of interdependent networks is that failure of nodes in one network can cause failures of dependent nodes in other networks. These failures can then recurse and cascade across the systems. "

Of course, such an idea has been in the PMI Risk Practice Manual for some time in the guise of linked and cascading cause-effects where myriad small effects add up to a big problem. And in the scheduling community we've recognized for years that static models, like the PERT model, are greatly inferior to dynamic models for schedule network analysis for just the point Squair makes: performance at nodes is often the Achilles' heel of schedule networks; no static model is going to pick it up and report it properly.

Squair goes on to tell us:
"... we find that redundant systems were significantly degraded or knocked out, not by direct fragment damage but by the failure of other systems."

Ooops! Attention project managers: what this guy is telling us is that it's not enough to write down a hierarchy of requirements, and in the software domain it's really not enough to write a bunch of user stories... if that's all you do, you'll miss two important elements:
  • Architecture that doesn't fail is a lot more complex than just segregating data from function, or bolting one subsystem to another
  • Requirement "statements" and "gather requirements" rarely give you the dynamic requirements that are the stuff of real system performance
What to do: of course a reference model is always great, especially if it's a dynamically testable model; if you don't have a reference model, then design test procedures and protocols that are dynamic, ever expanding as you "integrate", so that you've got a more accurate prediction at the largest scale possible.
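Here's a minimal sketch of Squair's point: failures propagating across an invented dependency graph, taking out a "redundant" element that shares a common dependency.

```python
from collections import deque

# Invented interdependencies: a node fails if anything it depends on fails.
depends_on = {
    "power": [],
    "network": ["power"],
    "database": ["power", "network"],
    "app": ["database", "network"],
    "backup_app": ["database"],   # "redundant" app, yet it shares the database dependency
}

def cascade(initial_failures):
    """Propagate failures breadth-first through the dependency graph."""
    failed = set(initial_failures)
    queue = deque(initial_failures)
    while queue:
        down = queue.popleft()
        for node, deps in depends_on.items():
            if node not in failed and down in deps:
                failed.add(node)
                queue.append(node)
    return failed

print(cascade({"database"}))   # the app and its "redundant" backup both go down
```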


Wednesday, October 29, 2014

Requirements entropy framework


This one may not be for everyone. From the Winter 2014 issue of "Systems Engineering" we find a paper about "... a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development."

Recall from your study of thermodynamics that the 2nd Law is about entropy, a measure of disorder. And Lord knows, projects have entropy, to say nothing of requirements!

The main idea behind entropy is that there is disorder in all systems, natural and otherwise, and there is a degree of residual disorder that can't be zeroed out. Thus, all capacity can never be used; a trend can never be perfect; an outcome will always have a bit of noise -- the challenge is to get as close to 100% as possible. This insight is credited to Bell Labs scientist Claude Shannon.

Now in the project business, we've been in the disorder business a long time. Testers are constantly looking at the residual disorder in systems: velocity of trouble reports; degrees of severity; etc

And, requirements people the same way: velocity and nature of changes to backlog, etc.

One always hopes the trend line is favorable and the system entropy is going down.

So, back to the requirements framework. Our system engineering brethren are out to put a formal trend line to the messiness of stabilizing requirements.
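Before the abstract, here's a minimal sketch of the underlying idea: Shannon entropy computed over the distribution of requirements across quality levels. This is generic information theory, not the paper's exact formulation, and the counts are invented.

```python
from math import log2

def entropy_bits(counts):
    """Shannon entropy (bits) of a distribution of requirements across quality levels."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# Invented snapshots: requirements counted in four quality levels, low to high.
early = [40, 30, 20, 10]   # spread across levels: high disorder
late  = [2, 3, 5, 90]      # most requirements at the top level: low disorder

print(round(entropy_bits(early), 2))  # ~1.85 bits
print(round(entropy_bits(late), 2))   # ~0.62 bits -- the trend line you hope for
```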

Here's the abstract to the paper for those that are interested. The complete paper is behind a pay wall:
ABSTRACT
This paper introduces a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development.

The REF treats the requirements engineering process as an open system in which the total number of requirements R transition from initial states of high requirements entropy HR, disorder and uncertainty toward the desired end state of [symbol not rendered] as R increase in quality.

The cumulative requirements quality Q reflects the meaning of the requirements information in the context of the SE problem.

The distribution of R among N discrete quality levels is determined by the number of quality attributes accumulated by R at any given time in the process. The number of possibilities P reflects the uncertainty of the requirements information relative to [symbol not rendered]. The HR is measured or estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process.

The requirements information I increases as HR and uncertainty decrease, and ΔI is the additional information necessary to achieve the desired state from the perspective of the receiver. The HR may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels.

Current requirements volatility metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF measures the quantity of requirements changes over time, distinguishes between their positive and negative effects in terms of [symbol not rendered] and ΔI, and forecasts when a specified desired state of requirements quality will be reached, enabling more accurate assessment of the status and progress of the engineering effort.

Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The additional engineering effort ΔE needed to transition R from their current state to the desired state can also be estimated. Simulation results are compared with measured engineering effort data for Department of Defense programs, and the results suggest the REF is a promising new method for estimating engineering effort for a wide range of system development programs



Thursday, August 28, 2014

Product owner's council


I got this idea from a student; I think it has merit, so I'll pass it along here:
Our backlog is managed by our Product Owners Council, made up of 7 voting members representing the users in the global markets. The voting members are responsible for translating their constituent user requirements into user stories for review by the council.

The council members bring their user stories to council meetings where they are reviewed using specified criteria before admission to the backlog. We have a large backlog that is prioritized by the POC for each release based on the velocity the development team can deliver, usually 100 points per sprint per scrum team - there are multiple sprints and scrum teams per release.

.... in our case, there are constant trade-offs taking place in what gets developed from the backlog. When functional requirements are the focus of a release, the POC meetings become very political, with each member arguing for their backlog of stories. The final vote includes a consolidated view from each POC member on the user story's impact on our customer, regulatory environment, and user productivity, along with a high, medium, low ranking.

When the global deployment was in progress, all user stories that enabled the countries to go live on the system were prioritized ahead of functional requirements, including some less critical defects.

Our current direction from executive stakeholders is to standardize our cloud based systems and any requirement that enables the standardization will have precedence over functional requirements.
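Here's a minimal sketch of cutting a release from a ranked backlog at the velocity the student describes (about 100 points per sprint per scrum team); the stories and points are invented.

```python
# Invented backlog already ranked by the POC: (story id, points).
backlog = [("S-1", 40), ("S-2", 30), ("S-3", 50), ("S-4", 20), ("S-5", 60), ("S-6", 10)]

def cut_release(ranked_backlog, sprints, teams, velocity_per_team=100):
    """Fill the release in priority order up to the total point capacity."""
    capacity = sprints * teams * velocity_per_team
    release, used = [], 0
    for story, points in ranked_backlog:
        if used + points <= capacity:
            release.append(story)
            used += points
    return release, used, capacity

print(cut_release(backlog, sprints=1, teams=2))  # (['S-1', 'S-2', 'S-3', 'S-4', 'S-5'], 200, 200)
```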


Tuesday, December 10, 2013

Necessity drives invention


From one of my agile students:
Traditionally my organization -- a healthcare insurance company -- adopts the waterfall approach, but with the projects related to healthcare reform and participation in the new health insurance exchanges we had to adopt Agile methods, especially because of the lack of clarity in requirements and the ever-changing rules and regulations from the state and federal governments.


Thursday, April 18, 2013

Storytelling with 'big data'


Jeff Bladt and Bob Filbin wrote a concise post at HBR.org about the way they go about applying big data in their business. Their idea:
A Data Scientist's Real Job: Storytelling
 
Data is dull; information may be interesting; but stories can be captivating.
It's sales 101: there's the messenger and there's the message. The combination, well made, is what gets the point across.

Authors Bladt and Filbin are talking about how to analyze and present information from a data warehouse or other large repository of data.

They've devised a process in three steps:
  1. Look only for data that affect your organization's key metrics
    This seems obvious on the face of it, but confusion, ambiguity, and incoherence affect all too many data analyses. Look to the business scorecard and/or the project scorecard for the key metrics (a.k.a. Key Performance Indicators, KPI) that really count
  2. Present data so that everyone can grasp the insights
    As the authors say: "...never show a regression analysis or a plot from R. In fact, our final presentation had very few numbers."
  3. Return to the data with new questions
    This step is continuous improvement -- CI -- applied to storytelling, using feedback from the audience to refine the story and find new chapters.
Project management effect
From the project management perspective, these data stories may well come around as requirements. The agilists will handle them as 'user stories' and use cases -- maintaining the conversational character. The traditionalists will structure them into 'shalls' and 'wills'.

Here's my take:
The popular vernacular is 'narrative'. So in the context of data, what's a narrative? Simply said, it's discrete facts -- call them 'data dots' -- sequenced, related, and linked in such a way that a theme emerges and a story -- a narrative -- is evident: a beginning, middle, and ending. In other words, the dots are connected.

Usually, there's a self-evident purpose for constructing a specific story from the data dots, though it's likely that more than one narrative is possible. Just put the dots back in the pot, draw them out again, and connect them differently: same facts, different story!

In any event, the good news for the project is that there is a data source to go back to; the bad news is it's not stationary, subject to updates from business operations.

But data changes...
The project might want to think about capturing a data image and putting it away as a 'static' version. This image capture can be the baseline for requirements. And, even though static, if accessible by a project analysis engine, the project can continue to probe for derived requirements.


Thursday, January 10, 2013

Requirements feasibility and risk assessment guide


A posting from Matthew Squair put me onto a nice one-page template for requirements feasibility and risk assessment. The chart links compliance, achievability, and technical consequences. You could, with just a little effort, build an interlinking matrix, something like a QFD (quality function deployment) house of quality:


One note: if you don't recognize the shorthand scientific notation for numbers, like 1.0E-3, this is read as 1 with an exponent of -3, meaning 1/1000, or 0.001. Similarly, 1.0E+3 is read as 1000. Frankly, I would ignore the number rankings. You can make up your own, or have none.

The important content is in the categories and the category definition.

Sunday, December 2, 2012

The agile conversation


It's a conversation now
User stories are a real shift in the way customer/users express themselves.

Stories are a move away from the world of "shall and will" structured requirements and into the world of "want and need" conversation.

Thus, agile is a domain of conversational requirements. As such, it's much less dogmatic about requirement structure. The downside is that verification of design and validation at delivery requires that the customer/user be in touch along the way, or they may lose touch with the conversational thread.

Customers in shock
Of course, this may come as a big shock to customers: they may not be accustomed to, or expecting, being embedded, always at the ready, and empowered to rule in near real time. It takes a savvy dude to exercise this power wisely, effectively, and knowledgeably.

Some training (of the customer) may be required! (Don't try this at home)

Keeping track for V&V
And, it's much more difficult to record a conversation, structure it to remove ambiguity, vagueness, and redundancy, and relate it to other requirements. These difficulties beget test-driven development, both tactically and strategically. In the TDD paradigm, the test script/scenario/procedure captures and documents feature, function, and performance.
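Here's a minimal sketch of that TDD idea: the test script becomes the durable, executable record of the conversation. The story, function, and threshold below are invented.

```python
import time

# The conversation (invented): "As a claims analyst, I want a claim summary in
# under 2 seconds so that I can keep up with the call queue."
# The test below becomes the durable, executable record of that conversation.

def claim_summary(claim_id):
    """Stand-in for the feature under test (hypothetical)."""
    return {"claim": claim_id, "status": "open"}

def test_claim_summary_is_fast_and_complete():
    start = time.perf_counter()
    summary = claim_summary("CLM-42")
    elapsed = time.perf_counter() - start

    assert elapsed < 2.0                  # performance: the conversation's metric for DONE
    assert summary["claim"] == "CLM-42"   # function: the right claim comes back
    assert "status" in summary            # feature: status is part of the summary

if __name__ == "__main__":
    test_claim_summary_is_fast_and_complete()
    print("the conversation is verified by its test script")
```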

Quality
And, what's more, if done to the requisite standard, quality in the small sense (conformance to standard) and quality in the large sense (conformance to need and want) come along also.


Thursday, November 8, 2012

Conversation through contracts


In the agile business, we've pretty much done away with the 'shalls' and 'wills' of traditional structured requirements, replaced by the conversational story. Even the use case, certainly structured by the UML paradigm, is not so much a 'shall and will' device.


That said, how do you convey a conversation through the vehicle of a contract? Offhand, it sounds like an oxymoron, to be sure. There's certainly no more structured device in the project business than a contract. Is it the right framework to converse with?

My recommendation, over the past several years that I've been talking about this, is that the right contract is a fixed-price framework with either fixed-price or cost-reimbursable work orders, each work order corresponding to an iteration. Obviously, to keep the overhead down, a really lean process for writing work orders is Job 1.

First conversation
The time to have the first conversation is before the framework is put in place. This is when you explain the vision and discuss the top level narrative and the overall value proposition. Subsequently, the framework captures the contractual elements of the overall project...if you're building a bridge, that's one thing; if you're building an ERP, that's another.

Retrospective conversation
Then, as part of the retrospective review of the backlog, iteration by iteration, there are more conversations, each one then put more or less into the job order. The JO should still have some flexibility to handle unforeseen backlog problems. However, the foresight of the JO need only be a few weeks, so its risk of the truly unknown is foreshortened.

For more, give this a read:



Wednesday, July 25, 2012

Must do vs Should do

In my classes about agile methods and risk management, I entertain discussions about requirements management, and specifically whether or not students buy into Must Do/Should Do requirements -- either in the RTM (requirements traceability matrix) or the backlog.

This is another way of talking about MoSCoW (except I've shortened this discussion to just the M and S); a small backlog sketch follows the list:
  • M - MUST: Describes a requirement that must be satisfied in the final solution for the solution to be considered a success.
  • S - SHOULD: Represents a high-priority item that should be included in the solution if it is possible. This is often a critical requirement but one which can be satisfied in other ways if strictly necessary.
  • C - COULD: Describes a requirement which is considered desirable but not necessary. This will be included if time and resources permit.
  • W - WON'T: Represents a requirement that stakeholders have agreed will not be implemented in a given release, but may be considered for the future.
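And the sketch promised above: Must/Should tagging in an invented backlog, with the S's as the scope slack.

```python
from enum import Enum

class MoSCoW(Enum):
    MUST = "M"
    SHOULD = "S"
    COULD = "C"
    WONT = "W"

# Invented backlog items tagged with MoSCoW priorities.
backlog = [
    ("encrypt data at rest", MoSCoW.MUST),
    ("audit log export", MoSCoW.SHOULD),
    ("dark-mode UI", MoSCoW.COULD),
    ("legacy report parity", MoSCoW.WONT),
]

musts = [item for item, p in backlog if p is MoSCoW.MUST]
shoulds = [item for item, p in backlog if p is MoSCoW.SHOULD]

# The S's (and C's) are the scope slack: they can be deferred without busting the baseline.
print("commit:", musts)
print("slack:", shoulds)
```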

So, most of my hardware project students say: Once the customer has decided on the RTM, there's no debate and no M vs S; everything's an M;

And, most of my software project students say just the opposite: there's room for the S's in the backlog; and

Most of my students who deal in the public sector or work through contracts, regardless of the technology, say: a contract is a contract... there's no room for the S's.

What do I say? I say requirements (a synonym for scope) need slack just like the schedule. A plan without slack is more a hope than a plan. If both scope and schedule have slack, then by extrapolation the budget has some slack. The payoff for slack is predictability.

Some may call it sandbagging: fair enough, sometimes there are sandbaggers that give the slackers a bad name. But nobody can predict with anything other than prayers where a no-slack project is going to wind up.

I used to build hardware; a lot of it, and complicated stuff. I never met an RTM that didn't have some flexibility in it, even with a contract. Now, an enlightened contract will be an award fee contract. An award fee contract rewards (with an award of fee) thinking and innovation. What's not to like about that?

I never accept "never" in the project business. We're not doing six-sigma production; we're doing stuff for the first time, and so we can expect 'stuff' to happen. That's where slack comes in... to handle the 'stuff'


Saturday, May 26, 2012

Requirements (again!)

Matthew Squair is a safety guy. When he writes a requirement, it's serious stuff. People and systems can hurt themselves if he gets it wrong.

In a recent posting he linked to a paper he has co-authored about writing good requirements and specifications, especially for safety and fail-safe systems.

Here's an example from that paper, entitled "What Happens when Engineers get it Wrong"

The following is a real example of the consequences of a requirements error:
  • As written -- "The system shall ignore all anomalies 20 seconds prior to shutdown"
  • As built -- "The system actually cleared the anomalies list 20 seconds prior to shutdown"
  • What was needed -- "The system should have ignored all anomalies occurring in the 20 seconds prior to shutdown"
  • What happened -- The system detected an anomaly within the window of vulnerability, responded, and as a result destroyed itself.

While this example is a safety-related one, such errors can be no less costly in terms of time and money for less critical applications.
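Here's a minimal sketch of the distinction, with invented anomaly handling: the needed behavior ignores anomalies throughout the window; the built behavior wipes the list once and then responds to anything new.

```python
SHUTDOWN_WINDOW_S = 20.0

def respond_as_needed(time_to_shutdown_s):
    """What was needed: ignore any anomaly occurring in the final 20 seconds."""
    return time_to_shutdown_s > SHUTDOWN_WINDOW_S   # respond only outside the window

class AsBuilt:
    """What was built: clear the anomaly list once at T-20 s, then respond normally."""
    def __init__(self):
        self.anomalies = []
        self.cleared = False

    def tick(self, time_to_shutdown_s, new_anomaly=None):
        if time_to_shutdown_s <= SHUTDOWN_WINDOW_S and not self.cleared:
            self.anomalies.clear()       # a one-shot cleanup, not continuous ignoring
            self.cleared = True
        if new_anomaly is not None:
            self.anomalies.append(new_anomaly)
            return True                  # still responds inside the window -> loss of the system
        return False

system = AsBuilt()
print(respond_as_needed(15.0))              # False: the needed behavior ignores it
print(system.tick(15.0, "sensor glitch"))   # True: the built behavior responds anyway
```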

He then continues with some advice about constructing a requirement, saying in part:

"The most basic construction of a requirement can be expressed as an actor (the thing of interest), performing some act (the action to be taken) upon a target (the focus). So in the preceding example the system is the actor, the act is to ignore and the focus is the all anomalies. The requirement also has a constraint applied, in that the act can only occur in the 20 seconds prior to shutdown."

This really isn't too far from writing a use case in a structured way. One reference I like on use cases is by Karl Wiegers, entitled "Software Requirements". In spite of the title, it's quite a good tome on how to structure good requirements, whether emergent, incremental, evolutionary, or foreseen.


Monday, February 6, 2012

The death of "shall" and "will"?

"Shall" and "will" have been my guys for a long time! From the moment I took Requirements 101 I learned a few things:
  • The buyer "will" do this and that (for the seller). Seller = developer, or contractor if selling to a project office.
  • The seller "shall" do this and that (for the buyer). Buyer = customer, or project office if buying from a contractor.
  • Never put two required outcomes in the same statement; in other words, no compound requirements that confuse "or" and "and".

But the agilists are pushing aside "shall" and "will" as relics of mid-20th-century project doctrine. Now, we have "conversations":

"As a {role}, I want {some capability, feature, functionality, or performance capability/capacity} so that {something can be accomplished}"

And, the conversation, often written informally on a card posted on a wall in the war room is just the beginning. From the card ensues the real conversation, ultimately documented by developers in design level test scripts and by business analysts in business scenario scripts. The former are used for verification, and the latter for validation (The ole V & V of the "V" model).

The little card on the war room wall all but disappears; it's not retained or maintained. What's the memorial to requirements? Test scripts.

Is this a bad thing? Actually, no. On large scale projects, like an ERP installation, I've found that the "shall" and "will" business is less effective than business scripts supported by process diagrams and workflow tables.

So, perhaps the time has come to retire "shall" and "will" from a whole class of projects.
