A Better Project Model than the “Waterfall”

This happens every day: A “solution” is handed to a team to build. The team determines the scope of the work, develops a project plan, and promises a set of features by a specific date. It is assumed that these features will solve some business problem or meet an executive directive, because someone with a pay grade higher than the execution team’s has blessed the work. And with this set of features and project plan, the waterfall cycle begins again.

The team begins to define the work in painstaking detail, outlining every possible scenario, use case, back-end integration point, and business rule. The design team takes it from there, visually articulating the placement of every pixel and the logic behind every input field, check box, and submit button. Soon the software engineers will get their turn at bat followed closely by their compatriots in quality assurance and then, finally, after the t-shirts have been printed and the launch parties have died down, the company’s customers will get their first chance to use the new product.

Despite the confidence inspired by all of this upfront planning and design, the only thing guaranteed to come out of this waterfall process is the wrong solution. It’s not that the project sponsor chose an inadequate feature set or that the system’s user experience was poor. The team delivered the wrong solution because the project was based on unvalidated assumptions. These assumptions were presented as requirements and, as such, were not positioned as something that could be disproved or even questioned.

Example of a requirement:
We will build three new conversion paths using three new content verticals to attract new customers.

The problem with requirements is that they are often wrong. If a team isn’t given the opportunity to find out whether the prescribed features solve a need that actually exists for real customers, the effort is doomed to fail on the day it goes live. My business partner Josh Seiden likes to say, “In software development, reality bats last.” What he means is that no matter how optimistic a team may be about the product it’s building, the only thing that will prove the project successful or not is the reality of real customers using it to solve real problems. Working in a waterfall style delays reality’s turn at bat until the end of the project, when all resources have been spent. This is extremely risky, since any fundamental failure in the solution — be it a feature customers didn’t actually need or customers’ difficulty in using it — is extremely costly to correct this far downstream.

Instead of presenting their teams with lists of features to build in a sequence, organizations should present their teams with business problems to solve. Provide constraints for the solution and any assumptions that currently exist about that business problem and its target audience. Most importantly, each team should be handed specific success criteria. These criteria should be quantifiable and point to specific outcomes that prove the customer’s need has been met and the business problem has been solved.

Example of a problem statement:
Conversion rates have dropped to 1.12% with 18- to 34-year-old frequent online shoppers. This decline is driven largely by a similar decline in the return visitor rate for this group (currently at 9%). This indicates a decrease in value of our offering to these customers. The ecommerce business needs to generate at least $61.2MM this year with 20% coming from this demographic. We need to reach a 40% return visitor rate, which should get us to the 5.4% conversion rate needed to hit our revenue targets.
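The arithmetic behind a problem statement like this can be made explicit so the team can sanity-check its own targets. The sketch below is illustrative only: the segment traffic and average-order-value figures are assumptions introduced for the example, not numbers from the statement above.

```python
# Hypothetical sketch: deriving the conversion rate needed to hit a
# revenue target, using revenue = visitors * conversion_rate * AOV.

def required_conversion_rate(revenue_target, visitors, avg_order_value):
    """Conversion rate needed for a segment to hit its revenue target."""
    return revenue_target / (visitors * avg_order_value)

# Segment revenue target: 20% of the $61.2MM annual goal.
segment_revenue_target = 61_200_000 * 0.20   # $12.24MM

# The inputs below are illustrative assumptions, not source figures.
annual_segment_visitors = 3_000_000          # assumed traffic for this demographic
avg_order_value = 75.0                       # assumed average order value

rate = required_conversion_rate(
    segment_revenue_target, annual_segment_visitors, avg_order_value
)
print(f"Required conversion rate: {rate:.2%}")
```

Under these assumed inputs the required rate comes out near the 5.4% figure in the statement; with a team’s real traffic and order-value data, the same formula shows exactly how the conversion target was reached.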

By shifting the conversation from output of features to explicitly measurable outcomes, the team can now begin figuring out the best solution to this problem. Then, instead of writing a set of rigid requirements, the team begins writing a series of hypotheses. These hypotheses state the team’s assumptions in a testable way.

Example of a hypothesis:
We believe that building three new content verticals will increase the number of new and returning customers. We will know we are successful when new visitor numbers increase by 5% and returning visitor numbers double from their current state.
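Because the success criteria are quantifiable, they can be encoded directly as a check that each iteration’s metrics either pass or fail. This is a minimal sketch with hypothetical metric names and illustrative numbers, not a prescribed implementation.

```python
# Minimal sketch: encoding the hypothesis's success criteria so each
# iteration's measurements can be evaluated against them.

def hypothesis_met(baseline_new, current_new,
                   baseline_returning, current_returning):
    """Success: new visitors up at least 5% AND returning visitors doubled."""
    new_ok = current_new >= baseline_new * 1.05
    returning_ok = current_returning >= baseline_returning * 2
    return new_ok and returning_ok

# Illustrative numbers only (not from the source).
print(hypothesis_met(100_000, 106_000, 9_000, 18_500))  # both criteria met -> True
print(hypothesis_met(100_000, 104_000, 9_000, 18_500))  # new visitors short -> False
```

Expressing the criteria this way keeps the definition of “success” unambiguous across iterations: either the numbers clear the thresholds or the hypothesis is reworked.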

Working in short, iterative cycles, the team can now begin testing this hypothesis. Instead of biting off large sets of features, the team conceives, designs, and builds “first drafts” of ideas that are deployed quickly. These “first drafts” are measured against the target metrics, and if they show promise (i.e., the numbers are moving in the right direction), they are refined in the next iteration. If they don’t perform, they are reworked or scrapped in favor of the next idea.

These tight learning loops allow the team to bite off significantly smaller chunks of risk while giving reality a chance to take many turns at the plate. The team may learn very quickly that building three new content verticals is overkill and only one is needed. Alternatively, it may learn that three new verticals won’t make a difference at all in its success metrics, at which point it will have to conceive a new hypothesis.

When the project timeline elapses, the team may not have shipped as many features as it would have in the old waterfall way, but those it did ship are the right features. Working with hypotheses instead of requirements allows the team to figure out which solutions have the most promise of success while minimizing the amount of time spent developing ideas that are wrong.
