Recently I integrated a business function across multiple subsidiary companies to create a unified process and came across an interesting situation: the net new process included more activities than one of the original subsidiary flows. Although the net new process enhanced overall functionality, and was meant as a straw man for the detailed business design, it got me thinking: when does a process cross the line from ‘re-engineered’ to ‘over-engineered’?
A well re-engineered process reduces the cost, time, or effort required to complete a task (measured by performance metrics such as process step counts, total or partial execution time, cost per unit, etc.); an over-engineered process returns only marginal improvements, or even a decline in performance. There is a fine line between re-engineering and over-engineering, though, and determining when to stop re-engineering a process to avoid over-engineering it is a decision Business Analysts often face.
A key to making sure a process isn’t over-engineered is to keep it as simple as possible, including only what is necessary and nothing more. Not only does this optimize the process, it also provides several advantages that may be less tangible:
- Easy to implement – training resources to execute a simple process reduces overhead, minimizes onboarding time, and increases response capability
- Easily repeatable – there aren’t a lot of nuances to address each time it is executed
- Easy to maintain – enhancements are easy to integrate
- Easily leveraged – a simple process can be extended across subsidiary businesses or similar functions; an overly complex one is difficult to apply more broadly, which reduces scalability
To optimize the returns simplicity provides, I begin by creating a stripped-down process flow that is clear and concise, with as few decision points as possible. First I identify the process trigger and desired outcome, then elaborate the process by integrating the essential business activities. I consider this draft the ‘Happy Path’ – the route through the process we wish every event followed.
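The ‘Happy Path’ draft can be sketched as a simple data structure: a trigger, the essential activities in order, and the desired outcome, and nothing more. This is a minimal illustration only; the process and activity names are invented, not taken from any real flow.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HappyPath:
    """A stripped-down process draft: trigger, essential activities, outcome."""
    trigger: str
    outcome: str
    activities: List[str] = field(default_factory=list)

    def step_count(self) -> int:
        # One of the performance metrics mentioned earlier: process step count.
        return len(self.activities)

# Example: an invented order-fulfilment flow (names are illustrative only).
path = HappyPath(
    trigger="Customer order received",
    outcome="Order shipped and invoiced",
    activities=["Validate order", "Pick stock", "Pack", "Ship", "Invoice"],
)
print(path.step_count())  # 5
```

Keeping the draft this small makes it obvious when a later addition raises the step count without improving the outcome.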
Once the ‘Happy Path’ is established, vigilance is required to avoid over-engineering the process. To adequately map the process, the essential alternate paths and exception cases must be addressed, while non-critical diversions should be skipped. Because re-engineering a process should focus on simplicity, eliminating unnecessary paths and exceptions should be a goal: introducing too many non-critical alternate and exception paths can very quickly take the process from simple and straightforward to overly complex and sluggish.
Aside from overcomplicating a process, when too much interference is introduced the exception becomes the rule. When that happens, incorporating yet another detour is only marginally more disruptive to the flow, so it may not be thoroughly scrutinized. Failing to review secondary paths and identify their root causes inflates a process: it accommodates and propagates problems rather than solving them. (Analyzing digressions may also reveal opportunities for improvement beyond the current scope.)
Obviously it is just as detrimental to omit critical alternate scenarios as it is to over-indulge. When deciding where to draw the line between what makes the cut and what doesn’t, I rely on Pareto’s Principle, which holds that 20% of the problems cause 80% of the work. By analyzing the exception cases with this in mind, a determination can be made about which cases to address to get ‘the most bang for your buck’. Exception cases that occur rarely and require significant effort to resolve will not optimize the process, so these cases should be handled outside of the process to minimize complexity and maximize productivity.
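One way to apply Pareto’s Principle here is a simple frequency cut: rank the exception cases by how often they occur and keep only the smallest set that covers roughly 80% of total occurrences, leaving the long tail to be handled outside the core process. A minimal sketch, with invented case names and counts:

```python
# Hypothetical exception cases and how often each occurs per month.
cases = {
    "missing PO number": 120,
    "address mismatch": 45,
    "duplicate order": 20,
    "legacy part code": 8,
    "manual credit hold": 5,
    "one time in 1994": 2,
}

total = sum(cases.values())  # 200 occurrences in all
covered, keep = 0, []
# Walk the cases from most to least frequent until ~80% is covered.
for name, count in sorted(cases.items(), key=lambda kv: kv[1], reverse=True):
    if covered >= 0.8 * total:
        break
    keep.append(name)
    covered += count

print(keep)  # ['missing PO number', 'address mismatch']
```

In this invented data set, two of the six cases account for over 80% of occurrences; the remaining four would be documented and handled outside the core process. A fuller analysis would also weigh resolution effort, not just frequency.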
To support this analysis and demonstrate the cost vs. benefit of accommodating rare situations and one-off experiences, I use a Process Performance Matrix. The Process Performance Matrix captures the following process attributes:
- Trigger – the event that ‘kicks off’ the process
- Expected outcome – the result of the execution of the process
- Frequency – how often a process is executed (cycles per day, week, month, etc.)
- Throughput – how many units of work are completed (per day, week, month, etc.)
- Step count – how many distinct steps are required to accomplish the activity
- Duration – how long the process takes to execute start to finish
- Effort – any significant work or business knowledge required
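The attributes above could be captured as one row of the matrix per scenario, which makes the ‘apples to apples’ comparison concrete. The sketch below is an assumption about one workable layout; the scenario names, numbers, and the `minutes_per_unit` measure are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ScenarioMetrics:
    """One row of a Process Performance Matrix (attributes as listed above)."""
    name: str
    trigger: str
    expected_outcome: str
    frequency: int       # executions per month
    throughput: int      # units of work completed per month
    step_count: int      # distinct steps to accomplish the activity
    duration_min: float  # minutes to execute, start to finish
    effort: str          # significant work or business knowledge required

    def minutes_per_unit(self) -> float:
        # A normalized figure for 'apples to apples' comparison.
        return self.frequency * self.duration_min / self.throughput

happy = ScenarioMetrics("Happy Path", "Order received", "Order shipped",
                        950, 950, 6, 12.0, "none")
rare = ScenarioMetrics("Legacy part code", "Order received", "Order shipped",
                       8, 8, 14, 90.0, "senior planner lookup")
print(happy.minutes_per_unit(), rare.minutes_per_unit())  # 12.0 90.0
```

Side by side, the rare scenario costs over seven times as many minutes per unit while occurring a handful of times a month, which is exactly the kind of unbiased comparison that supports excluding it from the core process.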
Documenting the attributes in a standard format allows ‘apples to apples’ comparison, helps assess each scenario’s criticality and priority, and helps demonstrate the advantages and disadvantages of accommodating specific scenarios. This framework drives unbiased decisions, which is especially helpful when the ‘one time in 1994’ mentality prevails and someone wants every exception case accommodated. Furthermore, consistently applying this methodology provides clear justification for which activities were included in the core process and which were omitted.
After Pareto’s Principle has been applied to the entries in the Process Performance Matrix and a determination has been made about what belongs in the core process, integrate those scenarios into the process map. When the process flow is complete, document everything that was excluded, including the reason for exclusion and a remediation plan.
When all is said and done you should have a simple, well-engineered process with optimal throughput, and a well-documented plan for the rare events that will be handled outside the core process. After the re-engineered process is implemented, monitor and maintain it to ensure your decisions were correct and the improvements are realized. Additional scenarios may have to be included in the future, but occasionally expanding a process to accommodate change is better than carrying an overweight process full of untraveled paths.