Category Archives: Systems Thinking and Measures

Service Metrics: What You Need to Understand

I overheard a fellow say “I’ve got Deming’s principles down pat, now all I have to do is understand this variation thing.”  Hmmm, Dr. Deming was a statistician, and his philosophy came from his understanding of variation, taught to him at the Western Electric plant (Chicago, IL) in the late 1920s by Dr. Walter Shewhart.  What W. Edwards Deming learned was how to evaluate data using a statistical process control (SPC) chart.  To me, that understanding is the difference between knowledge and tampering or guessing.

Early in my career I was a corporate director of operations, where I learned to evaluate income statements and compare last month’s revenue, expenses, etc. to this month’s.  All types of dictates and commands came from this naive view of data.

After attending Dr. Deming’s 4-day seminar and learning from the likes of Dr. Don Wheeler and Dr. “Frony” Ward, I learned a better way to manage with data.  In statistical terms, that means understanding the difference between common and special causes of variation.  Let’s pretend we have monthly sales of 15, 19, 14, 16, 12, 17, 15, 17 and 11 (in thousands).  A manager might conclude that the month with 19,000 in sales is a cause for celebration (best month on record) and that the last month, with 11,000, is reason to “bark” at the salespeople for poor sales.

By plotting the data on an SPC chart (below), we can tell that we can expect anywhere from 5.1 (LCL, the Lower Control Limit) to 25.1 (UCL, the Upper Control Limit), with an average of 15.1.  A manager celebrating 19,000 or getting upset over 11,000 is engaging in foolishness.  As a matter of fact, sales anywhere between 5,100 and 25,100 (the control limits) wouldn’t be unusual.  This is called common cause variation.
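For those who want to see where those limits come from, here is a minimal sketch in Python of the individuals (XmR) chart calculation, which is the standard SPC chart for data like this and reproduces the numbers quoted above.  The 2.66 factor is the standard XmR scaling constant.

```python
# Minimal sketch: XmR (individuals) control limits for the sales data above.
# The 2.66 factor is the standard constant that converts the average moving
# range into 3-sigma control limits.

sales = [15, 19, 14, 16, 12, 17, 15, 17, 11]  # monthly sales, in thousands

mean = sum(sales) / len(sales)

# Moving ranges: absolute differences between consecutive months.
moving_ranges = [abs(b - a) for a, b in zip(sales, sales[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar  # Upper Control Limit
lcl = mean - 2.66 * mr_bar  # Lower Control Limit

print(f"average = {mean:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
# -> average = 15.1, LCL = 5.1, UCL = 25.1
```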

[SPC chart: Common Cause Variation]

Conversely, if the next month showed 28,000 in sales (see chart below), this would be outside the UCL (Upper Control Limit).  The $28,000 month is unusual (outside the limits), meaning we have a special cause.  Something unusual has happened.  Now is the time to investigate: a point outside the limits is overwhelming evidence that we should look for the “special cause.”  There are other indicators of special causes (a run of 8 and others) that need to be accounted for, but this is a blog post, so I will keep to the basics.
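As a rough sketch of how those tests look in code, here are the two special-cause indicators just mentioned: a point outside the control limits, and a run of 8 consecutive points on one side of the average.  The limits are the ones computed above.

```python
# Sketch: two common special-cause tests, using the limits computed above.

def outside_limits(data, lcl, ucl):
    """Indices of points beyond either control limit."""
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

def runs_of_eight(data, mean, run_length=8):
    """Starting indices of runs of `run_length` points on one side of the mean."""
    flags = []
    for i in range(len(data) - run_length + 1):
        window = data[i:i + run_length]
        if all(x > mean for x in window) or all(x < mean for x in window):
            flags.append(i)
    return flags

sales = [15, 19, 14, 16, 12, 17, 15, 17, 11, 28]  # new month: 28 (thousand)
print(outside_limits(sales, 5.1, 25.1))  # -> [9]: the 28,000 month is a special cause
print(runs_of_eight(sales, 15.1))        # -> []: no run-of-8 signal in this data
```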

Not understanding the difference between common and special causes leads a manager to tamper with the system.  Dr. Deming outlined two types of mistakes:

  1. Reacting to an outcome as if it came from a special cause, when it actually came from common causes of variation.
  2. Treating an outcome as if it came from common causes of variation, when it actually came from a special cause.

A systems thinking organization (or any other organization) must understand the difference between special and common causes of variation in order to manage effectively.  Leadership development, organizational change management programs and even technology implemented without these basics keep service organizations from making better decisions.  This isn’t just for Lean Six Sigma Black Belts and Master Black Belts; we all use data, and we must know how to use it to make better decisions and avoid the mistakes Dr. Deming warned us about.

Tripp Babbitt is a speaker, blogger and consultant to the service industry (private and public).  He is focused on exposing the problems of command and control management and the termination of bad service through application of new thinking . . . systems thinking.  Download the free “Understanding Your Organization as a System” and gain knowledge of systems thinking, or contact us about our intervention services at [email protected].  Reach him on Twitter at www.twitter.com/TriBabbitt.


Benchmarking: What is it Good For?

Absolutely nothing! 

My counterparts in the UK have an article that is a worthy read, Systems Thinking and the Case Against Benchmarking, which discusses benchmarking public-sector housing repairs.  In the article, Paul Buxton outlines four conditions that must be met in order to learn from another organization.  They are:

  1. That the other organization is operating in a comparable environment.
  2. That the other organization’s performance is better than your own.
  3. That you can understand the reasons its performance is better.
  4. That the lessons learned can be applied to your organization.

Paul points out that there are “significant difficulties” in meeting any of these conditions and that “all are necessary to be sure that performance is not made worse.”

My background in customer service consulting has allowed me to observe that benchmarking either gives an organization a false sense of security (my metrics compare well) or leads to tampering with the system when the metrics are deemed sub-optimal.  As an example of the latter, I have seen an “industry benchmarked standard” for call answer rate of 93.49% held against a service organization sitting at 85%.  There are several problems with this comparison:

  • What is the operational definition of “answer rate”?  I came from a background of defining SLAs favorably for the Fortune 500 companies that hired me.  What is counted and not counted in that answer rate?  If the phone system goes down, those calls may not get counted, and there are any number of other scenarios where things that might lower the answer rate get excluded.  Believe me, data gets manipulated all the time to put companies in a positive light (see the sketch after this list).
  • Is my service really better or worse if my answer rate is lower?  Beyond the operational definition problem, there are others.  I could be answering 100% of calls and still not be providing good service.
  • By taking action, I risk sub-optimization.  My pursuit of 100% of calls answered may negatively impact other parts of the system, leading to increased total costs.
  • Different customer demands than the “benchmarked standard” may affect the call answer rate.  Every service company has a different set of customers.
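To make the operational definition point concrete, here is a hypothetical sketch: the same call log yields two very different “answer rates” depending on whether abandoned calls and outage windows are counted.  Every record and carve-out below is invented for illustration.

```python
# Hypothetical sketch: one call log, two operational definitions of "answer
# rate". All records and carve-outs below are invented for illustration.

calls = [
    {"answered": True,  "abandoned": False, "during_outage": False},
    {"answered": True,  "abandoned": False, "during_outage": False},
    {"answered": False, "abandoned": True,  "during_outage": False},
    {"answered": False, "abandoned": False, "during_outage": True},
    {"answered": True,  "abandoned": False, "during_outage": False},
]

# Definition A: answered calls over ALL offered calls.
rate_a = sum(c["answered"] for c in calls) / len(calls)

# Definition B: exclude outages and abandons from the denominator -- the kind
# of carve-out that makes the reported number look better.
countable = [c for c in calls if not c["abandoned"] and not c["during_outage"]]
rate_b = sum(c["answered"] for c in countable) / len(countable)

print(f"Definition A: {rate_a:.0%}")  # -> 60%
print(f"Definition B: {rate_b:.0%}")  # -> 100%
```

Same calls, same service, a 40-point difference in the reported metric.  Comparing your Definition A against someone else’s Definition B tells you nothing.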

The command and control thinker likes the idea of having the benchmark to create the “standard,” and usually the next step is to use the standard as a target.  The organization then sets its resources (people, technology, process, etc.) to achieving the “benchmarked” target.  The target guarantees neither better service nor reduced costs.  There are too many unknowns about what we are comparing against, making benchmarking a waste of resources.

Worse, service companies try to copy competitors whose systems are completely different . . . different people, culture, technology, processes, etc.  The consequences are often disastrous, and they rest on the faulty assumptions that benchmarking promotes.

There is a better way.  A service organization can identify measures that relate to purpose and act on the causes of variation.  W. Edwards Deming taught us this.  What is your system capable of achieving, and what are the causes of variation in your system?  The 95 Method (my method of choice) promotes performing “check” to gain knowledge of your system (purpose, measures, demand, flow and value).  This systems thinking approach will give you a strategy for change that allows you to achieve business improvement.

So, what is benchmarking good for? Absolutely nothing! . . . say it again.


SLA = Stupid Limiting Agreements

SLAs seem to be the staple of the customer management process for contracts, performance and operations.  The first time I heard the term SLA, I was consulting for a Fortune 500 IT company that needed a group of metrics because of the poor service it had been delivering to its banking customers.  I was already a student of the statistics of Shewhart and Deming, meaning I understood the difference between “common” and “special” causes of variation, and I also understood that having a service level agreement (SLA) didn’t improve the performance of the organization.  I used SPC (statistical process control) to tell the two kinds of variation apart.  All of this is basic to improving the system.

The problem . . . I was the only one focused on improving the partnership.  The IT vendor and the customer were focused on the service level and not the system.  The customer wanted penalties and the IT vendor wanted rewards (and to avoid penalties).  The two groups spent an inordinate amount of time dickering over what the rewards and penalties should be, and I (working for the IT vendor) was there to make sure the operational definitions of the metrics were such that the vendor could not fail.  The slew of waste (manipulation, reward/penalty setting, etc.) between the IT vendor and the customer was astonishing.  No one was interested in working together to improve method, or even in discussing the validity of the original measures.

SLAs are no more than targets, and they create what I believe to be adversarial relationships and distrust, focusing on results rather than method.  This is no different when the SLAs are internal; I see it between departments and units: “I will get you my work in 2 days or less.”  The problem is that the measure is not tied to any customer metric; it is all internally focused.  And the manipulation begins when you hear things like “the clock doesn’t start until I open your request” and they don’t open their email for a week . . . did they really hit the SLA?  The sketch below shows how much that game changes the reported number.
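Here is a hypothetical sketch of that “clock doesn’t start until I open your request” game: one request, measured two ways.  The dates and the two-day SLA are invented for illustration.

```python
# Hypothetical sketch: one request, two ways of starting the SLA clock.
# The dates and the 2-day SLA below are invented for illustration.
from datetime import datetime, timedelta

SLA = timedelta(days=2)  # "I will get you my work in 2 days or less."

submitted = datetime(2010, 3, 1, 9, 0)  # customer sends the request
opened    = datetime(2010, 3, 8, 9, 0)  # recipient finally opens it, a week later
completed = datetime(2010, 3, 9, 9, 0)  # work delivered

# What the customer experienced: 8 days -- a clear miss.
print("customer view:", completed - submitted, "met:", completed - submitted <= SLA)

# What gets reported when the clock starts at "opened": 1 day -- an SLA "hit",
# while the customer waited more than a week.
print("reported view:", completed - opened, "met:", completed - opened <= SLA)
```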

A better “systems thinking” way is to understand purpose from the customer’s perspective, derive measures and then find “new” methods.  This avoids the waste associated with measures that do not matter.  Workers who understand good customer metrics and expectations can be creative in changing method.  Partners (like my Fortune 500 IT company and its customers) can achieve continual improvement by working together on method, not SLAs.


The New SPC for Service: System Performance Capability

One of the things integral to a systems thinking approach is the use of data, but not in the command and control way of thinking.  I have already heard from many of you who are moving away from traditional (and destructive) call center management measures like talk time and other productivity-related measures.

We also need to understand the data on a service organization’s ability to perform against customer demand.  These are better measures because they concern the performance of the system as a whole, not the performance of a single unit or department.  A systems thinker understands that if the customer expects to get something within a time frame (say, a week) and the service isn’t performed within that time frame, the miss will create failure demand (chase calls).

The data from the customer expectation is typically “end-to-end,” crossing (potentially) multiple units or departments.  In a command and control organization, the measures are by individual, unit or department, not “end-to-end.”  These end-to-end times have a nominal value (what matters to the customer), and in command and control organizations they are often ignored.

The new SPC for service needs to be System Performance Capability, measured using statistical process control.  What is the customer expectation around a service, and how well does the service organization perform against it?  This “outside-in” approach is key to the systems thinking organization.  The command and control organization passes financial and performance metrics down from the top and never considers the customer perspective.

The measures needed to achieve business improvement are concerned with demand and flow (a sketch follows the definitions below):
Demand – The type and frequency of the demands that customers place on the system, and the predictability of failure demand and value demand.
Flow – The capability of the system to handle demand in one stop.  If customer demand has to go through multiple hand-offs, what is the capability of handling that demand, as defined by the customer?
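As a hypothetical sketch of what these two measures might look like, here is a small calculation of one-stop capability and end-to-end performance against a customer nominal of one week.  All of the data, and the one-week nominal itself, are invented for illustration.

```python
# Hypothetical sketch: demand and flow measures. All data below is invented.

# Each completed demand: end-to-end days (customer request to resolution)
# and the number of hand-offs it passed through.
demands = [
    {"end_to_end_days": 3,  "handoffs": 0},
    {"end_to_end_days": 10, "handoffs": 3},
    {"end_to_end_days": 5,  "handoffs": 1},
    {"end_to_end_days": 2,  "handoffs": 0},
    {"end_to_end_days": 14, "handoffs": 4},
]

NOMINAL_DAYS = 7  # what matters to the customer: resolution within a week

# Flow: how often is demand handled in one stop (no hand-offs)?
one_stop = sum(d["handoffs"] == 0 for d in demands) / len(demands)

# Capability: how often does the end-to-end time meet the customer's nominal?
within_nominal = sum(d["end_to_end_days"] <= NOMINAL_DAYS for d in demands) / len(demands)

print(f"one-stop capability: {one_stop:.0%}")      # -> 40%
print(f"within one week:     {within_nominal:.0%}")  # -> 60%
```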

In future blogs I will walk through the statistical definition of capability for our new SPC system.
