I visited a service team yesterday that had reduced penalties paid to customers for late deliveries from 6,000 euros a month to 100 last month. How did they do it? They did a value-stream map (VSM) of the process, zeroed in on the bottleneck, and redesigned the process. NOT. I'm pulling your leg (got you, didn't I? :)). When I asked them how they did it, they told me: we followed the procedure.
Okaaaay. What they actually did was chart the late deliveries (and associated penalties) and then look into every single instance of late delivery. They spotted minor problems creating hiccups and solved them one by one. In the process, without making a big thing out of it, they did change some parts of the process, but in no way that would qualify as a redesign.
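To make the method concrete, here is a minimal sketch of that kind of charting, using a hypothetical incident log (all order numbers, amounts, and causes are invented for illustration, not the team's actual data):

```python
from collections import Counter

# Hypothetical incident log: (order_id, penalty_eur, root_cause)
late_deliveries = [
    ("A-101",  800, "address typo on shipping label"),
    ("A-117", 1200, "carrier pickup missed"),
    ("A-123",  400, "stock miscount at picking"),
    ("A-130",  900, "order stuck waiting on one approval"),
]

# First, chart the penalties to make the cost visible...
total = sum(penalty for _, penalty, _ in late_deliveries)
print(f"Total penalties this month: {total} EUR")

# ...then look into every single instance -- no Pareto cut-off,
# because each grain of sand turns out to be different in nature.
causes = Counter(cause for _, _, cause in late_deliveries)
for cause, count in causes.items():
    print(f"{count}x {cause}")
```

The point of the sketch is what it does *not* do: no ranking, no filtering to the "top" causes, just every instance examined and fixed one by one.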
It turns out that grains of sand really do accumulate into major blockages. It also turns out that in a service environment every grain of sand is different in nature, so it's hard (and silly) to come up with structural solutions. The CEO of this company was astonished to find that so many small hiccups could add up to such a visible excess cost, and yet we see it in many, many cases. Our assumption that a process with a few random hitches will carry an acceptable burden of exceptional costs is just plain wrong.
Any process should deliver flawlessly, period - because this means we understand the process in its context. Random mishaps are a sign of not grokking specific cases. As the people in the team solved every issue as it came, without ranking or prioritizing, they learned about specifics, specifics, specifics. They now understand the boundary conditions of their procedure and know when to call for help or do something different. One 100% change doesn't give you 100%, but 90% at best, because it generates more heat than light, whereas a hundred 1% changes will deliver 99%. This can be the difference between making and losing money.
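The closing arithmetic can be written out as a toy calculation. The 90% and 99% figures are the illustrative numbers from the argument above, not measured data; the "sticking" rates are assumptions standing in for "more heat than light":

```python
# One big redesign: worth 100% of the intended improvement,
# but assume only ~90% of it actually sticks.
big_change = 1.00 * 0.90

# A hundred small fixes: each worth 1% of the improvement,
# each ~99% likely to stick because it addresses one specific case.
small_changes = 100 * (0.01 * 0.99)

print(f"One 100% change:      {big_change:.0%}")
print(f"A hundred 1% changes: {small_changes:.0%}")
```

Same total ambition on paper, but the hundred small changes come out ahead - and the gap is exactly the kind of margin that separates making money from losing it.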