In a recent discussion among friends, we briefly debated the lack of training in probabilistic thinking in our schools. We discussed how, of all the disciplines we learn about in school, statistics and probability have by far been the most useful in practice. As a Six Sigma Black Belt in my previous job, I underwent training in thinking statistically and attempted to apply it to solving practical problems at work. The structure of thinking methodically appealed to my engineering bent of mind. Yet across the many projects I took on with statistical tools, I found that in many of them, solutions popped up out of nowhere. It seemed that we had not exhausted the range of possibilities while structuring our various experiments. A tool called Design of Experiments, used extensively to optimize processes, helped us only within the narrowest of confines.

The claim that Six Sigma tools achieved breakthrough processes for companies rang a little hollow. No doubt the tools were useful in narrow confines where all outcomes had been identified; genuine breakthroughs, however, happened not because of the structure but in spite of it. I always felt there was something amiss. The idea of boxing everything into closed-ended models seemed very limiting and restricted experimentation, yet I could not put my finger on what it was. I gradually learnt on the job the approach of 'do – check – select – do again'. While seemingly inefficient, this empirical approach to solving problems in the unknown was my first step in discovering the nature of the world we operate in. I could not have put it better than this beautifully written extract from an article (“Why it is better to be roughly right than precisely wrong” by Lars Syll):

The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the inter-relatedness of its organic parts prevent the possibility of treating it as constituted by 'legal atoms' with discretely distinct, separable, and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

I write my monthly blog to give you, my client, an idea of how I think. This nugget of wisdom (not knowledge), that it is better to be roughly right than precisely wrong, is one guide I use to steer myself in the world I find myself in today. Sure, there is a need to be precise in situations where all outcomes are exhaustively known. You cannot fill in an approximate name on your visa application form, nor can you use a roughly correct email login or password, and you certainly cannot submit an approximate income tax return! All the outcomes in these situations are known and exhaustive, and in such cases being precise is required.

The investing world, on the other hand, is far from deterministic. Valuation models produce deterministic point estimates, which in my opinion are simply self-defeating and self-deceiving. In such a world, an approximate idea of the quality of a business, the certainty of its future cash flows, the competitive intensity it faces, management signals, a rough gauge of its growth potential, and inverted thinking (for example, deducing from the price what the market is expecting) are more powerful than a single valuation figure. By building large buffers around these factors, it is possible to be roughly right in not all, but in at least more than 50%, of our investing decisions. As this thinking works through the portfolio, it is our hope that it will result in positive outcomes over the long run.
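To make the inverted-thinking idea concrete, here is a minimal sketch of backing out the growth rate a market price implies, rather than estimating a "fair value" directly. It assumes a single-stage perpetuity-growth model (the Gordon growth formula), which is a deliberate simplification; the numbers and the function name implied_growth are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of "inverted thinking": instead of producing a point
# estimate of value, start from the market price and back out the growth
# rate the market is implicitly pricing in. Uses the single-stage Gordon
# growth model: price = cash_flow * (1 + g) / (r - g), solved for g.

def implied_growth(price: float, cash_flow: float, discount_rate: float) -> float:
    """Growth rate implied by the market price under a perpetuity-growth model.

    price: current market price per share
    cash_flow: most recent free cash flow (or dividend) per share
    discount_rate: required rate of return, e.g. 0.12 for 12%
    """
    # Rearranging price = cash_flow * (1 + g) / (r - g) for g gives:
    return (price * discount_rate - cash_flow) / (price + cash_flow)

# Hypothetical example: a stock trading at 500 with free cash flow of
# 20 per share, discounted at 12%.
g = implied_growth(price=500.0, cash_flow=20.0, discount_rate=0.12)
print(f"Market-implied perpetual growth: {g:.1%}")  # roughly 7.7%
# If perpetual growth of ~7.7% looks heroic for this business, the price
# is building in expectations we may not want to underwrite.
```

The direction of inference is the point here: rather than defending a precise valuation figure, we ask whether the expectations already embedded in the price look reasonable, and we demand a large buffer before acting.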