Subjective Probability Estimation

This article elaborates on assigning chance subjectively - the hardest part for many people - as introduced in the article <Uncertainty and Chance>. It is by no means a deep dive into risk; rather, it points out which parts of chance assignment are hard and how to resolve them.

 



Probability Estimation

In the previous article, we came across the term "risk profile", which sounds like it has something to do with risk. In fact, a risk profile is simply another table illustrating the consequences resulting from the different possible outcomes, incorporating chance, where you can view and compare all potential outcomes in one go.
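To make the idea concrete, here is a minimal sketch of a risk profile in code; the outcomes, chances and payoffs are hypothetical numbers of my own, purely for illustration:

```python
# A minimal risk profile: each possible outcome with its chance and consequence.
# The outcomes, probabilities and payoffs below are hypothetical.
risk_profile = [
    {"outcome": "best case",  "probability": 0.25, "payoff": 10_000},
    {"outcome": "base case",  "probability": 0.50, "payoff": 3_000},
    {"outcome": "worst case", "probability": 0.25, "payoff": -5_000},
]

# The chances across all outcomes must sum to 100%.
assert abs(sum(row["probability"] for row in risk_profile) - 1.0) < 1e-9

# View and compare all potential outcomes in one go, plus the expected value.
for row in risk_profile:
    print(f'{row["outcome"]:>10}: {row["probability"]:.0%} chance, payoff {row["payoff"]}')

expected_value = sum(row["probability"] * row["payoff"] for row in risk_profile)
print(f"Expected value: {expected_value}")
```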

The hardest part is to assign probability - to estimate probability subjectively with our human brains - also known as "subjective probability". It expresses, subjectively, the degree of confidence that a belief is true.

We humans are wired not to think with numbers when it comes to chance. We won't say, "I trust this guy only 30%". Rather, we'd say, "I don't trust this guy that much". We use words to represent what we mean. Here, the words are "that much". But what does "that much" mean? Can you quantify "that much"?

Or, to put it another way, how can you accurately translate descriptive words into a percentage likelihood (i.e. probability or chance)? For example, what is the probability of occurrence for each of the descriptions 'likely', 'less likely', 'more likely', 'not likely', 'most likely', 'least likely' and 'moderately'? Such a conversion may differ from person to person.

Also, we human beings are subject to heuristics and biases, such as the representativeness heuristic, availability bias and confirmation bias.

Refer to 主觀機率 (Subjective Probability) in the article <貝氏定理 (1): 理論 (Bayesian Theorem)>.

============================


Below are the resolutions I can think of:

 

(1) When we don't have any prior knowledge about the situation, we can use the Principle of Insufficient Reason (also known as the Principle of Indifference) to assign probabilities to the alternatives.

  • Just as I did in Example 4 in <Decision Tree>, I broke the range of monetary award into 4 components and assigned an equal probability (25%) to each component, so that they sum up to 100% in total. A sketch of this idea follows.
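A minimal sketch of the Principle of Insufficient Reason, assuming a hypothetical 4-way split of the award range:

```python
# Principle of Insufficient Reason: with no prior knowledge,
# assign each of the n alternatives an equal probability of 1/n.
# The award ranges below are hypothetical, echoing the 4-way split above.
components = ["0-10k", "10k-20k", "20k-30k", "30k-40k"]

equal_p = 1 / len(components)          # 0.25 each for 4 components
probabilities = {c: equal_p for c in components}

print(probabilities)                   # each component gets 25%
print(sum(probabilities.values()))     # sums to 1.0 (100%) in total
```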

 

(2) As I mentioned, we humans are subject to various kinds of heuristics and biases. When we estimate the odds of occurrence of an event, we may suffer from central tendency bias, a common tendency for humans to rate towards the middle of the scale.

One way to resolve this is to start your estimation from the two extremes instead of the average. This way you may avoid being biased towards the centre.

  • Start from the two extremes first: the highest (largest possible) and the lowest (smallest possible, rare) events. 
  • Then, work back to the middle (average, mean, median) event. 
  • The points somewhere between the middle and the highest extreme, and between the middle and the lowest extreme, then become easier to estimate.

This, on the one hand, helps pin down the tails (and hence the kurtosis) of the distribution, and on the other hand lets you check the symmetry of the two sides of the probability distribution.
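One related formalisation of this "extremes first, middle later" order is three-point (PERT-style) estimation. The sketch below is my own illustration with hypothetical numbers, not a method from the original articles:

```python
# Three-point (PERT-style) estimate: elicit the lowest and highest
# extremes first, then the most likely (middle) value.
low, most_likely, high = 2.0, 5.0, 12.0   # hypothetical elicited values

# The PERT weighted mean pulls the estimate towards the mode,
# while the two extremes fix the spread of the distribution.
mean = (low + 4 * most_likely + high) / 6
stdev = (high - low) / 6                  # common rule-of-thumb spread

print(f"mean ≈ {mean:.2f}, stdev ≈ {stdev:.2f}")

# A noticeably unequal gap between (most_likely - low) and
# (high - most_likely) flags a skewed, non-symmetric distribution.
print("skewed towards the high side:", (high - most_likely) > (most_likely - low))
```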

 

(3) We should understand what we, or other people, mean by descriptive words with regard to probability. If possible, try to avoid using descriptive words in probability assignment. If you have to use them, you should adjust the scale after your initial estimation.

Example: you may define beforehand what the verbal descriptions mean:

  • Most likely or very likely: ≥ 90% chance of occurrence
  • More likely: around 75% chance of occurrence
  • Moderately: around 50% chance of occurrence
  • Less likely: around 25% chance of occurrence
  • Least likely: ≤ 10% chance of occurrence

So, next time when you say "very likely", you should first think about what the most likely and least likely events look like, and what "moderately" likely looks like. Then, come back to your "very likely" and judge how accurately it reflects what you really mean when estimating the probability of the event's occurrence.
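As a sketch, such a pre-agreed scale can be written down as a simple lookup table; the cut-off values are the illustrative ones above, not universal constants:

```python
# A pre-agreed verbal-to-numeric probability scale (illustrative values).
verbal_scale = {
    "very likely":  0.90,   # >= 90% chance of occurrence
    "more likely":  0.75,   # around 75%
    "moderately":   0.50,   # around 50%
    "less likely":  0.25,   # around 25%
    "least likely": 0.10,   # <= 10% chance of occurrence
}

def to_probability(word: str) -> float:
    """Translate a descriptive word into the agreed probability."""
    return verbal_scale[word]

print(to_probability("very likely"))   # 0.9
```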

I would call the steps in (2) and (3) "optimisation".

Or, you can interrogate yourself using the conversation I propose in Clarifying Expert's Estimation in the article <Never Fully Trust Experts or Gurus>.

When thinking about probability, you can also think the frequentist's way: use the frequency of occurrence of historical events, in light of evidence. Try to avoid the subjective probability weighting that tilts objective probability through human biases, as described by Prospect Theory; see <展望理論 Prospect Theory (2): 機率加權函數 (Probability Weighting Function)>.
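A minimal sketch of the frequentist way, assuming a hypothetical record of whether the event occurred on each past occasion:

```python
# Frequentist estimate: relative frequency of occurrence in historical data.
# The history below is hypothetical (True = the event occurred that time).
history = [True, False, False, True, False, True, False, False, False, True]

p_event = sum(history) / len(history)
print(f"Estimated probability from frequency: {p_event:.0%}")  # 40%
```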


(4) We should improve our estimation accuracy* by training. Write down the rationale for why you guess the way you do and how you assign the chance, based on how you arrived at the estimation, and keep asking yourself "why"; a sketch of such a training log follows the list below.

  • If things turn out the way you predicted, with a sufficiently large sample size (N ≥ 30), then your thinking (i.e. your mental model), forecasting technique and information gathered (in terms of type, amount and time period selected) seem okay, up to that moment. 
  • In contrast, if things always go the opposite way, it is a good indication that the information used may not be accurate, appropriate, up-to-date or relevant enough, and perhaps the mental model used is not as sound as it could be. 
  • If the prediction results fluctuate wildly, the reasons may be: 
    • (i) pure bad luck (subject to randomness); 
    • (ii) not updating your model soon enough to adjust your conclusion as new information becomes available; and 
    • (iii) pure heuristics and biases. 
      • Only (ii) can be improved by work. As for (iii), though we can acknowledge heuristics and biases and reduce their influence, we can never fully eliminate them.

* Accuracy means the degree to which your estimation approximates reality, including error and uncertainty. It is usually judged against past events.
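As a sketch of such a training log, assuming hypothetical forecasts, the standard Brier score (the mean squared gap between stated probability and outcome) can measure how far the estimates sit from reality:

```python
# Track forecasts against outcomes to train estimation accuracy.
# Each record: (stated probability, what actually happened, rationale).
# All records below are hypothetical.
forecasts = [
    (0.90, True,  "strong evidence, recent data"),
    (0.70, True,  "similar past cases"),
    (0.80, False, "gut feeling only"),
    (0.20, False, "weak and outdated information"),
]

# Brier score: mean squared gap between forecast and outcome (0 = perfect).
brier = sum((p - float(happened)) ** 2 for p, happened, _ in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Reviewing the rationale of the worst-scored forecast shows whether the
# mental model, the information used, or plain luck is to blame.
worst = max(forecasts, key=lambda r: (r[0] - float(r[1])) ** 2)
print("Worst forecast rationale:", worst[2])
```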


(5) Mind the assumptions. Often, we need to make assumptions before we can do our forecast or probability estimation. The assumptions are usually where the error comes from. No matter whether the assumptions are set by us or by others, we should always question the method, the grounds and the sources behind them, and try to give answers to these questions.

  • As in (4), Q&A is one of the ways to help us stay objective, by which we can hopefully arrive at some reasonable and logical assumptions. We should avoid the noise from the media at all costs.


(6) Do not ignore rare events with a small probability of occurrence. By Murphy's law, when a trial is repeated a large enough number of times, any rare event will eventually happen.
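To see why, as a quick sketch: if a rare event has probability p in each independent trial, the chance that it happens at least once in n trials is 1 - (1 - p)^n, which approaches 1 as n grows:

```python
# Probability that a rare event occurs at least once in n independent trials.
p = 0.001   # a 0.1% chance per trial (hypothetical)

for n in (100, 1_000, 10_000):
    at_least_once = 1 - (1 - p) ** n
    print(f"n = {n:>6}: P(at least once) = {at_least_once:.1%}")
# As n grows large enough, the "rare" event becomes near-certain.
```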


(7) Besides the above, we should budget for estimation error.

We know roughly about our judgemental errors from past daily events. Say, if you know that your guesses of chance are usually higher than the truth, then next time you should adjust your estimated chance a bit lower after your initial guess. How much to adjust by also involves uncertainty. This is what I call "calibration", the same as in (4).
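A minimal sketch of such a calibration adjustment, assuming a hypothetical record of past guesses versus the truth:

```python
# Calibration: shrink the raw guess by the average past overestimation.
# Past (guess, truth) pairs are hypothetical; a positive gap = overestimation.
past = [(0.80, 0.60), (0.70, 0.55), (0.90, 0.75)]

avg_bias = sum(guess - truth for guess, truth in past) / len(past)

def calibrated(raw_guess: float) -> float:
    """Adjust the initial guess downwards by the known average bias."""
    return min(max(raw_guess - avg_bias, 0.0), 1.0)  # clamp to [0, 1]

print(f"Average past bias: {avg_bias:+.2f}")
print(f"Raw 0.80 -> calibrated {calibrated(0.80):.2f}")
```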

We have to remember that there is no 100% certainty in forecasting any future event - absolutely not! That means (i) you should embrace uncertainty in every judgement and forecast; and (ii) you should budget for error, i.e. expect that there could be errors or mistakes in the decision made.

 


Conclusion

Probability estimation is a big topic; it is perhaps the hardest part to get right when it comes to decision making. Machine learning algorithms can do it better than humans, but the logic of thinking is crucial for us to grasp in order to better handle the minor and major decisions we face on a daily basis. Hence, this is a skill we have to keep at our fingertips. With it, we can rapidly run through our tools to make a probability estimation in an attempt to minimise errors, even though they can never be completely avoided. One of the most important prerequisites for a reliable estimation is proper information or evidence.

Therefore, you should learn to:

  • quantify the result, as in (2); 
  • present the result as a range; 
  • present the range of the forecast together with its uncertainty.




Related Topics:

Principle of Insufficient Reason

貝氏定理 (1): 理論 (Bayesian Theorem)

貝氏定理 (5): 貝氏更新 (Bayesian Updating)

Never Fully Trust Experts or Gurus

Uncertainty and Assign Chance

Decision Tree 

展望理論 Prospect Theory (2): 機率加權函數 (Probability Weighting Function)

機率思維 | 大數法則, 小數定律, 賭徒謬誤, 墨菲定律 

第三層思維模型: 從凱因斯選美看估值

我的書架 | 思考的框架 (4): 第二層思維 (Second-Order Thinking)

不確定情況的主觀判斷: 準確 vs 精確 (Subjective Judgement under Uncertainty)

 

