Probability Calculator

Bayesian Probability (Bayes' Rule) Calculator for Updating the Prior Probability of a Hypothesis Using One or Multiple Pieces of Evidence (Conditionally Independent Variables)

For basic usage instructions, click the "Show Explanation" buttons under the Prior Probability and Observation (Evidence) sections of the calculator below. For more detailed information on what this calculator is for, how it works, and how to best use it, read on here:

Background: Bayes' theorem provides a useful method of rationally determining probabilities by starting from a prior probability and updating it with new information. Using this method, this calculator will tell you how likely a hypothesis is to be true based on the available evidence. To understand how Bayes' theorem operates, consider the question: did you pick a weighted coin out of a bag containing both weighted and fair coins? If there are three weighted coins and seven fair coins, you know there is a three in ten chance you took out a weighted coin. But where can you go from there to determine which of those possibilities is true? Well, you can flip the coin! If a weighted coin lands on heads 80% of the time, and the coin you flipped landed on heads, you have some new information. You know that the heads you observed would occur half the time with the 7 fair coins and 80% of the time with the 3 weighted coins. What you observed, then, is represented either by 3 × 0.8 weighted-coin heads flips or by 7 × 0.5 fair-coin heads flips. That is a 2.4-to-3.5 ratio of weighted to fair, updated from the 3-to-7 prior ratio, now that a coin flip has been taken into account. To convert that ratio into a probability that you have a weighted coin given the flip you saw, compute 2.4 / (2.4 + 3.5) ≈ 40.7%. Congratulations, you have just performed a calculation using Bayes' theorem.
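Here is that arithmetic as a small illustrative sketch in Python (not the calculator's own code; the variable names are made up for this example):

```python
# Worked version of the coin example above.
weighted_coins = 3      # weighted coins in the bag
fair_coins = 7          # fair coins in the bag
p_heads_weighted = 0.8  # chance a weighted coin lands on heads
p_heads_fair = 0.5      # chance a fair coin lands on heads

# Expected "ways" the observed heads could have come from each group of coins.
weighted_ways = weighted_coins * p_heads_weighted  # 3 x 0.8 = 2.4
fair_ways = fair_coins * p_heads_fair              # 7 x 0.5 = 3.5

# Probability the coin is weighted, given that it landed on heads.
print(weighted_ways / (weighted_ways + fair_ways))  # ~0.407, i.e. about 40.7%
```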

With Bayes' theorem, first a hypothesis with some existing probability or credence for being true is considered. Then, based on one or more pieces of evidence (that is, tests, events, or observations that would inform the likelihood of the hypothesis being true), such as the results of individual flips of the coin in question, and on how probable those observations would be when the hypothesis is true or not true, the probability that the hypothesis is true can be calculated. Using Bayes' theorem to update probabilities based on new evidence is called Bayesian inference. (This is also the principle behind Bayesian reasoning in general: it is a reminder neither to completely overturn our beliefs whenever new evidence disagrees with the old, nor to stubbornly stand by our priors. Rather, we can revise our beliefs in the combined context of our old priors and the new evidence.)

Specifically, Bayes' theorem tells us that the probability that a hypothesis is true given a new piece of data (the "posterior probability") equals the previously determined probability for the hypothesis (the "prior probability") multiplied by the probability that the data would be as it is if the hypothesis were indeed true, divided by the total probability that the data would be as it is regardless of whether the hypothesis is true. Mathematically, that can be expressed as P(H|E) = [P(E|H) × P(H)] / P(E), where P(H|E) is the probability of the hypothesis being true given some evidence (or "datum"), P(E|H) is the expectation of the evidence being as it is assuming the hypothesis is true, P(H) is the prior probability that the hypothesis is true, and P(E) is the total expected probability of the evidence being as it is. The total expectation of the evidence P(E) can itself be calculated by adding the expectation for the datum given the hypothesis times the prior probability to the expectation for the datum assuming the hypothesis is false times the prior probability that the hypothesis is false. Thus, Bayes' theorem can be written out as P(H|E) = [P(E|H) × P(H)] / [P(E|H) × P(H) + P(E|¬H) × (1 − P(H))], where P(E|¬H) is the expectation of the evidence being as it is assuming the hypothesis is not true. Notice that this is the exact calculation that was performed in the opening example about coin flips! (Learn more about Bayes' theorem at Bayes' theorem (Wikipedia), Bayes theorem (YouTube: 3Blue1Brown), You Know I’m All About that Bayes: Crash Course Statistics #24 (YouTube: CrashCourse), A visual guide to Bayesian thinking (YouTube: Julia Galef), and Bayes’ Theorem Problems, Definition and Examples (Statistics How To).)
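As a minimal sketch of that formula (illustrative Python only, not the calculator's own code; the names posterior, p_h, and so on are made up for this example):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|¬H)(1 - P(H))]."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

# The coin example: prior 0.3, heads expected 80% of the time if weighted, 50% if fair.
print(posterior(0.3, 0.8, 0.5))  # ~0.407
```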

How the calculator works: With the equation P(H|E) = [P(E|H) × P(H)] / [P(E|H) × P(H) + P(E|¬H) × (1-P(H))], the calculator needs only three distinct inputs in the calculation of the posterior probability: It requires the prior probability P(H), the expected probability for the evidence if the hypothesis is true P(E|H), and the expected probability for the evidence if the hypothesis is false P(E|¬H). This calculator takes those inputs, performs the calculation, and displays the resulting posterior probability in the Results section. In cases where more than one evidence section is used, the result from calculating the first section becomes the new prior probability for calculating the next section, and so on. In such cases, however, it is important that each piece of evidence is independent, as explained below. (Note that this particular calculator does not need or accept a value to input for P(E), but rather it determines that value based on the inputs of P(H), P(E|H), and P(E|¬H). Note also that this calculator is only for comparing the probabilities of whether a hypothesis is true or false; it was not designed to calculate a probability distribution of a range of possibilities or accept a prior probability distribution.)
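To sketch how multiple evidence sections chain together (again illustrative Python under the conditional-independence assumption discussed below, not the calculator's own code):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

def update_sequentially(prior, evidence_sections):
    """Each section's posterior becomes the prior for the next section."""
    p = prior
    for p_e_given_h, p_e_given_not_h in evidence_sections:
        p = posterior(p, p_e_given_h, p_e_given_not_h)
    return p

# Three coin flips that all land on heads, starting from the 0.3 prior.
print(update_sequentially(0.3, [(0.8, 0.5)] * 3))  # ~0.637
```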

In addition to providing a posterior probability, this calculator also displays the Bayes factor, which is the ratio of the expectation for the evidence assuming the hypothesis is true to the expectation for the evidence assuming the hypothesis is false. The Bayes factor can be regarded as an indicator of the strength of the evidence itself, regardless of the prior probability, with values greater than 1 indicating evidence in favor of the hypothesis and values between 0 and 1 indicating evidence against it.
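For example, the Bayes factor for a single flip in the coin example would be computed like this (an illustrative sketch):

```python
def bayes_factor(p_e_given_h, p_e_given_not_h):
    """Strength of a piece of evidence, independent of the prior."""
    return p_e_given_h / p_e_given_not_h

print(bayes_factor(0.8, 0.5))  # 1.6: a heads flip mildly favors the weighted-coin hypothesis
print(bayes_factor(0.2, 0.5))  # 0.4: a tails flip is evidence against it
```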

Choosing inputs: This calculator can be used for any hypothesis whose probability can be updated based on evidence (events, observations, or tests), but not all hypotheses can be assessed with the same objectivity. Some hypotheses (such as a coin being fair rather than weighted, or a patient having a medical condition given some test results) have well-informed statistics available, and the calculator will in turn provide a reliable probability. Other hypotheses, on the other hand, may be subject to more subjective and speculative input, like the probability that extraterrestrials have visited Earth based on anecdotal accounts, or the probability that you got the job based on your interview performance.

Even when dealing with more subjective cases, Bayes' theorem can still be used. It is normal and often necessary to work with credences about subjective matters with limited objective data, and these credences can be revised and improved by factoring in new evidence as it becomes available. Keep in mind, however, that the calculator will not provide perfectly reliable probabilities in such cases; the results will simply reflect your existing beliefs and the information you have to work with. Additionally, for such hypotheses, if you are inputting more extreme values like 0.9999 or 0.0001, make sure you have justification for your numbers and that you are not being driven by personal biases. Beware of cognitive biases such as confirmation bias, which can cause people to interpret information in a way that favors their existing beliefs. Too much biased input can produce very skewed results; a single decimal place can make the difference between your hypothesis being calculated as probably true and probably false. (Note especially that in the evidence sections, as opposed to the Prior Probability section, numbers very close to 0 affect the final results more than numbers very close to 1; more precisely, the larger the ratio between the two inputs in an evidence section, the more that section affects the results.) For best results in assessing a subjective or controversial hypothesis, take care to input reasonable and intellectually honest values into the calculation as best as possible.
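A quick sketch of that sensitivity, using made-up numbers with a fixed prior of 0.1:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

prior = 0.1

# Nudging an input that is already near 1 barely moves the result...
print(posterior(prior, 0.9, 0.5))    # ~0.17
print(posterior(prior, 0.99, 0.5))   # ~0.18

# ...but nudging an input near 0 (a much larger ratio between the two fields) moves it a lot.
print(posterior(prior, 0.5, 0.01))   # ~0.85
print(posterior(prior, 0.5, 0.001))  # ~0.98
```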

For each section, you can input a value between 0 and 1: 1 corresponds to 100%, 0.5 corresponds to 50%, 0.00001 corresponds to one chance in 100,000, and so on. (In the Prior Probability section the input must be more than 0 and less than 1, and in the evidence sections the inputs must be more than 0, because otherwise the results would be an absolute 100%, 0%, or undefined, and there would be no need to calculate probabilities.) The number may have as many decimal places as you like.
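If you wanted to check inputs against those constraints yourself, a rough sketch might look like this (hypothetical helper functions, not part of the calculator):

```python
def check_prior(p):
    # The prior must be strictly between 0 and 1.
    if not 0 < p < 1:
        raise ValueError("Prior probability must be greater than 0 and less than 1.")

def check_evidence(p_e_given_h, p_e_given_not_h):
    # Evidence expectations must be greater than 0 and no more than 1.
    if not (0 < p_e_given_h <= 1 and 0 < p_e_given_not_h <= 1):
        raise ValueError("Evidence expectations must be greater than 0 and at most 1.")
```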

In the Prior Probability section, enter a value for how likely you would expect the hypothesis to be true prior to considering the points of evidence calculated later. For example, if your hypothesis is that you picked a weighted coin from a bag, the prior probability would represent the proportion of weighted coins in the bag: if 3 out of 10 coins are weighted, the prior probability would be 0.3. The prior probability may reflect different levels of knowledge, depending on the situation. It may be based on reliable or on limited data, it may be based on a reasoned or a subjective judgment, or it may reflect total ignorance of any reason to judge the hypothesis as any more likely to be true than false. The final results should be read with the limitations or uncertainty of the prior probability in mind. (Still, even if there isn't enough of a basis for a confidently high or low prior credence in the hypothesis, the results will ultimately be informed more by the evidence that is factored in.)

For each of the remaining sections, there are two input boxes. The left input box is "Expectation If Hypothesis Is True," which recall is represented as P(E|H) in the equation. Here, enter a value between 0 and 1 (which equals 0% to 100%) to represent how likely you would expect the given evidence to be as it is assuming the hypothesis is true. For example, if you flipped the coin and it landed on heads, and there’s an 80% chance that a weighted coin would land on heads, enter 0.8 into this field. Entering a higher value in this field represents a higher likelihood that the hypothesis is true.

In the box on the right, "Expectation If Hypothesis Is False," P(E|¬H) in the equation, enter a value for how likely you would expect the given evidence to be as it is assuming the hypothesis is false. Continuing the coin example, if you didn’t pick a weighted coin, then it would have had a 50% chance of landing on heads, so enter 0.5 into this field. Entering a higher value in this field represents a lower likelihood that the hypothesis is true.

By default, the calculator displays one evidence section for calculating the effect of one event, test, or observation. If you would like to factor additional observations or pieces of evidence into the calculation, press the "Add a section" button at the bottom. You can use this to add extra sections and thereby, for this example, factor in the effect of additional coin flips to update the probability with the extra information. If you have added a section and you would rather not factor that particular item into the equation, simply leave both values at 0.5. Any section where the "Expectation If Hypothesis Is True" and the "Expectation If Hypothesis Is False" values are equal will not affect the results of the calculation.
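To see why equal values have no effect, note that they give a Bayes factor of 1, so the posterior stays equal to the prior (a quick check using the same illustrative posterior function sketched earlier):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

# Any section with equal expectations leaves the probability unchanged.
print(posterior(0.3, 0.5, 0.5))  # 0.3
print(posterior(0.3, 0.7, 0.7))  # 0.3 again
```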

Conditionally independent evidence (for multiple pieces of evidence): One important note here is that the calculation requires the evidence in each section to be conditionally independent given the hypothesis. That is, the expected likelihood of the evidence or observation being as it is, given that the hypothesis is true and given that it is false, should be unaffected by the other evidence considered in the calculation, so that the same evidence does not influence the results twice. Another way to think about what it means for the evidence variables to be conditionally independent: if you were to know only whether the hypothesis is true or false, and based on that you had some assigned expectation for pieces of evidence E1 and E2, then learning E1 should give you no additional knowledge about the expectation of E2.

To illustrate this with an example, let's say you wanted to calculate the probability of the hypothesis that a bathroom is empty (before knocking on the door). To do this, you may consider two pieces of evidence: E1, that you notice the door is ajar, and E2, that you hear no noises from inside the bathroom. Assuming your hypothesis that the bathroom is vacant is true, you might expect approximately a 60% chance of the door being ajar, P(E1|H) = 0.6, and a 99.99% chance of not hearing noises from inside, P(E2|H) = 0.9999. If it was instead occupied, you might expect approximately a 0.5% chance of the door being ajar, P(E1|¬H) = 0.005, and a 10% chance of not hearing noises from inside, P(E2|¬H) = 0.1.

Here, if you hypothetically knew for certain that the bathroom is indeed vacant, then you would know to expect the 99.99% chance of hearing no noises from inside (or, if you knew it was occupied, you would know there was the 10% chance of hearing no noises from inside), and then seeing that the door is ajar would tell you nothing more about how likely you are to hear no noises from inside, and vice versa. This shows that the variables used as evidence are conditionally independent, and this calculator can be used in this situation. If the prior probability that at any given time the bathroom is vacant is 90%, i.e. P(H) = 0.9, then factoring in the independent pieces of evidence E1 and E2 using the values assigned above for P(E1|H), P(E1|¬H), P(E2|H), and P(E2|¬H), we arrive at an approximately 99.9907% estimated probability that the bathroom is indeed vacant.

If, on the other hand, you wanted to use the observation that the door is ajar and the observation that the door is unlocked, those pieces of evidence would not be conditionally independent, because if you hypothetically knew that the bathroom was occupied, finding the door ajar would indicate that it is less likely to be locked than you would otherwise have expected. Instead, these two observations would need to be combined as a single point of evidence, namely that the door is both ajar and unlocked.
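Here is that bathroom calculation worked through as an illustrative sketch (the numbers are the ones assumed above, not real statistics):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

p = 0.9                        # prior: the bathroom is vacant
p = posterior(p, 0.6, 0.005)   # E1: the door is ajar
p = posterior(p, 0.9999, 0.1)  # E2: no noises from inside
print(p)                       # ~0.999907, i.e. about 99.9907%
```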

Note also that the decimal precision in the above example does not mean the result is accurate to that precision. Results are only as good as the inputs, and without more reliable statistics, such an example is only using estimates for its inputs. In such cases, trying both high-end and low-end estimates for the input fields can give a range of plausible results.
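For instance, a rough sketch of trying low- and high-end guesses for each field of the bathroom example (the ranges below are made up purely for illustration):

```python
from itertools import product

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

# Hypothetical low- and high-end guesses for each input of the bathroom example.
priors = (0.8, 0.95)
e1_if_true, e1_if_false = (0.4, 0.7), (0.002, 0.02)
e2_if_true, e2_if_false = (0.99, 0.9999), (0.05, 0.2)

results = [
    posterior(posterior(prior, a, b), c, d)
    for prior, a, b, c, d in product(priors, e1_if_true, e1_if_false, e2_if_true, e2_if_false)
]
print(min(results), max(results))  # a range of plausible results rather than one exact number
```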

Take care when deciding what the hypothesis is, what the evidence is, and what probabilities should be assigned to each field, so that the calculation works properly. It is easy to make mistakes in these calculations that lead to results that don't make sense: mixing up the inputs for the expectation assuming the hypothesis is true and the expectation assuming it is false, framing the hypothesis in terms of whether the evidence is true rather than in terms of the question that the evidence is about, or using two pieces of evidence that are not conditionally independent. If the results of the calculation don't look quite right, take your time to check for mistakes and to think carefully about how the calculation itself is framed.

Disclaimer: This calculator is not guaranteed to tell you perfect probabilities. Usable results depend on proper usage and inputs. By using this calculator, you accept all responsibility for the results of the calculation and any actions you take based on usage of this calculator. We disclaim all responsibility for any consequences, direct or indirect, arising from use of this page or any features thereof.

Prior probability

Before considering the evidence (i.e. the events, tests, or observations) below, what is the probability that the hypothesis is true? For example, if you are determining the probability that you picked an unfair coin out of a bag with 7 fair coins and 3 unfair coins, the prior probability would be 0.3 (which represents 30%).

Another example of applying Bayes' theorem is in analyzing cancer screening results. For such an example, if you are screening for a particular cancer where 1 out of 40 people in that demographic have that cancer, and you want to know the probability that a person has cancer based on cancer screening results, enter 0.025 (which represents 2.5%) for the prior probability.
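For instance, combining that 0.025 prior with a positive screening result might look like the sketch below. The 90% detection rate and 8% false-positive rate here are made-up placeholder numbers, not statistics for any actual test:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

prior = 0.025          # 1 in 40 people in the demographic have the cancer
sensitivity = 0.90     # placeholder: P(positive result | cancer)
false_positive = 0.08  # placeholder: P(positive result | no cancer)

print(posterior(prior, sensitivity, false_positive))  # ~0.22 with these placeholder numbers
```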

If your hypothesis is about a more subjective issue, like the probability that your house is haunted (to take a more out-there example), you may want to make your prior probability something like the product of how plausible you initially consider it that houses can be haunted at all (consider: does the idea seem plausible to you, and has any house been empirically demonstrated to be haunted?) and how frequently you would expect them to be haunted if they can be. Keep in mind that for hypotheses like this the results will be very subjective and speculative, and will only represent how likely you believe the hypothesis to be rather than an objective probability of how likely it is to be true.

In the number box below, enter a decimal number somewhere between 0 and 1 (which represents 0% to 100%), and this will be the prior probability—which is represented as P(H) in the equation. For example, 0.5 would represent a 50% initial expectation that the hypothesis is true, or 0.001 would represent a 0.1% initial expectation.

+ Add a section

Probability Results

Probability results should display in this area when the page finishes loading.

If they do not, please make sure you have JavaScript enabled.

If this error message is displayed even when using other browsers and devices, please let me know about it here so I can investigate.

Share Results
Copy Shareable Link

Disclaimer: This calculator is not guaranteed to provide perfect probabilities. Usable results depend on proper usage and inputs. By using this calculator, you accept all responsibility for the results of the calculation and any actions you take based on usage of this calculator. We disclaim all responsibility for any consequences, direct or indirect, arising from use of this page or any features thereof.

Support This Project: This Bayesian Probability calculator is made available for public use free of charge and without ads. If you find this calculator useful, please consider taking a moment and making a voluntary donation to help support my efforts in providing and maintaining free projects like this. Click here to donate to support this Bayesian Probability Calculator project on PayPal. Thank you!

Privacy & Cookies: The calculations performed with this tool run in your web browser using client-side JavaScript, so your inputs are not processed or saved on our servers. If you choose to send feedback using the Feedback form, the form will collect and send your name, email address, message, referring URL, current page URL, and anonymized IP address, for the purposes of addressing the provided feedback. This webpage may use a functionality cookie to remember your choice if you select the light color theme. Cookies are small pieces of information commonly placed in your web browser by websites, which can allow your browser to remember settings or help websites recognize your browser. This webpage uses Clicky Web Analytics, which records basic information including referring URLs, pages visited, anonymized IP addresses, and browser details, as part of its standard web analytics functionality. You may choose to opt out of Clicky Analytics online tracking. For more information, refer to the Clicky Analytics Privacy Policy. This webpage offers a link to support this project by giving a donation via the PayPal platform. Donations made via that platform are subject to PayPal's privacy policy.