## Squid Game conditional probabilities

Earlier we analysed the probabilities for the bridge-crossing scenario in the Squid Game episode “VIPs”, which has “deadly high stakes” according to the Netflix blurb for the series. 🙂 So far, we made the assumption of no foreknowledge. This means our results for the players’ progress describe their chances as they stand before the game begins. Equivalently, if the game has started, our results assume the analyst knows nothing about prior contestants, and cannot view the state of the bridge.

But now, suppose we are told only that a specific player numbered i died on step number n. (That is, they stood safely on column n – 1, but chose wrongly amongst the next pair of glass panels on column n, breaking a pane and plummeting downwards.) Then the next player is definitely safe on step n, but has no information about later steps, so the game is essentially reset from that point. Hence the “conditional probability” that player I > i is still alive on step N > n is simply:

$$P(\text{player } I \text{ alive on step } N \mid \text{player } i \text{ died on step } n) = P_\text{alive}(I-i,\ N-n).$$

Recall $P_\text{alive}(i', n')$ is the chance player i′ is alive on step n′ (given no information nor conditions). We labelled $P_\text{died}(i', n')$ as the chance they died on step n′ specifically, so analogously:

$$P(\text{player } I \text{ dies on step } N \mid \text{player } i \text{ died on step } n) = P_\text{died}(I-i,\ N-n) = \binom{N-n-1}{I-i-1}\,2^{-(N-n)}.$$

Now, suppose we are told only that a specific player I will die on step N. What is the probability for an earlier player’s progress? Bayes’ theorem says that given two events A and B, the conditional probabilities are related by $P(A \mid B) = P(B \mid A)\,P(A)/P(B)$, which in our case is:

$$P(i \text{ dies on } n \mid I \text{ dies on } N) = \frac{P_\text{died}(I-i,\ N-n)\,P_\text{died}(i, n)}{P_\text{died}(I, N)} = \frac{\binom{n-1}{i-1}\binom{N-n-1}{I-i-1}}{\binom{N-1}{I-1}}.$$

The powers of 2 cancelled. The Table below shows some example numbers, for player I = 5 dying on step N = 8.

| player \ step | n = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| i = 1 | 4/7 | 2/7 | 4/35 | 1/35 | 0 | 0 | 0 | 0 |
| 2 | 0 | 2/7 | 12/35 | 9/35 | 4/35 | 0 | 0 | 0 |
| 3 | 0 | 0 | 4/35 | 9/35 | 12/35 | 2/7 | 0 | 0 |
| 4 | 0 | 0 | 0 | 1/35 | 4/35 | 2/7 | 4/7 | 0 |
| 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
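These entries are straightforward to reproduce exactly. Below is a minimal sketch of mine (the function name `cond_death` and the zero guard outside the diamond of possible index pairs are my conventions), using Python's exact fractions:

```python
from fractions import Fraction
from math import comb

def cond_death(i, n, I, N):
    """Chance player i dies on step n, given that player I dies on step N.
    Implements binom(n-1, i-1) * binom(N-n-1, I-i-1) / binom(N-1, I-1),
    returning 0 outside the diamond of possible (i, n) pairs."""
    if min(i - 1, n - i, I - i - 1, N - n - I + i) < 0:
        return Fraction(0)
    return Fraction(comb(n - 1, i - 1) * comb(N - n - 1, I - i - 1),
                    comb(N - 1, I - 1))

I, N = 5, 8
for i in range(1, I):
    print([cond_death(i, n, I, N) for n in range(1, N)])
```

Running this prints the first four rows of the Table above, as exact fractions.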

In general, on any given row (fixed player i) the entries are nonzero only for n between i and N – I + i inclusive. This forms a diamond shape. For the row sum, computer algebra returns a hypergeometric function times two binomial coefficients, which appears to simplify to 1 (for integer parameters) as expected, since player i must die somewhere. On any given column the sum is (I – 1)/(N – 1), which is independent of n, meaning each step has equal chance that some player will die there. In particular the first entry itself takes this value: $\binom{N-2}{I-2}\big/\binom{N-1}{I-1} = (I-1)/(N-1)$.
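Both sum rules can be verified exactly for the Table's parameters. A small sketch of mine, reusing the same conditional formula:

```python
from fractions import Fraction
from math import comb

def cond_death(i, n, I, N):
    # binom(n-1, i-1) binom(N-n-1, I-i-1) / binom(N-1, I-1), zero off the diamond
    if min(i - 1, n - i, I - i - 1, N - n - I + i) < 0:
        return Fraction(0)
    return Fraction(comb(n - 1, i - 1) * comb(N - n - 1, I - i - 1),
                    comb(N - 1, I - 1))

I, N = 5, 8
# each earlier player must die somewhere: rows sum to 1
row_sums = [sum(cond_death(i, n, I, N) for n in range(1, N)) for i in range(1, I)]
# each earlier step is equally likely to claim someone: columns sum to (I-1)/(N-1)
col_sums = [sum(cond_death(i, n, I, N) for i in range(1, I)) for n in range(1, N)]
print(row_sums)
print(col_sums)
```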

We examine other properties and special cases. By construction the last row and column are zeroes, apart from the single entry 1 at (i, n) = (I, N); our general formula does not apply for n = N. If we are told that the second player I = 2 died on step N, then player i = 1 has an equal chance 1/(N – 1) of dying on any earlier step. Also from the definition it is clear:

$$P(i \text{ dies on } n \mid I \text{ dies on } N) = P(I-i \text{ dies on } N-n \mid I \text{ dies on } N),$$

so the table is symmetric about its central point. The ratio of adjacent entries follows from the binomial coefficients:

$$\frac{P(i+1 \text{ on } n)}{P(i \text{ on } n)} = \frac{(n-i)\,(I-i-1)}{i\,(N-n-I+i+1)}, \qquad \frac{P(i \text{ on } n+1)}{P(i \text{ on } n)} = \frac{n\,(N-n-I+i)}{(n-i+1)\,(N-n-1)}.$$

It follows that at step n, player i = (I – 1)n/N and the subsequent player i + 1 have the same “fail” chance. Presumably the maximum over players lies between these two indices. Physically we require the indices i and n to be integers. For the chosen Table parameters above, the relation just given is simply i = n/2, so every second column contains an adjacent pair of equal values. For the steps (columns) on the other hand, on n = (i – 1)(N – 1)/(I – 2) and the following step the “elimination” chance is equal. Note these special index values are linear functions of the other index (n or i respectively), where we regard I and N as fixed.
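For the Table's parameters this pairing along columns is easy to spot-check (my own sketch; `cond_death` implements the conditional formula above):

```python
from fractions import Fraction
from math import comb

def cond_death(i, n, I, N):
    if min(i - 1, n - i, I - i - 1, N - n - I + i) < 0:
        return Fraction(0)
    return Fraction(comb(n - 1, i - 1) * comb(N - n - 1, I - i - 1),
                    comb(N - 1, I - 1))

I, N = 5, 8
# i = (I-1) n / N reduces to i = n/2 here, so the even columns pair up:
for n in (2, 4, 6):
    i = (I - 1) * n // N
    print(n, cond_death(i, n, I, N), cond_death(i + 1, n, I, N))
```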

By rearranging terms we can write equivalent expressions for the chance to be eliminated, such as:

$$\frac{\binom{n-1}{i-1}\binom{N-n-1}{I-i-1}}{\binom{N-1}{I-1}} = \frac{(n-1)!\,(N-n-1)!\,(I-1)!\,(N-I)!}{(i-1)!\,(n-i)!\,(I-i-1)!\,(N-n-I+i)!\,(N-1)!}.$$

For suitably large parameters, the probability resembles a gaussian curve. We can apply the de Moivre–Laplace approximation (with parameter p := ½ say) to each binomial coefficient. This gives a gaussian for a fixed step number n, as a function of the player number i. I omit the height, but its centre and width are determined from the exponent, which is:

$$-\frac{\left(i - 1 - \frac{n-1}{2}\right)^2}{(n-1)/2} \;-\; \frac{\left(I - i - 1 - \frac{N-n-1}{2}\right)^2}{(N-n-1)/2}.$$

The spread is maximum at n = N/2, in this approximation. Now to obtain a gaussian approximation for a fixed player i, apply the results of the previous blog post using the substitutions $x = n - 1$, $a = i - 1$, $b = I - i - 1$, and $X = N - 2$. The centre is $n = (i-1)(N-1)/(I-2)$. One option for the height of the gaussians — when looking for a simple expression — is to use the sums 1 and (I – 1)/(N – 1) determined before. Recall for a normalised gaussian, the height is inversely proportional to the standard deviation.
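As a numeric illustration of the fixed-step gaussian (my own check, with larger parameters I = 40, N = 100 chosen arbitrarily), the de Moivre–Laplace exponent picks out the same most likely victim as the exact formula:

```python
from math import comb

I, N, n = 40, 100, 50          # sample parameters, my choice
players = range(1, I)

# exact (unnormalised) column profile; the powers of 2 already cancelled
exact = {i: comb(n - 1, i - 1) * comb(N - n - 1, I - i - 1) for i in players}

def exponent(i):
    # sum of the two de Moivre-Laplace exponents, with p = 1/2
    return (-(i - 1 - (n - 1) / 2) ** 2 / ((n - 1) / 2)
            - (I - i - 1 - (N - n - 1) / 2) ** 2 / ((N - n - 1) / 2))

best_exact = max(players, key=lambda i: exact[i])
best_gauss = max(players, key=exponent)
print(best_exact, best_gauss)
```

Both criteria locate the same player, as the gaussian picture suggests.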

There are other conditional probability questions one could pose. Suppose we are given a window, bounded by the events that player J died on column L, and later player K died on column M? Inside this window, the probabilities reduce to our above analysis: the chance i dies on n is just the earlier conditional probability with I → K – J and N → M – L, where we also substitute i → i – J and n → n – L. As another possible scenario to analyse, we might be informed that player I is alive on step N. Then we would not know how far they progressed, just that it was at least that far. Or, we might be told player I died on or before step N.
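The conditional table can also be confirmed by brute-force simulation. A sketch of mine follows; it assumes each fresh panel is a fair 50/50 guess and that a fall reveals that panel to everyone behind, so the k-th wrongly guessed panel kills player k:

```python
import random

random.seed(1)
I, N = 5, 8
target = 12 / 35                 # table entry: player 2 dies on step 3
hits = total = 0
for _ in range(400_000):
    # steps whose panels are guessed wrongly by whoever reaches them first
    deaths = [step for step in range(1, N + 1) if random.random() < 0.5]
    if len(deaths) >= I and deaths[I - 1] == N:   # condition: player 5 dies on step 8
        total += 1
        hits += deaths[1] == 3                    # event: player 2 died on step 3
estimate = hits / total
print(estimate, target)
```

The estimate lands close to 12/35 ≈ 0.343, within Monte Carlo noise.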

A concluding thought: Bayes’ theorem is deceptively simple-looking. Beforehand I tried harder approaches, puzzling through the subtleties of conditional probability on my own. But with Bayes, the main result followed easily from our previous work.


## Gaussian approximation to a certain product of binomial coefficients

Consider the following function, which is the product of a certain pair of binomial coefficients:

$$f(x) := \binom{x}{a}\binom{X - x}{b}.$$

We take $a, b, X \gg 1$ to be constants, and x to have domain [a – 1, X – b + 1], which implies X ≥ a + b – 2 at least. As usual $\binom{n}{k} := \frac{n!}{k!\,(n-k)!}$, and this is extended beyond integer values by replacing each factorial with a Gamma function. Note the independent variable x appears in the upper entries of the binomial coefficients. Curiously, from inspection f is well-approximated by a gaussian curve. To gain some insight, for integer values of the parameters f is the polynomial:

$$f(x) = \frac{x(x-1)\cdots(x-a+1)}{a!} \cdot \frac{(X-x)(X-x-1)\cdots(X-x-b+1)}{b!}.$$

This has many zeroes, and sometimes oscillates wildly in between them, hence the domain of x specified earlier.

Now the usual approximations to a single binomial coefficient (actually, binomial distribution) are not helpful here. For example the de Moivre–Laplace approximation is a gaussian in terms of the lower entry in the binomial coefficient, whereas our x is in the upper entries. More promising is the approximation as a Poisson distribution, which leads to a polynomial which is itself gaussian-like, and motivated the previous post incidentally. However we proceed from first principles, by estimating the centre point and the second derivative there.

At the (central) maximum of f, the slope is zero. In general the derivative is $f'(x) = f(x)\left(H_x - H_{x-a} - H_{X-x} + H_{X-x-b}\right)$, where the H’s are called harmonic numbers. There may not exist any simple explicit expression for the turning points. Instead, the ratio of nearby points is comparatively simple:

$$\frac{f(x+1)}{f(x)} = \frac{(x+1)\,(X-x-b)}{(x+1-a)\,(X-x)},$$

using the properties of the binomial coefficient. The derivative is approximately zero where this ratio is unity, which occurs at:

$$x_0 = \frac{aX - b}{a + b}.$$

This should be a close estimate for the central turning point. [To do better, substitute specific numbers for the parameters, and solve numerically.] It is typically not an integer. Our sought-for gaussian has form $g(x) := C\,e^{-(x - x_0)^2/2\sigma^2}$. We set the height $C := f(x_0)$. Only the width remains to be determined. The gaussian’s second derivative evaluated at its centre point is $g''(x_0) = -C/\sigma^2$. On the other hand:

$$f''(x) = \frac{f'(x)^2}{f(x)} + f(x)\left(H^{(2)}_{x-a} - H^{(2)}_{x} + H^{(2)}_{X-x-b} - H^{(2)}_{X-x}\right),$$

which uses the so-called harmonic numbers of order 2, and I incorporate the function and its derivative (both given earlier) for brevity of the expression.
which uses the so-called harmonic numbers of order 2, and I incorporate the function and its derivative (both given earlier) for brevity of the expression. Matching the results at yields the variance parameter :

using $f'(x_0) \approx 0$. (At large values the series $H^{(2)}_z = \pi^2/6 - 1/z + O(z^{-2})$ may give insight into the above.) But alternatively, we can approximate the second derivative using elementary operations. By sampling the function at $x_0$, $x_0 + 1$, and $x_0 - 1$ say, a “finite differences” approach gives approximate derivatives. We can use the simple ratio formula obtained earlier to reduce the sampling to one or two points only, which might gain some insight along the way (though I currently wonder if this is a dead end…).
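As a quick numeric check of the centre estimate (my own sketch, with sample parameters a = 8, b = 12, X = 60 chosen arbitrarily), a grid search confirms the true maximum sits within one unit of the ratio-unity point:

```python
from math import lgamma

def log_binom(n, k):
    # log of the binomial coefficient via Gamma functions (valid for real n)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

a, b, X = 8, 12, 60                  # my sample parameters
log_f = lambda x: log_binom(x, a) + log_binom(X - x, b)

ratio_unity = (a * X - b) / (a + b)  # where f(x+1)/f(x) = 1
grid = [a + k / 1000 for k in range(1, (X - b - a) * 1000)]
numeric_max = max(grid, key=log_f)
print(ratio_unity, numeric_max)
```

Since f(x₀ + 1) = f(x₀) and ln f is concave here, the true maximum lies strictly between x₀ and x₀ + 1, which is what the grid search finds.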

Now $f'(x_0 + \frac{1}{2}) \approx f(x_0 + 1) - f(x_0)$, which becomes:

$$f'(x_0 + \tfrac{1}{2}) \approx C\left(\frac{(x_0+1)\,(X-x_0-b)}{(x_0+1-a)\,(X-x_0)} - 1\right),$$

after using the ratio formula to obtain $f(x_0+1)$ in terms of C. Similarly it turns out $f'(x_0 - \frac{1}{2})$ is the negative of the above expression, but with a and b interchanged. Then a second derivative is: $f''(x_0) \approx f'(x_0 + \frac{1}{2}) - f'(x_0 - \frac{1}{2})$, but the combined expression does not simplify further so I won’t write it out. The last step is to set $\sigma^2 := -C/f''(x_0)$, which is different to the earlier choice.
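Here is a numeric sketch of the finite-difference recipe (my own, with the same arbitrary sample parameters a = 8, b = 12, X = 60): sample f at x₀ and x₀ ± 1, estimate the second derivative, and see how well the resulting gaussian tracks f near the peak:

```python
from math import lgamma, exp, sqrt

a, b, X = 8, 12, 60                      # my sample parameters

def f(x):
    lb = lambda n, k: lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(lb(x, a) + lb(X - x, b))

x0 = (a * X - b) / (a + b)               # centre estimate from the ratio formula
C = f(x0)                                # gaussian height
second = f(x0 + 1) - 2 * C + f(x0 - 1)   # f'(x0 + 1/2) - f'(x0 - 1/2)
sigma2 = -C / second                     # match with g''(x0) = -C / sigma^2
gauss = lambda x: C * exp(-(x - x0) ** 2 / (2 * sigma2))

# relative deviation at one standard deviation from the centre
s = sqrt(sigma2)
rel_err = abs(f(x0 + s) - gauss(x0 + s)) / C
print(sigma2, rel_err)
```

For these parameters the deviation at one standard deviation is only a few percent of the peak height.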

A slightly different approach uses $f''(x_0) \approx 2f'(x_0 + \frac{1}{2})$, since $f'(x_0) \approx 0$, which may be expressed in terms of another sampled point $f(x_0 + 1)$. Similarly $f''(x_0) \approx -2f'(x_0 - \frac{1}{2})$. The estimate for the second derivative follows, then later:

$$\sigma^2 \approx \frac{C}{2\left(C - f(x_0 + 1)\right)}.$$

The expression is a little simpler in this approach, but at the cost of a second sample point. The use of $f'(x_0 - \frac{1}{2})$ and $f(x_0 - 1)$ instead leads to the same result.

## Gaussian approximation to a certain polynomial

Consider the function:

$$f(x) := x^a\,(X - x)^b,$$

where the independent variable x ranges between 0 and X, and the exponents are large: $a, b \gg 1$. [We could call it a “polynomial”, though the exponents need not be integers. Specifically it is the product of “monomials” in x and X – x, so might possibly be called a “sparse” polynomial in this sense.] Surprisingly, it closely resembles a gaussian curve, over our specified domain $0 \le x \le X$.

The turning point is where the derivative equals zero. This occurs when x is the surprisingly simple expression:

$$x_0 = \frac{aX}{a + b},$$

at which the function has value:

$$f(x_0) = \frac{a^a\, b^b\, X^{a+b}}{(a+b)^{a+b}}.$$

An arbitrary gaussian, not necessarily normalised, has form: $C\,e^{-(x - D)^2/2\sigma^2}$. This has centre D which we equate with $aX/(a+b)$, and maximum height C which we set to the above expression. We can fix the final parameter, the standard deviation, by matching the second derivatives at the turning point. Hence the variance is:

$$\sigma^2 = \frac{ab\,X^2}{(a+b)^3}.$$

Hence our gaussian approximation may be expressed:

$$f(x) \approx \frac{a^a\, b^b\, X^{a+b}}{(a+b)^{a+b}}\, \exp\!\left(-\frac{(a+b)^3}{2ab\,X^2}\left(x - \frac{aX}{a+b}\right)^{\!2}\right).$$
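A short numeric comparison (my own sketch, with sample exponents a = 20, b = 30 and X = 1 chosen arbitrarily) shows how closely this gaussian tracks the curve around the peak:

```python
from math import exp

a, b, X = 20, 30, 1.0                # my sample exponents and domain
x0 = a * X / (a + b)                 # turning point aX/(a+b)
height = x0 ** a * (X - x0) ** b     # peak value a^a b^b X^(a+b) / (a+b)^(a+b)
sigma2 = a * b * X ** 2 / (a + b) ** 3

f = lambda x: x ** a * (X - x) ** b
g = lambda x: height * exp(-(x - x0) ** 2 / (2 * sigma2))

# worst absolute deviation, relative to the peak, across the middle of the domain
points = [0.30, 0.35, 0.40, 0.45, 0.50]
worst = max(abs(f(x) - g(x)) / height for x in points)
print(worst)
```

For these exponents the two curves agree to within a few percent of the peak height over the sampled region.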

The integral of the original curve turns out to be:

$$\int_0^X x^a (X - x)^b\, dx = \frac{X^{a+b+1}}{(a+b+1)\binom{a+b}{a}}.$$

This uses the binomial coefficient $\binom{a+b}{a} = \frac{(a+b)!}{a!\,b!}$, which is extended to non-integer values by replacing the factorials with Gamma functions. We could then apply Stirling’s approximation to each factorial, to obtain:

$$\int_0^X x^a (X - x)^b\, dx \approx \frac{a^a\, b^b\, X^{a+b+1}}{(a+b)^{a+b}\,(a+b+1)} \sqrt{\frac{2\pi ab}{a+b}},$$

though this is more messy to write out. On the other hand, the integral of the gaussian approximation is:

$$\int_{-\infty}^{\infty} C\, e^{-(x - D)^2/2\sigma^2}\, dx = C\sigma\sqrt{2\pi} = \frac{a^a\, b^b\, X^{a+b+1}}{(a+b)^{a+b}} \sqrt{\frac{2\pi ab}{(a+b)^3}}.$$

We evaluated this integral over all real numbers, because the expression is simpler and still approximately the same. The ratio of the above two expressions is $(a+b)/(a+b+1)$, which indeed approaches 1 for large exponents.
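The ratio can be confirmed numerically (my sketch again, with arbitrary sample exponents); the exact integral uses the Beta-function identity $\int_0^X x^a (X-x)^b\, dx = a!\,b!\,X^{a+b+1}/(a+b+1)!$:

```python
from math import factorial, sqrt, pi

a, b, X = 20, 30, 1.0                # my sample parameters

# exact integral: a! b! X^(a+b+1) / (a+b+1)!
exact = factorial(a) * factorial(b) * X ** (a + b + 1) / factorial(a + b + 1)

# gaussian integral over the whole real line: height * sigma * sqrt(2 pi)
x0 = a * X / (a + b)
height = x0 ** a * (X - x0) ** b
sigma2 = a * b * X ** 2 / (a + b) ** 3
gauss = height * sqrt(2 * pi * sigma2)

ratio = exact / gauss
print(ratio, (a + b) / (a + b + 1))
```

The numeric ratio sits close to (a + b)/(a + b + 1) = 50/51, with the small remaining discrepancy attributable to the finite-size corrections in Stirling's approximation.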