More Squid Game probabilities

Difficulty:   ★★★☆☆   undergraduate

Last time I analysed the bridge-crossing scenario in the series Squid Game. In this fictional challenge, called “Glass Stepping Stones”, the front contestant must leap forward along glass panels, choosing left or right each time, knowing only that one side is strengthened glass while the other will shatter. Later players, at least, may learn from the choices of their forerunners. Here I use combinatorial arguments, derive a recurrence relation for the chance to die on a given step, and obtain an analytic solution with a hypergeometric function.

Again, write a_{i,n} or equivalently P(i,n) for the probability the ith player is still alive on the nth step. We showed these probabilities satisfy the recurrence relation a_{i,n} = \frac{1}{2}(a_{i-1,n-1}+a_{i,n-1}), along with initial conditions a_{1,n} = 1/2^n, and a_{i,1} = 1 for all players after the first. Equivalently, we can start from a_{0,n} := 0, and a_{i,0} := 1 for i \ge 1. This is a bit like Pascal’s triangle. Rather than adding the previous two terms, we take their average — which of course is the sum divided by two. And rather than 1’s at the sides, we have 0’s and 1’s.
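This averaging is easy to tabulate exactly; a minimal Python sketch (the use of exact fractions and the function name are my own choices):

```python
from fractions import Fraction

def survival_table(players, steps):
    """a[i][n] = P(player i alive on step n), from the recurrence
    a_{i,n} = (a_{i-1,n-1} + a_{i,n-1}) / 2 with a_{0,n} = 0, a_{i,0} = 1."""
    a = [[Fraction(0)] * (steps + 1) for _ in range(players + 1)]
    for i in range(1, players + 1):
        a[i][0] = Fraction(1)    # "step 0" is the safe starting ledge
    for n in range(1, steps + 1):
        for i in range(1, players + 1):
            a[i][n] = (a[i - 1][n - 1] + a[i][n - 1]) / 2
    return a

a = survival_table(6, 8)
print(a[1][3], a[2][4], a[3][5])  # 1/8, 5/16, 1/2
```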

Let’s write b_{i,n} for the likelihood the ith player will die upon landing on the nth step. Then b_{i,n} = a_{i,n-1} - a_{i,n}. These values satisfy the same recurrence relation as before:

    \[\begin{aligned} b_{i,n} &=a_{i,n-1} - a_{i,n} \\ &= \frac{1}{2}(a_{i-1,n-2}+a_{i,n-2}-a_{i,n-1}-a_{i-1,n-1}) \\ &= \frac{b_{i,n-1}+b_{i-1,n-1}}{2}.\end{aligned}\]

Only the initial conditions are different: b_{1,n} = 1/2^n, and b_{i,1} = 0 for all players after the first. It is aesthetic to begin a step earlier: b_{i,0} := 0 =: b_{0,n}, except for b_{0,0} = 1. The Table below shows a few early entries.

Table: Probability player i will die on step n itself, given no foreknowledge

  player \ step    n = 1    2      3      4      5      6      7       8
  i = 1            1/2      1/4    1/8    1/16   1/32   1/64   1/128   1/256
  i = 2            0        1/4    1/4    3/16   1/8    5/64   3/64    7/256
  i = 3            0        0      1/8    3/16   3/16   5/32   15/128  21/256
  i = 4            0        0      0      1/16   1/8    5/32   5/32    35/256
  i = 5            0        0      0      0      1/32   5/64   15/128  35/256
  i = 6            0        0      0      0      0      1/64   3/64    21/256

Alternatively, there are elegant combinatorial arguments, for which I was initially inspired by another blog. For player i to die on step n, the previous i – 1 players must have died somewhere amongst the n – 1 prior steps. There are “n – 1 choose i – 1” ways to arrange these missteps, out of 2^{n-1} equally likely combinations. Given any such arrangement, the next player has a 50% chance their following leap is a misstep, hence:

    \[b_{i,n} := P(i\textrm{ dies on }n) = \binom{n-1}{i-1}2^{-n}.\]

(I originally found this simple formula in a much more roundabout way, as often happens!) If i > n, the probability is zero. By similar reasoning, the chance that precisely i players have died by step n (inclusive) is:

    \[P(i\textrm{ players died by }n) = \binom{n}{i}2^{-n}.\]

A draft paper (Henle+ 2021) gives this result. It may also be obtained by summing the previous formula over the step on which player i died: \sum_{n' = i}^n b_{i,n'}/2^{n-n'}. Note if player i died on step n′, the next player must make n – n′ correct guesses in a row, so that no-one else dies by the nth step.
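Both the closed form and this resummation are easy to verify with exact arithmetic; a Python sketch (helper names are mine):

```python
from fractions import Fraction
from math import comb

def b(i, n):
    # closed form: chance player i dies on step n exactly
    return Fraction(comb(n - 1, i - 1), 2**n)

# the recurrence b_{i,n} = (b_{i,n-1} + b_{i-1,n-1}) / 2 holds...
assert all(b(i, n) == (b(i, n - 1) + b(i - 1, n - 1)) / 2
           for i in range(2, 7) for n in range(2, 9))

# ...and summing over the step n' where player i died, weighted by the
# next player's n - n' correct guesses, gives the cumulative formula
assert all(sum(b(i, np) / 2**(n - np) for np in range(i, n + 1))
           == Fraction(comb(n, i), 2**n)
           for i in range(1, 7) for n in range(i, 9))
print("identities verified")
```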

Now the probability the ith player is alive at the nth step or further is the probability that some number of previous players, anywhere from 0 to i – 1, died by step n or before. (What is ruled out is i or more dying by this stage.) This is just a sum over the previous displayed formula: \sum_{i'=0}^{i-1}P(i'\textrm{ players died by }n), which computer algebra simplifies to:

    \[P(i,n) = 1-\binom{n}{i}2^{-n}\cdot{_2F_1}(1,i-n,i+1;-1).\]

Here _2F_1 is the “(ordinary) hypergeometric function”. I gave a limited table of these probability values in the previous blog. For fixed integer i \ge 1, the entire expression reduces to a polynomial in n of degree i – 1 with rational coefficients, all times 2^{-n}. For example the likelihood the 5th player is alive at step n is:

    \[P(5,n) = 2^{-n}\frac{1}{24}\big( n^4-2n^3+11n^2+14n+24 \big).\]
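This particular case is easy to confirm against the cumulative binomial sum; a minimal Python sketch (function names are my own):

```python
from fractions import Fraction
from math import comb

def p5_poly(n):
    # the claimed closed form for the 5th player, in exact rationals
    return Fraction(n**4 - 2*n**3 + 11*n**2 + 14*n + 24, 24 * 2**n)

def p5_sum(n):
    # cumulative binomial form: fewer than 5 deaths have occurred by step n
    return Fraction(sum(comb(n, k) for k in range(5)), 2**n)

assert all(p5_poly(n) == p5_sum(n) for n in range(30))
print(p5_poly(5))  # 31/32, matching the diagonal value 1 - 2^{-5}
```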

In general, for n < i we have a_{i,n} = 1, so players get some steps for free. The diagonal terms are a_{i,i} = 1-2^{-i}. This makes sense: for player i to be dead by the ith step, every one of the i leaps onto untested panels so far must have been a misstep, which has probability 2^{-i}. Some results like these may also be shown using induction and the recurrence relation. I give more special cases in the next blog post. Yet the general formula works even for non-integer parameters, though this is not physical, as the Figure below shows. For negative parameters (not shown) it has a rich structure, with singularities, and some probability values negative or exceeding 1.
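Numerically, the hypergeometric form can be checked against the plain cumulative sum; a sketch assuming the mpmath library is available (function names are mine):

```python
from math import comb
from mpmath import hyp2f1, mpf

def p_alive_sum(i, n):
    # direct cumulative sum: fewer than i deaths by step n
    return sum(comb(n, k) for k in range(i)) / 2.0**n

def p_alive_hyp(i, n):
    # closed form with the ordinary hypergeometric function
    return float(1 - comb(n, i) * mpf(2)**(-n) * hyp2f1(1, i - n, i + 1, -1))

for i in range(1, 10):
    for n in range(1, 19):
        assert abs(p_alive_hyp(i, n) - p_alive_sum(i, n)) < 1e-12

print(p_alive_sum(9, 18))  # a_{9,18} = 53381/131072, approximately 0.407
```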

probabilities plot
Figure: Probability that player i is alive at step n. The blue dots are for integer values of the parameters, which are physical. The probability decreases with step number. Visually, it is as if contestants start from the plateau at top-left, then slide down a hill of death 🙁

An alternative derivation of the probabilities is based on where the previous player died (if at all). If that player i – 1 died on step n′ < n, their follower must make nn′ correct guesses in a row to reach step n safely. Now sum the result from n′ = i – 1, which is the earliest step upon which they may conceivably die, up to n′ = n – 1. Add to this the chance the player was still alive at step n – 1 (which is one minus the sum of chances they died on step n′) as this guarantees the following player i is alive at n. Numerical testing shows the result is indeed equivalent. Hence rather than summing over players for a fixed step, one may instead sum over steps for a fixed player.
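A hedged Python sketch of this numerical test, using the closed forms derived above (helper names are mine):

```python
from fractions import Fraction
from math import comb

def a(i, n):
    # survival probability: fewer than i deaths by step n
    return Fraction(sum(comb(n, k) for k in range(i)), 2**n)

def b(i, n):
    # chance player i dies on step n exactly
    return Fraction(comb(n - 1, i - 1), 2**n)

def a_via_steps(i, n):
    # sum over the step n' where player i-1 died (no earlier than step
    # i-1), plus the case where they were still alive at step n-1
    died = sum(b(i - 1, np) / 2**(n - np) for np in range(i - 1, n))
    return a(i - 1, n - 1) + died

assert all(a_via_steps(i, n) == a(i, n)
           for i in range(2, 8) for n in range(1, 12))
print("sum over steps agrees with sum over players")
```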

🡐 recurrence relation | Squid Game bridge | asymptotics 🠦

Squid Game bridge probabilities

Difficulty:   ★★★☆☆   undergraduate

In the popular Korean series Squid Game, one episode features a bridge-crossing game, whose probabilities are a fun challenge to calculate. (Warning: partial spoilers ahead.) In this cruel fictional scenario, called “Glass Stepping Stones” in the English subtitles, glass panels are suspended above a long fall. Contestants must leap between them. At each step the leading player is forced to choose left or right, knowing only that one panel is made of ordinary glass which will shatter, and the other is strengthened (“tempered”) glass which will hold. Later contestants cross the same bridge, and watch all previous attempts, so can learn the successes and failures.

The odds are simple for the first player. On each leap forward, there is a 50% chance they will fall to their death. Hence the chance of surviving N steps is 1/2^N, an exponential decrease. In the show (~30 minute mark), one player actually calculates this: 15 untested steps remain ahead of him, for a horrifyingly low 1/32768 chance of survival from that point. (Actually this is the third player, but more on that later.)

Squid Game bridge
The third contestant on the Squid Game bridge accurately calculates his chances as 1 in 32768

But suppose we do not know the outcome of earlier players. At the start, before anyone has moved, what is the probability a_{i,n} say, that player number i will still be alive on step number n? We showed a_{1,n} = 1/2^n. For player 2, it is certain they will survive step 1, by copying the first player if they were successful, or switching to the opposite pane if not. By extension player i is certain to survive the first i – 1 steps, hence a_{i,n} = 1 for all n \le i - 1.

In general, we set up a recurrence relation. But consider firstly the case i = 2. What is the chance they are alive at step n? If the first player died on step 1 (I mean, they leaped from the starting platform to an ordinary glass panel at step 1), then their successor must guess n – 1 tiles to reach step n successfully (I mean, to still be alive on panel n, and not fall through it). The probability of this combination of events is (1 - a_{1,1})/2^{n-1}. Similar reasoning applies to any step up to n – 1. However if the first player is still alive on n – 1, their follower is guaranteed to reach step n successfully. (Any later performance of the first player is irrelevant to their successors at step n.) The overall probability is the sum over these possibilities, which for an arbitrary player is:

    \[a_{i,n} = a_{i-1,n-1} + \sum_{k=1}^{n-1} ( a_{i-1,k-1} - a_{i-1,k} ) / 2^{n-k}.\]

This gives the probability in terms of the previous player. (Note the term in parentheses is the chance the previous player will die on step k precisely.) Hence starting from the initial conditions given earlier, we may build up an array of values using a spreadsheet, computer program, or computer algebra system. The latter choice preserves exact fractions, which feels very satisfying. Also we define a_{i,0} = 1 for convenience, where “step 0” may be interpreted as the ledge contestants safely start from. The Table below gives the first few values.

Table: Probability player i is still alive by step n, given no foreknowledge

  player \ step    n = 1   2     3     4      5      6      7       8
  i = 1            1/2     1/4   1/8   1/16   1/32   1/64   1/128   1/256
  i = 2            1       3/4   1/2   5/16   3/16   7/64   1/16    9/256
  i = 3            1       1     7/8   11/16  1/2    11/32  29/128  37/256
  i = 4            1       1     1     15/16  13/16  21/32  1/2     93/256
  i = 5            1       1     1     1      31/32  57/64  99/128  163/256
  i = 6            1       1     1     1      1      63/64  15/16   219/256
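This tabulation takes only a few lines; a Python sketch with exact fractions (names are my own), run out to a full 18-step bridge:

```python
from fractions import Fraction

def survival(players, steps):
    """a[i][n] = P(player i alive on step n), from the recurrence
    a_{i,n} = a_{i-1,n-1} + sum_k (a_{i-1,k-1} - a_{i-1,k}) / 2^{n-k}."""
    a = [[Fraction(1)] * (steps + 1) for _ in range(players + 1)]
    for n in range(1, steps + 1):
        a[1][n] = Fraction(1, 2**n)   # the first player must guess every step
    for i in range(2, players + 1):
        for n in range(1, steps + 1):
            a[i][n] = a[i - 1][n - 1] + sum(
                (a[i - 1][k - 1] - a[i - 1][k]) / 2**(n - k)
                for k in range(1, n))
    return a

a = survival(16, 18)
print(a[2][4], a[16][18])  # 5/16 and 65493/65536
```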

In Squid Game the bridge has 18 (pairs of) steps. The probability of crossing the entire bridge is the probability of being alive on step 18, as the next leap is to safety. In theory the 9th player has nearly even odds of making it: a_{9,18} = 53381/131072 \approx 0.41, and the next player likely will: a_{10,18} = 77691/131072 \approx 0.59. In the show, 16 players compete in this challenge, so the last player supposedly has excellent odds: a_{16,18} = 65493/65536 \approx 0.9993. However our analysis does not account for human behaviour! In the show, time pressure, rivalries, and imperfect memory compete with logical decision-making and the interests of the group as a whole. On the other hand, some players claim to distinguish the glass types by sight or sound, which would give an advantage. These make interesting plot elements, but would spoil the simplicity and purity of a mathematical analysis.

Returning to the recurrence relation, it simplifies to:

    \[a_{i,n} = \frac{a_{i,n-1} + a_{i-1,n-1}}{2}.\]

Hence each term is just the average of two previous terms. However I wanted to derive this via a direct physical interpretation, not algebraic manipulation alone, which is more intuitively satisfying. With the end result in mind, we relate P(i,n) \equiv a_{i,n} to the previous step and previous player. [Update, 21st April: A simpler way is to consider step 1. If player 1 guesses wrongly and breaks the glass, there are i – 1 remaining players for the next n – 1 steps. If player 1 instead guesses correctly, there are i players for the next n – 1 steps. This gives the recurrence relation.] Consider the three cases for player i – 1: they (A) died before step n – 1, (B) died on step n – 1, or (C) made it safely to step n – 1 or further. The total probability is the sum over these cases:

    \[P(i,n) = P(i,n|A)P(A) + P(i,n|B)P(B) + P(i,n|C)P(C).\]

The first term for example is the “conditional probability” that i is alive at step n, given that case A occurred; times the probability of case A itself occurring. There is a similar decomposition to the above for P(i,n-1). Now most parts of the expression are straightforward. If i – 1 died at step n – 1, then the next player is definitely safe at that step, but may only guess at the following step, so P(i,n-1|B) = 1 and P(i,n|B) = 1/2. If i – 1 was safe at step n – 1 or further, then the next player is safe for an extra step: P(i,n-1|C) = 1 = P(i,n|C). For case A the conditional probabilities are more difficult, but we do not need to calculate them. Observe that if the previous player died before n – 1, then steps n – 1 and n are uncharted territory. Hence the chance the following player makes it to n safely, is half of whatever it was for them to reach n – 1 safely: P(i,n|A) = \frac{1}{2}P(i,n-1|A). Hence the decomposition becomes:

    \[P(i,n) = \frac{1}{2}P(i,n-1|A)P(A) + \frac{1}{2}P(B) + P(C).\]

But this is just \frac{1}{2}P(i,n-1) apart from the C term, as seen from expanding out the conditional cases. Now P(C) = P(i-1,n-1) \equiv a_{i-1,n-1}. It follows P(i,n) = \frac{1}{2}(P(i,n-1) + P(i-1,n-1)) as before. We did not need to evaluate P(A) or P(B), though this is straightforward.

Now that the reader (and author!) have more experience with conditional probability, let’s return to the third player in the Squid Game episode. Before anyone moved, he had a chance a_{3,18} = 43/65536 \approx 1/1500 of surviving the bridge. This would seem to contradict the earlier calculation, which gave a chance lower by a factor of 21.5, a surprising contrast! The black-masked “Front Man” said to the VIP observers, “I believe this next game will exceed your expectations” (~12:30 mark), but in this sense it did not 😆 . The distinction is the information learned. Conditional probability is a subtle and beautiful thing. If we know nothing about the previous attempts, nor the state of the bridge, the probabilities are our variables a_{i,n}. But if we are given the information that player I died on step N, for instance, then the following player has no information about later steps, and the bridge scenario is essentially reset from that point onwards. Hence P(3,18|2\textrm{ died on }3) = a_{3-2,18-3} = 1/2^{15}.
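The conditional “reset” can also be seen in a Monte Carlo simulation. Here is a Python sketch on a toy bridge of 4 steps with 3 players, small enough for decent statistics (all names are mine):

```python
import random

rng = random.Random(1)

def trial(players, steps):
    """One bridge run; returns each player's death step (None = crossed)."""
    deaths = []
    known = 0                   # steps whose safe panel has been revealed
    for _ in range(players):
        pos = known             # revealed steps are crossed for free
        while pos < steps and rng.random() < 0.5:
            pos += 1            # lucky guess on an unrevealed step
        if pos == steps:
            deaths.append(None)       # crossed the whole bridge
            known = steps
        else:
            deaths.append(pos + 1)    # fell at the next unrevealed step,
            known = pos + 1           # revealing that panel's safe side
    return deaths

N = 200_000
runs = [trial(3, 4) for _ in range(N)]

# unconditional survival of player 3: expect a_{3,4} = 11/16 = 0.6875
alive3 = sum(d[2] is None for d in runs) / N

# conditioned on "player 2 died on step 3": expect a_{1,1} = 1/2, since
# the bridge is effectively reset with one player and one step remaining
given = [d for d in runs if d[1] == 3]
cond = sum(d[2] is None for d in given) / len(given)
print(round(alive3, 3), round(cond, 3))
```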

This scenario has been a valuable learning experience, as I had not worked with conditional probabilities before. Probability is very important in physics, particularly quantum physics where it is intrinsic (or so it is usually assumed). I originally came up with an incorrect recurrence relation, but realised this upon comparison with an article in Medium, which uses an elegant combinatorial argument. The scenario had already captured my attention, but realising my flaw further drove my need to understand. A related article is also helpful; I recommend these if you find my discussion hard to follow. There is even a draft paper on the Squid Game bridge probabilities! Presumably this is all little more than a specific application of textbook combinatorics. Still, it is fun to rediscover things for oneself.

🡐 ⸻ | Squid Game bridge | general solution 🠦

Coordinates adapted to observer 4-velocity field

Difficulty:   ★★★☆☆   undergraduate

Suppose you have a 4-velocity field \mathbf u, which might be interpreted physically as observers or a fluid. It may be useful to derive a time coordinate T which both coincides with proper time for the observers, and synchronises them in the usual way. Here we consider only the geodesic and vorticity-free case. Define:

    \[dT := -\mathbf u^\flat.\]

The “flat” symbol is just a fancy way to denote lowering the index, so the RHS is just -u_\mu. On the LHS, dT is the gradient of a scalar, which may be expressed using the familiar chain rule:

    \[dT = \frac{\partial T}{\partial x^0}dx^0 + \frac{\partial T}{\partial x^1}dx^1 + \cdots,\]

where x^\mu is a coordinate basis. Technically dT is a covector, with components (dT)_\mu = \partial T/\partial x^\mu in the cobasis dx^\mu. Similarly -\mathbf u^\flat = -u_0dx^0 -u_1dx^1 -\cdots, so we must match the components: \partial T/\partial x^\mu = -u_\mu. For our purposes we do not need to integrate explicitly; it is sufficient to know the original equation is well-defined. (No such time coordinate exists if there is acceleration or vorticity, which is a corollary of the Frobenius theorem; see Ellis+ 2012 §4.6.2.)

The new coordinate is timelike, since \langle dT,dT\rangle = \langle -\mathbf u^\flat,-\mathbf u^\flat\rangle = -1. One can show its change with proper time is dT/d\tau = \langle dT,\mathbf u\rangle = 1. Further, the T = \textrm{const} hypersurfaces are orthogonal to \mathbf u, since the normal vector (dT)^\sharp is parallel to \mathbf u. This orthogonality means that at each point, the hypersurface agrees with the usual simultaneity defined locally by the observer at that point. (Orthogonality corresponds to the Poincaré-Einstein convention, so named by H. Brown 2005 §4.6).

We want to replace the x^0-coordinate by T, and keep the others. What are the resulting metric components for this new coordinate? (Of course it’s the same metric, just a different expression of this tensor.) Notice the original components of the inverse metric satisfy g^{\mu\nu} = \langle dx^\mu,dx^\nu\rangle. Similarly one new component is g'^{TT} = \langle dT,dT\rangle = -1. Also g'^{Ti} = \langle dT,dx^i\rangle = -u^i, where i = 1,2,3. The g'^{iT} are the same by symmetry, and the remaining components are unchanged. Hence the new components in terms of original components are:

    \[g'^{\mu\nu} = \begin{pmatrix} -1 & -u^1 & -u^2 & -u^3 \\ -u^1 & g^{11} & g^{12} & g^{13} \\ -u^2 & g^{21} & g^{22} & g^{23} \\ -u^3 & g^{31} & g^{32} & g^{33} \end{pmatrix}.\]

The matrix inverse gives the new metric components g'_{\mu\nu}. The 4-velocity components are: u'_\mu = (-1,0,0,0) by the original equation. Also u'^T = \langle dT,\mathbf u\rangle = 1, and the u'^i = \langle dx^i,\mathbf u\rangle = u^i are unchanged. Hence u'^\mu = (1,u^1,u^2,u^3).
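As a concrete check of this recipe, here is a sympy sketch. The example 4-velocity is my own choice, not from the text above: radial free-fall from rest at infinity in Schwarzschild coordinates, which should reproduce the Gullstrand–Painlevé components:

```python
import sympy as sp

M, r, th = sp.symbols('M r theta', positive=True)
f = 1 - 2*M/r

# Schwarzschild inverse metric, coordinates (t, r, theta, phi)
g_inv = sp.diag(-1/f, f, 1/r**2, 1/(r**2 * sp.sin(th)**2))

# radial free-fall from rest at infinity: u^mu, normalised to u.u = -1
u = sp.Matrix([1/f, -sp.sqrt(2*M/r), 0, 0])

# new inverse components: g'^{TT} = -1 and g'^{Ti} = -u^i, rest unchanged
g_inv_new = sp.Matrix(g_inv)
g_inv_new[0, 0] = -1
for i in (1, 2, 3):
    g_inv_new[0, i] = g_inv_new[i, 0] = -u[i]

g_new = sp.simplify(g_inv_new.inv())   # lower-index metric components
print(g_new[0, 0], g_new[0, 1], g_new[1, 1])
# expect -(1 - 2M/r), sqrt(2M/r) and 1: the Gullstrand-Painleve form
```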

Anecdote: I used to write out dT = -u_0dx^0 - u_1dx^1 - \cdots, rearrange for dx^0, and substitute it into the original line element. This works but is clunky. My original inspiration was Taylor & Wheeler 2000 §B4, and I was thrilled to discover their derivation of Gullstrand–Painlevé coordinates from Schwarzschild coordinates plus certain radial velocities. (I give more references in MacLaurin 2019 §3.) I imagine that if a textbook presented the material above (given limited space and more formality), it may seem as if the more elegant approach were obvious. However I only (re?)-discovered it today by accident, using a specific 4-velocity from the previous post, and noticing the inverse metric components looked simple and familiar…

Total angular momentum in Schwarzschild spacetime

Difficulty:   ★★★☆☆   undergraduate

In relativity, distances and times are relative to an observer’s velocity. Hence one should be careful when defining an angular momentum. Speaking generally, a natural parametrisation of 4-velocities uses Killing vector fields, if the spacetime has any. In Schwarzschild spacetime, Hartle (2003 §9.3) defines the Killing energy per mass and Killing angular momentum per mass as:

    \[e := -\langle\mathbf u,\partial_t\rangle, \qquad \ell_z := \langle\mathbf u,\partial_\phi\rangle.\]

The angle brackets are the metric scalar product, \phi has range [0,2\pi), and we will take \mathbf u to be a 4-velocity.  I have relabeled Hartle’s \ell as \ell_z. While \partial_t and \partial_\phi are just coordinate basis vectors for Schwarzschild coordinates, as Killing vector fields (KVFs) they have geometric significance beyond this convenient description. [\partial_t is the unique KVF which as r \rightarrow \infty in “our universe” (region I), is future-pointing with squared-norm -1. On the other hand \partial_\phi has squared-norm r^2\sin^2\theta, so is partly determined by having maximum squared-norm r^2 amongst points at any given r, which implies it is orthogonal to \partial_t, although the specific orientation is not otherwise determined geometrically.]

In fact \ell_z is the portion of angular momentum (per mass) about the z-axis. In Cartesian coordinates (t,x,y,z), the KVF \mathbf Z := \partial_\phi has components (0,-y,x,0). Similarly, we can define angular momentum about the x-axis using the KVF X^\mu := (0,0,z,-y), which in spherical coordinates is (0,0,\sin\phi,\cot\theta\cos\phi). For the y-axis we use Y^\mu := (0,-z,0,x), which is (0,0,-\cos\phi,\cot\theta\sin\phi) in the original coordinates. Then:

    \[\ell_x := \langle\mathbf u,\mathbf X\rangle, \qquad \ell_y := \langle\mathbf u,\mathbf Y\rangle, \qquad \ell_z = \langle\mathbf u,\mathbf Z\rangle.\]

Hence we can define the total angular momentum as the Pythagorean relation \ell^2 := \ell_x^2 + \ell_y^2 + \ell_z^2, that is:

    \[\ell^2 := \langle\mathbf u,\mathbf X\rangle^2 + \langle\mathbf u,\mathbf Y\rangle^2 + \langle\mathbf u,\mathbf Z\rangle^2.\]

This is a natural quantity determined from the geometry alone, unlike the individual \ell_z etc. which rely on an arbitrary choice of axes. It is non-negative. I came up with this independently, but do not claim originality, and the general idea could be centuries old. Similarly quantum mechanics uses J^2 and J_z, which I first encountered in a 3rd year course, although these are operators on flat space.

One 4-velocity field which conveniently implements the total angular momentum is:

    \[u^\mu = \bigg( \frac{e}{1-\frac{2M}{r}}, \pm\sqrt{e^2-\Big(1-\frac{2M}{r}\Big)\Big(1+\frac{\ell^2}{r^2}\Big)},\frac{\ell}{r^2},0 \bigg).\]

In this case the axial momenta are \ell_x = \ell\sin\phi, \ell_y = -\ell\cos\phi, and \ell_z = 0, for a total Killing angular momentum \ell as claimed. There are restrictions on the parameters; in particular the “\pm” must be a minus in the black hole interior. Incidentally this field is geodesic: one may check \nabla_{\mathbf u}\mathbf u = 0. It also has zero vorticity (I wrote a technical post on the kinematic decomposition previously), so we might say it has macroscopic rotation but no microscopic rotation. Another possibility is in terms of \ell_z and \ell:

    \[u^\mu = \bigg( \cdots, \pm\frac{\sqrt{\ell^2-\ell_z^2\csc^2\theta}}{r^2}, \frac{\ell_z}{r^2\sin^2\theta} \bigg),\]

where the first two components are the same as the previous vector. The expressions are simpler with a lowered index u_\mu.
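These component claims are quick to verify symbolically; a sympy sketch (coordinate order and symbol names are my own):

```python
import sympy as sp

M, r, th, ph, e, ell = sp.symbols('M r theta phi e ell', positive=True)
g = sp.diag(-(1 - 2*M/r), 1/(1 - 2*M/r), r**2, r**2 * sp.sin(th)**2)

# the earlier 4-velocity field, components in (t, r, theta, phi) order
u = sp.Matrix([e/(1 - 2*M/r),
               -sp.sqrt(e**2 - (1 - 2*M/r)*(1 + ell**2/r**2)),
               ell/r**2, 0])

# rotational Killing vector fields about the x, y and z axes
X = sp.Matrix([0, 0, sp.sin(ph), sp.cot(th)*sp.cos(ph)])
Y = sp.Matrix([0, 0, -sp.cos(ph), sp.cot(th)*sp.sin(ph)])
Z = sp.Matrix([0, 0, 0, 1])

def dot(a, b):
    # metric scalar product <a, b>
    return (a.T * g * b)[0, 0]

lx, ly, lz = dot(u, X), dot(u, Y), dot(u, Z)
print(sp.simplify(lx), sp.simplify(ly), lz)   # ell*sin(phi), -ell*cos(phi), 0
print(sp.simplify(lx**2 + ly**2 + lz**2))     # ell**2
```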

Swing boats and pump tracks

Difficulty:   ★★☆☆☆   high school

Possibly my favourite amusement park ride I have ever tried was called a Schiffschaukel, which is German for “ship swing”. It was powered only by the rider, with no motor of any kind. With enough skill, you could make a complete 360° vertical loop! In place of the rope or chain links in an ordinary swing, it was supported by rigid struts, which helped achieve a complete rotation. I loved the physical challenge of it. It was easy to start it rocking. I also managed to get my body above the horizontal, but not to make a complete loop. However one man I watched could not only achieve a loop, but gauged his speed so as to hang upside down for a few seconds, neck craned forward to watch the ground, before he gradually tipped over.

ship swing
Schiffschaukel or ship swing, from Oktoberfest.net. The one I rode was in a local Volksfest (“people’s festival”) in a town outside of Munich, in 2007. It looked a lot like the one in this photo, with a few cosmetic differences: as I recall I had no waist harness but just one foot strapped in, there were two supporting struts rather than four, no “boat” decoration, and the swing was shorter.

In my experience most people, including Germans, have never heard of it. Wikipedia calls it a “ship swing”, in contrast to a “pirate ship” ride which is motorised. I have been on a couple of the latter rides in Australia: huge structures which seat dozens of people, and are quite different to the self-propelled swing. The unpowered type apparently dates from the 1800s; many held two people, and had ropes to pull on. The German-language Wikipedia has more information: here is an automatic translation. Also it turns out there is a modern Estonian sport kiiking (meaning “swinging”) which is the same concept. The current world record for a revolution is a swing with radius just over 7 metres, achieved by an Olympic medal-winning rower.

But how is it even possible to make a loop? I found it counter-intuitive, like pulling yourself up by the bootstraps, as the proverbial saying goes. Similarly, I recently learned of “pump tracks” for bicycles, from my brother. It is possible to propel yourself around such a course, which consists of mounds and banked corners, without pedalling! The energy comes from raising or lowering your body with the correct timing. In fact you can even propel yourself on flat ground this way, by making appropriate turns and body maneuvers.

angular momentum for ship swing
Conservation of angular momentum along a circular arc. The rider travels from left to right, with their centre of mass initially following the middle arc. After standing up, their centre of mass follows the inner arc, and their velocity is increased. (The solid arc is the ground or base of the ship, say.)

Conservation of angular momentum explains both scenarios. Consider a circular, concave segment of track, as with the ship swing. Approximate the person, plus bike or “ship”, as a point object with mass m. Suppose this point, their centre-of-mass, moves on a circle with radius R (note this is less than the radius of the track arc). The angular momentum, as determined from the centre of curvature, is R\times mV, where V is the speed. (Technically this is a vector cross-product, but in this simple example where the vectors are at 90°, we can more or less treat it as an ordinary multiplication of numbers.) Now suppose the rider stands up straighter, so their centre of mass moves a “height” h towards the centre of curvature. The angular momentum is (R-h)\times mV', where V' is the new speed. But since angular momentum is conserved, this must match the previous expression, hence:

    \[\frac{V'}{V} = \frac{R}{R-h} = 1 + \frac{h}{R-h} > 1.\]

The speed has increased! Note the rider put work in, by not only resisting the centrifugal acceleration V^2/R, but moving against it, in the opposite direction. The forces can be severe. For a swing to barely reach the top, it must have a speed V = \sqrt{2g\cdot 2R} at the bottom of the arc, by conservation of kinetic and gravitational potential energy. Here g \approx 9.8m/s^2 is the “acceleration” due to gravity. The centripetal acceleration at the lowest part is V^2/R = 4g, which is independent of the radius. Including the weight due to gravity gives a total of 5g — that is, a g-force of 5!

The swing rider should probably bend their knees when reaching the maximum height of their arc, to reverse the process and complete the cycle. If their speed is zero at this point (so a full revolution is not achieved), crouching has no effect on the speed, which is zero after all. In this case the maximum speed — measured when at the lowest part of the circle — grows by the fixed proportion R/(R-h) with each swing. This is exponential growth! Once a full revolution is achieved, the rider can gain further speed by crouching at the top of the circle. While this reduces their speed by the same proportion R/(R-h), over one revolution there is a net gain, since the speed at the bottom is greater due to gravitational fall.

Let us verify this. Write \rho := R/(R-h) > 1 for the ratio by which the speed increases upon standing at the bottom (and decreases upon crouching at the top). Ignoring the pumping, conservation of energy between the bottom and top of the circle gives:

    \[\frac{1}{2}mV_{\rm low}^2 = 2mgR + \frac{1}{2}mV_{\rm top}^2, \qquad\textrm{that is,}\qquad V_{\rm low}^2 = V_{\rm top}^2 + 4gR.\]

Now include the pumping. Suppose the speed is V just before the bottom, hence V\rho just after standing up. At the top, just before crouching, V_{\rm top}^2 = V^2\rho^2 - 4gR; just after crouching, the squared speed is V_{\rm top}^2/\rho^2 = V^2 - 4gR/\rho^2. Falling back to the bottom, the squared speed just before standing again is:

    \[V_{\rm low}^2 = V^2 + 4gR(1 - \rho^{-2}).\]

Since 1/\rho = (R-h)/R = 1 - h/R, we have 1 - \rho^{-2} = 2h/R - h^2/R^2, so the gain per revolution is:

    \[4gR(1-\rho^{-2}) = 8gh - \frac{4gh^2}{R} = 4gh\Big(2-\frac{h}{R}\Big).\]

So with each revolution, the kinetic energy increases by a fixed amount, not a fixed proportion: this is linear growth. Hence the speed still increases indefinitely, but at a slowing rate. Of course this ignores friction and the rider’s strength, and assumes instantaneous repositioning.
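A short numerical illustration of this cycle (the radius, crouch height, and starting speed are hypothetical sample values):

```python
g = 9.8  # m/s^2

def revolution(v_sq, R, h):
    """One full loop: stand at the bottom, climb, crouch at the top, fall.
    v_sq is the squared speed just before the bottom of the circle."""
    rho = R / (R - h)   # speed ratio from standing up
    v_sq *= rho**2      # stand up at the bottom
    v_sq -= 4 * g * R   # climb to the top (height gain 2R)
    v_sq /= rho**2      # crouch at the top
    v_sq += 4 * g * R   # fall back down to the bottom
    return v_sq

R, h, v_sq = 3.5, 0.5, 200.0   # hypothetical sample values (SI units)
for _ in range(3):
    new = revolution(v_sq, R, h)
    print(round(new - v_sq, 3))   # constant gain, equal to 4*g*h*(2 - h/R)
    v_sq = new
```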

On a pump track the strategy is analogous: in a valley you should stand up, and over a bump you should crouch down, as one webpage explains. In both cases you move closer to the centre of curvature. In internet forums, people say similar motions arise naturally in skateboarding, surfing, snowboarding, and other sports. Returning to the ship swing, I was not told any strategy at the time. On certain swings I sensed my speed diminish, so knew I had made a mistake. At the bottom of the swing, when the sum of centrifugal force and gravity is maximum, it felt safer to “go with the flow”, and unnatural to resist. But resist is exactly what I should have done.

🡐 ⸻ | everyday physics | ⸻ 🠦

Affine connection for axial symmetry

Difficulty:   ★★★★☆   graduate

Suppose you have an axially symmetric vector field. Can we define an affine connection which keeps the vectors “parallel”, under rotation about the axis? For example, we wish the vectors illustrated below to get parallel-transported around the circle:

an axially symmetric vector field
We seek an affine connection which declares vectors at a given radius “parallel”, for any vector field with circular symmetry / cylindrical symmetry / rotational symmetry.

Take Minkowski spacetime in cylindrical coordinates (t,r,\phi,z), with metric -dt^2+dr^2+r^2d\phi^2+dz^2, and consider a vector field \mathbf u whose components are independent of \phi:

    \[u^\mu = (A{\scriptstyle(t,r,z)},B{\scriptstyle(t,r,z)},C{\scriptstyle(t,r,z)},D{\scriptstyle(t,r,z)}).\]

The covariant derivative in the tangential direction \partial_\phi has components:

    \[(\nabla_{\partial_\phi}\mathbf u)^\alpha = \Big(0,-rC{\scriptstyle(t,r,z)},\frac{B{\scriptstyle(t,r,z)}}{r},0\Big).\]

We want this to vanish, but first a quick recap (Lee §4–5). Recall a connection is defined by \nabla_{\partial_\mu}\partial_\nu = \Gamma_{\mu\nu}^\alpha \partial_\alpha, in terms of our coordinate vector frame (\partial_t,\partial_r,\partial_\phi,\partial_z). This extends to a covariant derivative of arbitrary vectors and tensors, also denoted “\nabla”. The derivative of \mathbf u above assumed the Levi–Civita connection, which is inherited from the metric: it is the unique symmetric, metric-compatible connection. In that case the \Gamma are called Christoffel symbols, but in general they are called connection coefficients.

The offending Christoffel symbols which prevent our vector field from being parallel-transported are: \Gamma_{\phi\phi}^r = -r and \Gamma_{\phi r}^\phi = 1/r. But we are free to simply define a new connection for which these vanish: \tilde\Gamma_{\phi\phi}^r := 0 =: \tilde\Gamma_{\phi r}^\phi! Given a frame, any set of smooth functions \tilde\Gamma yields a valid connection (Lee, Lemma 4.10). It is natural to hold on to the other Christoffel symbols, to accord whatever respect remains for the metric. In fact only one is nonzero, \Gamma_{r\phi}^\phi = 1/r. To set this to zero would essentially deny the increase in circumference with the radius. Incidentally, even with keeping \Gamma_{r\phi}^\phi, the new connection is flat, meaning its associated Riemann curvature tensor vanishes.
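To make this concrete, here is a sympy sketch which encodes connection coefficients as a dictionary (the representation and names are my own), and checks that the modified connection parallel-transports the field in the \phi-direction:

```python
import sympy as sp

t, r, ph, z = sp.symbols('t r phi z')
coords = [t, r, ph, z]
A, B, C, D = (sp.Function(name)(t, r, z) for name in 'ABCD')
u = [A, B, C, D]   # a vector field with no phi-dependence

# nonzero Christoffel symbols of -dt^2 + dr^2 + r^2 dphi^2 + dz^2,
# keyed as (alpha, mu, nu) for Gamma^alpha_{mu nu}
Gamma = {(1, 2, 2): -r,     # Gamma^r_{phi phi}
         (2, 2, 1): 1/r,    # Gamma^phi_{phi r}
         (2, 1, 2): 1/r}    # Gamma^phi_{r phi}

def cov_deriv(mu, u, Gamma):
    # components of nabla_{partial_mu} u for the given coefficients
    return [sp.diff(u[a], coords[mu])
            + sum(Gamma.get((a, mu, nu), 0) * u[nu] for nu in range(4))
            for a in range(4)]

print(cov_deriv(2, u, Gamma))      # gives (0, -rC, B/r, 0), as in the text

# the new connection drops Gamma^r_{phi phi} and Gamma^phi_{phi r}:
Gamma_new = {(2, 1, 2): 1/r}
print(cov_deriv(2, u, Gamma_new))  # all zero: u is parallel along phi
```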

The new connection may be expressed as the Levi–Civita one with a bilinear correction:

    \[\tilde\nabla_{\mathbf v}\mathbf u = \nabla_{\mathbf v}\mathbf u - \big( \frac{1}{r}\partial_\phi\otimes d\phi\otimes dr - r\partial_r\otimes d\phi\otimes d\phi \big) (\mathbf v,\mathbf u),\]

where \mathbf v and \mathbf u are arbitrary vectors, to be substituted into the 2nd and 3rd slots respectively of the (1,2)-tensor in parentheses. This is much simpler than it looks, as the terms just pick out r and \phi-components, and return basis vectors. Equivalently, the correction may be written -\langle d\phi,\mathbf v\rangle \big( r^{-1}\langle dr,\mathbf u\rangle \partial_\phi - r\langle d\phi,\mathbf u\rangle \partial_r \big), where the angle brackets mean contraction of a 1-form and vector. Notice from here and the two “offending” Christoffel symbols mentioned earlier, that only (the component of) the derivative in the \phi–direction is affected.

These expressions obscure some beautiful symmetry. Let’s raise one index and lower another, in the correction term:

    \[\tilde\nabla_{\mathbf v}\mathbf u = \nabla_{\mathbf v}\mathbf u - \frac{1}{r}\langle d\phi,\mathbf v\rangle \cdot (2\partial_r\wedge\partial_\phi)\lrcorner\mathbf u^\flat.\]

Here 2\partial_r\wedge\partial_\phi := \partial_r\otimes \partial_\phi - \partial_\phi\otimes\partial_r is a wedge product, \mathbf u^\flat is just the 1-form with components u_\mu, and “\boldsymbol\lrcorner” is a contraction. The correction’s components are simply -r^{-1}v^\phi\cdot(0,-u_\phi,u_r,0). This is a vector, even though some lowered indices appear in the expression. The correction is just a rotation in the r\phi–plane! From inspection of the diagram, this is unsurprising.

This is analogous to Fermi–Walker transport. Given a worldline, this corrects the (Levi–Civita connection) time-derivative \nabla_{\mathbf u} by a rotation in the plane spanned by \mathbf u and the 4-acceleration vector \nabla_{\mathbf u}\mathbf u. Under Fermi–Walker transport, orthonormal frames stay orthonormal over time, and their orientation agrees with gyroscopes. For both our connection and the Fermi–Walker derivative, there is a preferred differentiation direction, along which a rotation is added to the Levi–Civita derivative.

I previously wrote about a connection for a spherically symmetric vector field. This has been a good learning experience about connections other than Levi–Civita. Many of us completed general relativity courses in which the curvature quantities were merely formulae, with no intuitive understanding. However questions from mathematicians like “Which connection are you using?” prompted me to learn more. (At least I have never been asked which differential structure I am using, nor which point-set topology, which is fortunate for all involved. 🙂 ) There are various physically-motivated connections defined in a research paper, §2. I intend to apply this to the rotating disc, and to an observer field in Schwarzschild spacetime. Also, I accidentally came across Rothman+ 2001 about parallel transport in Schwarzschild spacetime, with numerous follow-up papers by various authors. All of this struck me again with a sense of fascination about curvature: how rich and deep it is.


Local Lorentz boost in coordinate-independent notation

Difficulty:   ★★★☆☆   undergraduate

The Lorentz boost between two reference frames can be expressed as a (1,1)-tensor \boldsymbol\Lambda, interpreted as an operator on vectors. Here we re-express this well known fact using a general, index-free, coordinate-independent, 4-vector notation, which is valid locally in curved spacetime.

Recall the prototypical Lorentz boost on Minkowski spacetime:

    \[\Lambda^\mu_{\hphantom\mu\nu} = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0 \\ -\beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.\]

This is a boost in the x-direction by speed \beta or Lorentz factor \gamma = (1-\beta^2)^{-1/2}. It maps an arbitrary vector X^\mu to \Lambda^\mu_{\hphantom\mu\nu}X^\nu. Numerous authors generalise to arbitrary boost directions, such as Møller §18; MTW §2.9; or Tsamparlis §1.7. This typically involves separate transformations of time and 3-dimensional space: t' = \gamma(t - \vec n\cdot\vec r) and \vec r' = \vec r + (\frac{\gamma-1}{\beta^2}\vec r\cdot\vec n - \gamma t)\vec n. The arrows signify 3-dimensional vectors, \vec r is the position in 3-space, and \vec n is the relative 3-velocity. The space part uses beautiful, coordinate-independent vector language. However the time part requires privileged coordinates adapted to the observers. We will derive a 4-vector analogue.
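
Recall the defining property of any Lorentz transformation: it preserves the Minkowski metric, \Lambda^T\eta\Lambda = \eta. A couple of lines of numpy confirm this for the prototypical boost (my own sanity check, with \beta = 0.6):

```python
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)

# Prototypical x-direction boost, and the Minkowski metric (signature -+++)
Lam = np.array([[ gamma,      -beta*gamma, 0, 0],
                [-beta*gamma,  gamma,      0, 0],
                [ 0,           0,          1, 0],
                [ 0,           0,          0, 1]])
eta = np.diag([-1.0, 1, 1, 1])

# Defining property of a Lorentz transformation: it preserves the metric
assert np.allclose(Lam.T @ eta @ Lam, eta)
print("Lambda^T eta Lambda = eta")
```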

Consider two 4-velocity vectors \mathbf u and \mathbf n (located at the same point, if in curved spacetime). They are related by the Lorentz boost:

    \[\mathbf n = \gamma(\mathbf u+\beta\hat{\vec{\mathbf n}}) = \gamma(\mathbf u+\vec{\mathbf n}),\]

where \gamma = -\langle\mathbf u,\mathbf n\rangle, the unit vector \hat{\vec{\mathbf n}} points in the boost direction, and \vec{\mathbf n} = \beta\hat{\vec{\mathbf n}} is the relative velocity. This is the 4-vector analogue of the familiar coordinate boost t' = \gamma(t-\beta x). Combined with the space boost given shortly, this forms a local Lorentz transformation. While the plus sign makes the above appear to be an inverse boost, this is only because vectors (as whole entities) transform inversely to coordinates. Rearranging:

    \[\vec{\mathbf n} = \gamma^{-1}\mathbf n - \mathbf u.\]

This is the relative velocity of the observer \mathbf n as determined in \mathbf u’s frame, as explained previously. It is equivalent to the \vec n introduced in the 3-dimensional spatial transformation, except now treated as a 4-vector. It is orthogonal to \mathbf u, with length \beta. Conversely, the relative velocity of \mathbf u as determined in \mathbf n’s frame is \vec{\mathbf u} = \gamma^{-1}\mathbf u - \mathbf n. Now, the vector analogue of the usual boosted spatial coordinate x' = \gamma(x-\beta t) is \gamma(\hat{\vec{\mathbf n}}+\beta\mathbf u). After multiplying by \beta:

    \[\vec{\mathbf n} \mapsto \gamma(\vec{\mathbf n} + \beta^2\mathbf u) = \mathbf n - \gamma^{-1}\mathbf u = -\vec{\mathbf u}.\]

Hence the relative velocity vectors are boosted into one another, aside from a minus sign (Jantzen+ 1992 §4). This generalises the Newtonian result \vec n = -\vec u. So we have the boost’s action on the orthogonal vectors \mathbf u and \vec{\mathbf n}, and it is the identity on the 2-dimensional spatial plane orthogonal to both, hence:

    \[\begin{aligned} \Lambda^\mu_{\hphantom\mu\nu} &= (g^\mu_{\hphantom\mu\nu} + u^\mu u_\nu -\beta^{-2}\vec n^\mu \vec n_\nu) - n^\mu u_\nu -\beta^{-2}\vec u^\mu \vec n_\nu \\ &= g^\mu_{\hphantom\mu\nu} + (u^\mu-n^\mu)u_\nu + \frac{\gamma}{\gamma+1}(u^\mu + n^\mu)\vec n_\nu, \end{aligned}\]

using \vec{\mathbf u}+\vec{\mathbf n} = -(1-\gamma^{-1})(\mathbf u+\mathbf n) and (\gamma-1)/\beta^2\gamma = \gamma/(\gamma+1). It is a good exercise to check the contractions with u^\nu, \vec n^\nu, or any X^\nu orthogonal to both. In index-free notation,

    \[\boldsymbol\Lambda = \mathbf g^{\sharp\flat} + (\mathbf u-\mathbf n)\otimes\mathbf u^\flat +\frac{\gamma}{\gamma+1}(\mathbf u+\mathbf n)\otimes\vec{\mathbf n}^\flat.\]

The “flat” symbol just means: lower an index. Equivalently, in terms of the initial observer and boost velocity alone:

    \[\boldsymbol\Lambda = \mathbf g^{\sharp\flat} + \big((1-\gamma)\mathbf u - \gamma\vec{\mathbf n}\big)\otimes\mathbf u^\flat + \big(\gamma\mathbf u + \frac{\gamma^2}{\gamma+1}\vec{\mathbf n}\big)\otimes\vec{\mathbf n}^\flat,\]

in which case the relative speed may be obtained from \beta^2 = \langle\vec{\mathbf n},\vec{\mathbf n}\rangle. This is equivalent to MTW’s Exercise 2.7 which uses Cartesian coordinates, after adjusting various minus signs because I use vectors not vector components. In terms of the 4-velocities alone, we have the curiously symmetric expression:

    \[\boldsymbol\Lambda = \mathbf g^{\sharp\flat} + \frac{1}{\gamma+1} (\mathbf u+\mathbf n)\otimes(\mathbf u+\mathbf n)^\flat - 2\mathbf n\otimes\mathbf u^\flat.\]
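
As a sanity check, we can build \boldsymbol\Lambda from two arbitrary 4-velocities using this last formula, and verify numerically the properties derived above. This sketch is my own, working in Minkowski coordinates so that \mathbf g^{\sharp\flat} is the identity matrix and the flat symbol becomes multiplication by \eta:

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1])      # Minkowski metric, signature -+++
dot = lambda a, b: a @ eta @ b      # inner product of two vectors

def four_velocity(v3):
    """Unit timelike vector with 3-velocity v3 (units c = 1)."""
    v3 = np.asarray(v3)
    return np.array([1.0, *v3]) / np.sqrt(1 - v3 @ v3)

u = four_velocity([0.3, 0.1, -0.2])
n = four_velocity([-0.2, 0.4, 0.1])

gamma = -dot(u, n)                  # Lorentz factor between the frames
n_rel = n / gamma - u               # relative velocity of n in u's frame
u_rel = u / gamma - n               # relative velocity of u in n's frame
assert abs(dot(n_rel, u)) < 1e-10   # orthogonal to u, as claimed
assert np.isclose(gamma, 1 / np.sqrt(1 - dot(n_rel, n_rel)))  # length beta

# Lambda = g + (u+n)(u+n)^flat/(gamma+1) - 2 n u^flat, as a matrix on components
s = u + n
Lam = np.eye(4) + np.outer(s, eta @ s) / (gamma + 1) - 2 * np.outer(n, eta @ u)

assert np.allclose(Lam @ u, n)              # boosts u to n
assert np.allclose(Lam @ n_rel, -u_rel)     # swaps the relative velocities
assert np.allclose(Lam.T @ eta @ Lam, eta)  # a genuine Lorentz transformation
print("curiously symmetric boost formula verified")
```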

Formulae are useful machines, allowing you to blithely turn the handle to crank out a result. This contrasts with my usual emphasis on conceptual understanding, and drawing a picture (at least mentally). However Lorentz boosts have many counter-intuitive or seemingly paradoxical effects. It is easier to make a mistake if you reason from first principles alone. Of course the algebra does originate from careful thinking about foundations, and having multiple approaches is a check of consistency.

Boosts are paramount for comparing physical quantities between frames. Some textbooks present the general Lorentz boost in Minkowski spacetime with Cartesian coordinates. Our abstract vector formulation allows direct application to local boosts in arbitrary spacetime, such as Kerr or FLRW, in any coordinate system. I don’t remember seeing the formulae here in the literature, though surely someone has derived them somewhere. The Jantzen+ paper was an inspiration, and the same authors define various further quantities (projections, in fact) in Bini+ 1995.


Spatial gradient examples

Difficulty:   ★★★☆☆   undergraduate

Last time we discussed the “spatial gradient” or “3-gradient”, and here we follow up with two examples. Recall from before that a scalar field \Phi has gradient d\Phi, and the part of this which is orthogonal to an observer 4-velocity \mathbf u is, as a vector:

    \[^{(3)}(d\Phi)^\sharp := (d\Phi)^\sharp + \langle d\Phi,\mathbf u\rangle \mathbf u.\]

Among all vectors in \mathbf u’s 3-space (that is, orthogonal to \mathbf u), this is the direction of greatest increase of \Phi per unit length of the vector.

As an example, suppose the 4-gradient vector (d\Phi)^\sharp is a null, future-pointing vector. It can be decomposed as E(\mathbf u+\boldsymbol\xi), where E := -\langle d\Phi,\mathbf u\rangle, and \boldsymbol\xi is a unit spatial vector orthogonal to \mathbf u. Physically, this gradient may be interpreted as a null wave or photon, which the observer determines to have energy (or related quantity, such as frequency) E, and to move in the spatial direction \boldsymbol\xi. The 3-gradient vector is E\boldsymbol\xi, hence the direction of relative velocity also has the steepest increase of \Phi, within the observer’s 3-space.

Suppose now (d\Phi)^\sharp is a unit, timelike, future-pointing vector, so that we may interpret it as the 4-velocity \mathbf v of a second observer. Then ^{(3)}(d\Phi)^\sharp = \mathbf v - \gamma\mathbf u, where \gamma = - \langle\mathbf u,\mathbf v\rangle is the Lorentz factor between the pair. But we also have the “relative velocity” decomposition \mathbf v = \gamma(\mathbf u + \mathbf V), where \mathbf V is the relative velocity of \mathbf v as determined in \mathbf u’s frame, as I discussed previously. Combining these, ^{(3)}(d\Phi)^\sharp = \gamma\mathbf V. Hence within the observer’s 3-space, \Phi again increases most sharply in the direction of the relative velocity.
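
Both examples are easy to verify numerically. Here is a quick sketch of my own in Minkowski coordinates, taking the observer at rest in those coordinates for simplicity:

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1])        # Minkowski metric, signature -+++
dot = lambda a, b: a @ eta @ b

u = np.array([1.0, 0, 0, 0])          # observer at rest in these coordinates

def spatial_gradient(dPhi_sharp, u):
    # ^{(3)}(dPhi)^sharp = (dPhi)^sharp + <dPhi, u> u
    return dPhi_sharp + dot(dPhi_sharp, u) * u

# Null example: (dPhi)^sharp = E(u + xi)
E, xi = 3.0, np.array([0.0, 0, 1, 0])
grad = E * (u + xi)
assert np.isclose(dot(grad, grad), 0)            # indeed null
assert np.allclose(spatial_gradient(grad, u), E * xi)

# Timelike example: (dPhi)^sharp = v, a second 4-velocity
beta = 0.5
gamma = 1 / np.sqrt(1 - beta**2)
v = gamma * np.array([1.0, beta, 0, 0])
V = v / gamma - u                                # relative velocity of v in u's frame
assert np.allclose(spatial_gradient(v, u), gamma * V)
print("3-gradient lies along the relative velocity in both examples")
```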

timelike 1-form
Spacetime diagram, from the observer \mathbf u’s perspective. The timelike 1-form d\Phi \equiv \mathbf v^\flat is suggested by dotted blue lines, given at intervals of 1/4 for more resolution. These are orthogonal to the vector \mathbf v, in the Lorentzian sense.

The figure shows the single tangent space — think of this as the linearisation of what is happening locally over the manifold itself. The hyperplanes are numbered by \Phi, where only the differences between them are relevant, as an overall constant was not specified. Observe \mathbf v crosses four of them, spanning an interval \Delta\Phi = \langle d\Phi,\mathbf v\rangle = -1, so \Phi is the negative of \mathbf v’s proper time; see a previous post for more background. In both our examples, the scalar decreases towards the future (or can vanish in the null case), even though the gradient vectors are future-pointing. That is, the gradient vectors actually point “down” the slope! This quirk is due to our −+++ metric signature, and would apply to spacelike gradients if +−−− were used instead. This really hurt my brain, until I drew the diagram. 🙁

To construct it, consider the action of d\Phi on the axes. The horizontal axis is the relative velocity direction, with unit vector \hat{\mathbf V} := \mathbf V/\beta. One can show \langle d\Phi,\hat{\mathbf V}\rangle = \beta\gamma. Also \langle d\Phi,\mathbf u\rangle = -\gamma, but I find it easier to think of: \langle d\Phi,-\mathbf u\rangle = \gamma. These give the number of hyperplanes crossed by the unit axis vectors, then you can literally “connect the dots”, since the 1-form is linear. In the figure \beta = 1/2, so \gamma \approx 1.15. (As for the 3-gradient, it vanishes in the \mathbf u direction, hence \mathbf u must cross no contours of ^{(3)}d\Phi. It would be drawn as vertical lines, with corresponding vector pointing to the right.)
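
These contraction values are easily confirmed with the figure’s \beta = 1/2 (a quick numerical sketch of my own):

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1])     # Minkowski metric, signature -+++
beta = 0.5
gamma = 1 / np.sqrt(1 - beta**2)   # approximately 1.1547, as quoted

u = np.array([1.0, 0, 0, 0])                # observer at rest
v = gamma * np.array([1.0, beta, 0, 0])     # second observer
dPhi = eta @ v                              # dPhi = v^flat, lower components

V_hat = np.array([0.0, 1, 0, 0])   # unit vector along the relative velocity

assert np.isclose(dPhi @ V_hat, beta * gamma)   # hyperplanes crossed by V_hat
assert np.isclose(dPhi @ (-u), gamma)           # hyperplanes crossed by -u
assert np.isclose(dPhi @ v, -1)                 # Phi decreases by v's proper time
print("figure contractions verified")
```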

Most of our discussion applies to arbitrary 1-forms, not just gradients which are termed exact 1-forms. I derived the work here independently, but the literature contains some similar material. It turns out Jantzen, Carini & Bini 1992 §2 explicitly define the “spatial gradient”, as they most appropriately call it. A few textbooks discuss scalar waves, for which the 3-gradient vector is the wave 3-vector, which is orthogonal to the wavefronts within a given frame, as discussed shortly.

Spatial gradient of a scalar

Difficulty:   ★★★☆☆   undergraduate

Suppose you have a scalar field \Phi, and at a given point in spacetime: a 4-velocity vector interpreted as an “observer”. In which direction does \Phi increase most steeply, when restricted to the observer’s local 3-dimensional space?

Last time I reviewed the gradient 1-form or covector d\Phi, and its associated gradient vector (d\Phi)^\sharp obtained by raising the index as usual. The gradient vector has been described as the direction of greatest increase in \Phi per unit length (Schutz 2009 §3.3). However this is only guaranteed when the metric is positive definite, meaning a Riemannian manifold, rather than a Lorentzian manifold as used to model spacetime.

The observer’s 4-velocity splits vectors and 1-forms into purely “time” parts parallel to \mathbf u, and purely “space” parts orthogonal to it. (Intuitively, it may help to think of a basis \{\mathbf e_\alpha\} adapted to the observer, meaning \mathbf e_0 := \mathbf u, and the \mathbf e_i vectors are orthogonal to \mathbf u, where i = 1,2,3. Then a purely spatial vector is spanned by the \mathbf e_i. Since vectors and covectors are linear, we need only specify their values on a basis set.)

Consider the tangent space at the specified point. Imagine working within the observer’s local 3-space, by which I mean the 3-dimensional subspace consisting of vectors orthogonal to \mathbf u. Label the gradient as restricted to this subspace by ^{(3)}d\Phi. On the subspace the metric has Riemannian signature, hence the corresponding vector ^{(3)}(d\Phi)^\sharp is the direction of steepest increase. We can mimic this mathematically by staying in 4 dimensions, but setting the “time” part to zero:

    \[^{(3)}d\Phi := d\Phi + \langle d\Phi,\mathbf u\rangle \mathbf u^\flat.\]

This is a 4-dimensional object, but I reuse the notation “^{(3)}” to imply it vanishes in the observer’s time direction. This “3-gradient” is the projection of d\Phi orthogonal to \mathbf u^\flat. The angle brackets signify contraction of the 1-form and vector, and the “flat” symbol denotes the 1-form obtained from \mathbf u by “lowering the index” using the metric. The vector 3-gradient is:

    \[^{(3)}(d\Phi)^\sharp = (d\Phi)^\sharp + \langle d\Phi,\mathbf u\rangle \mathbf u.\]

This follows from “raising the index” using the inverse metric g^{\mu\nu} as usual. Note that on the subspace, the inverse metric coincides with the inverse 3-metric which has components (g_{ij})^{-1}, for i,j=1,2,3. Equivalently, one can apply the spatial projector g^{\mu\nu}+u^\mu u^\nu to either d\Phi or ^{(3)}d\Phi, with the same result. This projector agrees with the inverse metric on the 3-space, and is zero on purely timelike covectors. Either way, the essential part of the process is to remove the “time” component of the gradient. I will give examples in the following post.
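
The agreement between the two routes is easy to verify numerically. Here is a sketch of my own in Minkowski coordinates, where the metric components \eta are conveniently their own inverse:

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1])     # Minkowski metric; also its own inverse
rng = np.random.default_rng(1)

v3 = np.array([0.2, -0.1, 0.4])
u = np.array([1.0, *v3]) / np.sqrt(1 - v3 @ v3)   # some 4-velocity

dPhi = rng.normal(size=4)          # an arbitrary 1-form (lower components)

# Route 1: raise the index, then subtract the "time" part
grad = eta @ dPhi                              # (dPhi)^sharp
grad3 = grad + (dPhi @ u) * u                  # ^{(3)}(dPhi)^sharp

# Route 2: apply the spatial projector g^{mu nu} + u^mu u^nu to dPhi directly
proj = eta + np.outer(u, u)
assert np.allclose(proj @ dPhi, grad3)

# Either way, the result is orthogonal to u, i.e. purely spatial
assert abs(grad3 @ eta @ u) < 1e-10
print("projector route agrees; 3-gradient is orthogonal to u")
```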

Gradient of a scalar

Difficulty:   ★★★☆☆   undergraduate

Suppose a scalar field \Phi is defined on some region of spacetime. Its gradient d\Phi\equiv \nabla\Phi expresses the change in \Phi (that is, its derivative) in each direction. In a coordinate system, it has components:

    \[\nabla_\mu\Phi = (d\Phi)_\mu = \Phi_{,\mu} := \frac{\partial\Phi}{\partial x^\mu}.\]

d\Phi is a 1-form or covector. [Recall a 1-form is just a (0,1)-tensor. Schutz 2009 also uses the term dual vector, though I find this can lead to clumsy wording, such as the hypothetical phrase: “the vector [which is] dual to a dual vector”. Traditionally the term covariant vector has been used, meaning its components transform “covariantly” with a change of basis. 1-forms are a rigorous version of differentials, superseding the older idea of infinitesimals but using similar notation (Schutz 1980 §2.19; Spivak vol. 1 §4).] Above, “d” is called the exterior derivative, and \nabla is the covariant derivative, but when acting on a scalar these coincide. Recall a 1-form accepts a vector and returns a number. In this case, the vector is the direction of differentiation, and the output is the derivative of \Phi in that direction (where the vector’s magnitude matters also).

The 1-form d\Phi may be visualised as a set of hypersurfaces or level sets \Phi = \textrm{const}, on the manifold (MTW §2.5–2.7, Box 4.4; Schutz 2009 §3.3). Ideally these could be spaced at intervals \Delta\Phi = 1. Given some vector \mathbf Y, the contraction:

    \[d\Phi(\mathbf Y) \equiv \langle d\Phi,\mathbf Y\rangle = Y^\mu\frac{\partial\Phi}{\partial x^\mu}\]

is visualised as the number of hypersurfaces the vector pierces, or “bongs of [a] bell” in MTW’s colourful terminology. Technically however, vectors and 1-forms exist in the (co-)tangent spaces, not extended along the manifold. At any given point, d\Phi is the linear approximation to \Phi, ignoring the constant term (MTW §2.6). Hence d\Phi is more accurately visualised as hyperplanes within the tangent space there. The diagram below shows both artistic choices. Note in two dimensions, hypersurfaces and hyperplanes are just curves and straight lines, respectively.

sphere with dtheta field
A 2-sphere with visualisations of the 1-form field d\theta, both over the entire manifold and within a single tangent space. The spacing is \Delta\theta = \pi/10.

The gradient vector is the dual to d\Phi, with components obtained by raising the index in the usual way: g^{\mu\nu}(d\Phi)_\nu. This may be elegantly written (d\Phi)^\sharp, where the “sharp” symbol is part of the “musical isomorphism” notation. While the gradient is usually first encountered as a vector, it is most naturally a 1-form, as this does not require a metric (MTW §9.4). As Schutz 2009 §3.3 explains:

…we in general cannot call a gradient a vector. We would like to identify the vector gradient as that vector pointing ‘up’ the slope, i.e. in such a way that it crosses the greatest number of contours per unit length. The key phrase is ‘per unit length’. If there is a metric, a measure of distance in the space, then a vector can be associated with a gradient. But the metric must intervene here in order to produce a vector. Geometrically, on its own, the gradient is a one-form.

But if one does not know how to compare the lengths of vectors that point in different directions, one cannot define a direction of steepest ascent…

The last line is from Schutz 1980 §2.19, where the discussion is similar. These textbooks give a superb introductory account of 1-forms, however the steepness comments are only valid for a Riemannian metric, with positive-definite signature. Consider Minkowski spacetime with coordinates (t,x,y,z). By linearity, we need only consider unit vectors. The 1-form dx has components (0,1,0,0), with (dx)^\sharp just \partial_x. These contract to give unity. If we restrict to vectors spanned by \partial_x, \partial_y and \partial_z, Schutz’ steepness comments apply. However \pm(\beta\gamma,\gamma,0,0) is also a unit spacelike vector, where \gamma = (1-\beta^2)^{-1/2}, but combines with dx to give \pm\gamma, hence crosses more intervals x = \textrm{const} than the gradient vector does. Similarly for dt, the contraction with (dt)^\sharp = -\partial_t returns -1, but with the unit timelike vector \pm(\gamma,\beta\gamma,0,0) yields \pm\gamma. Hence for a timelike 1-form, its gradient vector crosses the fewest contours (taking the absolute value) per unit length, compared to other timelike vectors only. For a null 1-form, its gradient vector lies along the hyperplanes, so crosses zero of them (MTW Figure 2.7)!
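
The counterexample takes only a few lines to confirm numerically (my own sketch, with \beta = 0.6):

```python
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
eta = np.diag([-1.0, 1, 1, 1])           # Minkowski metric, signature -+++

dx = np.array([0.0, 1, 0, 0])            # the 1-form dx (lower components)
grad_x = np.array([0.0, 1, 0, 0])        # (dx)^sharp, i.e. d_x

w = np.array([beta*gamma, gamma, 0, 0])  # unit spacelike vector tilted toward t
assert np.isclose(w @ eta @ w, 1)        # confirm it has unit length

# The tilted vector crosses MORE x = const hyperplanes than the gradient vector:
assert dx @ w > dx @ grad_x              # gamma > 1
print("tilted unit vector crosses", dx @ w, "contours vs", dx @ grad_x)
```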

Instead, we are left with saying the gradient vector is orthogonal to all vectors \mathbf Y on which the 1-form vanishes: \langle (d\Phi)^\sharp,\mathbf Y\rangle = 0 whenever \langle d\Phi,\mathbf Y\rangle = 0. The angle brackets mean contraction using the metric, with indices appropriately raised or lowered. Another property is the gradient vector’s squared-norm equals the 1-form’s squared-norm, which also matches the number of contours crossed:

    \[\langle (d\Phi)^\sharp,(d\Phi)^\sharp\rangle = \langle d\Phi,d\Phi\rangle = \langle d\Phi,(d\Phi)^\sharp\rangle.\]

The above statements are basically tautologies, but they help clarify what metric duality means. Incidentally, not all 1-forms arise as the “d” of a scalar, but only those termed exact (Wald §B1). Most of this post applies also to arbitrary 1-forms \boldsymbol\alpha, for which the hyperplanes are spanned by vectors satisfying \langle\boldsymbol\alpha,\mathbf Y\rangle = 0. For many creative illustrations see MTW, including their “honeycomb” and “egg crate” analogies for 2-forms and 3-forms, and their Figure 4.5 for the 2-form d\theta\wedge d\phi. Finally, I previously reviewed contractions like d\Phi(\mathbf u) = d\Phi/d\tau, which give the rate of change of the scalar by proper time along a worldline.
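
The squared-norm identities are likewise quick to verify; here is a sketch of my own for an arbitrary 1-form in Minkowski coordinates, where the metric components are their own inverse:

```python
import numpy as np

eta = np.diag([-1.0, 1, 1, 1])    # Minkowski metric; raises and lowers indices

alpha = np.array([0.7, -1.2, 0.4, 2.0])   # an arbitrary 1-form (lower components)
alpha_sharp = eta @ alpha                  # the corresponding vector

norm_form = alpha @ eta @ alpha                 # <alpha, alpha>, via inverse metric
norm_vec = alpha_sharp @ eta @ alpha_sharp      # <alpha^sharp, alpha^sharp>
cross = alpha @ alpha_sharp                     # <alpha, alpha^sharp>: a plain
                                                # contraction, needing no metric
assert np.isclose(norm_form, norm_vec)
assert np.isclose(norm_form, cross)
print("all three squared-norm expressions agree")
```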