Vector with a lowered component index?

Difficulty:   ★★★☆☆   undergraduate

Normally we recognise a vector by its raised (component) index, u^\mu, whereas a covector has a lowered index, \omega_\nu. Similarly, the dual to \mathbf u, taken using the metric, is the covector with components denoted, say, u_\nu; while the dual to \boldsymbol\omega is the vector with components \omega^\mu.

Recall what this notation means. It presupposes a vector basis, say \mathbf e_\mu, where here \mu labels entire vectors — the different vectors in the basis — rather than components. Hence we have the decomposition \mathbf u = u^\mu\mathbf e_\mu. Similarly, the component notation \omega_\nu implies a basis of covectors \mathbf e^\nu, so \boldsymbol\omega = \omega_\nu\mathbf e^\nu. These bases are taken to be dual to one another (in the sense of bases), meaning \mathbf e^\nu(\mathbf e_\mu) = \delta^\nu_\mu. We also have \mathbf u^\flat = u_\nu\mathbf e^\nu and \boldsymbol\omega^\sharp = \omega^\mu\mathbf e_\mu, where as usual u_\nu = g_{\mu\nu}u^\mu and \omega^\mu = g^{\mu\nu}\omega_\nu. (The “sharp” and “flat” symbols are just a fancy way to denote the dual; this is called the musical isomorphism.)
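As a numerical sketch of this bookkeeping, here is a small numpy example. The basis vectors and components are made-up illustrative choices (a non-orthonormal basis of \mathbb R^2 with the Euclidean dot product as metric), not anything from the discussion above:

```python
import numpy as np

# Made-up non-orthonormal basis of R^2; columns of E are e_1, e_2
# written in ambient (Euclidean) coordinates.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

g = E.T @ E                   # metric components g_{mu nu} = <e_mu, e_nu>
g_inv = np.linalg.inv(g)      # inverse metric g^{mu nu}

u_up = np.array([2.0, 3.0])   # vector components u^mu
u_down = g @ u_up             # lowered components u_nu = g_{mu nu} u^mu

# Raising the lowered index recovers the original components.
assert np.allclose(g_inv @ u_down, u_up)

# Dual-basis pairing: in ambient coordinates the covectors e^nu are the
# rows of E^{-1}, and they satisfy e^nu(e_mu) = delta^nu_mu.
D = np.linalg.inv(E)
assert np.allclose(D @ E, np.eye(2))
```

The assertions simply check the two defining relations quoted above: that lowering then raising an index is the identity, and that the two bases pair to the Kronecker delta.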

However, since \mathbf u = u^\mu\mathbf e_\mu, we may instead take the dual of both sides of this expression to get: \mathbf u^\flat = u^\mu(\mathbf e_\mu)^\flat. This gives a different decomposition than the one in the previous paragraph. Curiously, this expression contains a raised component index, even though it describes a covector. For each index value \mu, the component u^\mu is the same number as usual; but here we have paired the components with different basis elements. Similarly \boldsymbol\omega^\sharp = \omega_\nu(\mathbf e^\nu)^\sharp is a different decomposition of the vector \boldsymbol\omega^\sharp. It describes a vector, despite using a lowered component index. Using the metric, the two vector bases are related by: \langle(\mathbf e^\nu)^\sharp,\mathbf e_\mu\rangle = \delta^\nu_\mu.

A good portion of the content here is just reviewing notation. However, this article does not seem as accessible as I envisioned. The comparison with covectors is better suited to readers who already know the standard approach. (And I feel pressure to demonstrate I understand the standard approach before challenging it a little, lest some readers dismiss this exposition.) For newer students, however, it would seem better to start afresh by defining a vector basis \mathbf e^\nu to satisfy: \langle\mathbf e^\nu,\mathbf e_\mu\rangle = \delta^\nu_\mu. (While \mathbf e^\nu is identical notation to covectors, there need not be confusion, if no covectors are present anywhere.) This relation is intuitive, as the diagram below shows.

reciprocal bases
Figure: reciprocal bases in \mathbb R^3. In the diagram we index the vectors by colour, rather than the typical 1, 2, and 3. Rather than using a co-basis of covectors as in the standard modern approach, we take both bases to be made up of vectors. Note, for example, that the red vector on the right is orthogonal to the blue and green vectors on the left. (I made up the directions when plotting, so don’t expect everything to be quantitatively correct.) Arguably “inverse basis” would be the best name. Finally, I would prefer to start from this picture, as it is simple and intuitive, rather than start with covectors and musical isomorphisms. Maybe next time I will be brave enough.
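In \mathbb R^3 the reciprocal basis of the figure can be built explicitly with cross products, since \mathbf e^1 must be orthogonal to \mathbf e_2 and \mathbf e_3, and so on, with the scale fixed by the pairing condition. A sketch with a made-up basis (the figure’s actual directions were also made up):

```python
import numpy as np

# A made-up basis of R^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 1.0])

vol = np.dot(e1, np.cross(e2, e3))   # scalar triple product

# Reciprocal ("inverse") basis: each e^nu is orthogonal to the other two
# original vectors, scaled so that <e^nu, e_mu> = delta^nu_mu.
f1 = np.cross(e2, e3) / vol
f2 = np.cross(e3, e1) / vol
f3 = np.cross(e1, e2) / vol

E = np.column_stack([e1, e2, e3])    # original basis as columns
F = np.vstack([f1, f2, f3])          # reciprocal basis as rows
assert np.allclose(F @ E, np.eye(3))
```

The same construction, with a factor of 2\pi, is the reciprocal lattice of crystallography; in general dimension the reciprocal vectors are simply the rows of the inverse of the basis matrix.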

I learned this approach (of defining a second, “reciprocal” basis of vectors) from geometric algebra references. However, it is really just a revival of the traditional view. I used to think that old physics textbooks which taught this way were unsophisticated and unaware of the modern machinery (covectors); I no longer think so. The alternate approach does require a metric, so it is less general. However, all topics I have worked on personally, in relativity, do have a metric present. Even in contexts with no metric, this approach could still serve as concrete intuition to motivate the introduction of covectors, which are more abstract. The alternate approach also challenges the usual distinction offered between contravariant and covariant transformation of (co)vectors and higher-rank tensors: it shows this is not about vectors versus covectors at all, but more generally about the basis chosen. I write about these topics at significant length in my MPhil thesis (2024, forthcoming, §2.3), and intend to write more later.
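The claim that contra- versus covariance tracks the basis, not the object, can be checked directly: under a change of basis, one fixed vector has components that transform with the inverse matrix relative to the original basis, and with the transpose matrix relative to the reciprocal basis. A numerical sketch, with made-up bases and change-of-basis matrix:

```python
import numpy as np

# Made-up basis of R^2 (columns of E) and an invertible change of basis.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
E_new = E @ A                       # new basis e'_mu = A^nu_mu e_nu

u = np.array([3.0, 1.0])            # one fixed vector, ambient coordinates

# Components with respect to the original basis transform contravariantly:
c = np.linalg.solve(E, u)           # u^mu
c_new = np.linalg.solve(E_new, u)   # u'^mu
assert np.allclose(c_new, np.linalg.inv(A) @ c)

# Components with respect to the reciprocal basis transform covariantly.
# (These are exactly the lowered components u_mu = <e_mu, u>.)
d = E.T @ u
d_new = E_new.T @ u
assert np.allclose(d_new, A.T @ d)
```

Both sets of numbers describe the same unchanged vector; only the choice of basis (original or reciprocal) determines which transformation law applies.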