# Geometry

## Notation

• denotes the space of all functions which have continuous derivatives up to the k-th order
• smooth means , i.e. infinitely differentiable; more specifically, means all infinitely differentiable functions with domain
• Maps are assumed to be smooth unless stated otherwise, i.e. partial derivatives of every order exist and are continuous on
• Euclidean space as the set together with its natural vector space operations and the standard inner product
• sub-scripts (e.g. basis vectors for and coeffs. for ) are co-variant
• super-scripts (e.g. basis vectors of for and coeffs for ) are contra-variant
• where are manifolds, uses the to refer to a linear map from to

## Stuff

### Curves

#### Examples

##### Helix

The helix in is defined by

### Constructing a sphere in Euclidean space

The surface of a sphere in Euclidean space is defined as:

As it turns out, is not a vector space. How do you define vectors on this spherical surface?

#### Defining vectors on spherical surface

At each point on , construct a plane which is tangent to the sphere, called the tangent plane.

This plane is the two dimensional vector space of lines tangent to the sphere at the given point, called tangent vectors.

Each point on the sphere defines a different tangent plane. This leads to the notion of a vector field which is: a rule for smoothly assigning a tangent vector to each point on .

The above description of a vector space on is valid everywhere, and so we refer to it as a global description.

Usually, we don't have this luxury. Then we parametrise a number of "patches" of the surface using coordinates, in such a way that the patches cover the whole surface. We refer to this as a local description.

#### Motivation

The tangent-space at some point on the 2-sphere is a function of the point .

The issue with the 2-sphere is that we cannot obtain a (smooth) basis for the surface. We therefore want to think about the operations which do not depend on having a basis. gives a way of doing this, since the derivatives are linearly independent.

### Ricci calculus and Einstein summation

This is the reason why we're using superscripts to index our coordinates, .

Suppose that I have a vector space and dual space . A choice of basis for induces a dual basis on the dual vector space , determined by the rule

where is the Kronecker delta.

Any element of or of can be written as lin. comb. of these basis vectors:

and we have .

If we change the basis of , which induces a change of basis for , then the coefficients of a vector in transform in the same way as the basis vectors of , and vice versa: the coefficients of a vector transform in the same way as the basis vectors of .

Suppose a new basis for is given by , with

where the are the coefficients of the invertible change-of-basis matrix, and are the coefficients of its inverse (i.e. ). If we denote the new induced dual basis for by , we have

Moreover, for any elements of of and of which we can write as

we have

See how the order of the indices is different?

The entities and are co-variant .

The entities and are contra-variant .

One-forms are sometimes referred to as co-vectors , because their coefficients transform in a co-variant way.
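These transformation rules can be checked numerically. Below is a minimal sketch (the 2×2 change-of-basis matrix is a made-up example): vector coefficients transform with the inverse matrix (contravariantly), covector coefficients with the matrix itself (covariantly), and the pairing between them stays invariant.

```python
import numpy as np

# Hypothetical invertible change-of-basis matrix A: new basis e'_j = A e_j.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

E = np.eye(2)       # columns: old basis vectors e_i (standard basis)
E_new = E @ A       # columns: new basis vectors e'_j

v = np.array([3.0, -1.0])   # coefficients of a vector in the old basis
v_new = A_inv @ v           # contravariant: coefficients transform with A^{-1}

# The geometric vector itself is unchanged by the basis change:
assert np.allclose(E @ v, E_new @ v_new)

w = np.array([1.0, 4.0])    # coefficients of a covector (1-form) in the old dual basis
w_new = w @ A               # covariant: coefficients transform with A

# The pairing <w, v> is basis independent:
assert np.allclose(w @ v, w_new @ v_new)
```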

The notation then goes:

• sub-scripts (e.g. basis vectors for and coeffs. for ) are co-variant
• super-scripts (e.g. basis vectors of for and coeffs for ) are contra-variant

Very important: "super-script indices in the denominator" are understood to be lower indices, i.e. a super-script appearing in a denominator counts as co-variant.

Now, consider this notation for our definition of tangent space and dual space:

If you choose coordinates on an open set containing a point , then you get a basis for the tangent space at

which have super-scripts in the denominator, indicating a co-variant entity (see note).

Similarly, we get a basis for the cotangent space at

which have super-script indices, indicating a contra-variant entity.

Why did we decide the first case is the co-variant one (co- and contra- are of course relative)?

Because in differential geometry the co-variant entities transform like the coordinates do, and we choose the coordinates to be our "relative thingy".

### Differential forms

Differential forms are an approach to multivariable calculus which is independent of coordinates.

### Surfaces

#### Notation

• is the domain in the plane whose Cartesian coordinates will be denoted unless otherwise stated
• , unless otherwise stated
• denotes the image of the smooth, injective map

#### Regular surfaces

A local surface in is a smooth, injective map with a continuous inverse of . Sometimes we denote the image by .

The assumption that is injective means that points in the image are uniquely labelled by points in .

Given a local surface we define

For every point , these are vectors in , which we will identify with itself. We say that a local surface is regular at if and are linearly independent. A local surface is regular if it is regular at for all .

This gives rise to the differential form :

Here is a quick example of evaluating the differential form induced by the definition of a regular local surface:
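As a stand-in, here is a minimal numerical sketch, assuming the standard unit-sphere patch sigma(u, v) = (cos u cos v, cos u sin v, sin u): evaluate the partial derivatives sigma_u and sigma_v (the images of the coordinate directions under the differential) and check that they are linearly independent, i.e. that the patch is regular at a sample point.

```python
import numpy as np

def sigma(u, v):
    # standard unit-sphere patch (an assumed example, not the notes' own)
    return np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), np.sin(u)])

def partials(u, v, h=1e-6):
    # central finite differences for sigma_u and sigma_v
    su = (sigma(u + h, v) - sigma(u - h, v)) / (2 * h)
    sv = (sigma(u, v + h) - sigma(u, v - h)) / (2 * h)
    return su, sv

u0, v0 = 0.3, 1.0
su, sv = partials(u0, v0)
normal = np.cross(su, sv)   # nonzero iff su, sv are linearly independent
assert np.linalg.norm(normal) > 1e-6
```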

is a regular surface if for each there exists a regular local surface such that and for some open set .

In other words, if for each point on the surface we can construct a regular local surface, then the entire surface is said to be regular.

A map that defines a local surface which is part of some surface is sometimes called a coordinate chart on .

Thus, if the surface is a regular surface (not just locally regular) we can "define" from a set of all these coordinate charts .

At a regular point on a local surface, the plane spanned by and is the tangent plane to the surface at , which we denote by . At a regular point, the unit normal to the surface is

Clearly, is orthogonal to the tangent plane .

Given a local surface the map is a smooth function whose image lies in a unit sphere . The map is called the local Gauss map.

#### Standard Surfaces

Let be a smooth function. The graph of is the local surface defined by

An implicitly defined surface is zero set of a smooth function , i.e.

Note that is a mapping from , and we're saying that the preimage of zero under this function defines a surface. It's also important to note the smoothness requirement, as this implies that is differentiable.

An implicitly defined surface , such that everywhere on , is a regular surface.

This is due to the fact that if there is a point such that , then that implies that and are linearly dependent, hence not a regular surface.

A surface of revolution with profile curve is a local surface of the form

A surface of revolution can be constructed by rotating a curve around the axis in . It thus has cylindrical symmetry.

A ruled surface is a surface of the form

Notice that curves of constant are straight lines in through in the direction .

#### Examples of surfaces

Quadric surfaces are the surfaces defined by any equation that can be put into the general form:

The general equation for a cone

The general equation for a hyperboloid of one sheet

The general equation for a hyperboloid of two sheets

The general equation for an ellipsoid

with being a sphere.

General equation for an elliptic paraboloid

General equation for a hyperbolic paraboloid
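For reference, the standard forms of these quadrics, assuming the usual conventions for the constants a, b, c:

```latex
\begin{aligned}
&\text{cone:} && \frac{x^2}{a^2} + \frac{y^2}{b^2} = \frac{z^2}{c^2} \\
&\text{hyperboloid of one sheet:} && \frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1 \\
&\text{hyperboloid of two sheets:} && \frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = -1 \\
&\text{ellipsoid (sphere if } a = b = c\text{):} && \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1 \\
&\text{elliptic paraboloid:} && \frac{x^2}{a^2} + \frac{y^2}{b^2} = \frac{z}{c} \\
&\text{hyperbolic paraboloid:} && \frac{y^2}{b^2} - \frac{x^2}{a^2} = \frac{z}{c}
\end{aligned}
```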

### Fundamental forms

#### Symmetric tensors

##### Notation
• are coordinates
##### Definitions

A symmetric bilinear form on is a bilinear map such that .

Given two 1-forms at , define a symmetric bilinear form on by

where .

Note that and we denote .

A symmetric tensor on is a map which assigns to each a symmetric bilinear form on ; it can be written as

where are smooth functions on .

Remember, we're using Einstein notation.

A (Riemannian) metric on is a symmetric tensor which is positive definite at each point; , with equality if and only if .

Equivalently, it is a choice for each of an inner product on

#### First fundamental form

##### Notation
• and are our coordinates
##### Stuff

Consider a regular local surface defined by . The linear map

is a bijection.

This bijectivity can be used to give a coordinate free definition of regularity of a local surface.

Given a regular local surface , the first fundamental form is defined by

where we have introduced the notation .

The first fundamental form of a local surface is

where are functions on given by

The first fundamental form is a metric on .
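A numerical sketch of the coefficient formulas, assuming the common naming E = sigma_u . sigma_u, F = sigma_u . sigma_v, G = sigma_v . sigma_v, and using the standard unit-sphere patch as an assumed example (for which E = 1, F = 0, G = cos^2 u):

```python
import numpy as np

def sigma(u, v):
    # standard unit-sphere patch (assumed example)
    return np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), np.sin(u)])

def first_form(u, v, h=1e-6):
    # E, F, G from central finite differences of sigma
    su = (sigma(u + h, v) - sigma(u - h, v)) / (2 * h)
    sv = (sigma(u, v + h) - sigma(u, v - h)) / (2 * h)
    return su @ su, su @ sv, sv @ sv

u0, v0 = 0.4, 2.0
E, F, G = first_form(u0, v0)
assert abs(E - 1.0) < 1e-4
assert abs(F) < 1e-4
assert abs(G - np.cos(u0) ** 2) < 1e-4
```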

##### Problems
• Prove bijectivity of linear map from regular local surface

Let be a vector field on ,

Also, we have the valued 1-form on :

Then,

Evaluating this at each , it is clear that this is onto . Further, since the surface is regular, implies , so the map is one-to-one.

#### Second Fundamental form

##### Notation
• and are our coordinates
• is the normal of the surface (if my understanding is correct)
##### Stuff

Given a local surface , the second fundamental form is defined by

with the dot product interpreted as usual.

The valued 1-form is a linear map which may have a non-trivial kernel. It is convenient to use the isomorphism to rewrite the map as a symmetric form.

Since is unit normalised it follows that (by differentiating by and respectively).

Hence, and must belong to the tangent plane . In other words, .

The second fundamental form is given by

where are continuous functions on given by

Which can also be written as

##### Q & A
• DONE What do we mean by a 1-form having a "non-trivial kernel"?

In Group-theory we have the following definition of a kernel :

where is a homomorphism.

When we say the mapping has a non-trivial kernel, we mean that there are more elements in than just the identity element being mapped to the identity element in , i.e.

Hence, in the case of some 1-form , we mean

i.e. non-trivial kernel refers to the 1-form mapping more than just the zero-vector to the zero-vector in the target vector-space.

• DONE What do we mean when we write dx from TpD to Tx(p) S?

What do we mean when we write the following:

where:

• is some surface in
• is the domain of our "coordinates"
• is a smooth map

We're saying that the differential 1-form maps from the vector-fields defined on at to the vector-fields defined on the point on the surface .

### Curvature

#### Bilinear algebra

The eigenvalues of wrt. are the roots of the polynomial

where are represented by symmetric matrices.

If is positive definite (i.e. defines an inner product) there exists a basis of such that:

1. is orthonormal wrt.
2. each is an eigenvector of wrt. with a real eigenvalue

#### Gauss and mean curvatures

We have 2 symmetric bilinear forms on , and look for eigenvalues & eigenvectors of and .

The eigenvalues of wrt. are the principal curvatures of the surface. The corresponding eigenvectors are the principal directions of the surface. Hence the principal curvatures are the roots of the polynomial .

The principal curvatures may vary with position and so are (smooth) functions on .

The product of the principal curvatures is the Gauss curvature :

The average of the principal curvatures is the Mean curvature :

If we have that all directions are principal.

where all variables are as given by the first and second fundamental forms.

We get the elegant basis independent expressions

Thus, the Gauss curvature is positive if and only if is positive definite.
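As a numerical sanity check of the basis-independent expression K = (LN - M^2)/(EG - F^2), assuming the common naming L = sigma_uu . n, M = sigma_uv . n, N = sigma_vv . n: for a sphere of radius a (an assumed example) one expects K = 1/a^2 everywhere.

```python
import numpy as np

a = 2.0  # sphere radius (assumed example)

def sigma(u, v):
    return a * np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), np.sin(u)])

def gauss_curvature(u, v, h=1e-4):
    # first and second partial derivatives via central finite differences
    su = (sigma(u + h, v) - sigma(u - h, v)) / (2 * h)
    sv = (sigma(u, v + h) - sigma(u, v - h)) / (2 * h)
    suu = (sigma(u + h, v) - 2 * sigma(u, v) + sigma(u - h, v)) / h**2
    svv = (sigma(u, v + h) - 2 * sigma(u, v) + sigma(u, v - h)) / h**2
    suv = (sigma(u + h, v + h) - sigma(u + h, v - h)
           - sigma(u - h, v + h) + sigma(u - h, v - h)) / (4 * h**2)
    n = np.cross(su, sv)
    n /= np.linalg.norm(n)          # unit normal (K is unchanged if n flips sign)
    E, F, G = su @ su, su @ sv, sv @ sv
    L, M, N = suu @ n, suv @ n, svv @ n
    return (L * N - M**2) / (E * G - F**2)

K = gauss_curvature(0.5, 1.2)
assert abs(K - 1 / a**2) < 1e-3
```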

### Meaning of curvature

#### Curves on surfaces

The composition

describes a curve in lying on the surface.

and

The arclength of the curve , lying on the surface is

For a curve lying on a surface,

where is the second fundamental form of the surface.

#### Invariance under Euclidean motions

Let and be two surfaces related by a Euclidean motion, so

where is an orthogonal matrix with and .

Then,

and hence, in particular,

The first fundamental form and second fundamental form determine the surface (up to Euclidean motions).

#### Taylor series

Let be a point on a regular local surface. By Euclidean motion, choose to be at the origin, and the unit normal at that point to be along the positive axis so is the plane.

Near we can parametrise the surface as a graph:

where at the origin

Using the above parametrization, and observing that and span , which is the plane orthogonal to , we see that and .

Further, supposing the axes correspond to the principal directions, then the Taylor series of the surface near the origin is

where are the principal curvatures at .

#### Umbilical points

Let be a regular local surface.

We then say a point is umbilical if and only if

or equivalently,

i.e. all directions are principal directions.

An umbilical point is part of a sphere.

We can see the "being a part of a sphere" from the fact that a point on a sphere can be written as

where corresponds to pointing inwards, while is pointing outwards. In this case, we have

hence,

Conversely, if then

Which tells us that

Thus,

where is just some constant. Then,

A regular local surface has if and only if it is (a piece of) a plane.

The statement that or is equivalent to saying that is part of a plane, since the tangents of the map are perpendicular to the normal.

Every point is umbilical if and only if the surface is a plane or a sphere.

If for some smooth function , then

(here we have as a function, thus the exterior derivative of gives us a 1-form).

And since

and hence by regularity of the surface . Thus is a constant function on which implies .

This is because we've already stated that if , then is part of a plane (thm:second-fundamental-form-zero-everywhere-on-surface), and if with constant, then has to be part of a sphere (thm:all-points-umbilical-surface-is-sphere-or-plane).

### Moving frames in Euclidean space

#### Notation

• is a smooth map
• denotes the coordinates on
• moving frame denotes a collection of maps for such that these form an oriented orthonormal basis of
• oriented means that
• , which, because the frame is oriented, satisfies , i.e. it's a rotation matrix

#### Stuff

A moving frame for on is a collection of maps for such that for all the form an oriented orthonormal basis of .

Oriented means that .

This definition uses the notion of orientedness in three dimensions. For general there is a different definition of an oriented frame.

If , given by

we write for its entry by entry exterior derivative:

Thus, takes vector fields in and spits out vectors in .

#### Connection forms and the structure equations

Since is an orthonormal basis for , any vector can be expanded as in the moving frame, and the same applies to a vector-valued 1-form, e.g. .

Therefore we define 1-forms by

The 1-forms are called the connection 1-forms and by definition satisfy

Each is in this case a 1-form.

The connection 1-forms are related by the antisymmetry property:

for all . In particular for all .

We can now write the structure equations for a surface using matrix-notation:

We can also write

We will also write

The first structure equations are

where the wedge product between the vectors is taken as

The second structure equations are

The definition of the connection 1-forms and the second structure equations only requires the existence of a moving frame and not a map .

The structure equations exist in the more general context of Riemannian geometry, where is the Riemann curvature, which in general is non-vanishing. In our case it's zero because our moving frame is in .

### Structure equations for surfaces

#### Notation

• are 1-forms
• are "connection" 1-forms

#### Adapted frames and the structure equations

A moving frame for on is said to be adapted to the surface if .

I.e. it's adapted to the surface if we orient the basis such that corresponds to the normal of the surface.

The first and second structure equations for a local surface wrt. an adapted frame give the structure equations for a surface:

First structure equations:

Symmetry equation:

Gauss equation:

Codazzi equations:

Notice how has just vanished if you compare to in a moving frame, which comes from the fact that in an adapted moving frame we have .

The Gauss equation above is equivalent to

This shows that the Gauss curvature can be computed simply from a knowledge of and without reference to the local description of the surface .

Let be a local surface with the first fundamental form and be the 1-forms on such that

Then there exists a unique adapted frame such that and .

We say a 1-form is degenerate if wrt. any basis , the matrix representing the 1-form has .

Two local surfaces and are isometric if and only if .

Isometric surfaces have the same Gauss curvature. More specifically,

If are two isometric surfaces, then

The Gauss curvature is an intrinsic invariant of a surface!

The first fundamental form of a surface actually then turns out to determine the following properties:

• distance
• angles
• area

### Geodesics

#### Notation

• which defines the map , and has unit speed joining two points

#### Stuff

Consider a 1-parameter family of nearby curves

where and so that all curves in the family join to . We refer to as a connecting vector.

It's very important that , because if has a component along we could remove the shared component by reparametrising .

We say a unit speed curve as above has stationary length if the length of the nearby curves satisfies

for all connecting vector .

A unit speed curve in Euclidean space has stationary length if and only if it is the straight line joining the two points.

Let be a unit speed curve in Euclidean space. We then have to prove the following:

1. If is a straight line, then it has stationary length
2. If has stationary length, then it's a straight line

Remember, stationary length is equivalent to

First, suppose that is in fact a straight line, then

Now, taking the square root and the derivative wrt. we have

Remembering that is a unit-speed curve, i.e. , we thus have

Now, substituting this into the expression for , and observing that interchanging the integral wrt. and derivative wrt. is alright to do, we get

since by definition of connecting vectors. The final integral is zero if and only if , which is equivalent to saying that is linear in and thus is a straight line, concluding the first part of our proof.

Now, for the second part, we suppose that has stationary length

We again perform exactly the same computation and end up with the same integral as we got previously (since we did not use any of our assumptions until the very end), i.e.

And since is assumed to have stationary length,

which is true if and only if , hence by the same argument as above, is the straight line between the two points and .

Notice the "calculus of variations" spirit of the proof! Marvelous, innit?!
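In generic notation (the symbols here are assumptions: x(t, s) for the family of curves, gamma for the base curve, V for the connecting vector), the first-variation computation in the proof can be summarised as:

```latex
L(s) = \int_{t_0}^{t_1} \left\lvert \frac{\partial x}{\partial t}(t,s) \right\rvert dt ,
\qquad
\left.\frac{dL}{ds}\right\rvert_{s=0}
  = \int_{t_0}^{t_1} \dot{\gamma}\cdot\dot{V}\, dt
  = -\int_{t_0}^{t_1} \ddot{\gamma}\cdot V \, dt ,
```

using the unit-speed condition and the vanishing of V at the endpoints for the integration by parts; stationarity for all connecting vectors V then forces the acceleration of gamma to vanish, i.e. a straight line.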

##### Geodesics on surfaces

A unit-speed curve lying in a surface is a geodesic if its acceleration is everywhere normal to the surface, that is,

where is the unit normal to the surface and is some function along the curve.

This means that for a geodesic the acceleration in the direction tangent to the surface vanishes, thus generalising the concept of a straight line in a plane.
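A numerical sketch (assumed example): a unit-speed great circle on the unit sphere is a geodesic, since its acceleration is everywhere parallel to the sphere's normal, with no tangential component.

```python
import numpy as np

def gamma(t):
    # unit-speed great circle on the unit sphere (equator)
    return np.array([np.cos(t), np.sin(t), 0.0])

def accel(t, h=1e-4):
    # second derivative via a central finite difference
    return (gamma(t + h) - 2 * gamma(t) + gamma(t - h)) / h**2

for t in np.linspace(0.0, 6.0, 13):
    a = accel(t)
    n = gamma(t)                      # outward unit normal to the sphere at gamma(t)
    tangential = a - (a @ n) * n      # acceleration component in the tangent plane
    assert np.linalg.norm(tangential) < 1e-5
```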

You can see this from looking at the proof of stationary length in Euclidean space being equivalent to the curve being the straight line: in the final integral we have a dot-product between and ,

But, all defined in the definition of a connecting vector / nearby curves also lie on the surface, hence cannot have a component in the direction perpendicular to the surface. Neither can , since this is also on the surface, which implies also cannot have a component normal to the surface. Thus,

Finally implying

A curve lying in a surface has stationary length (among nearby curves on the surface joining the same endpoints) if and only if it's a geodesic.

A curve lying in a surface is a geodesic if and only if, in an adapted moving frame it obeys the geodesic equations

and the energy equation

Given a point on a surface and a unit tangent vector to the surface at , there exists a unique geodesic on the surface for (with sufficiently small), such that and .

The geodesic equations only depend on the first fundamental form of a surface. Hence they are part of the intrinsic geometry of a surface, and isometric surfaces have the same geodesics!

Two-dimensional hyperbolic space is the upper half plane

equipped with the first fundamental form given by

### Integration over surfaces

#### Notation

• defines a local map , where we drop the bold-face notation because we no longer use the Euclidean structure
• denotes the pull-back of by the map

#### Integration of 2-forms over surfaces

Let define a local surface

(Note: we do not write the map defining the surface in bold here, to emphasise that we are not going to use the Euclidean structure.)

Let

be a 2-form on . We define the pull-back of by the map to be the 2-form on given by

IMPORTANT: where here is the exterior derivative of , i.e.

Let be a local surface and let be a 2-form on . We define the integral of over the local surface to be

So, we're defining the integral of the 2-form over the map as the integral of the pull-back of over the domain .

Why is this useful? It's useful because we can integrate some 2-form in the "target" manifold over the "input" domain .

Let be a k-dimensional oriented closed and bounded submanifold in with boundary given the induced orientation and . Then

Stokes' theorem and the divergence theorem of vector calculus are the and special cases respectively.

#### Integration of functions over surfaces

For a local surface, we have

Hence, we obtain an alternate expression for the area

Thus the area depends only on , hence it's an intrinsic property of the surface.

For a local surface with an adapted frame,

Let be a local surface and be a function.

Then the integral of over the surface is given by

In particular,

gives the area of the local surface. The 2-form is called the area form.

## Definitions

### Words

• space-curves — curves in
• plane curves — curves in
• canonically — "independent of the choice"
• rigid motion / Euclidean motion — a motion which does not change the "structure", i.e. a translation or rotation

### Regular curves

A curve is regular if its velocity (or tangent) vector .

The tangent line to a regular curve at is the line .

A unit-speed curve is biregular if , where denotes the curvature.

(Note that a unit-speed curve is necessarily regular.)

The principal normal along a unit-speed biregular curve is

The binormal vector field along is

The norm of the velocity

is the speed of the curve at .

A parametrisation of a regular curve s.t. is called a unit-speed parametrisation.

### Level set

The level set of a real-valued function of variables is the set of the form

### Arc-length

The arc-length of a regular curve from to is

For a unit-speed parametrisation we have , hence it is also called an arc-length parametrisation.

As we can see in the notes, there's a theorem which says that for any regular curve, there exists a reparametrisation of which is unit-speed.

Most reparametrisations are difficult to compute, and thus it's mostly used as a theoretical tool.

#### Example: Helix

The helix in is defined by

which is an arc-length parametrisation
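A quick numerical check, assuming the standard unit-speed helix gamma(t) = (cos(t/sqrt 2), sin(t/sqrt 2), t/sqrt 2) (the constants here are an assumption, chosen so that the speed is identically 1):

```python
import numpy as np

def gamma(t):
    s = t / np.sqrt(2)
    return np.array([np.cos(s), np.sin(s), s])

def speed(t, h=1e-6):
    # |gamma'(t)| via a central finite difference
    return np.linalg.norm((gamma(t + h) - gamma(t - h)) / (2 * h))

# unit speed at every sample point => arc-length parametrisation
for t in np.linspace(0.0, 10.0, 7):
    assert abs(speed(t) - 1.0) < 1e-6
```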

### Curvature

The unit tangent vector field along a regular curve is

Thus, for a unit-speed curve it is simply .

For a unit-speed curve the curvature is defined by

### Torsion

The torsion of a biregular unit-speed curve is defined by

or equivalently .

The osculating plane at a point on a curve is the plane spanned by and . The torsion measures how fast the curve is twisting out of this plane.

### Isometry

An isometry of is a map given by

where is an orthogonal matrix and is a fixed vector.

If , so that is a rotation matrix, then the isometry is said to be a Euclidean motion or a rigid motion.

If the isometry is orientation-reversing.

By definition, an isometry preserves the Euclidean distance between two points .

### Tangent spaces

we define the tangent space to at as the set of all derivative operators at , called tangent vectors at

and thus we have

in the notation we love so much.

Vector fields are directional derivatives.

A vector field is defined by the tangent at each point for all in the domain of the vector field.

It's important to remember that these are curves which are parametrised arbitrarily, and thus describe any potential curve not just the you are "used" to seeing.

#### In words

• The tangent space of a manifold facilitates the generalization of vectors from affine spaces to general manifolds

#### Tangent vector

There are different ways to view a tangent vector:

• embedded, i.e. with the manifold where we want to define the tangent vector embedded in a surrounding space, so that we can refer to the tangent vector as "sticking out" of the manifold
• intrinsically, i.e. without having to refer to some surrounding space
##### Physicist's view

Basically considers the tangent vector as a directional derivative

A tangent vector to at is determined by an n-tuple

for each choice of coordinates at , such that, is the set of coordinates, we have

In your "normal" vector spaces we're used to thinking about direction and derivatives as two different concepts (which they are) which can exist independently of each other.

Now, in differential geometry, we only consider these concepts together! That is, the direction is defined by the basis which the tangent vectors ("derivative" operators) define.

##### "Geometric" view

This is a more "intuitive" way of looking at tangent vectors, which directly generalises the concept used in Euclidean space.

A (regular) curve in is a (smooth) map , given by

where each is a smooth function, such that its velocity

is non-vanishing, , (as an element of ) for all . We say that a curve passes through if, say (without loss of generality one can always take the parameter value at to be 0).

means a map from the open range to , NOT a map which "takes two arguments", duh…

Let be a curve that passes through . There exists a unique such that for any smooth function

There is a one-to-one correspondence between velocities of curves that pass through and tangent vectors in . By (standard) abuse of notation sometimes we denote by the corresponding velocity .

##### Tangent vector of smooth curves

This approach is quite similar to the geometric view of tangent vectors described above, but I prefer this one.

As of right now, you should have a look at the section about Tangent space and manifolds, as I'm not entirely sure whether or not this can be confusing together with the different notation and all. Nonetheless, the other section is more interesting as it's talking about tangent vectors and general manifolds rather than the more "specific" cases we've been looking at above.

Let be a smooth curve and (wlog).

The tangent vector to curve at is a linear map

where

where is a chart map.

Often denote by .

### Tangent bundle

The tangent bundle of a differential manifold is a manifold , which assembles all the tangent vectors in . As a set it's given by the disjoint union of the tangent spaces of , i.e.

Thus, an element in can be thought of as a pair , where is a point in the manifold and is a tangent vector to at the point .

Let be a smooth manifold. Then the tangent bundle is the set

and further we define the bundle projection:

where is the point for which . This gives us a set bundle; now we just have to show that the fibres are indeed isomorphic, and thus we've obtained a fibre bundle.

Idea: construct a smooth atlas on from a given smooth atlas on .

• Take some chart
• Construct

where we define as

where

• The first coordinates, we observe, project the tangent at some point onto the point itself , i.e. (we don't write in the above because we can do this for any point in the manifold)
• The second coordinates account for the direction and magnitude of the tangent , i.e. we choose the coefficients of in the tangent space at that point!

• Finally, we need to ensure that this map is indeed smooth: We start by considering the total space, which is the space of all sections , i.e.

equipped with the two operations:

and multiplication:

### Dual space

Let be a vector space over . Then the dual space of , denoted , is given by

#### Properties

##### Dual Basis

Honestly, "automatically" is a bit weird. What is actually happening is as follows:

Suppose that we have a basis in defined by the set of vectors . Then we can construct a basis in the dual space , called the dual basis. This dual basis is defined by the set of linear functions / 1-forms on , defined by the relation

for any choice of coefficients in the field we're working in (which is usually ).

In particular, letting one of these coefficients equal 1 and the rest equal zero, we get the following set of equations

which defines a basis.

If is a basis for , we automatically get a dual basis for , defined by

If (is finite), then

If

##### Map between duals

Given a linear map between vector spaces, we canonically get a dual map :

### 1-forms

A 1-form at is a linear map . This means, for all and ,

1-forms are equivalent to linear functionals

The set of 1-forms at , denoted by , is called the dual vector space of

We define 1-forms at each by their action on the basis :

Or equivalently, are defined by their action on an arbitrary tangent vector :

#### Differential 1-form

A differential 1-form on is a smooth map which assigns to each a 1-form in ; it can be written as:

where are smooth functions.

#### Line integrals

Let be a curve (the end points are included to ensure the integrals exist) and a 1-form on . The integral of over the curve is

where is the tangent vector field to the curve.

Working in coordinates, the result of applying the 1-form on gives the expression

i.e. the derivative of wrt. times the evaluation of at , where denotes the evaluation of along .

##### Example
• Question

Consider the parametrized graph of a function :

as a curve in the plane. Show that is just the usual integral .

for any 1-form over the curve . We then simply let

Then,

Finally giving us the integral

We can obtain the wanted form by noting that .
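The pull-back recipe for line integrals can also be sketched numerically. This example (assumed, not from the notes) integrates the exact 1-form alpha = y dx + x dy = d(xy) along a quarter circle and checks the result against the difference of the potential xy at the endpoints:

```python
import numpy as np

# gamma(t) = (cos t, sin t), t in [0, pi/2]; pulling alpha back gives the
# ordinary integral of  y(t) x'(t) + x(t) y'(t)  over t.
t = np.linspace(0.0, np.pi / 2, 20001)
x, y = np.cos(t), np.sin(t)
dx_dt, dy_dt = -np.sin(t), np.cos(t)
integrand = y * dx_dt + x * dy_dt

# trapezoidal rule written out by hand (numpy-version independent)
integral = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))

# Since alpha = d(xy) is exact, the integral equals the endpoint difference of xy:
endpoint_diff = x[-1] * y[-1] - x[0] * y[0]
assert abs(integral - endpoint_diff) < 1e-6
```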

### k-form

A 2-form at is a map which is linear in each argument and alternating

More generally, a k-form at is a map of vectors in to which is multilinear (linear in each argument) and alternating (changes sign under a swap of any two arguments).

And even more generally, on the vector space with , a k-form () is a tensor that is anti-symmetric, e.g. for a 2-form

In the case of a k-form, if , where , then are top forms, both non-vanishing:

i.e. any two top-forms are equal up to a constant factor.

Further, the definition of a volume on some d-dimensional vector space, completely depends on your choice of top-form.

### Wedge product

The wedge product or exterior product of 1-forms and is a 2-form defined by the following bilinear (linear in both arguments) and alternating map

More generally, the wedge product of 1-forms can be defined as a map acting on vectors

From the properties of the determinant it follows that the resulting map is linear in each vector separately and changes sign if any pair of vectors is exchanged (this corresponds to exchanging two columns in the determinant). Hence it defines a k-form.
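A small sketch of this determinant definition (the vectors and forms are made-up examples): representing 1-forms as row covectors, the wedge of k 1-forms evaluated on k vectors is det[alpha^i(v_j)].

```python
import numpy as np

def wedge(alphas, vectors):
    # (alpha^1 ^ ... ^ alpha^k)(v_1, ..., v_k) = det of the pairing matrix
    A = np.array([[a @ v for v in vectors] for a in alphas])
    return np.linalg.det(A)

dx = np.array([1.0, 0.0])
dy = np.array([0.0, 1.0])
u = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])

# (dx ^ dy)(u, w) = dx(u) dy(w) - dx(w) dy(u) = 1*4 - 3*2 = -2
assert np.isclose(wedge([dx, dy], [u, w]), -2.0)
# alternating: swapping the arguments flips the sign
assert np.isclose(wedge([dx, dy], [w, u]), 2.0)
```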

#### Wedge product between different forms

We extend linearly in order to define the wedge product of a -form and an -form . Explicitly,

Here the sum is happening over all multi-indices and with and .

Now two things can happen:

• , in which case since there will be a repeated index
• , in which case , for some multi-index of length . The sign is due to having to reorder them to be increasing.

Therefore, the wedge product defines a (bilinear) map

### Multi-index

Useful as more "compact" notation.

By a multi-index of length we shall mean an increasing sequence of integers . We will write

The set of k-forms at is a vector space of dimension for with basis .

Here denotes the maximum number of dimensions. So we're just saying that we're taking the wedge-product between some indices of the 1-forms we're considering.
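The dimension count can be verified directly: the basis k-forms are indexed by the increasing multi-indices, which we can enumerate (a small sketch, with n = 4 and k = 2 chosen arbitrarily):

```python
import math
from itertools import combinations

n, k = 4, 2
# increasing multi-indices I = (i_1 < ... < i_k) index the basis dx^I of k-forms
multi_indices = list(combinations(range(1, n + 1), k))
print(multi_indices)   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
assert len(multi_indices) == math.comb(n, k) == 6
```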

### Differential k-form

A differential k-form or a differential form of degree k on is a smooth map which assigns to each a k-form at ; it can be written as

where are smooth functions, and the sum happens over all multi-indices with .

Given two differential k-forms and a function the differential k-forms and are

The set of k-forms on is denoted .

By convention, a zero-form is a function. If then (for every form has a repeated index).

To make the notation used a bit more apparent, we can expand for in for a vector-space in , i.e. , defined above as follows:

where we've used the fact that , and just combined the "common" wedge-products. It's very important to remember that the here represents a 0-form / smooth function. The actual definition of is as a sum of all possible but the above definition is just encoding the fact that .

A form is said to be closed if .

A form is said to be exact if

for some .

If a k-form is closed on , then it is also exact (this is the Poincaré lemma).
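A small sympy sketch of closed vs. exact for a made-up 1-form on the plane: we check the closedness condition by differentiation, and exhibit a potential function explicitly.

```python
import sympy as sp

x, y = sp.symbols('x y')
# a 1-form ω = P dx + Q dy on R² (hypothetical example)
P, Q = 2*x*y, x**2

# closed: dω = (∂Q/∂x − ∂P/∂y) dx ∧ dy = 0
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0

# exact: by the Poincaré lemma a potential exists on R²; here f = x²y works
f = x**2 * y
assert sp.diff(f, x) == P and sp.diff(f, y) == Q
```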

### Exterior derivative

Given a smooth function on , its exterior derivative (or differential) is the 1-form defined by

for any vector field . Equivalently

Let be a smooth function, i.e. .

As it turns out, in this particular case, the push-forward of , denoted is equivalent to the exterior derivative!

If , then its exterior derivative is

where denotes the exterior derivative of the function (which we defined earlier).

More explicitly, take the example of the exterior derivative of a 1-form, i.e. :

from the definition of where is a function (0-form), and .
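For a 1-form on the plane the recipe above reduces to a single coefficient, which we can compute symbolically (a sketch with sympy; the coefficient functions are a made-up example):

```python
import sympy as sp

x, y = sp.symbols('x y')
# ω = a dx + b dy with made-up coefficient functions
a, b = -y, x

# d(a dx + b dy) = (∂b/∂x − ∂a/∂y) dx ∧ dy
coeff = sp.simplify(sp.diff(b, x) - sp.diff(a, y))
print(coeff)   # 2, i.e. dω = 2 dx ∧ dy
assert coeff == 2
```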

#### Theorems

The exterior derivative is a linear map satisfying the following properties

1. obeys the graded derivation property, for any
1. for any , or more compactly,

#### Example problems

##### Handin 2

Let be the helix and consider the 1-form on

1. Find the tangent at each point along the curve. Hence evaluate the line integral of the 1-form along the curve .

Hence the integral is

The tangent plane at some point along the curve for a specified is given by

which in this case is equivalent to

This concludes the first part of the claim.

For the integral, we know that

for the boundaries and . Computing we get

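The computation can be sketched symbolically. Note the 1-form from the hand-in is not recorded in these notes, so the form below (ω = x dy − y dx) is a made-up stand-in; the procedure — pull the form back along the curve and integrate in the parameter — is the same.

```python
import sympy as sp

t = sp.symbols('t')
# the helix γ(t) = (cos t, sin t, t) and a made-up 1-form ω = x dy − y dx
x, y = sp.cos(t), sp.sin(t)

# pull ω back along γ: substitute x(t), y(t) and dx = x'(t) dt, dy = y'(t) dt
integrand = sp.simplify(x * sp.diff(y, t) - y * sp.diff(x, t))   # = cos²t + sin²t = 1

integral = sp.integrate(integrand, (t, 0, 2*sp.pi))
assert integral == 2*sp.pi
```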
2. Show that . Now find a smooth function such that . Hence evaluate the above line integral without explicit integration.

### Integration in Rn

The standard orientation (which we always assume) is defined by

Coordinates (an ordered set) are said to be oriented on if and only if is a positive multiple of for all .

Observe that this induces an orientation on , since we simply apply to the coordinates , thus returning a or depending on whether or not the surface is oriented.

Let be oriented coordinates for . Let be smooth functions on . Then

where the factor on the RHS is the Jacobian of the coordinate transformation (i.e. the determinant of the matrix whose component is ).

Let be oriented coordinates on and write

Then the integral of over is defined by

where the RHS is now the usual multi-integral of several-variable calculus (provided it exists).
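The Jacobian factor in the change-of-coordinates formula can be checked in the standard polar-coordinates example (a sympy sketch; the unit-disc integral is just an illustration):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
# polar coordinates on R²: x = r cos θ, y = r sin θ
x, y = r * sp.cos(theta), r * sp.sin(theta)

# Jacobian determinant of the coordinate change (x, y) ← (r, θ)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]]).det()
assert sp.simplify(J) == r   # so dx ∧ dy = r dr ∧ dθ

# area of the unit disc: ∫∫ 1 dx dy = ∫₀^{2π} ∫₀¹ r dr dθ = π
area = sp.integrate(sp.integrate(J, (r, 0, 1)), (theta, 0, 2*sp.pi))
assert area == sp.pi
```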

### Topological space

A topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods.

Or more rigorously, let be a set. A topology on is a collection of subsets of , called open subsets, satisfying:

• and are open
• The union of any family of open subsets is open
• The intersection of any finite family of open subsets is open

A topological space is then a pair consisting of a set together with a topology on .

The definition of a topological space relies only upon set theory and is the most general notion of mathematical space that allows for the definition of concepts such as:

• continuity
• connectedness
• convergence

A topology is a way of constructing a set of subsets of such that these subsets are open and satisfy the properties described above.
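For a finite set the axioms can be checked exhaustively (a small sketch; the example topologies on a 3-point set are made up — for a finite collection, closure under pairwise unions and intersections already gives closure under arbitrary unions and finite intersections):

```python
X = frozenset({1, 2, 3})

def is_topology(X, opens):
    """Check the topology axioms on a finite set: ∅ and X are open, and
    the open sets are closed under (pairwise, hence finite) unions and
    intersections."""
    if frozenset() not in opens or X not in opens:
        return False
    return all(a | b in opens and a & b in opens
               for a in opens for b in opens)

T_good = {frozenset(), frozenset({1}), frozenset({1, 2}), X}
T_bad = {frozenset(), frozenset({1}), frozenset({2}), X}   # missing {1, 2} = {1} ∪ {2}

assert is_topology(X, T_good)
assert not is_topology(X, T_bad)
```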

### Atlases & coordinate charts

A chart for a topological space (also called a coordinate chart, coordinate patch, coordinate map, or local frame ) is a homeomorphism , where is an open subset of . The chart is traditionally denoted as the ordered pair .

An atlas for a topological space is a collection , indexed by the set , of charts on s.t. .

If the codomain of each chart is the n-dimensional Euclidean space, then is said to be an n-dimensional manifold.

Two atlases and on are compatible if their union is also an atlas.

So we need to check the following properties:

1. The following are open in for all and all

2. and are for all and all .

Compatibility of atlases defines an equivalence relation on atlases.

A differentiable structure on is an equivalence class of compatible atlases.

Often one defines differentiable structure with a "maximal atlas" instead of an equivalence class. The "maximal atlas" is obtained by simply taking the union of all atlases in the equivalence class.

A transition map is a composition of one chart with the inverse of another chart, which defines a homeomorphism of an open subset of the onto another open subset of the .

Suppose we have the following two charts on some manifold

such that

The transition map is defined

where we've used the notation

to denote that the function is restricted to the domain , i.e. the statement is only true on that domain.
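A concrete transition map can be computed symbolically. The following sketch uses the standard (not from these notes) stereographic charts on the circle, projecting from the north pole (0, 1) and the south pole (0, −1); the transition map comes out as t ↦ 1/t, smooth on the overlap.

```python
import sympy as sp

t = sp.symbols('t', nonzero=True)
# inverse of the north-pole stereographic chart: coordinate t ↦ point on S¹
x = 2*t / (1 + t**2)
y = (t**2 - 1) / (1 + t**2)
assert sp.simplify(x**2 + y**2) == 1          # the point really lies on S¹

# south-pole stereographic chart: φ_S(x, y) = x / (1 + y)
transition = sp.simplify(x / (1 + y))         # φ_S ∘ φ_N⁻¹ on the overlap
assert sp.simplify(transition - 1/t) == 0     # transition map is t ↦ 1/t
```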

A differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.

More generally, a manifold is a topological manifold for which all the transition maps are k-times differentiable.

A smooth manifold or manifold is a differentiable manifold for which all the transition maps are smooth.

To prove is a smooth manifold it suffices to find one atlas, due to the compatibility of atlases being an equivalence relation.

A complex manifold is a topological space modeled on the Euclidean space over the complex field and for which all the transition maps are holomorphic.

When talking about "some-property-manifold", it's important to remember that the "some-property" part is specifying properties of the atlas which we have equipped the manifold with.

is smooth if for any chart , the function

is smooth.

Observe that if is smooth for a chart , then we can transition between patches to get a smooth map everywhere.

#### Examples

##### Real projective space

Each nonzero spans a 1d subspace (up to multiplication by real numbers). So, for each , we let

Then

and we further let

### Manifolds

A topological space that locally resembles the Euclidean space near each point.

More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension .

#### Immersed and embedded submanifolds

An immersed submanifold in a manifold is a subset with a structure of a manifold (not necessarily the one inherited from !) such that the inclusion map is an immersion.

Note that the manifold structure on is part of the data; thus, in general, it is not unique.

Note that for any point , the tangent space to is naturally a subspace of the tangent space to , i.e. .

An embedded submanifold is an immersed manifold such that the inclusion map is a homeomorphism, i.e. is an embedding.

In this case the smooth structure on is uniquely determined by the smooth structure on .

##### Examples
• Figure 8 loop in
• It is immersed via the map

• This immersion of in fails to be an embedding at the crossing point in the middle of the figure 8 (though the map itself is indeed injective)
• Thus, is not homeomorphic to its image in the subspace / induced topology.

#### Riemannian manifold

A (smooth) Riemannian manifold or (smooth) Riemannian space is a real smooth manifold equipped with an inner product on the tangent space at each point that varies smoothly from point to point in the sense that if and are vector fields on the space , then:

is a smooth function .

The family of inner products is called a Riemannian metric (tensor).

The Riemannian metric (tensor) makes it possible to define various geometric notions on Riemannian manifold, such as:

• angles
• lengths of curves
• areas (or volumes)
• curvature
• gradients of functions and divergence of vector fields

Euclidean space is a special case of a Riemannian manifold.

##### Resolving some questions
• Why do we need to map the point to two vector-spaces before applying the metric ? Because it's the tangent space which is equipped with the metric , not the manifold itself, and since vector spaces are defined by a basis in we need to map into this space before applying .
• What do we really mean by the map being smooth ? This means that this map varies smoothly wrt. the point .
• The Riemannian metric is dependent on , why is that? Same reason as the first question: the inner product is equipped on the tangent space, not the manifold itself, and since we have a different tangent space at each point , the inner product itself depends on the point chosen.
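As a worked example of using a Riemannian metric (a sympy sketch; the round metric on the sphere in spherical coordinates, g = dθ² + sin²θ dφ², is a standard example not taken from these notes), we compute the length of the equator:

```python
import sympy as sp

t = sp.symbols('t')
theta, phi = sp.pi/2, t          # the equator of the unit sphere, t ∈ [0, 2π]

# round metric on S²: g = dθ² + sin²θ dφ² (spherical coordinates)
theta_dot, phi_dot = sp.diff(theta, t), sp.diff(phi, t)
speed = sp.sqrt(theta_dot**2 + sp.sin(theta)**2 * phi_dot**2)

length = sp.integrate(speed, (t, 0, 2*sp.pi))
assert length == 2*sp.pi         # the equator is a great circle of length 2π
```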

#### Differential manifold

A differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus.

##### Submersion

A submersion is a differentiable map between differential manifolds whose differential is surjective everywhere.

Let and be differentiable manifolds and be a differentiable map between them. The map is a submersion at the point if its differential

is a surjective linear map.

#### Homeomorphism

A homeomorphism or topological isomorphism is a continuous function between topological spaces that has a continuous inverse function.

### Diffeomorphism

A diffeomorphism is an isomorphism of smooth manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are smooth.

### Isometric

An isometry or isometric map is a distance-preserving transformation between metric-spaces, usually assumed to be bijective.

### Tensor

#### In words

Let be a vector space where is some field, is addition in the vector space and is scalar-multiplication.

Then a tensor is simply a linear map from some q-th Cartesian product of the dual space and some p-th Cartesian product of the vector space to the reals . In short:

where denotes the (p, q) tensor-space on the vector space , i.e. linear maps from Cartesian products of the vector space and it's dual space to a real number.

#### Maths

Tensors are geometric objects which describe linear relations between geometric vectors, scalars, and other tensors.

A tensor of type is an assignment of a multidimensional array

to each basis of an n-dimensional vector space such that, if we apply the change of basis

Then the multi-dimensional array obeys the transformation law

We say the order of a tensor is if we require an n-dimensional array to describe the relation the tensor defines between the vector spaces.

Tensors are classified according to the number of contra-variant and co-variant indices, using the notation , where

• is # of contra-variant indices
• is # of co-variant indices

Examples:

The tensor product takes two tensors, and , and produces a new tensor, , whose order is the sum of the orders of the original tensors.

When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.

which again produces a map that is linear in its arguments.

On the components, this corresponds to multiplying the components of the two input tensors pairwise, i.e.

where

• is of type
• is of type

Then the tensor product is of type .
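The pairwise multiplication of components can be sketched with numpy (the component values are made up; `einsum` just implements the outer product here):

```python
import numpy as np

v = np.array([1., 2., 3.])        # a (1, 0) tensor (vector components v^a)
w = np.array([0.5, -1., 2.])      # a (0, 1) tensor (covector components w_b)

# tensor product: (v ⊗ w)^a_b = v^a w_b, a (1, 1) tensor
T = np.einsum('a,b->ab', v, w)
assert T.shape == (3, 3)
assert np.isclose(T[1, 2], v[1] * w[2])   # components multiply pairwise
```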

#### Examples

##### Linear maps - matrices

A linear map is represented as a matrix, and we say this is a tensor of order 2, since it requires a 2-dimensional array to describe the relation.

### Algebra

A K-vector space equipped with a product, i.e. a bilinear map

is called an algebra

#### Example: Algebra over differentiable functions

On some manifold , we have the vector-space which is a -vector space, and we define the product as

by the map, for some ,

where and the product on the RHS is just the s-multiplication in .

#### Derivation

A derivation is a linear map

for some algebras and , which additionally satisfies the Leibniz rule, that is:

where and denotes the products in the algebras and respectively.

##### Example: derivative on algebra of continuous functions

We have the algebra , for any , we have

which satisfies

##### Example: Lie-algebra

Let for some K-vectorspace , then we define the map

defined by

then forms an algebra, which is an example of a Lie-algebra!

In fact, a Lie-algebra is defined by further properties of the bracket (anti-symmetry and the Jacobi identity), but this is an example of a Lie-algebra.

## Equations / Theorems

### Frenet-Serret frame

The vector fields along a biregular curve are an orthonormal basis for for each .

This is called the Frenet-Serret frame of .

By definition of the unit tangent, . Differentiate this wrt. to find . Thus, the principal normal satisfies and .

By definition of the binormal we also have and . Hence, form an orthonormal basis.

### Structure equations

Let be a unit-speed biregular curve in . The Frenet-Serret frame along satisfies:

These are called the structure equations for unit-speed space curve, or sometimes the "Frenet-Serret equations".

See p. 9 in the notes for a proof.

For a general parametrisation of a biregular space curve the structure equations become

where is the speed of the curve.

#### Extras

A biregular curve is a plane curve if and only if everywhere.

If lies in a plane, then and are tangent to the plane, and so must be a unit normal to this plane and hence constant.

The structure equations then imply .

The curvature and torsion of a biregular space curve in any parametrisation can be computed by
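The general-parametrisation formulas can be checked on the helix (a sympy sketch using the standard formulas κ = |γ′ × γ″| / |γ′|³ and τ = det(γ′, γ″, γ‴) / |γ′ × γ″|²; the helix γ(t) = (a cos t, a sin t, bt) is a worked example):

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, b = sp.symbols('a b', positive=True)
# the helix γ(t) = (a cos t, a sin t, b t)
g = sp.Matrix([a*sp.cos(t), a*sp.sin(t), b*t])
g1, g2, g3 = g.diff(t), g.diff(t, 2), g.diff(t, 3)

cross = g1.cross(g2)
norm = lambda v: sp.sqrt(v.dot(v))        # avoids Abs() in symbolic norms

# κ = |γ′ × γ″| / |γ′|³ and τ = det(γ′, γ″, γ‴) / |γ′ × γ″|²
kappa = sp.simplify(norm(cross) / norm(g1)**3)
tau = sp.simplify(sp.Matrix.hstack(g1, g2, g3).det() / cross.dot(cross))

# the familiar constant curvature and torsion of the helix
assert sp.simplify(kappa - a/(a**2 + b**2)) == 0
assert sp.simplify(tau - b/(a**2 + b**2)) == 0
```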

#### Matrix formulation

The structure equations can also be expressed in matrix form:

By ODE theory, for given and and initial conditions

there exists a unique solution

to the ODE system, and hence it must coincide with the Frenet-Serret frame.

There is then a unique curve satisfying

### Equivalence problem

The equivalence problem is the problem of classifying all curves up to rigid motions.

#### Uniqueness of biregular curve

Let and be given, with everywhere positive. Then there exists a unique unit-speed biregular curve with these as curvature and torsion such that and is any fixed oriented orthonormal basis in .

### Fundamental Theorem of Curves

If two biregular space curves have the same curvature and torsion then they differ at most by a Euclidean motion.

### Tangent spaces

#### Change of basis

Suppose we have two different bases for a space:

we have the following relationship

and for the dual-space

### Implicit Function Theorem

Let , where is the base and is the "extra" dimension.

Let be an open subset of , and let denote standard coordinates in .

Suppose is smooth, with and , and

If the matrix

is invertible, then there exists neighbourhoods of and of and a smooth function

such that:

is the graph of .

Or, equivalently, such that

I like to view it like this:

Suppose we have some dimensional space, and we split it up into two subspaces of dimension such that

Then, using Implicit Function Theorem, we can simply check the invertibility of the Jacobian of , as described in the theorem, to find out if there exists a function where and .

Where , where , is a map "projecting" some neighbourhood / open set of to .

#### Example

Consider and . Let

Then,

Consider and , and .

Thus,

Thus, in the neighbourhood of in we can consider the level set and locally solve as a function of , i.e. there exists a function such that .
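The invertibility check and the local solution can be sketched with sympy for the sphere (assuming the example is the unit sphere x² + y² + z² = 1 with the base point taken at the north pole, which the elided symbols suggest):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1        # the sphere S² as the level set f = 0

p = {x: 0, y: 0, z: 1}            # a point on the sphere with z > 0
# the 1×1 "matrix" ∂f/∂z is invertible at p ...
assert sp.diff(f, z).subs(p) == 2

# ... so locally z = g(x, y); here we can even solve explicitly
branch = sp.sqrt(1 - x**2 - y**2)  # the branch through p (z = 1 > 0)
assert sp.simplify(f.subs(z, branch)) == 0
```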

### Inverse Function Theorem

Let be a differentiable map, i.e. . If

is a linear isomorphism at a point in , then there exists an open neighborhood such that

is a diffeomorphism.

Note that this implies that and must have the same dimension at .

If the above holds , then is a local diffeomorphism.

### Gauss-Bonnet

Let be an oriented closed and bounded surface with no boundary. Then

where is the Euler characteristic of the surface , defined as

where denotes the vertices, the edges, and the faces obtained by dissecting the surface into polygons (this turns out to be independent of choice of dissection).
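The independence of the dissection, and the Gauss-Bonnet identity itself, can be checked numerically for the sphere (the tetrahedron and cube dissections are standard examples; for the unit sphere the Gaussian curvature is 1 and the area is 4π):

```python
import math

def euler_characteristic(V, E, F):
    return V - E + F

# two different dissections of the sphere S² give the same χ = 2:
assert euler_characteristic(4, 6, 4) == 2    # tetrahedron
assert euler_characteristic(8, 12, 6) == 2   # cube

# Gauss-Bonnet: ∫_S K dA = 2π χ(S); for the unit sphere K = 1, area = 4π
total_curvature = 1 * 4 * math.pi
assert math.isclose(total_curvature, 2 * math.pi * euler_characteristic(4, 6, 4))
```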

## Change of basis

### Notation

• Einstein summation notation
• is a K-vector space, i.e. vector space over some field
• denotes the b-th basis-vector of some basis in
• means isomorphic to
• denotes a tensor of order

### Stuff

Suppose we have two different bases in some K-vector space :

Now, how do the 1-forms / covectors (contra-variant) change under this change of basis?

#### 1-forms / covectors

Let be a covector:

where denotes the m-th new basis!

is true since is a linear map by definition.

#### vectors

where the only thing which might seem a bit weird is the

which relies on

where

i.e. is isomorphic to the dual of the dual space, which is only true in finite dimensions!

Apparently this can be shown using a "constructive proof", i.e. you build up the notation mentioned above and then show that it does indeed define an isomorphism between the vector-space and the dual of the dual space.
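The opposite transformation behaviours can be checked numerically (a numpy sketch with a made-up change-of-basis matrix A, assumed invertible): covector components pick up A itself while vector components pick up A⁻¹, so the pairing is basis-independent.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))           # change-of-basis matrix (invertible a.s.)
v = rng.normal(size=3)                # vector components in the old basis
w = rng.normal(size=3)                # covector components in the old basis

v_new = np.linalg.inv(A) @ v          # vector components transform contra-variantly
w_new = A.T @ w                       # covector components transform co-variantly

# the pairing ω(v) is independent of the basis
assert np.isclose(w @ v, w_new @ v_new)
```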

## Determinants

### Stuff

#### Problem with matrix-representation

A matrix is a tensor, and we can thus write it as

where means that we can write out an exhaustive representation of all the and entries in such a way (in this case a matrix).

Also, it turns out that we can write a bilinear map as

See?! We can represent both as a matrix, but the ways they change with the basis are completely different!

The usual matrix representation that we're used to (the one with the normal matrix-multiplication, etc.) is the tensor, and it's an endomorphism on , i.e. homomorphism which takes a vector to a vector.

#### Definition

Let . Then

for some volume form / top-form , and for some basis of , i.e. it's completely independent of the choice of basis and top-form.

Due to the top-forms being equal up to some constant, we see that any constant in the above expression would cancel.
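The basis- and top-form-independence can be checked numerically (a numpy sketch; the linear map, the bases, and the constants are all randomly made up):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.normal(size=(3, 3))                    # a linear map φ on R³

def omega(u, v, w, c=1.0):
    """A top-form on R³ (any nonzero multiple c of the determinant)."""
    return c * np.linalg.det(np.column_stack([u, v, w]))

# any basis and any top-form give the same ratio, namely det φ
for c in (1.0, -3.7):
    basis = rng.normal(size=(3, 3))              # columns = a random basis
    e1, e2, e3 = basis.T
    ratio = omega(phi @ e1, phi @ e2, phi @ e3, c) / omega(e1, e2, e3, c)
    assert np.isclose(ratio, np.linalg.det(phi))
```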

## Tangent space and manifolds

### Notation

• , that is, a map from the manifold to , often called a chart map (as it is related to a chart )
• is a smooth curve, i.e. a smooth mapping taking in a single parameter and mapping it to a point on the manifold
• is the partial derivative, which at each point (since ) for some function we take the partial derivative of wrt. a-th entry of the Cartesian product (n times)

### Stuff

We define a new symbol

that is

Why is this all necessary? It's pretty neat because we're first using the chart-map to map the point to Euclidean space.

Then, the composite function

The tangent space is an n-dimensional (real) vector space.

Addition structure: Consider two curves in s.t.

with tangent vectors and . We let

Need to show that curve , s.t.

Let be a chart, . Then we define by

so and

where are the components of the chart. Tangent vector to at :

where we have used the fact that

N-dimensional follows from Theorem thm:tangent-vectors-form-a-basis: They form the basis

thus .

#### Constructing a basis

From above, we can construct vectors

which are the tangent vectors to the chart-induced curves .

Any can be written as

where we're using Einstein summation and refers to the s-multiplication of a in the vector space.

Further,

form a basis for .

, is a smooth curve through

Then the map , which is the tangent vector of the curve at point given by

Then by the chain rule, we have:

where we've used the fact that is just a real number, allowing us to move it to the front.

Hence, for any smooth curve in at some point we can "generate" the tangent of this curve at point from the set

which we say to be a generating system of .

Now, all we need to prove is that they are also linearly independent, that is

This is just the definition of basis, that if a vector is zero in this basis, then either all the coefficients are zero or the vector itself is the zero-vector.

One really, really important thing to notice in this proof is the usage of

where we just "insert" the since itself is just an identity operation, but which allows us to "work" in Euclidean space by mapping the point in the manifold to the Euclidean space, and then mapping it back to the manifold for to finally act on it!

#### Push-forward and pullback

Let be a smooth map between smooth manifolds and .

Then the push-forward at the point is the linear map

which defines the map

as

where:

• is a smooth function
• Elements in and define maps of functions, hence we need to apply it to some function to define its operation

A couple of remarks:

• defined as above, is the only linear map from to one can actually define!
• is often referred to as the derivative of at the point
• The tangent vector of the curve at the point in the manifold is pushed forward to the tangent vector of the curve at the point in the manifold ; i.e. for a curve we map the tangent vector at some point to the tangent vector of the "new" curve in the target manifold at the resulting point
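In coordinates the push-forward is just the Jacobian, and the last remark — tangent of γ pushed forward equals tangent of φ ∘ γ — is the chain rule. A sympy sketch with a made-up map φ: R² → R² and curve γ:

```python
import sympy as sp

t, u, v = sp.symbols('t u v')
# a made-up map φ: R² → R² and a curve γ in the source manifold
phi = sp.Matrix([u**2 + v, u * v])
gamma = sp.Matrix([sp.cos(t), t])

J = phi.jacobian([u, v])                  # the push-forward (dφ) as a matrix
gamma_dot = gamma.diff(t)

# push forward the tangent of γ ...
pushed = J.subs({u: gamma[0], v: gamma[1]}) * gamma_dot
# ... and compare with the tangent of the composed curve φ ∘ γ
composed_dot = phi.subs({u: gamma[0], v: gamma[1]}).diff(t)
assert (pushed - composed_dot).applyfunc(sp.simplify) == sp.zeros(2, 1)
```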

Let be a smooth map between smooth manifolds and .

Then the pull-back of at the point is the linear map

i.e. a linear map from the cotangent space at the target TO the cotangent space of the originating manifold at point !

We define the map

as, acting on ,

which is linear since and are linear, where is the push-forward.

##### Comments on push-forward and exterior derivative

First I was introduced to the exterior derivative in the Geometry course I was doing, and afterwards I was, through the lectures by Schuller, introduced to the concept of the push-forward, and the pull-back defined using the push-forward. Afterwards, in certain contexts (e.g. in Geometry they defined the pull-back of a 2-form on involving the exterior derivative), I kept thinking "Hmm, there seems to be some connection between the exterior derivative and the push-forward!"

Then I read this StackExchange answer, where you'll find the following snippet:

Except in one special situation (described below), there is essentially no relationship between the exterior derivative of a differential form and the differential (or pushforward) of a smooth map between manifolds, other than the facts that they are both computed locally by taking derivatives and are both commonly denoted by the symbol .

And the special case he's referring to is; when the function is a smooth map , where the two are equivalent.

#### Immersion and embedding

Let be a smooth map on manifolds to .

We say is an immersion if and only if the derivative / push-forward is injective for each point , or equivalently .

Remember the push-forward is a map from the tangent space of at point to the tangent space of at : . We do NOT require the map itself to be injective!

Let be a smooth map on manifolds and .

We say is an embedding of in if and only if:

1. is an immersion
2. , where means a homeomorphism / topological isomorphism

Any smooth manifold can be:

• embedded in
• immersed in

Where .

This is of course "worst-case scenarios", i.e. there exists manifolds which can be embedded / immersed in lower-dimensional manifolds than the rules mentioned here.

There exists even stronger / better lower bounds for a lot of target manifolds, which requires slightly more restrictions on the manifold.

Let and be differentiable manifolds. A function

is a local diffeomorphism, if, for each point , there exists an open set such that , and the image is open and

is a diffeomorphism.

A local diffeomorphism is then a special case of an immersion from to , where the image of under locally has the differentiable structure of a submanifold of .

##### Example: 2D Klein bottle in 3D

The Klein bottle is a 2D surface, as we can see below:

But due to the self-intersecting nature of the Klein bottle, it is not a manifold when it "sits" in . Nonetheless, the mapping of the Klein bottle as shown in the picture does in fact have an injective push-forward! That is, we can injectively map each tangent vector at a point in such a manner that no two tangent vectors are mapped to the same tangent vector on the Klein bottle 2D surface in .

Hence, the Klein bottle can be immersed in but NOT embedded, as "predicted" by the Whitney theorem. And the same theorem tells us that we can in fact embed the Klein bottle in .

## Tensor Fields and Modules

### Notation

• where denotes the tangent bundle of the manifold .
• is taking the Cartesian product and equipping it with addition

### Stuff

Let

A vector field is a smooth section of , i.e.

• is smooth
• is smooth

Informally, a vector field on a manifold can be defined as a function that inputs a point and outputs an element of the tangent space . Equivalently, a vector field is a section of the tangent bundle.

### Module

We say is a R-module, being a ring, if

satisfying

Thus, we can view it as a "vector space" over a ring, but because it behaves wildly differently from a vector space over a field, we give this space a special name: .

Important: denotes a module here, NOT manifold as usual.

If is a division ring, then has a basis.

This is not a but simply says that we guarantee the existence of a basis if is a division ring.

First we require the Axiom of Choice, in the incarnation of Zorn's lemma, which is just the Axiom of Choice restated, given that we already have all the other axioms of Zermelo-Fraenkel set theory.

Zorn's Lemma: A partially ordered set whose every totally ordered subset has an upper bound in contains a maximal element.

where:

partially ordered

#### Every module over a divison ring has a basis

##### Theorem

Every module over a division ring has a basis.

1. Let be a generating system of , i.e.

Observe that , the generating system, always exists since we can simply have

2. Define a partially ordered set by

where denotes the powerset of . We partially order it by inclusion:

i.e. if a set is a subset of another, then it's smaller than the other subset.

3. Let be any totally ordered subset of , then

and it is a lin. indep. subset of . Thus, by Zorn's Lemma, has a maximal element, one of which we call . By construction, is a maximal lin. indep. subset of .

4. Claim: Proof: Let . Since is maximal lin. indep. subset, we have is linearly dependent. That is,

and not all of , vanish, i.e. . Now it is clear that , because

but this is a contradiction to being linearly independent, as assumed previously. Hence we consider ; then, since (remembering that is a division ring)

Thus, if we multiply the equation above with the inverse of , we get

for the finite subsets of B. Thus,

Hence, we have existence of a linearly independent subset of which also spans .

As we see above, here we're making use of the fact that is a division ring, when we're using the inverse of .

Observe that is not a division ring, hence consider as a module is not guaranteed to have a basis.

Definition of can be found here.

#### Examples

##### module

One simple example of a module is

where:

• is a manifold
• is the projection
• denotes the set of all sections of , i.e. the total space of the bundle

which is a module.

### Terminology

A module over a ring is called free if it has a basis.

Examples:

A module over a ring is called projective if it is a direct summand of a free R-module :

where is the R-module.

Remark: free projective

### Theorems

is a finitely generated projective module .

From this have the following corollary:

where is the module and is a free module.

Thus, "quantifies" how much fails to have a basis, since is how much we have to add to to make it free.

Let be finitely generated projective modules over a commutative ring .

Then,

is again a finitely generated projective module.

This falls out of the commutativity of the ring .

In particular:

where the equality can be shown (but we haven't done that here).

Finally, this gives us the "standard textbook definition" of a tensor-field:

A tensor field on a smooth manifold is a multilinear map

We can then view

as the space of all tensor-fields on , which again, forms a module!

Hence, when we talk about the mapping being multilinear, of course it must be multilinear to the underlying ring-structure of the module, i.e. multilinear!

Some textbooks give the above definition, and then note that is not linear, but linear, and that this needs to be checked. But because we're aware of the commutative ring that "pops out" we know that of course it has to be multilinear wrt. the underlying ring-structure, which in this case .

## Grassman algebra and deRham cohomology

### Notation

• , i.e. maps in s.t. composed with the projection from the total space to the base space of the tangent bundle form the identity. That is, maps a point in to its fibre in , known as sections
• denotes a permutation in this section, NOT a projection as seen earlier
• denotes the set of permutations on letters / digits
• is the tensor product
• is called the anti-symmetric bracket notation, where denotes some "indexable object"
• is the same as anti-symmetric bracket notation, but dropping the , we call symmetric bracket notation
• where is the exact forms and is the closed forms, where denotes the previous and the next

### Grassman Algebra

The set of all n-forms is denoted

which naturally is a module since:

• sum of two n-forms is an n-form
• multiple of some is again in

We have a problem though: taking the tensor product of forms does not yield a form!

I.e. the space is not closed.

In what follows we're slowly building up towards a way of defining a product in such a way that we do indeed have the space of forms being closed under some additional operation (other than and ), which is called the Grassman algebra.

We define the wedge product as follows:

defined by

e.g.

where the tensor product is just defined as

as usual.

Further, this allows us to construct the pull-back for some arbitrary form:

Let and be a smooth mapping between the manifolds.

This induces the pull-back:

can be used to define

where , which is the pull-back of the entire space rather than at a specific point .

Then,

where is the push forward of a vector field.

The pull-back distributes over the wedge product:

The module defined

(where we've seen and before!)

Then defines the Grassman algebra / exterior algebra of , with the being a bilinear map

is defined by linear continuation of (wedge product for forms), which means that, for example, if we have

where and , and another , then

Now, as it turns out, we cannot define a differentiable structure on tensors on some manifold . But do not despair! On anti-symmetric tensors, i.e. forms, we can indeed define a differentiable structure without any further restrictions on the manifold!

The exterior derivative operator

where

i.e. since takes entries we leave out the i-th entry, and is the commutator.

Commutator of two vector fields is given by

for .
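The commutator can be computed in coordinates from its component formula [X, Y]ⁱ = Xʲ ∂ⱼYⁱ − Yʲ ∂ⱼXⁱ (a sympy sketch; the two vector fields on R² are made up):

```python
import sympy as sp

x, y = sp.symbols('x y')

def lie_bracket(X, Y, coords):
    """[X, Y]^i = X^j ∂_j Y^i − Y^j ∂_j X^i, for vector fields given by
    their component functions."""
    return [sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
                for j in range(len(coords)))
            for i in range(len(coords))]

X = [1, 0]          # the vector field ∂_x
Y = [0, x]          # the vector field x ∂_y
print(lie_bracket(X, Y, [x, y]))   # [0, 1], i.e. [∂_x, x ∂_y] = ∂_y
assert lie_bracket(X, Y, [x, y]) == [0, 1]
```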

Let and .

If we have the smooth map , then

where we observe that the are "different":

• LHS:
• RHS:

which is why we use the word "commute" rather than that they are "the same" (cuz they ain't)

Further, action of extends by linear continuation to :

where denotes the Grassman algebra.

#### Physical examples

##### Maxwell electrodynamics

Let be the field strength, i.e. the Lorentz force:

then is a two-form (since it maps both and to ), and we require

which is called the homogeneous Maxwell equations.

Since is a two-form on the reals, we know from the Poincaré lemma that an n-form which is closed on is also exact, thus

for some . is called the gauge potential.

### de Rham Cohomology

The following theorem has already been stated before, but I'll restate it due to it's importance in this section:

where is the exterior derivative.

and

in local coords:

(remember we're using Einstein summation), where we've used the fact that , which gives us

where the last equality is due to Schwarz ( under certain conditions).

implies that there exists a sequence of maps such that

We then observe that:

where the above theorem tells us that:

Now, we introduce some terminology:

We then say that is called:

• exact if
• closed if

which is equivalent to the exact and closed definitions that we've seen before, since for some is exact, i.e. . Observe that we here consider as a mapping rather than , thus the different colored mappings are sort of the same but sort of not :)

As we know from Poincaré lemma, there are cases where

but then you might wonder, if it's not the case: how would one quantify the difference between and ?

The n-th de Rham cohomology group is the quotient vector space

where on we have equivalence relation:

and we write (this is just notation)

The idea of de Rham cohomology is to classify the different types of closed forms on a manifold.

One performs this classification by saying that two closed forms are cohomologous if they differ by an exact form, i.e. is exact:

where is the set of exact forms.

Thus, the definition of n-th de Rham cohomology group is the set of equivalence classes with the equiv. relation described above; that is, the set of closed forms in modulo the exact forms.

Further, framing it slightly different, it might become a bit more apparent what we're saying here.

Observe that

Since , then clearly

And further, it turns out that by partitioning all the closed forms by taking the "modulo" the exact forms, we get a set of unique and disjoint partitions (due to this being an equiv. relation).

That is,

only depends on the global topology of the manifold

This is quite a remarkable result, since all our "previous" work depends on the exterior derivative of the local structure, and then it turns out that only depends on the actual topology of the manifold! Woah dude!

We have the following example:

#### Summary

• From we get the sequence with inclusions where the images are always included in the kernels of the next map.
• Then if we want to quantify how much the kernels deviate from the images, we can do so by "modding out" the exact forms, , from the closed forms, .
• We then learn purely topological invariants,

## Lie Theory

### Notation

• often used as short-hand notation for the K-vector space when this vector space is further equipped with the Lie brackets , i.e. when writing

refers to the underlying K-vector space.

• is the set of left-invariant vector fields in the Lie group
• refers to two vector spaces being isomorphic (which is not the same as, say, a group isomorphism)
• denotes the abstract Lie-brackets, i.e. any function which takes two arguments and satisfies the properties of a Lie bracket.
• denotes the particular instance of a Lie bracket defined by

known as the commutation relation

• refers to the 0th fundamental group / path components
• refers to the 1st fundamental group
• denotes the connected component of the identity
• Submanifold refers to embedded submanifold
• denotes the 2-torus
• denotes the space of vector fields
• denotes that acts on as a group

### TODO Stuff

A Lie group is

• a group with group operation :

where denotes that the group could be commutative, but is not necessarily so.

• is a smooth manifold and the maps:

where inherits smooth atlas from , thus is a map between smooth manifolds.

are both smooth maps.

Let be a Lie group.

Then for any , there exists a map

called the left translation wrt. .

Each left translation of a Lie group is an isomorphism but NOT a group isomorphism.

It is also a diffeomorphism on , by the definition of a Lie group.

Let and be Lie groups

If is a smooth / analytic map preserving group structure, i.e.

then is a morphism of Lie groups.

1. Lie groups do not have to be connected, nor simply-connected
2. Discrete groups are Lie groups

Let

• be a Lie group
• be the connected component (which always exists around identity) of
• Then is a normal subgroup in and is a Lie group itself.
• is a discrete group
1. First we show that is indeed a Lie group. By definition of a Lie group, the inversion map

is continuous. The image of a connected topological space under a continuous map is connected, hence takes to a connected comp of containing the identity, since

Similar argument for . Hence is a Lie group. At the same time, conjugation

is cont. in for all . Thus is a conn. comp. of which contains since .

2. Let be the quotient map. is an open map (i.e. maps open to open) since is equipped with the quotient topology. This implies that for every we have

i.e. it's open. This implies that every element of is an open subset, hence the union of all elements in covers , and each of them is open, i.e. we have an open covering in which every open subset contains exactly one element of (which is the definition of a discrete topological group).

Let

FINISH IT!

1. Show that is a Lie group. Let be a connected manifold, , , and

be cont. be universal covers, and with

Then lifts to s.t.

Choose s.t. implies that lifts in a unique way to taking . Same trick works for .

2. is discrete and central

#### Lie subgroups

A closed Lie subgroup of a Lie group is a (embedded) submanifold which is also a subgroup.

A Lie subgroup of a Lie group is an immersed (as opposed to embedded) submanifold which is also a subgroup.

1. Any closed Lie subgroup is closed (as a submanifold)
2. Any subgroup of a Lie group which is a closed subset is a closed Lie subgroup.
1. connected Lie group, neighborhood of , then generates .
2. is a morphism of Lie groups, and is connected. If is surjective, then is surjective.
1. subgroup generated by , then is open in because , we have is open neighborhood of in . Then
2. The inverse function theorem says that is surjective onto some neighborhood . Since an image of a group morphism is a subgroup, and generates , is surjective.
##### Example

and with

Then it is well-known (apparently) that the image of this map is everywhere dense in , and is often called the irrational or dense winding of , and the map is open "one way" but not the "other way".

This is an example of a Lie subgroup which is NOT a closed Lie subgroup. The image of the map is a Lie subgroup which is not closed. It can be shown that if a Lie subgroup is closed in , then it is automatically a closed Lie subgroup. We do not get a proof of that though, apparently.
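A quick numerical illustration (not a proof, and the slope $\sqrt{2}$ is an assumed choice) of the density claim: density of the winding in the 2-torus is equivalent to the points $n\alpha \bmod 1$ filling up the circle for irrational $\alpha$.

```python
import numpy as np

# Numerical illustration (not a proof) of the dense winding: the image of
# t -> (exp(i t), exp(i sqrt(2) t)) is dense in the 2-torus, which is
# equivalent to the points {n * sqrt(2) mod 1} leaving no large gap on the
# circle. We check the largest gap left by the first 2000 points.
alpha = np.sqrt(2.0)                      # any irrational slope works
n = np.arange(1, 2001)
points = np.sort((n * alpha) % 1.0)
# include the wrap-around gap between the last and first point
gaps = np.diff(np.concatenate([points, points[:1] + 1.0]))
print(gaps.max())                         # tiny: the orbit fills the circle
```

With a rational slope the orbit would close up after finitely many steps and leave gaps of fixed size, which is exactly why the subgroup fails to be closed only in the irrational case.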

#### Factor groups

• As for discrete groups, given a closed Lie subgroup , we can define the notion of cosets and define as the set of equivalence classes.
• Following theorem shows that the coset space is actually a manifold

Let

Then is a submanifold of and there exists a fibre bundle with , where is the canonical map, with as its fibre. The tangent space is given by

Further, if is a normal closed Lie subgroup then has a canonical structure of a Lie group (i.e. transition maps are smooth and the smooth structure does not depend on the choice of and ; see proof).

Let

• be the canonical map
• and

Then is a (embedded) submanifold in as it's an image of under the diffeomorphism . Choose a submanifold such that and is transversal to the manifold , i.e.

which implies that .

Let be a sufficiently small neighborhood of in . Then the set

is open in . This follows from the IFT applied to the map .

Consider . Since is open, is an open neighborhood of in and the map is a homeomorphism. This gives a local chart for by , where denotes a chart map for . At the same time this shows that is a fibre bundle with fibre .

GET CONFIRMATION ABOUT THIS. With the atlas we see that the transition maps are smooth by the smoothness of and . Further, observe that choosing any other and does not alter the proof, since still holds, and therefore ???

The above argument also shows that the push-forward of , i.e. has the kernel

In particular, gives the isomorphism (since is an isomorphism)

as wanted.

REMINDER: If is a fibre bundle with fibre , then there exists a long exact sequence of homotopy groups

Exact means that with .

Let be a closed Lie subgroup of a Lie group .

1. connected where . In particular, if and are both connected, then so is .
2. connected, connected

#### Push-forward on fields

What does this mean? It means that on a Lie group we can in fact construct a diffeomorphism using the left translations, and thus a push-forward from to , i.e. we can map vector fields to vector fields in the group !

We can push forward a vector field on to another vector field, defined

where , .

Let be a Lie group, and a vector field on , then is called a left-invariant vector field, if for any

Alternatively, one can write this as

where we write the map pointwise on the vector field.

Alternatively, again, and

The set of left-invariant vector fields of a Lie group can be denoted

where is a module.

We observe that the following is true:

which implies that is a module.

#### Lie Algebra

An abstract Lie algebra is a K-vector space equipped with an abstract Lie bracket that satisfies:

• bilinear (in ):
• anti-symmetric:
• Jacobi identity:
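A small numerical sanity check of these axioms for the commutator bracket on matrices (an illustrative sketch, not part of the notes' formal development):

```python
import numpy as np

# Sanity check (illustration only): the commutator [X, Y] = XY - YX on
# square matrices is bilinear, anti-symmetric, and satisfies the Jacobi
# identity, so matrices form a Lie algebra under it. We verify the last
# two properties numerically for random 4x4 matrices.
rng = np.random.default_rng(0)

def bracket(x, y):
    return x @ y - y @ x

x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

antisym = bracket(x, y) + bracket(y, x)                # should vanish
jacobi = (bracket(x, bracket(y, z))
          + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))                 # should vanish
print(np.abs(antisym).max(), np.abs(jacobi).max())
```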

One might wonder why we bother with these weird brackets, or Lie algebras at all; as we'll see, there is a correspondence between Lie groups, which are geometrical objects, and these Lie algebras, which are linear objects.

We need to construct a linear isomorphism

where

where denotes at the point .

That is, we push forward the vector-field at the point for every point , thus creating a vector at every point.

1. We now prove that it's a left-invariant vector field:

2. It's clearly linear, since and the push-forward on a vector-field at the point , , is by definition linear.
3. is injective:

which can be seen from

4. is surjective: Let , i.e. is a left-invariant vector field. Then we let be some vector field associated with , defined by

Consider:

which implies

Hence, as claimed,

Which means that as a vector space we can work with to prove properties of left-invariant vector fields!

Only problem is that we do not have the same algebra, i.e.

and we would really like for the following to be the case

that is, we want some bilinear map s.t.

Thus, we simply define the commutation brackets on , such that

as desired we get

##### Example of Lie Algebra

Let be a vector space. Then

is an infinite-dimensional (abstract) Lie algebra.

### Examples of Lie groups

#### Unit circle

where we let the group operation , i.e. multiplication in .

Whenever we multiply two complex numbers which are both of unit length, we still end up on the unit circle.

#### General linear group

equipped with the operation, i.e. composition. Due to the nature of linear maps, this group clearly satisfies (but not ), hence it's a Lie group.

##### Why is GL a manifold?

can be represented in (as matrices), and since the determinant is continuous the set is also open.

Thus we have an open set of on which we can represent as

#### Relativistic Spin Group

In the definition of the relativistic spin groups we make use of the very useful method for constructing a topology over some set by inheriting a topology from some other space.

1. Define topology on the "components" of the larger set
2. Take product topology
3. Take induced subset-topology
##### Proof / derivation

• As a group

We make into a group :

i.e. matrix multiplication, which we know is ANI (but not commutative):

• Associative
• Exists neutral element
• Invertible (since we recognize )
• As a topological space

From this group, we can create a topological space :

1. Define topology on by virtue of defining "open balls":

which is the same as we do for the standard topology in .

2. Take the product topology:

3. Equip with the induced subset topology of the product topology over , i.e.

Verify that , with as given above, is a topological manifold. We do this by explicitly constructing charts which together fully cover , i.e. define an atlas of :

1. First chart :

and the map

which is continuous and invertible, with the inverse:

hence is a homeomorphism, and thus is a coordinate chart of .

2. Second chart :

and the map

which is continuous and invertible, with the inverse:

3. Third chart :

and the map

which is continuous and invertible, with the inverse:

Then we have an atlas in , since these cover all of : the only case we're missing is when all , which does not have determinant and therefore is not in . Hence, is a complex topological manifold, with

• As a differentiable manifold

Now we need to check if is a differentiable manifold (specifically, a ); that is, we need the transition maps to be " compatible", where specifies the order of differentiability.

One can show that the atlas defined above is differentiable to arbitrary degree. We therefore let be the maximal atlas with differentiability to arbitrary degree, containing the atlas we constructed above. This is just to ensure that in case we later realize we need some other chart with these properties, we don't have to redefine our atlas to also contain this new chart. By using the maximal atlas, we're implicitly including all these possible charts, which is convenient.

One can show that above defines open subsets, by observing that the subset where is closed, hence the complement () is open.

• As a Lie group

As seen above, we have the group , where is a manifold to arbitrary degree, with the maximal atlas containing as defined previously.

To prove that this is indeed a Lie group, we need to show that both the following maps:

and

are both smooth.

Differentiability is a rather strong notion in the complex case, and so one needs to be careful in checking this. Nonetheless, we can check this to arbitrary degree.

We observe that the diagram in Fig. fig:commutation-diagram-inverse-relativistic-spin-group-charts commutes, since the inverse map restricted to , where , is then mapped to , i.e. the image is .

We cannot talk about differentiability on the manifold itself, hence we say is differentiable if and only if the map is differentiable (since we already know and are differentiable). We observe that

which is most certainly a differentiable map. We've used the fact that all these matrices have in the inverse above.

Performing the same verification for the other charts, we find the same behavior. Hence, we say that , on the manifold-level, is differentiable.

For we can simply let the product-space inherit the smooth atlas on , hence is also smooth.

That is, the composition map and the inverse map are both smooth for the group , hence is a complex 3-dimensional Lie group!

• TODO As a Lie algebra

In this section our aim is to construct the Lie algebra of the Lie group .

We will use the standard notation of

Recall

i.e. it's the set of left-invariant vector fields.

Further, recall

where

Now, we need to equip with the Lie brackets:

where

To explicitly write out this , we use the chart since . For any , we have

Observe that if we write

then is diffeomorphic to .

### Classification of Lie Algebras

#### Stuff

Every finite-dim. complex Lie algebra can be decomposed as

where:

1. is a Lie sub-algebra of which is solvable, i.e.:

2. are simple Lie algebras, i.e.
• is non-abelian
• contains no non-trivial ideals, where:
• An ideal means some sub-vector space s.t. , i.e. if you bracket the sub vector spaces from the outside, then you're still in the ideal .
• is clearly ideal since
3. The direct sum between Lie algebras is defined as:

4. semi-direct sum

This says that we can always decompose a complex Lie algebra into a solvable part , joined by a semi-direct sum to a direct sum of simple Lie algebras.

Every Lie algebra can be decomposed into a semi-direct sum between a solvable Lie algebra and a direct sum between simple Lie algebras

A Lie algebra that has no solvable part, i.e. , is called semi-simple.

It turns out it's quite hard to classify the solvable Lie algebras , and simpler to classify the semi-simple Lie algebras. Thus we put our focus towards classifying the semi-simple Lie algebras and then using these as building blocks to classify the full Lie algebra of interest.

is a complex Lie algebra and , then define

is the adjoint map wrt. .

The bilinear map

is called the Killing "form" (it's a symmetric map, so not the kind of form you're used to).

And we make the following remarks about the Killing form:

• is finite-dim. thus the is cyclic, hence
• semi-simple (i.e. no solvable part) if and only if is non-degenerate:

Now, how would we then compute these simple forms?

Now consider, for actual calculations, components of and wrt. a basis:

Then

where are just coefficients of expanding the commutation in the space of the complex numbers, which we clearly can do since . These coefficients are called the structure constants of wrt. chosen basis.

The Killing form in components is:

where the last bit is taking the . Thus, each component of the Killing form is given by

We then emphasize that is semi-simple if and only if is a pseudo-inner product (i.e. an inner product but instead of being positive-definite, we only require it to be non-degenerate).
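A worked computation, assuming we take the familiar 3-dimensional algebra with brackets $[e_i, e_j] = \varepsilon_{ijk} e_k$ (i.e. $\mathfrak{su}(2) \cong \mathfrak{so}(3)$): its structure constants are the Levi-Civita symbols, and the component formula for the Killing form can be evaluated directly.

```python
import numpy as np

# Worked example (assumed basis): the 3-dimensional real Lie algebra with
# [e_i, e_j] = eps_{ijk} e_k, i.e. su(2) ~ so(3). Its structure constants
# are C^k_{ij} = eps_{ijk}, and the Killing form in components is
# K_{ab} = C^c_{ad} C^d_{bc} = tr(ad_a ad_b); we compute it both ways.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

C = eps                                       # C[a, j, k] = C^k_{aj}
ad = np.array([C[a].T for a in range(3)])     # (ad_a)^k_j = C^k_{aj}

K_cc = np.einsum('adc,bcd->ab', C, C)         # K_{ab} = C^c_{ad} C^d_{bc}
K_tr = np.einsum('akj,bjk->ab', ad, ad)       # K_{ab} = tr(ad_a ad_b)
print(K_cc)  # -2 * identity: non-degenerate, so the algebra is semi-simple
```

The determinant of the resulting matrix is non-zero, so the Killing form is non-degenerate, matching the semi-simplicity criterion above.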

One can check that is anti-symmetric wrt. the Killing form (for a simple Lie algebra, which implies semi-simple).

A Cartan subalgebra of a Lie algebra is:

• as a vector subspace
• a maximal subalgebra of such that there exists a basis:

of that can be extended to a basis of

such that the extension vectors are eigenvectors for any where :

where the eigenvalue depends on since with any other we have a different map

Now, one might wonder, does such a Cartan subalgebra exist?!

1. Any finite-dimensional Lie algebra possesses a Cartan subalgebra
2. If is a simple Lie algebra then is abelian, i.e.

is linear in . Thus,

I.e. we can either view as a linear map, OR as a specific value . This is simply equivalent to saying that

The are the roots of the Lie algebra, and we call

the root set.

Since is anti-symmetric wrt. killing form, then

Also, are not linearly independent.

A set of fundamental roots such that

1. linearly independent,
2. Then such that

Observe that the makes it so that we're basically choosing either to take all the positive or all the negative , since they are linearly independent, AND this is different from saying that !!! Since we could then have some negative and some positive. We still need to be able to produce from , therefore we need the .

And as it turns out, such a can always be found!

The fundamental roots of span the Cartan subalgebra

buuut note that is not unique (which is apparent from the fact that we can choose when expressing the roots, see definition of fundamental roots).

If we let , then we have

We define the dual of the Killing form as defined by

where we define

where exists if is a semi-simple Lie algebra (and thus of course if it's a simple Lie algebra).

If we restrict the dual of the Killing form to (as opposed to ), that is,

and

with equality if and only if .

Then, on we can calculate lengths and angles.

In particular, one can calculate lengths and angles of the fundamental roots of (all roots are spanned by fundamental roots => can calculate such on all roots).

Now, we wonder, can we recover precisely the set from the set ?

For any define

which is:

• linear in
• non-linear in

and such is called a Weyl transformation, and

called Weyl group with the group operation being the composition of maps.

1. The Weyl group is generated by the fundamental roots in :

2. Every root can be produced from a fundamental root by action of the Weyl group :

3. The Weyl group merely permutes the roots:

Thus, if we know the fundamental roots , we can, by 1., find the entire Weyl group, and thus, by 2., we can find all the roots !

#### Conclusion

Consider: for any fundamental roots , by definition of we have

And

Which means that both terms on the LHS of must have the same sign, and further, because it's an element in , we know the coefficient must be an integer (for ):

where, for , we have $-C_{ij} \in \mathbb{N}_0$.

Observe, is not symmetric.

We call the matrix defined by the Cartan matrix, and observe that , while every other entry is some non-positive number!

Now we define the bond number:

which implies

where and are non-positive numbers, hence:

Therefore:

| $C_{ij}$ | $C_{ji}$ | $n_{ij}$ |
|----------|----------|----------|
| 0 | 0 | 0 |
| -1 | -1 | 1 |
| -1 | -2 | 2 |
| -2 | -1 | 2 |
| -1 | -3 | 3 |
| -3 | -1 | 3 |

Which further implies that

##### Dynkin diagrams

We draw these diagrams as follows:

1. for every fundamental root draw circle:

2. if two circles represent , draw lines between them:

3. if there are 2 or 3 lines between two roots, use the sign on the lines between them to indicate which is the greater root

Any finite-dimensional simple - Lie algebra can be reconstructed from the set of fundamental roots , and the latter only comes in the following forms:

Taken from [[https://commons.wikimedia.org/wiki/]]

### Representation Theory of Lie groups and Lie algebras

#### TODO Representation of Lie Algebras

Let be a Lie algebra.

Then a representation of this Lie algebra is:

s.t.

where the vector space (a finite-dimensional vector space) is called the representation space.

An example of a representation is acts on , then

is a representation of via

In general, is an equivalence class of curves, where if and .

are manifolds, with . Then takes curves through to curves through , and

And takes equivalence classes to equivalence classes (CHECK THIS). Thus the differential of , .

A representation is called reducible if there exists a vector subspace s.t.

in other words, the representation map restricts to

Otherwise, is called irreducible.

Let be a Lie algebra over a field .

Given an element of a Lie algebra , one defines the adjoint action of on via the adjoint map.

Then there is a linear mapping

Within , the Lie bracket is, by definition, given by the commutator of the two operators:

Using the above definition of the Lie bracket, the Jacobi identity

takes the form

where , , and are arbitrary elements of .

This last identity says that is a Lie algebra homomorphism; i.e. a linear mapping that takes brackets to brackets. Hence is a representation of a Lie algebra and is called the adjoint representation of the algebra .

#### Casimir operator

Let be the representation of complex Lie algebra .

We define the ρ-Killing form on as

Note that this is not the same as the "standard" Killing form, where we only consider .

Let be a faithful representation of a complex semi-simple Lie algebra .

Then is non-degenerate.

Hence induces an isomorphism by

Recall that if is a basis of , then the dual basis of is defined by

By using the isomorphism induced by (when is a faithful representation, by proposition:ρ-killing-form-induces-isomorphism-for-complex-semi-simple-algebras), we can find some such that we have

or equivalently,

We thus have,

This seems awfully much like Reproducing Kernel Hilbert Spaces (RKHSs), doesn't it?

At least seems to define a kernel of the space?

Let and be defined as above. Then

where are the structure constants wrt. .

Let be a faithful representation of a complex (compact) Lie algebra and let be a basis of .

The Casimir operator associated to the representation is the endomorphism

Let be the Casimir operator of a representation .

Then

that is, commutes with every endomorphism in .

If is irreducible, then any operator which commutes with every endomorphism in , i.e.

has the form

for some constant (or , if is a real Lie algebra).

The Casimir operator of is

where

The first part follows from Schur's lemma and Thm. thm:casimir-operator-representation-of-lie-algebra.
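A concrete check, using the (assumed) standard spin-1/2 representation of $\mathfrak{su}(2)$ by halved Pauli matrices: the Casimir built from the $\rho$-Killing form and its dual basis indeed comes out proportional to the identity, as Schur's lemma predicts for an irreducible representation.

```python
import numpy as np

# Concrete check (standard example, symbols assumed): the Casimir operator
# of the spin-1/2 representation of su(2), with generators X_i = sigma_i / 2
# (Pauli matrices). Omega = sum_{ij} (B^{-1})_{ij} rho(X_i) rho(X_j), where
# B_{ij} = tr(rho(X_i) rho(X_j)) is the rho-Killing form.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
X = [s / 2 for s in sigma]

B = np.array([[np.trace(a @ b) for b in X] for a in X]).real  # here (1/2) I
Binv = np.linalg.inv(B)
Omega = sum(Binv[i, j] * X[i] @ X[j]
            for i in range(3) for j in range(3))
print(Omega)   # proportional to the identity, and commutes with all X_i
```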

#### Representation of Lie groups

A representation of a Lie group is a Lie group homomorphism

for some finite-dimensional vector space .

Recall that is a Lie group homomorphism if it is smooth and

Let be a Lie group. For each , we define the Adjoint map:

Notice the capital "A" here to distinguish from the adjoint map of a Lie algebra.

Since is a composition of smooth maps, it's a smooth map. Further,

Thus, .

### Reconstruction of a Lie group from its Lie algebra

#### Notation

• denotes the map restricted to the vector space

#### Stuff

is a smooth manifold and be is a smooth vector field on .

Then a smooth curve

is called an integral curve if

There is a unique integral curve of through each point of the manifold .

This follows from the existence and uniqueness of solutions to ordinary differential equations.

An integral curve of a vector field is called complete if its domain can be extended to .

On a compact manifold, every vector field is complete.

Every left-invariant vector field on a Lie group is complete.

Let and define the thus uniquely determined left-invariant vector field :

Then let be the integral curve of (which we can do due to this theorem) through the point

This defines the so-called exponential map

It might be a bit clearer if one writes

In some places you might see people talking about using infinitesimally small generators, say . Then they will write something like "we can generate the group from the generator by expanding about the identity":

where is our Lie group element generated from the Lie group element !

Now, there are several things to notice here:

• Factor of ; this is just convention, and might be convenient in some cases
• is just what we refer to as
• This is really not a rigorous way to do things…

Finally, and this is the interesting bit IMO, one thing that this notation might make a bit clearer than our definition of is:

• We don't need an expression for ; is a smooth curve, and so we can
1. Taylor expand to obtain new Lie group element
2. new Lie group element
3. Goto 1. and repeat until generated entire group
• Matching with the expression above, only considering first-order expansion, we see that

where

• Neat, innit?!
1. is a local diffeomorphism:

1. This restricted map is bijective
2. and are smooth
2. If is compact then is surjective: . This is super-nice, as it means we can recover the entire Lie group from the Lie algebra!!! (due to this theorem) However:
• is non-compact
• is compact
• Hence cannot be injective!
3. If is non-compact: it may be not surjective, may be injective and may be bijective. Just saying it can be whatever if is non-compact.

The ordinary exponential function is a special case of the exponential map when is the multiplicative group of positive real numbers (whose Lie algebra is the additive group of all real numbers).

The exponential map of a Lie group satisfies many properties analogous to those of the ordinary exponential function, however, it also differs in many important respects.
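For matrix Lie groups the exponential map is the matrix exponential, which can be illustrated directly (a sketch; `expm` here is a hand-rolled truncated power series, not a library routine, and the example group SO(2) is an assumed choice):

```python
import numpy as np

# Illustration: exponentiating theta * J, where J spans the Lie algebra
# so(2), lands on the rotation by theta in the Lie group SO(2).
def expm(a, terms=30):
    """Matrix exponential via truncated power series (fine for small a)."""
    out, term = np.eye(a.shape[0]), np.eye(a.shape[0])
    for n in range(1, terms):
        term = term @ a / n          # accumulates a^n / n!
        out = out + term
    return out

theta = 0.7
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator (Lie algebra element)
R = expm(theta * J)                        # group element exp(theta J)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.abs(R - expected).max())          # agrees with the rotation matrix
```

This also shows the sense in which the ordinary exponential is a special case: for the 1-dimensional "matrix group" of positive reals the series above reduces to the usual $e^x$.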

### Lie group action, on a manifold

#### Notation

• Unless otherwise specified, assume maps in this section to be continuous
• is sometimes used to denote a specific equivalence class related to the group-element
• denotes the orbit of the element , i.e.
• denotes the stabilizer of the element , i.e.

#### Preparation

Let be a Lie group, and be a smooth manifold.

Then a smooth map

satisfying

is called a left G-action on the manifold .

Similarly, we define a right action:

s.t.

Observe that we can define the right action using the left action :

and

can be understood as a right action of on the basis and a left action of on the components .

Let:

• two Lie groups and , and a Lie group homomorphism .
• and be two smooth manifolds
• two left actions

• be a smooth map

Then is called equivariant if the following diagram commutes:

where is a function where takes the first entry and maps to , and takes the second entry and maps to .

Let be a left action.

1. For any we define its orbit under the action as the set

2. Let

which defines an equivalence relation, thus defining a partition of called the orbit space

3. For any we define the stabilizer

An action is called free if and only if

where is the identity of the group.

#### Examples of Lie group actions

1. acts on and is a Lie group
2. acts on since

and thus we have

• Of course, also acts on
3. acts on

#### G-homogeneous space

Let act on , then

1. is a closed Lie subgroup of
2. is an injective immersion

In particular, is an immersed submanifold in and if it is a submanifold, then is a diffeomorphism .

A G-homogeneous space is a manifold with transitive action of , i.e.

If is a G-homogeneous space, then there is a fibre bundle with fibre , where

The proof follows from the fact that we have a diffeomorphism between and following thm:stabilizer-is-closed-lie-group-and-action-diffeomorphism-to-manifold , and we already know that is a fibre bundle with fibre from earlier.

##### Example of application of G-homogeneous space

acts on with

So has on the first column, and then zeros on the rest of the first row, and then the rest of the matrix is . MAKE THE MATRIX.

We get the fibre bundle with fibre . We then have the exact sequence

If and then

If , we have . This implies

is a point, hence connected and simply connected, hence so is for all !

#### Principal fibre bundles

A bundle is called a principal G-bundle if

1. is a right G-space, i.e. equipped with a right G-action
2. is free
3. and are isomorphic as bundles, where
• , takes a point to its orbit / equivalence class
• Since is free:

Suppose we have two principal G-bundles:

Then a principal bundle morphism or map needs to satisfy the following commutation relations:

and a further restriction is also that there exists some Lie group homomorphism , i.e. has to be a smooth map satisfying:

A principal bundle map is a diffeomorphism.

A principal G-bundle under the action by is called trivial if it is diffeomorphic to the principal G-bundle equipped with

and

That is, is trivial if and only if it's diffeomorphic to the bundle where the total space is the mfd. with attached as the fibre to each point, or equivalently (due to this theorem) if there exists a principal bundle map between these bundles.

A principal G-bundle is trivial if and only if

i.e. there exists a smooth global section from to .

Let be a manifold. First observe that

i.e. the bundle is simply the set of all possible bases of .

The frame bundle is then

where denotes the unique union.

Equip with a smooth atlas inherited from .

Further we define the projection by

which implies that is a smooth bundle.

Now, to get a principal bundle, we need to establish a right action on , which we define to be

which is just change of basis, and is therefore free.

We check that this is in fact a principal bundle by verifying that this bundle is isomorphic to .

Observe that the Frame bundle allows us to represent a choice of basis by a choice of section in each neighborhood , with .

That is, any is a choice of basis for the tangent space at . This is then equipped with the general linear group , i.e. invertible transformations, which is exactly what is used to construct changes of basis!!!

Y'all see what this frame bundle is all about now?

#### Associated bundles

Given a G-principal bundle (where the total space is equipped with for ) and a smooth manifold on which we can have a left G-action:

we define the associated bundle

by:

1. let be the equivalence relation:

Thus, consider the quotient space:

In other words, the elements of are the equivalence classes (short-hand sometimes used ) where , .

2. Define by

which is well-defined since

This defines a fibre bundle with typical fibre :

##### Example of associated bundle: tangent bundle

That is, if we change the frame by the right-action (which represents a change of basis), then we must change the components of the tangent space (if we let be the tangent space).

Then is the associated bundle of the frame bundle.

##### Example: tensor associated bundle

With the left action of on :

Which defines the tensor bundle wrt. some frame bundle .

So what we observe here is that "changes of basis" in the frame bundle, which is the principal bundle for the associated bundles tangent bundle and tensor bundle, corresponds to the changes to the tangent and tensors as we are familiar with!

That is, upon having the group act on the frame bundle, thus changing bases, we also have this same group act on the associated bundles!

##### Example of associated bundle: tensor densities

But now, left action of on :

for some .

Then is called the (p, q)-tensor density bundle over .

Observe that this is the same as the tensor bundle, but with the determinant factor in front. Thus, if we had instead used , i.e. the orthogonal group, then , and it would be exactly the same as the tensor bundle!
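A minimal numerical sketch of this determinant weight (the sign convention on the weight w is a choice, and `transform_density`, the density value, and the Jacobians are made-up illustrations, not standard API):

```python
import numpy as np

def transform_density(rho, J, w):
    """Transform a scalar density of weight w under a linear coordinate
    change with Jacobian J (the sign of the weight is a convention)."""
    return np.linalg.det(J) ** (-w) * rho

rho = 2.5
J_general = np.diag([2.0, 3.0])       # generic GL(2) Jacobian, det = 6
J_rotation = np.array([[0.0, -1.0],
                       [1.0,  0.0]])  # an orthogonal Jacobian, det = 1

# A weight-1 density picks up a factor 1/det(J) = 1/6 here:
assert np.isclose(transform_density(rho, J_general, 1), rho / 6.0)

# For an orthogonal Jacobian the determinant factor is trivial, so the
# density transforms exactly like an ordinary tensor component:
assert np.isclose(transform_density(rho, J_rotation, 1), rho)
```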

##### Associated bundle map

An associated bundle map between two associated bundles (sharing the same fibre, but being associated to arbitrarily different respective G-principal bundle )

is a bundle map (structure-preserving map of bundles) which can be constructed from a principal bundle map between the underlying principal bundles,

where

as

##### Restricted associated bundles

Let

If there exists a bundle morphism (NOT principal) such that

with:

Then

• is called a G-extension of the H-principal bundle
• is called an H-restriction of the G-principal bundle

i.e. if one is an extension of the other, then the other is a restriction of the one.

#### Connections

Let be a principal G-bundle.

Then each induces a vector field on

It is useful to define the map

which can be shown to be a Lie algebra homomorphism

where:

• is the Lie bracket on
• is the commutation bracket on

Let . Then

where:

is called the vertical subspace at point .

The idea of a connection is to make a choice of how to "connect" the individual points of "neighboring" fibres in a principal bundle.

A connection on a principal G-bundle is an assignment where, for every , a vector subspace of is chosen such that

1. where is the vertical subspace
2. The push-forward by the right-action of satisfies:

3. The unique decomposition:

leads, for every smooth vector field , to two smooth vector fields and .
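Pointwise, this decomposition is just linear algebra: once the vertical and horizontal subspaces are fixed, any tangent vector splits uniquely. A toy sketch (the subspaces and the vector X are made-up numbers purely for illustration):

```python
import numpy as np

# Toy model of T_p P = V_p (+) H_p: the tangent space is R^4, the
# vertical subspace is spanned by the columns of V, the chosen
# horizontal subspace by the columns of H.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
H = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

X = np.array([3.0, -1.0, 2.0, 5.0])

# Solve X = V a + H b for the unique coefficients (a, b).
B = np.hstack([V, H])
coeffs = np.linalg.solve(B, X)
ver, hor = V @ coeffs[:2], H @ coeffs[2:]

assert np.allclose(ver + hor, X)   # the decomposition recovers X
```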

The choice of horizontal subspace at each , which is required to provide a connection, is conveniently "encoded" in the thus induced Lie-algebra-valued one-form , which is defined as follows:

where we need to remember that depends on the choice of the horizontal subspace !

Recall that is the map defined in the beginning of this section:

where is called the connection 1-form wrt. the connection.

That is, the connection is a choice of horizontal subspaces of the fibre of the principal bundle, and once we have such a space, we can define this connection 1-form.

Therefore one might wonder "can we go the other way around?":

Yes, we can!

A connection 1-form wrt. a given connection has the properties

1. Pull-back

where we recall

2. Smooth, since

where is smooth since the exponential map is smooth.

##### Different approach to connections

This approach is the one used by Schuller in the International Winter School on Gravity and Light 2015.

A connection (or covariant derivative) on a smooth manifold is a map which takes a pair consisting of a vector field and a (p, q)-tensor field and sends this to a (p, q)-tensor field , satisfying the following properties:

1. for
2. The Leibniz rule for a (1, 1)-tensor:

and the generalization to a (p, q)-tensor follows by including further terms corresponding to each of the arguments. It's worth noting that this is actually the definition obtained from

which is the more familiar form of the Leibniz rule.
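For reference, the Leibniz rule of point 2 can be written out for a (1, 1)-tensor in a standard coordinate-free form (conventions vary between authors):

```latex
% Leibniz rule for the covariant derivative of a (1,1)-tensor T,
% evaluated on a covector field \omega and a vector field Y:
\begin{align}
  (\nabla_X T)(\omega, Y)
    = X\big(T(\omega, Y)\big)
    - T(\nabla_X \omega, Y)
    - T(\omega, \nabla_X Y)
\end{align}
```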

Consider vector fields . Then

1. A vector field on is said to be parallel transported along a smooth curve if

i.e.

2. A weaker notion is

for some , i.e. it's "parallel".

#### Local representations of connections on the base manifold: "Yang-Mills" fields

In practice, e.g. for computational purposes, one wishes to restrict attention to some :

• Choose local section , thus

Such a local section induces:

1. "Yang-Mills" field, :

i.e. a "local" version of the connection 1-form on the principal bundle , defined through the pull-back of the chosen section .

2. Local trivialization , , of the principal bundle

Then we can define the local representation of

Suppose we have chosen a local section .

The Yang-Mills field , i.e. the connection 1-form restricted to the subspace , is then defined

Thus, this is a "Lie algebra"-valued 1-form.

Choosing the local trivialization :

Then we can define the local representation of the global connection :

given by

where

where we have

##### Gauge map
• Can we use the "local understanding" / restriction to , to construct a global connection 1-form?
• Can do so by defining the Yang-Mills field for different, overlapping subspaces of the base-manifold!
• Need to be able to map between the intersection of these subspaces; introduce gauge map

Suppose we have two subspaces of the base manifold , with , for which we have also chosen two corresponding sections , such that

We then define the gauge map as

where is the underlying Lie group (on ), defined on the unique for all :

where is the Maurer-Cartan form.

From the definition of gauge map, get

where:

• denotes a point in the base-manifold
• denotes the component (which is NOT, hence the comma)

This theorem gives us the relationship between the Yang-Mills fields on the two different subspaces of !

##### Example: Frame bundle

Recall that in the case of the Frame bundle we have

Then a particular choice of section for some chart is equivalent to a specific choice of coordinates:

Let's first consider this as an instance of a Yang-Mills field:

is then a Lie-algebra valued one-form on , with components

where

• comes from being a one-form on , hence components
• from

We can obtain the gauge map for the Frame bundle. For this we first need to compute the Maurer-Cartan form in . We do this as follows:

1. Choose coordinates on an open set containing :

where

which are the "matrix entries".

2. We then consider

since

• is the unique integral curve of
• by def. of vector-field acting on a map

This can then be written

Hence,

Since

we, in this case, have

Then,

as we wanted!

Recalling the definition of the gauge map, we now want to compute

where is the index of the components. To summarize, we're interested in

Let , then

Hence, considering the components of this

where denotes the corresponding matrix.

Now, we need to compute the second term:

Observe that

Thus,

The above can be seen from:

Finally, this gives us the transition between the two Yang-Mills fields
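For matrix Lie groups such as GL(d, ℝ), this transition can be written out explicitly. A sketch of the standard formula, writing Ω for the gauge map relating the two sections (the symbol Ω is my choice of notation here):

```latex
% Transition between the Yang--Mills fields on U_1 \cap U_2, in matrix
% notation; \Omega : U_1 \cap U_2 \to G is the gauge map:
\begin{align}
  A^{(2)}_{\mu}(x)
    = \Omega^{-1}(x)\, A^{(1)}_{\mu}(x)\, \Omega(x)
    + \Omega^{-1}(x)\, \partial_{\mu} \Omega(x)
\end{align}
```

The second, inhomogeneous term is the pull-back of the Maurer-Cartan form; it is what distinguishes a connection from an ordinary tensor.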

That is, we got it yo!

### Parallel Transport

This is the proper way of introducing the concept of Parallel transport.

The idea behind parallel transport is to construct a curve in the total space from the curve in the base manifold , with

Since we have connections between the fibres, one might think of constructing "curves" in the total space by connecting a chosen point for to some other chosen point for , with and being "near" each other.

Then the unique curve

through a point which satisfies:

1. , for all
2. , for all

is called the lift of through .

My initial thoughts for how to approach this would be to make a choice of section at each , such that

where would be a choice for every . It quickly becomes apparent that this is not a good approach, since we would have to choose these sections for each in a way that makes everything well-defined, which is a very "intricate" way of going about it.

The idea is to take some such curve as a "starting curve", denoted , and then construct every other from this by acting on it at each point by elements of , or rather, choosing a curve in the Lie-group, such that

i.e. we can generate any lift of simply by composing the arbitrary curve with some curve in the Lie-group.

It turns out that the choice of is the solution to an ODE with the initial condition

where is the unique group element such that

The ODE for is

with initial condition

such that

Worth noting that this is a first-order ODE.
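Such a first-order ODE is easy to integrate numerically. As an illustration (using the Levi-Civita connection on the round 2-sphere in coordinates, rather than the abstract principal-bundle notation above), transporting a vector around a circle of latitude recovers the well-known holonomy rotation by 2π cos θ₀:

```python
import numpy as np

# Parallel transport on the unit 2-sphere (coordinates theta, phi) along
# the latitude circle theta = theta0, phi = t, t in [0, 2*pi].  The only
# Christoffel symbols that enter are
#   Gamma^theta_{phi phi} = -sin(theta) cos(theta),
#   Gamma^phi_{theta phi} = cot(theta),
# giving a linear first-order ODE for the components (v_theta, v_phi):
theta0 = np.pi / 3

def rhs(v):
    v_th, v_ph = v
    return np.array([np.sin(theta0) * np.cos(theta0) * v_ph,
                     -v_th / np.tan(theta0)])

# Integrate with classical RK4.
v = np.array([1.0, 0.0])          # start with the unit vector e_theta
n_steps = 4000
h = 2 * np.pi / n_steps
for _ in range(n_steps):
    k1 = rhs(v)
    k2 = rhs(v + 0.5 * h * k1)
    k3 = rhs(v + 0.5 * h * k2)
    k4 = rhs(v + h * k3)
    v = v + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# In the orthonormal frame (e_theta, sin(theta0) e_phi) the transported
# vector is rotated by the holonomy angle 2*pi*cos(theta0) = pi here,
# so it comes back pointing the opposite way:
u = np.array([v[0], np.sin(theta0) * v[1]])
assert np.allclose(u, [-1.0, 0.0], atol=1e-6)
```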

### Spinors on curved spaces

Spin group is a double cover of the special orthogonal group

where (i.e. inner product), i.e. we can construct a Lie group homomorphism

with

In other words, the map is 2-to-1, where the "kernel is the definition of spins".

Seems awfully similar to a bundle dunnit? With the addition that
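The 2-to-1 nature of the covering map can be made concrete with unit quaternions, which realise Spin(3) ≅ SU(2). A sketch using the standard quaternion-to-rotation-matrix formula:

```python
import numpy as np

def quat_to_rotation(q):
    """SO(3) rotation matrix from a unit quaternion q = (w, x, y, z).

    This is the standard covering map Spin(3) ~ SU(2) -> SO(3)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A unit quaternion: rotation by 2*pi/3 about the axis (1,1,1)/sqrt(3).
axis = np.ones(3) / np.sqrt(3)
angle = 2 * np.pi / 3
q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

R_plus, R_minus = quat_to_rotation(q), quat_to_rotation(-q)

# q and -q map to the SAME rotation: the kernel of the covering map is
# {+1, -1}, which is exactly the 2-to-1 statement above.
assert np.allclose(R_plus, R_minus)
assert np.allclose(R_plus @ R_plus.T, np.eye(3))   # orthogonal
assert np.isclose(np.linalg.det(R_plus), 1.0)      # det = +1
```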

## Complex dynamics

This section mainly started out as notes from the very interesting article by Danny Stoll called "A Brief Introduction to Complex Dynamics" (http://math.uchicago.edu/~may/REUDOCS/Stoll.pdf).

### Notation

• denotes a Riemann surface

### Definitions

A Riemann surface is a complex one-dimensional manifold.

That is, a topological space is a Riemann surface if for any point , there is a neighborhood of and a local uniformizing parameter (i.e. complex charts)

mapping homeomorphically into an open subset of the complex plane.

Moreover, for any two such charts and such that

we require the transition map to be holomorphic on .

Let be a simply connected Riemann surface. Then is conformally isomorphic to either

1. the complex plane
2. the open disk consisting of all with absolute value , or
3. the Riemann sphere consisting of together with the point with the transition map in a neighborhood of the point at infinity.

### Normal cover

A deck transformation is a transformation such that .

A covering space is said to be normal if

and there exists a unique deck transformation such that .

The group of deck transformations for the universal covering is known as the fundamental group of . Thus,

And we're just going to state a theorem here:

Let be a family of maps from a Riemann surface to the thrice-punctured Riemann sphere . Then is normal.

## Partitions of Unity

Suppose is a topological space, and let be an arbitrary open cover of .

A partition of unity subordinate to is a family of continuous functions with the following properties:

1. for all and all
2. for each
3. The family of supports is locally finite, i.e. every point has a neighborhood that intersects for only finitely many values of :

for some .

4. Sums to one:

Since on each open there are only finitely many non-zero , the sum above contains only finitely many non-zero terms, so convergence is not an issue.

If is a smooth manifold, a smooth partition of unity is one for which every function is smooth.

For a proof, see p. 40 in lee2003smooth.
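In one dimension a smooth partition of unity can be built explicitly from the standard mollifier. A sketch (the cover and the interval are made-up choices; `bump` and `partition_of_unity` are hypothetical helper names):

```python
import numpy as np

def bump(x, a, b):
    """Smooth function, positive exactly on (a, b), zero outside.

    Built from the standard mollifier exp(-1/t), which is C-infinity."""
    x = np.asarray(x, dtype=float)
    t = (x - a) * (b - x)                  # positive iff a < x < b
    out = np.zeros_like(x)
    inside = t > 0
    out[inside] = np.exp(-1.0 / t[inside])
    return out

# An open cover of [0, 3] by two intervals:
cover = [(-1.0, 2.0), (1.0, 4.0)]

def partition_of_unity(x):
    """Smooth functions subordinate to `cover` summing to 1 on [0, 3]."""
    raw = np.array([bump(x, a, b) for a, b in cover])
    return raw / raw.sum(axis=0)           # normalise so the sum is 1

xs = np.linspace(0.0, 3.0, 301)
phis = partition_of_unity(xs)

assert np.all(phis >= 0) and np.all(phis <= 1)   # property 1
assert np.allclose(phis.sum(axis=0), 1.0)        # property 4
```

Normalising by the (everywhere positive, locally finite) sum is exactly the trick used in the proof referenced above.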

## Integrations on chains

### k-cubes

Let be open.

A singular k-cube in is a continuous map .

• A singular 0-cube in is, in effect, just a point of
• A singular 1-cube in is a parametrized curve in

The standard (singular) k-cube is the inclusion map of the standard unit cube.

A (singular) k-chain in is a formal finite sum of singular k-cubes in with integer coefficients, e.g.

k-chains are then equipped with addition and scalar multiplication by integers.

### Boundary of chain

Consider the inclusion mapping to the standard k-cube .

For each with we define two singular (k - 1)-cubes:

and

We refer to these as the (i, 0)-face and the (i, 1)-face of , respectively.

Then we define

Now, if is a singular k-cube in , we define its faces by

and then define

We extend the definition of boundary to k-chains by linearity:

If is a k-chain in , then , i.e.

Woah! Seems to correspond closely with the identity stated in thm:exterior-derivative-squred-is-zero.
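The identity "the boundary of a boundary is zero" can be verified combinatorially for the standard cube. A sketch encoding an iterated face as the set of coordinates it holds fixed (the encoding is my own bookkeeping, not standard notation); the faces of a face are indexed by the remaining free coordinates, mirroring the (i, α)-face definition above:

```python
from collections import Counter

def boundary(chain, dim):
    """Boundary of a formal integer combination of faces of the
    standard dim-cube.  A face is a sorted tuple of (coordinate, value)
    pairs held fixed; `chain` maps faces to integer coefficients."""
    result = Counter()
    for face, coeff in chain.items():
        fixed = {c for c, _ in face}
        free = [c for c in range(dim) if c not in fixed]
        for i, coord in enumerate(free, start=1):
            for alpha in (0, 1):
                sign = (-1) ** (i + alpha)       # the (-1)^{i+alpha} rule
                new_face = tuple(sorted(face + ((coord, alpha),)))
                result[new_face] += coeff * sign
    return {f: c for f, c in result.items() if c != 0}

# The standard 3-cube itself: no coordinates fixed, coefficient 1.
cube = {(): 1}
d_cube = boundary(cube, 3)       # six 2-dimensional faces, signs +-1
dd_cube = boundary(d_cube, 3)    # every (k-2)-face appears twice,
                                 # with opposite signs
assert len(d_cube) == 6
assert all(c in (-1, 1) for c in d_cube.values())
assert dd_cube == {}             # the boundary of a boundary vanishes
```

Each 1-dimensional face arises once by fixing coordinate a then b, and once in the other order; the index shift in the free-coordinate numbering flips the sign, so the two contributions cancel.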

### Integration

Now let , i.e. is a differential k-form on the unit k-cube in . Then

In which case we define the integral

If instead for open , and is a singular k-cube in , we define

i.e. integration of a k-form over a singular k-cube is defined by pulling the k-form back to the unit k-cube in and then doing ordinary integration.

By linearity, for a singular k-chain ,

Let

Then
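As a concrete check of this machinery, Stokes' theorem can be verified numerically in a simple case: the 1-form ω = x dy on the unit square, where dω = dx ∧ dy, so the left-hand side is the area of the square and the right-hand side is the line integral over the boundary (a sketch; `line_integral_x_dy` is a made-up helper using the midpoint rule on the pulled-back form):

```python
import numpy as np

def line_integral_x_dy(curve, n=20_000):
    """Integrate w = x dy over a parametrised curve on [0, 1]
    (midpoint rule on x(t) y'(t) dt after pulling back)."""
    t = (np.arange(n) + 0.5) / n
    h = 1.0 / n
    x, _ = curve(t)
    _, y_plus = curve(t + h / 2)
    _, y_minus = curve(t - h / 2)
    return np.sum(x * (y_plus - y_minus))

# Boundary of the unit square, traversed counterclockwise:
edges = [
    lambda t: (t, np.zeros_like(t)),          # bottom: (t, 0)
    lambda t: (np.ones_like(t), t),           # right:  (1, t)
    lambda t: (1 - t, np.ones_like(t)),       # top:    (1-t, 1)
    lambda t: (np.zeros_like(t), 1 - t),      # left:   (0, 1-t)
]

rhs = sum(line_integral_x_dy(edge) for edge in edges)
lhs = 1.0   # integral of dw = dx ^ dy over the unit square = its area

assert np.isclose(rhs, lhs)
```

Only the right edge contributes: x = 1 there and dy = dt, while the other three edges have either x = 0 or y constant.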

## Q & A

### DONE Tangent spaces and their basis

• Note taken on [2017-10-10 Tue 05:03]
See my note at the end of the question

When we're talking about a tangent space , we say the space is defined by the basis of all the differential operators , i.e.

Now, these , are they arbitrary? Or are they dependent on the fact that we can project them onto Euclidean space in some way, i.e. reparametrise some arbitrary basis to .

I believe so. In fact, the notation we're using here is that , hence we're already assuming the domain to actually be in , i.e. there always exists a reparametrisation to "normal" Euclidean coordinates.