This post was organized as part of the UT Austin Spring 2020 DRP program. I’d like to thank my mentor Richard Wong for useful directions throughout the semester and for introducing me to much of the foundational material in homotopy theory.

A spectrum is a sequence of pointed spaces $X_n$ together with structure maps $\sigma_n: \Sigma X_n \to X_{n+1}$. For example, one can take a pointed space $X$ and consider the suspensions $\Sigma^n X$. This forms a spectrum with the structure maps $\Sigma(\Sigma^n X) \to \Sigma^{n+1} X$ the identity. This is the suspension spectrum $\Sigma^\infty X$ of $X$.

One can consider a general spectrum $X$ and suspend it by an integer $i$ to get a spectrum $\Sigma^i X$ whose components are given by $(\Sigma^i X)_n = X_{n+i}$.

The homotopy groups of a spectrum $X$ are defined as the colimit:

$\pi_n(X) = \operatorname{colim}_k \pi_{n+k}(X_k)$,

where the maps in the colimit are induced by the structure maps.

Since $\Sigma$ is a functor on pointed spaces, one is motivated to define a map between spectra $X \to Y$ as a collection of maps $f_n: X_n \to Y_n$ such that the obvious diagrams involving the structure maps commute. This may not actually be the best definition of maps between spectra, but it doesn’t matter too much for now.

For example, one can consider $S^0$ and suspend it to get $\mathbb{S}$, the sphere spectrum, with $\mathbb{S}_n = S^n$, an example of a suspension spectrum.

Given the suspension and loop space functors $\Sigma, \Omega: \mathrm{Top}_* \to \mathrm{Top}_*$, one has the loop-space adjunction for pointed topological spaces, namely $[\Sigma X, Y]_* \cong [X, \Omega Y]_*$.

Note that $\Omega Y = \mathrm{Map}_*(S^1, Y)$ (with the compact-open topology). This allows us to define an $\Omega$-spectrum: a spectrum where the adjoint maps $X_n \to \Omega X_{n+1}$ of the structure maps are weak homotopy equivalences.

Let’s look at an example.

Eilenberg-MacLane Spectrum

Recall that an Eilenberg-MacLane space corresponding to $(G, n)$, where $G$ is an abelian group, is a CW-complex such that $\pi_n = G$ and $\pi_i = 0$ for $i \neq n$. Call this CW-complex $K(G, n)$.

We know the following correspondence from algebraic topology: for any CW-complex $X$, $H^n(X; G) \cong [X, K(G, n)]$.

Collect these spaces on the index $n$ to get the sequence $K(G, n)$. We claim this forms a spectrum. What are the structure maps, though?

In particular, note that we have $\pi_i(\Omega K(G, n+1)) \cong \pi_{i+1}(K(G, n+1))$. By uniqueness of the Eilenberg-MacLane space up to homotopy equivalence, we have a homotopy equivalence $K(G, n) \simeq \Omega K(G, n+1)$. Take the adjoint of this under the loop-space adjunction to get the structure maps $\Sigma K(G, n) \to K(G, n+1)$.

Hence, this is also an $\Omega$-spectrum.
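For completeness, the standard homotopy-group computation behind these structure maps can be written out:

```latex
\pi_i(\Omega K(G, n+1)) \cong \pi_{i+1}(K(G, n+1)) \cong
\begin{cases} G & i = n \\ 0 & i \neq n \end{cases}
\cong \pi_i(K(G, n)),
```

so $\Omega K(G, n+1)$ is itself a $K(G, n)$, which is exactly why the adjoint structure maps are weak equivalences.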

CW-spectra

Well, I suppose some analogy with the classical case is required here. Consider the Quillen-Serre model structure on pointed topological spaces (aka the usual one). Here, the cofibrations are retracts of generalized relative CW inclusions (see [1] for definitions). In particular, CW complexes are cofibrant objects in this case. We’d like our so-called CW spectra to be cofibrant objects in the stable model category on topological sequential spectra (these are the same things as the spectra defined above, where the levels are pointed spaces).

A CW spectrum is a sequence of CW-complexes where the structure maps map $\Sigma X_n$ isomorphically onto a subcomplex of $X_{n+1}$. To prove the above fact, see Prop 0.49 of this.

Edit: This was supposed to be published on 14/04/2020 but due to some issues, it was published on 16/04/2020.

If you’re wondering what in the world a political post is doing in a mathematics blog, then don’t. Ambedkar’s experiences and pervasive effect on society merit far greater attention than any fancy mathematical abstraction or theorem I could slap onto the front page of my blog. Today is Ambedkar Jayanti, a day in remembrance of B. R. Ambedkar, and I could hardly resist making some kind of comment on how inspiring his views have been in shaping my worldview and political inclinations. Perhaps the first question to ask is: who is Ambedkar? I’ll give a very brief introduction to this legendary character, but I mostly assume that anyone who takes interest in this post has some idea about him. I wanted to discuss in some depth his interactions with other leading figures of the time like Nehru, Gandhi, and Jinnah (the ideological founder of Pakistan), his thoughts on Indian independence, Dalit representation and the ‘Islamic Problem’, and the contemporary image of his legacy.

Some may find it strange that I should call him an ‘unsung hero’ as every soul in the country knows his name yet I stick by it, for his true importance is woefully understated, both today and in the past.

I am neither a political expert nor a historian, so to some extent I absolve myself of any frivolous errors I might make in this post. Needless to say, this is not a biography; I will state my opinions on Ambedkar’s ideas and interactions clearly. I am not being promoted or paid, in any measure, to publish this post. He is, like most important characters in the Indian independence episode, a controversial figure, especially for his scathing rebukes of Gandhi and Islam.

Introduction

Bhimrao Ramji Ambedkar (भीमराव रामजी आंबेडकर) (1891–1956) was born in Mhow, currently in Madhya Pradesh, into an economically lower-middle-class family from the Mahar caste, one of the lowest caste denominations in Maharashtra. Quite early in his life, he moved back to Maharashtra. He is mainly remembered as one of the foremost Dalit (untouchables in the caste hierarchy) activists of his era and a key framer of the Indian Constitution. He was also a highly reputed economist, obtaining a doctorate at Columbia University, and trained in law at the famous Gray’s Inn while studying at the London School of Economics (LSE).

The story of the discrimination he faced in his schooling days is well known to many Indians. As a child in school, he was often made to sit separately from the other students, if not outside the class, on account of his low caste. He was not even allowed to drink water unless it was poured into his mouth by a designated student of higher caste.

Almost 6 months ago, I came across an interesting update through the UT CS theory seminar group on the classic problem of computing the Discrete Fourier Transform (DFT) of finite groups. Chris Umans, from Caltech, made some pretty nice improvements to the complexity of the general DFT through some slick combinatorics, which I’ll shortly explain. The paper is here.

Introduction

Well, before I even start, perhaps I should make some kind of comment to motivate the problem at hand. A sequence of complex numbers $(a_0, \dots, a_{n-1})$ can be represented by a function $f: \mathbb{Z}/n\mathbb{Z} \to \mathbb{C}$.

The Discrete Fourier Transform of $f$ sends it to the sequence $\hat{f}$ where $\hat{f}(j) = \sum_{k=0}^{n-1} f(k)\, e^{-2\pi i jk/n}$.
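The cyclic case can be written out directly. Below is a naive $O(n^2)$ implementation of this formula (a sketch in Python, my own function names), together with a check that the inverse transform recovers the original sequence:

```python
import cmath

def dft(f):
    # fhat(j) = sum_k f(k) * exp(-2*pi*i*j*k/n), the formula above
    n = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(F):
    # Fourier inversion: same sum with the conjugate root, divided by n
    n = len(F)
    return [sum(F[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

f = [1, 2, 3, 4]
assert all(abs(a - b) < 1e-9 for a, b in zip(idft(dft(f)), f))
```

This quadratic count is exactly the baseline that the algorithms below improve on.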

This motivates the definition of the Fourier transform of a function $f$ on a finite group $G$, which takes as input a representation $\rho$, i.e.

$\hat{f}(\rho) = \sum_{g \in G} f(g) \rho(g),$

where one must note that here, more generally, $\hat{f}$ can be matrix-valued. From Maschke’s theorem, the DFT of a finite group reduces to a map which takes as input an element of $\mathbb{C}[G]$ (the group ring, consisting of all functions from $G$ to $\mathbb{C}$ under convolution) and sends it to the direct sum of matrix algebras $\bigoplus_\rho \mathbb{C}^{d_\rho \times d_\rho}$. Throughout, one assumes that we already have the irreducible representations and have chosen a basis for each of them.

Quite simply, one trivially gets that this requires $O(|G|^2)$ operations. Exponent-one algorithms exist for abelian groups (as one would expect, using the decomposition into cyclic groups), supersolvable groups, and certain classes of symmetric and alternating groups.

Throughout the post, let $\omega$ be the exponent of matrix multiplication (currently it is around 2.38, I guess).

The main result is that Chris Umans achieves an operation complexity of $O(|G|^{\omega/2 + \epsilon})$ for every $\epsilon > 0$. Some basic facts from representation theory will be used throughout, though I guess most readers would already be familiar with them. For example, it is easy to prove through the Schur orthogonality relations that $\sum_{\rho \in \mathrm{Irr}(G)} d_\rho^2 = |G|$, where $\mathrm{Irr}(G)$ is the set of all irreducible representations of $G$ and $d_\rho$ is the dimension of $\rho$. This gives us the following useful lemma.

Lemma 1:

For every real number $\alpha \geq 2$, $\sum_{\rho \in \mathrm{Irr}(G)} d_\rho^{\alpha} \leq |G|^{\alpha/2}$. (Proof omitted)

Some facts about induced representations and Clifford theory are used, but it shouldn’t be too much of a hassle to discuss them on the fly.

The Support Trick

Let $G$ be a group and $S \subseteq G$ a subset. Let’s assume that we can calculate the generalized DFT with respect to $G$ for all inputs supported on $S$ (i.e. $f(g) = 0$ for $g \notin S$) in some number of operations. Surely, with some extra computational cost (involving the index of the subset), one can extend this to all inputs, because if we can compute the DFT supported on $S$, then we can compute the DFT supported on a translate $Sg$ by multiplying by the image of $g$, which has an additional operation cost. We can apply Lemma 1 and take an appropriate number of ‘shifts/translates’ by elements of $G$ to get the following result (see Section 2.3):

This series (probably three posts) will be on the monad. The first part will deal only with an introduction to monads, algebras over monads, and the adjunction problem. The second part will deal with coequalizers and Beck’s theorem in connection with Kleisli categories. In keeping with this blog’s slight CS tilt, the third part will deal with examples of monads from functional programming, which I think are crucial to wholly understand the monad. I’ve noticed that I am running too many ‘series posts’ simultaneously. I am still working on the third part of the Grothendieck Spectral Sequence series and will continue the Hopf fibration post, which I haven’t updated for a year!

I think that before I introduce the main players of today’s post, I should review the definition of an adjoint functor, as it’ll be quite a useful tool to see the power of monads.

Consider two categories $C, D$ and functors $F: C \to D$ and $G: D \to C$. $F$ and $G$ are said to be adjoint, i.e. $F \dashv G$, if there exists a natural isomorphism of Hom-sets $\mathrm{Hom}_D(FX, Y) \cong \mathrm{Hom}_C(X, GY)$ for all $X \in C$, $Y \in D$. Equivalently, by the unit-counit definition, if there exists a pair of natural transformations $\eta: 1_C \Rightarrow GF$ (unit) and $\epsilon: FG \Rightarrow 1_D$ (counit) which satisfy the following commutative diagrams:
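In equational form, these are the standard triangle identities (this is what the diagrams assert):

```latex
(\epsilon F) \circ (F \eta) = \mathrm{id}_F, \qquad
(G \epsilon) \circ (\eta G) = \mathrm{id}_G.
```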

Throughout the post, I’ll represent natural transformations by the symbol $\Rightarrow$. I recommend the confused reader refer to the section on category theory in Aluffi’s Algebra: Chapter 0 and also my post on the importance of adjoint functors.

Suppose $\eta: F \Rightarrow G$ is a natural transformation of functors $F, G: C \to D$ whose components are given by $\eta_X$ for an object $X$ in $C$. If $H: D \to E$ is another functor, then I’ll represent the components of the induced natural transformation $H\eta: HF \Rightarrow HG$ by $(H\eta)_X = H(\eta_X)$.

If instead there is a functor $K: B \to C$, then I’ll represent the components of the natural transformation $\eta K: FK \Rightarrow GK$ by $(\eta K)_X = \eta_{KX}$, where $X$ is an object in $B$.

Monad

Let’s say that $T: C \to C$ is an endofunctor equipped with two natural transformations $\eta: 1_C \Rightarrow T$ (unit) and $\mu: T^2 \Rightarrow T$ (multiplication) such that the following diagrams commute:

The commutative diagram is a kind of generalization of associativity. I direct the reader to the nLab page to learn more about the basics. Much of the importance of monads is derived from their connection to adjoint functors and to programming paradigms (see the continuation monad for more information).
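To make the programming connection concrete before part three, here is a minimal sketch of one such monad, the Maybe monad, in Python (the names are my own): $TX$ adjoins a failure value to $X$, the unit wraps a value, and `bind` is the Kleisli extension. The monad laws are checked on sample values.

```python
class Maybe:
    def __init__(self, value, is_just):
        self.value, self.is_just = value, is_just

def just(x):
    return Maybe(x, True)

nothing = Maybe(None, False)

def unit(x):        # the unit eta : X -> T X
    return just(x)

def bind(m, f):     # Kleisli extension: apply f and flatten, mu . T(f)
    return f(m.value) if m.is_just else nothing

f = lambda x: just(x + 1)
g = lambda x: just(x * 2)
assert bind(unit(3), f).value == f(3).value                  # left unit law
assert bind(just(3), unit).value == just(3).value            # right unit law
assert bind(bind(just(3), f), g).value == \
       bind(just(3), lambda x: bind(f(x), g)).value          # associativity
assert not bind(nothing, f).is_just                          # failure propagates
```

The three assertions are exactly the unit and associativity diagrams above, specialized to this endofunctor.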

The format of this post will be a little unorganized. I start off with the definition of group cohomology without giving much motivation. Then, I properly define the bar resolution for groups in complete detail. With this setup, I properly motivate group cohomology and work out some extremely interesting examples to see why one should care about this particular free resolution. I’ve also decided to make another post later on a generalization of the bar resolution to the case of monads and a further generalization to algebras. I am currently taking a course on Homotopy Type Theory and I’ve found it blissfully interesting, so there might be a few posts in the future on that too. I’ve only recently noticed that I’ve never actually posted anything on either algebraic topology or homotopy theory, which is actually my main interest, so I might fix that soon enough.

A representation of a group $G$ over a field $k$ is just a $k[G]$-module. It is a common philosophy that one can study the structure of a group by studying its representations. In some sense, group cohomology relates to topology, but I’ll get to this later on in the post.

Consider $\mathbb{Z}$ as a $\mathbb{Z}[G]$-module where $G$ acts trivially, and let $A$ be a representation, i.e. a $\mathbb{Z}[G]$-module. The group cohomology with coefficients in $A$ is defined by

$H^n(G; A) = \mathrm{Ext}^n_{\mathbb{Z}[G]}(\mathbb{Z}, A)$. One can immediately see that $H^0(G; A) = \mathrm{Hom}_{\mathbb{Z}[G]}(\mathbb{Z}, A)$. A map $\phi$ in $\mathrm{Hom}_{\mathbb{Z}[G]}(\mathbb{Z}, A)$ satisfies $\phi(g \cdot n) = g \phi(n)$. But since the action on $\mathbb{Z}$ is trivial, $\phi(n) = g \phi(n)$. This implies that $\phi(1)$ is fixed by every $g \in G$. Sending $1$ to any element of $A^G$ corresponds exactly to such a map. So, $H^0(G; A)$ corresponds to the fixed points $A^G$.
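As a brute-force illustration of the fixed-point description (a toy example of my own choosing, not from the post): take $G = \mathbb{Z}/2$ acting on $A = (\mathbb{Z}/3)^2$ by swapping coordinates; the invariants are the diagonal copy of $\mathbb{Z}/3$.

```python
# Enumerate A = (Z/3)^2 and pick out the fixed points of the swap action.
A = [(a, b) for a in range(3) for b in range(3)]
swap = lambda v: (v[1], v[0])  # action of the generator of Z/2
fixed = [v for v in A if swap(v) == v]
assert fixed == [(0, 0), (1, 1), (2, 2)]  # A^G is the diagonal, isomorphic to Z/3
```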

Bar resolution

So, let’s describe these things in terms of the bar resolution. The calculation of the functor $\mathrm{Ext}^n_{\mathbb{Z}[G]}(\mathbb{Z}, A)$ entails finding a projective resolution $F_{\bullet} \to \mathbb{Z} \to 0$ and applying the contravariant functor $\mathrm{Hom}_{\mathbb{Z}[G]}(-, A)$. Taking ‘homology’ yields $\mathrm{Ext}^n$. The bar resolution actually gives such a resolution, in fact a free resolution.

Let $B_n$ be the free $\mathbb{Z}[G]$-module on $G^n$, that is, on the set of symbols $[g_1 | g_2 | \cdots | g_n]$ for $g_i \in G$, where all those expressions are simply formal symbols for the generated free module (and $B_0 = \mathbb{Z}[G]$ on the empty symbol). We have the following free resolution, whose maps will be described soon.

[Insert Image]

where

the augmentation map $\epsilon: B_0 = \mathbb{Z}[G] \to \mathbb{Z}$ is given by $\epsilon(g) = 1$ on the basis elements.

$d_1: B_1 \to B_0$ is given by:

$d_1([g]) = g - 1,$

where the map is defined on the basis of the free module, which extends to the entire module.

$d_2([g_1 | g_2]) = g_1 [g_2] - [g_1 g_2] + [g_1],$

for the next degree. Keep note of the dimension above. Finally,

$d_n([g_1 | \cdots | g_n]) = g_1 [g_2 | \cdots | g_n] + \sum_{i=1}^{n-1} (-1)^i [g_1 | \cdots | g_i g_{i+1} | \cdots | g_n] + (-1)^n [g_1 | \cdots | g_{n-1}].$

There is topological motivation for this seemingly bizarre construction. Let us say that the aim is to somehow construct a simplicial object from $G$. For every ordered tuple $(g_0, \dots, g_n)$, one can insert an $n$-simplex, and $G$ can act on these faces by the diagonal action:

$g \cdot (g_0, \dots, g_n) = (g g_0, \dots, g g_n).$

If any element of the tuple is the identity, the simplex degenerates into a lower dimension, as seen, for example, in the differential maps above. The differential maps match with this interpretation of the degeneracy and face maps of the simplicial object. You can alternatively define a normalized version of the bar resolution where the tuples are replaced by tuples whose elements are non-identity. It is easy to check that the sequence is a chain complex, though this involves some annoying calculations. Also, the sequence is exact, as we’ll show in the following lemma.

Lemma 1:

The sequence above is a chain complex, that is, $d_n \circ d_{n+1} = 0$ for $n \geq 1$ and $\epsilon \circ d_1 = 0$. It is also a $\mathbb{Z}[G]$-free resolution, i.e. it is exact.

Proof of Lemma 1:

Step 1: Proving that it is a chain complex

Pick a basis element $[g_1 | \cdots | g_j]$.

I encourage the reader to work out the case for . Now, for , we can do a little trick to simplify the calculations. Define a new set of modules by

for with the action given by a diagonal action on the tuples as follows

.

I will omit the details, but essentially what happens is that we have a bijection of those tuples. So, we just hacked our module to be a free module in one lower dimension. Verify that it is indeed a bijection and you get an isomorphism given by

.

The isomorphism follows trivially from the fact that left multiplication by a group element is a bijection on the group. There may be other ways to map the basis elements too.

Define by

.

We have that by the usual calculation one encounters in chain complexes. This will obviously be useful soon. We shall prove that from which it follows that iff since is an isomorphism.

Now apply this map to the expression and check that it is equal to the corresponding one. It is a slightly annoying calculation, but much less painful than dealing with the original equation. Now, it remains to deal with the remaining low cases (we’ve already done $j=1$), which I’ve already left as an exercise to the reader. Anyway, moving on to the next step.

Step 2:Complex is chain-homotopic to the identity

We define the chain homotopy $s_j$ for each $j$ (taking the $-1$ case as the augmentation) on the basis elements by

$s_j(g_0, \dots, g_j) = (1, g_0, \dots, g_j)$, simply adding the element $1$ to the symbol. We simply use the isomorphism and transfer the entire problem to the homogeneous modules to get

so we get that

. Now I leave it to the reader to verify through simple calculations that

for . Lower cases can be dealt with separately.
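The chain-complex identity can also be machine-checked in low degrees. Below is a sketch for the small group $G = \mathbb{Z}/4$ (written additively); the encoding is my own: a $\mathbb{Z}$-basis element $g_0 \cdot [g_1|\cdots|g_n]$ of $B_n$ is stored as the key $(g_0, (g_1, \dots, g_n))$.

```python
from collections import defaultdict
from itertools import product

n_mod = 4
add = lambda a, b: (a + b) % n_mod

def d(elem):
    """Bar differential on a Z-linear combination {(g0, (g1,...,gn)): coeff}."""
    out = defaultdict(int)
    for (g0, gs), c in elem.items():
        n = len(gs)
        out[(add(g0, gs[0]), gs[1:])] += c                 # g1 . [g2|...|gn]
        for i in range(1, n):                              # merge g_i and g_{i+1}
            merged = gs[:i - 1] + (add(gs[i - 1], gs[i]),) + gs[i + 1:]
            out[(g0, merged)] += (-1) ** i * c
        out[(g0, gs[:-1])] += (-1) ** n * c                # drop the last entry
    return {k: v for k, v in out.items() if v != 0}

# d o d vanishes on every basis element of B_3:
for g0, g1, g2, g3 in product(range(n_mod), repeat=4):
    assert d(d({(g0, (g1, g2, g3)): 1})) == {}
```

Of course this is no proof, but it is a cheap sanity check on the signs in the differential.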

Group cohomology again and a few applications

OK, now we’ve constructed this free module resolution. By definition,

Now, it’s great that we’ve got a free resolution but obviously the hope is that one gets some meaningful results with the bar resolution. Let’s observe a few key ideas.

$\mathrm{Hom}_{\mathbb{Z}[G]}(B_n, A)$ is the set of functions $G^n \to A$ on the level of sets. Basically, assigning the values at the basis elements gives all maps from $B_n$ to $A$ by extending $\mathbb{Z}[G]$-linearly.

Motivation

Let’s look at some low-dimensional cases to see what exactly is going on.

I described the case $n = 0$ in the first section of the post.

Just as described in the first section, $\mathrm{Hom}_{\mathbb{Z}[G]}(B_n, A)$ can be identified with $C^n(G, A)$, the collection of maps $G^n \to A$, since specifying the values on the basis elements determines the entire homomorphism. Under this representation, the boundary maps can be written more explicitly as:

$(d^n f)(g_1, \dots, g_{n+1}) = g_1 \cdot f(g_2, \dots, g_{n+1}) + \sum_{i=1}^{n} (-1)^i f(g_1, \dots, g_i g_{i+1}, \dots, g_{n+1}) + (-1)^{n+1} f(g_1, \dots, g_n).$

We can call the elements of $Z^n(G, A) = \ker d^n$ and the elements of $B^n(G, A) = \operatorname{im} d^{n-1}$ the cycles and boundaries respectively, or cocycles and coboundaries if one wishes to be more precise. These form a cochain complex, since the differentials square to zero, though I won’t bother much with the details. The upshot is that now we have the following description of the cohomology group:

$H^n(G; A) = Z^n(G, A) / B^n(G, A)$.
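This cocycle description can be checked by brute force in the smallest case (a toy computation of my own, trivial action assumed): for $G = A = \mathbb{Z}/2$, a 1-cochain is a function $f: G \to A$, the cocycle condition reads $f(h) - f(g+h) + f(g) = 0$, and every coboundary $g \cdot c - c$ vanishes, so $H^1(\mathbb{Z}/2; \mathbb{Z}/2) = Z^1 = \mathrm{Hom}(\mathbb{Z}/2, \mathbb{Z}/2) \cong \mathbb{Z}/2$.

```python
from itertools import product

G = A = [0, 1]
# All 1-cochains f: G -> A, encoded as dicts.
cochains = [dict(zip(G, vals)) for vals in product(A, repeat=2)]
# Keep those satisfying the (trivial-action) cocycle condition.
cocycles = [f for f in cochains
            if all((f[h] - f[(g + h) % 2] + f[g]) % 2 == 0 for g in G for h in G)]
assert len(cocycles) == 2  # the zero map and the identity: H^1 has two classes
```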

Great! Now we have just one final simple thing to put in place before we look at some applications.

Theorem 1

Suppose $0 \to A \to B \to C \to 0$ is a short exact sequence of $G$-modules; then there is a right-derived long exact sequence:

$0 \to H^0(G; A) \to H^0(G; B) \to H^0(G; C) \to H^1(G; A) \to H^1(G; B) \to \cdots$

I covered the proof of this theorem for the general case of derived functors in my other post.

I guess, as a simple consequence of the proof of dimension shifting I completed in the other post (or which you could easily prove on your own), we have the following theorem, which is useful in the context of group cohomology.

Theorem 2(Dimension shifting):

If $0 \to A \to B \to C \to 0$ is a short exact sequence of $G$-modules such that the higher cohomology of $B$ vanishes, then we have the following natural (with respect to chain complex morphisms) isomorphisms:

$H^{n+1}(G; A) \cong H^n(G; C)$

for $n \geq 1$.

The result essentially allows us to compute the cohomology of $G$ by computing cohomology with respect to some other representation in a lower dimension. Now, let us move on to some examples.

The Extension Problem

This is probably the most classical application of group cohomology, which I am sure the reader is already quite familiar with, so I shall discuss it in only a little detail.

A classical problem in group theory is to find all possible extensions of a group by a normal subgroup. If we have two groups $G$ and $A$, with $A$ abelian, it is natural to ask which groups $E$ are extensions of $G$ by $A$, that is, fit into the exact sequence

$0 \to A \to E \to G \to 1.$

An obvious example is the direct product $A \times G$. One could also add the extra condition that the sequence be split exact, which is equivalent to $E$ being a semidirect product $A \rtimes G$. Two extensions are said to be equivalent if the following diagrams commute:

[Insert Image]

Let’s think of a few extensions of groups. For the groups $A = \mathbb{Z}/3$ and $G = \mathbb{Z}/2$, we have the trivial extension $\mathbb{Z}/6$ and the well-known extension $S_3 \cong \mathbb{Z}/3 \rtimes \mathbb{Z}/2$, where $\mathbb{Z}/2$ acts by inversion on $\mathbb{Z}/3$. This is the dihedral group of order 6. (Sorry if I have messed up the order of placement of the normal subgroup; I never remember how it works. I think I may have even messed up the semidirect product symbol too. Who cares, I always hated that symbol anyway and abhor the idea of trying to fix it.)

We’re going to prove that the elements of the cohomology group $H^2(G; A)$ are in one-to-one correspondence with extensions up to equivalence. We begin by first defining a section on the level of sets as follows: suppose there is an extension of the form

[Insert Image]

We would like to define an action of $G$ on $A$ from the exact sequence. Since the sequence is exact, for each $g \in G$ there exists some $e_g \in E$ such that $\pi(e_g) = g$. A section is basically a choice of such $e_g$’s. We’ll explain the representative notation soon. $G$ acts on $A$ by conjugation: $g \cdot a = e_g a e_g^{-1}$. Note that this lands in $A$ since $A$ is normal, and also that conjugation by an element of $A$ is trivial since $A$ is abelian. So, the action is independent of the choice of $e_g$.

Let the section be $g \mapsto e_g$. Each $e_g$ represents the coset $\pi^{-1}(g)$. Both $e_g e_h$ and $e_{gh}$ are in the same equivalence class for $g, h \in G$, so by exactness, there is a unique $f(g, h) \in A$ satisfying

$e_g e_h = f(g, h)\, e_{gh}.$

Define a function $f: G \times G \to A$ by $(g, h) \mapsto f(g, h)$, the value determined above. Note that every element in $E$ can be written uniquely as $a e_g$ by exactness. Let’s see how multiplication works in $E$:

$(a e_g)(b e_h) = a (e_g b e_g^{-1}) e_g e_h$. Since $e_g b e_g^{-1} = g \cdot b$ and $e_g e_h = f(g, h) e_{gh}$, and since $A$ is abelian, this equals

$a\, (g \cdot b)\, f(g, h)\, e_{gh},$

where $g \cdot b$ is the action of $G$ defined above.

Associativity of multiplication in $E$ (I leave it to the reader to check this) gives the equality:

$g \cdot f(h, k) - f(gh, k) + f(g, hk) - f(g, h) = 0,$

which is exactly the condition for $f$ to be in $Z^2(G; A)$. Finally, let $g \mapsto e'_g$ be another section of the extension and let $f'$ be its factor set. We’ll show next that $f$ and $f'$ differ by a coboundary, an element of $B^2(G; A)$, which implies that every extension has a well-defined cohomology class in $H^2(G; A)$.

Obviously, $e_g$ and $e'_g$ lie in the same coset since they both map to $g$ under $\pi$. This implies that there is a unique $c(g) \in A$ such that $e'_g = c(g) e_g$. Define $c: G \to A$ by $g \mapsto c(g)$.

Also,

$e'_g e'_h = c(g) e_g c(h) e_h = c(g)\, (g \cdot c(h))\, f(g, h)\, e_{gh}$, while $e'_g e'_h = f'(g, h) e'_{gh} = f'(g, h)\, c(gh)\, e_{gh}$.

Equating the two expressions yields

$f'(g, h) = f(g, h) + g \cdot c(h) + c(g) - c(gh)$

(writing $A$ additively), giving the needed result.

Now, we have to prove the opposite assertion: that an element of $H^2(G; A)$ yields a unique (up to equivalence) extension of $G$ by $A$. I encourage the reader to find the remaining details in a textbook like Dummit and Foote or Aluffi (I am not sure if he covers it, though).

The zero element in $H^2(G; A)$ corresponds to a split extension, because then the section can be chosen to be a genuine homomorphism, and so $E \cong A \rtimes G$.
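The correspondence can be seen concretely on the smallest non-split example (a toy check of my own, names assumed): the extension $0 \to \mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0$ with set-level section $e_g = g$ inside $\mathbb{Z}/4$ has factor set $f(g, h)$ equal to the carry bit of $g + h$. We verify that $f$ is a 2-cocycle, and that it is not a coboundary, so $\mathbb{Z}/4$ is a non-split extension, matching its class being nonzero in $H^2(\mathbb{Z}/2; \mathbb{Z}/2)$.

```python
from itertools import product

G = [0, 1]

def f(g, h):
    return (g + h) // 2  # carry bit: 1 iff g = h = 1

# 2-cocycle condition (trivial action, additive notation):
for g, h, k in product(G, repeat=3):
    assert (f(h, k) - f((g + h) % 2, k) + f(g, (h + k) % 2) - f(g, h)) % 2 == 0

# f is not a coboundary dc(g, h) = c(h) - c(g + h) + c(g) for any c: G -> Z/2:
for c0, c1 in product([0, 1], repeat=2):
    c = {0: c0, 1: c1}
    dc = lambda g, h: (c[h] - c[(g + h) % 2] + c[g]) % 2
    assert any(dc(g, h) != f(g, h) for g in G for h in G)
```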

Schur-Zassenhaus Theorem

Consider the exact sequence of groups:

[Insert Image]

If $\gcd(|A|, |G|) = 1$, then the sequence is split. Equivalently:

If the order of a normal subgroup $N$ of $G$ is coprime to its index in $G$, then $G$ is a semidirect product $G \cong N \rtimes H$ for some complement $H$ of $N$ in $G$.

We’ll first deal with the abelian case, and that’ll turn out to be useful in the general proof. Towards that, we have the following lemma.

Lemma 2:

Let $G$ be a finite group of order $n$ and let $A$ be a $G$-module (an abelian group with $G$-action); then the cohomology groups $H^i(G; A)$ for $i > 0$ are annihilated by multiplication by $n$.

Proof of Lemma 2:

Let $B_\bullet$ be the bar resolution and consider the endomorphism of $B_\bullet$ given by multiplication by $n$ in positive degrees. We show that it is null-homotopic. Consider the chain homotopy given by the following formula on the basis elements:

I leave it as an exercise to the reader to show that this gives a null-homotopy; it is a trivial calculation.

Corollary 2.1

Let $G$ be a finite group of order $n$ and let $A$ be either a vector space over $\mathbb{Q}$ or a module on which multiplication by $n$ is invertible; then $H^i(G; A) = 0$ for $i > 0$.

Proof of Corollary 2.1

Multiplication by $n$ is an isomorphism on $A$ in either case. Applying $\mathrm{Hom}_{\mathbb{Z}[G]}(-, A)$ to the bar resolution, consider a class $[c] \in H^i(G; A)$.

By Lemma 2, $n \cdot [c] = 0$, which forces $[c] = 0$ since multiplication by $n$ is invertible. So, $H^i(G; A) = 0$ for $i > 0$.

This resolves the Schur-Zassenhaus theorem for the abelian case below.

Corollary 2.2(Abelian Case)

If $A$ is an abelian group with $\gcd(|A|, |G|) = 1$, then $H^2(G; A) = 0$, so every extension of $G$ by $A$ is split.
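The smallest instance of this vanishing can be checked by brute force (a toy setup of my own, trivial action assumed): for $G = \mathbb{Z}/2$ and $A = \mathbb{Z}/3$, the orders 2 and 3 are coprime, and indeed every 2-cocycle is a coboundary.

```python
from itertools import product

G, q = [0, 1], 3
pairs = [(g, h) for g in G for h in G]

def is_cocycle(f):
    # standard 2-cocycle condition with trivial action, mod q
    return all((f[(h, k)] - f[((g + h) % 2, k)] + f[(g, (h + k) % 2)] - f[(g, h)]) % q == 0
               for g in G for h in G for k in G)

# All 2-cochains G x G -> Z/3, filtered down to the cocycles.
cocycles = [dict(zip(pairs, vals)) for vals in product(range(q), repeat=4)
            if is_cocycle(dict(zip(pairs, vals)))]
# All 2-coboundaries dc(g, h) = c(h) - c(g+h) + c(g), as value tuples.
coboundaries = {tuple((c[h] - c[(g + h) % 2] + c[g]) % q for (g, h) in pairs)
                for c in (dict(zip(G, v)) for v in product(range(q), repeat=2))}
# Coboundaries are always cocycles, so equal counts mean H^2 = 0.
assert len(cocycles) == len(coboundaries) == 9
```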

With this, we reduce the proof of the general case to this specific case through some techniques. Before that, though, there’s just a random group theory fact we’re going to use.

Lemma 3:

Let be a group and let be a Sylow $p$-subgroup. Then, is a subgroup of .

Proof of Lemma 3:

Proof of Schur Zassenhaus Theorem

We proceed by induction on $n$, the order of $A$, the first group in the sequence. Let $p$ be a prime that divides $n = \mathrm{ord}(A)$ and consider $P$, a Sylow $p$-subgroup of $A$. We know $P$ is nontrivial by standard group theory (via the class equation, for example). Consider $N_G(P)$, the normalizer of $P$ in $G$. Using Frattini’s argument,

$G = A \cdot N_G(P)$.

So, the index divides and is a subgroup of by Lemma 3. So, . So, we get an extension

by the Second Isomorphism Theorem and using Frattini’s argument. If , using the induction hypothesis, this sequence splits so there is a subgroup of which is isomorphic to . On the other hand, if , we get that and we get the extension

by the Third Isomorphism theorem. This splits by the induction hypothesis. Let be the subgroup of isomorphic to . Pulling back, let where is the natural quotient map . From this, we get the extension,

. Using Corollary 2.2 (since the kernel is abelian), there is a subgroup isomorphic to the quotient, so there is a subgroup which, as established, is isomorphic to the desired complement. Q.E.D.

In the previous post, I gave a little glimpse into derived functors and somewhat motivated their construction. In this post, we’ll get our hands dirty with homological algebra to continue setting up the required machinery and go through many proofs.

In the previous post, I promised to continue the proof of a lemma which establishes a long exact sequence. Before giving the proof, let me mention a few facts which will be useful.

Suppose we’re given a short exact sequence $0 \to A \to B \to C \to 0$ in an abelian category where $A$ is an injective object. From the identity map on $A$ and the monomorphism $A \to B$, we can extend to a map $r: B \to A$ such that the composition $A \to B \to A$ is the identity. Using the Splitting Lemma, one obtains a non-canonical splitting $B \cong A \oplus C$. Applying $F$,

$F(B) \cong F(A) \oplus F(C)$.

The identity map on $C$ factors through the projection map $B \to C$; the same holds true after applying $F$. In particular, the last map $F(B) \to F(C)$ is surjective!

Proof of Lemma 1:

Step 1: A morphism between objects with injective resolutions induces a chain map between the resolutions

Let $f: A \to B$ be a morphism between two objects with injective resolutions $I^\bullet$ and $J^\bullet$. In the figure below, the first map $I^0 \to J^0$ is constructed from the fact that $A \to I^0$ is a monomorphism and there is a map from $A$ to the injective object $J^0$ (namely the composite $A \to B \to J^0$).

Now, there is a monomorphism $I^0/A \to I^1$. Next, note that by the commutativity of the square already defined, the map $I^0 \to J^0$ takes the image of $A$ into the image of $B$, by exactness of the lower sequence. This means that the map induces a morphism on the quotients, and by exactness, we can compose this with $J^0/B \to J^1$ to get a map $I^0/A \to J^1$. Since $J^1$ is injective and $I^0/A \to I^1$ is a monomorphism, we get the required map $I^1 \to J^1$. Inductively continue this process to get the entire chain map. Note that all the maps defined from the injective object property are not unique.

This is a series of posts which will develop the required material to present the Grothendieck spectral sequence, a few applications of it in algebraic geometry (the Leray spectral sequence, sheaf cohomology, etc.), and some other categorical nonsense. This post is meant to be a light glimpse into derived functors and not a full introduction.

I’ve wanted to post this for a few months, but unfortunately I overestimated how much free time I would have in my first semester of college (hurrah!). At one point, I totally forgot that I even maintain a blog! That’s why I haven’t posted for about 5 months. The title of my blog is now officially fraudulent. There were many new things (both math-related and otherwise) at UT Austin that I wanted to explore in my first semester, so I could be forgiven for putting aside my blog. I think I’ve realized that it is really just a matter of consistency. If I do something regularly, I’ll stick to the routine. So maybe the title of the blog is more aspirational than it is real.

Anyway, enough of the nonsense. Though the Grothendieck spectral sequence encodes a lot of data (like all other spectral sequences), it is actually surprisingly simple to motivate and feels almost ‘natural’ to construct. But I suppose that is really indicative of Grothendieck’s spectacular reformulation of homological algebra, which has seeped into modern textbooks and makes it feel so ‘natural’. It is truly a beautiful sight to witness how derived functors lead to this elegant construction. So first, let’s set up the machinery.

An object $I$ in a category $C$ is said to be an injective object if for every morphism $f: A \to I$ and every monomorphism $i: A \to B$, there exists a morphism $g: B \to I$ extending the map $f$, i.e. such that $g \circ i = f$ and the diagram commutes.

In the abelian category setting, the importance lies in the fact that $I$ is an injective object if and only if the contravariant Hom functor $\mathrm{Hom}(-, I)$ is exact. If an injective object is at the beginning of a short exact sequence, the sequence splits.

A category has enough injectives if for every object $X$, there exists a monomorphism $X \to I$ for some injective object $I$.

An injective resolution of an object $X \in C$, an abelian category, is a resolution

$0 \to X \to I^0 \to I^1 \to \cdots,$

where the $I^k$ are injective objects. In particular, this is a quasi-isomorphism of cochain complexes in $C$ given by $X \to I^\bullet$, where $X$ is regarded as the complex $0 \to X \to 0 \to \cdots$.

Derived Functors

Let’s say $\mathcal{A}$ is an abelian category. Consider a short exact sequence in $\mathcal{A}$:

$0 \to X \to Y \to Z \to 0.$

An exact functor is a functor between abelian categories which preserves such sequences. Taking the direct sum with a fixed object, for example, preserves a short exact sequence. Accordingly, we say that a functor is left or right exact if it preserves the left or right part of the short exact sequence, respectively. It is well known that in the case of the category of modules over a ring $R$, the covariant Hom functor is left exact: given a short exact sequence $0 \to X \to Y \to Z \to 0$ of $R$-modules, then

$0 \to \mathrm{Hom}_R(M, X) \to \mathrm{Hom}_R(M, Y) \to \mathrm{Hom}_R(M, Z)$

is exact, where $\mathrm{Hom}_R(M, -)$ is the Hom functor.

The tensor product functor $- \otimes_R M$, where $M$ is an $R$-module, is known to be right exact. Many of these facts and their proofs can be found in standard texts on commutative algebra or homological algebra. Some of the arguments in these proofs tend to be quite arduous to work through. An easier way is to notice that the functors are adjoint and show the equivalent statement that Hom preserves limits, which is not too difficult. Taking the dual yields the statement for tensoring, and in fact it yields the completely general version, which states that left adjoint functors preserve colimits, using the Yoneda Lemma. See my other post on adjoint functors if you wish to learn more.

The point of derived functors (which I’ll shortly introduce) is to take these ‘incomplete’ exact sequences, where we’ve ‘lost data’, and try to construct a long exact sequence. Remember that a chain complex

equipped with boundary maps (which I’ve not labelled) allows us to compute homology, which measures how far the sequence is from being exact. ALL the data is in the chain complex itself, and the entire process of computing homology/cohomology is just a formalization which turns out to be quite handy. In the same manner, one should treat derived functors as a comfortable formalization using the data we possess. For an object $X$, we know just one thing: that there is an injective resolution:

$0 \to X \to I^0 \to I^1 \to \cdots.$

Now, we take our left exact functor $F$ and apply it to the injective resolution to get the complex

$0 \to F(I^0) \to F(I^1) \to \cdots$

Now, just ‘take homology/cohomology’ and call it the right derived functor:

$R^i F(X) = H^i(F(I^\bullet))$. But wait, did you notice something? On the left-hand side, I don’t refer to the injective resolution. That is the essence of the construction: it is independent of the injective resolution of $X$ up to canonical isomorphism. A proof of this can be found in any standard textbook on algebra in the homological algebra section (Aluffi; Dummit and Foote; I think Hatcher also proves it for the dual case in the section on cohomology). Let’s take a closer look at this sequence. Since $F$ is left exact, we get the following exact sequence:

$0 \to F(X) \to F(I^0) \to F(I^1)$. We get $R^0 F(X) \cong F(X)$. If $F$ is exact, then the $R^i F(X)$ would all be trivial for $i > 0$! I guess you could think of this as a way to encode an approximation, just like in homology/cohomology.
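As a quick worked example of the formalism (a standard computation, my own choice of example): for abelian groups, $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$ are injective because they are divisible, so $0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q}/\mathbb{Z} \to 0$ is an injective resolution of $\mathbb{Z}$, and applying $F = \mathrm{Hom}(\mathbb{Z}/2, -)$ computes $\mathrm{Ext} = R^\bullet F$:

```latex
\begin{aligned}
\mathrm{Ext}^0_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z})
  &= \ker\big(\mathrm{Hom}(\mathbb{Z}/2, \mathbb{Q}) \to \mathrm{Hom}(\mathbb{Z}/2, \mathbb{Q}/\mathbb{Z})\big) = 0,\\
\mathrm{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z})
  &= \mathrm{Hom}(\mathbb{Z}/2, \mathbb{Q}/\mathbb{Z}) \cong \mathbb{Z}/2,
\end{aligned}
```

since $\mathrm{Hom}(\mathbb{Z}/2, \mathbb{Q}) = 0$ while $\mathrm{Hom}(\mathbb{Z}/2, \mathbb{Q}/\mathbb{Z})$ is generated by $1 \mapsto 1/2$. Note also $R^0 F \cong F$ here: $\mathrm{Hom}(\mathbb{Z}/2, \mathbb{Z}) = 0$.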

The converse isn’t necessarily true. An object $X$ is said to be $F$-acyclic for a left exact functor $F$ if $R^i F(X) = 0$ for $i > 0$.

The final step of this formalization is ensuring that we have the long exact sequence

Lemma 1:

If $0 \to X \to Y \to Z \to 0$ is a short exact sequence in an abelian category with enough injectives and $F$ is a left exact functor, there is a long exact sequence

$0 \to F(X) \to F(Y) \to F(Z) \to R^1 F(X) \to R^1 F(Y) \to R^1 F(Z) \to R^2 F(X) \to \cdots$

The proof will be deferred to the next post with discussions on its dual, other theorems and special cases such as Ext, Tor.

Below, I’ll repost an answer I gave to a question on Quora about the importance of adjoint functors. It is a simple exposition which focuses on the intuition behind their construction and gives a few examples with explanations that can aid anyone who is beginning to learn about adjoint functors. I’ll probably add more detail and throw in a few more diagrams later.

As another answer has already pointed out, adjoint functors are ubiquitous in many constructions across mathematics and that is enough reason to consider them important. But to help appreciate it more, I think a slightly different interpretation of a natural transformation will be helpful.

Let $C$ be a small category and consider the nerve $N(C)$ of the category, which is a simplicial set whose $n$-simplices are the strings:

$X_0 \to X_1 \to \cdots \to X_n,$

where the $X_i$ are obviously the objects and the arrows are the morphisms in the category. The $k$-th face maps can be obtained by simply deleting $X_k$. The classifying space $BC$ is then realized as the geometric realization of $N(C)$ (turn the entire construction into a formal CW complex). For example, in the simple case of a poset $P$ where morphisms are determined by order, $BP$ has the points of $P$ as its vertices and the size-$k$ totally ordered subsets as its $(k-1)$-simplices.

So, a functor $F: C \to D$ induces a map of CW complexes $BF: BC \to BD$; the classifying space construction is a functor from the category of small categories to the category of CW complexes. I encourage the reader to work through this construction for the sake of clarity.

I’ll leave out a few details here, but I’ll say that it is in this sense that a natural transformation $F \Rightarrow G$ is essentially a homotopy $BF \simeq BG$. You can then think of adjoint functors as homotopy equivalences on categories. Hence, they do correspond to something weaker than a homeomorphism, which is often why many call adjoint functors ‘conceptual inverses’. The concept of an adjoint functor is a special case of an adjunction in 2-categories, where these ideas make more sense. The Wikipedia page motivates it by considering a functor $F: C \to D$ and finding the problem for which $F$ is the most efficient solution. This is why these adjoint functors always come in pairs. Let’s review the definition and look at a few interesting examples for all this to make more sense.

Two functors $F : C \to D$ and $G : D \to C$ are said to be adjoint, written $F \dashv G$, if there exists a natural isomorphism of the hom-sets $\mathrm{Hom}_D(FX, Y) \cong \mathrm{Hom}_C(X, GY)$. The equivalent counit-unit definition, which is often easier to work with, is that there are natural transformations $\eta : 1_C \Rightarrow GF$ (the unit) and $\epsilon : FG \Rightarrow 1_D$ (the counit) such that the triangle identities hold: the composites $F \Rightarrow FGF \Rightarrow F$ (apply $F\eta$, then $\epsilon F$) and $G \Rightarrow GFG \Rightarrow G$ (apply $\eta G$, then $G\epsilon$) are the identities.

The other answer has presented one of the simplest and most important examples, the free-forgetful functor on groups. I’ll present an example of a ‘free-forgetful’-type adjunction which may be a little harder to notice.

Let $C$ be a category, say the category of groups or something else. Take two objects $A, B \in C$. We will study the product $A \times B$. The product is defined by the projection maps to $A$ and $B$, so that if an object $X$ maps to both $A$ and $B$, then there is a unique morphism $X \to A \times B$ which makes the diagram commute. Let’s construct a functor $\times : C \times C \to C$ defined by $(A, B) \mapsto A \times B$. Damn, did you notice that? We just lost information, as there is no functor you can compose with $\times$ to get the identity; the same object can probably be constructed in different ways as a product. Well, it hardly matters, let’s try to do something inverse-like. Take an object $X$ and do the only thing you can think of, that is, send it to $(X, X)$. Let this be the diagonal functor $\Delta : C \to C \times C$. Notice that a pair of morphisms $X \to A$, $X \to B$ in $C$ is the same thing as a morphism $\Delta X \to (A, B)$ in $C \times C$. So we have the hom-set natural isomorphism $\mathrm{Hom}_{C \times C}(\Delta X, (A, B)) \cong \mathrm{Hom}_C(X, A \times B)$, i.e., $\Delta \dashv \times$. The counit is the pair of projection maps (a single morphism in $C \times C$) for $A \times B$, and the unit takes $X$ to the diagonal morphism $X \to X \times X$ (verify these facts and the components of the natural transformations). A nice mnemonic I once heard was that left adjoints (here $\Delta$) are ‘liberal’ and generate free things.
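In the category of sets this adjunction is concrete enough to check directly: a pair of maps $X \to A$, $X \to B$ corresponds to a single map $X \to A \times B$. A minimal Python sketch of the bijection (the helper names `tuple_up` and `untuple` are mine):

```python
def tuple_up(f, g):
    """Turn a pair of maps X -> A, X -> B into the unique map X -> A x B."""
    return lambda x: (f(x), g(x))

def untuple(h):
    """Recover the pair by composing with the two projections (the counit)."""
    return (lambda x: h(x)[0], lambda x: h(x)[1])

# A pair of morphisms out of X = {0, 1, 2}.
f = lambda x: x + 1        # X -> A
g = lambda x: x * 2        # X -> B

h = tuple_up(f, g)         # the induced map X -> A x B
f2, g2 = untuple(h)

# The two directions of the hom-set bijection are mutually inverse:
assert all(h(x) == (f(x), g(x)) for x in range(3))
assert all(f2(x) == f(x) and g2(x) == g(x) for x in range(3))
```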

Let’s look at a simple example in topology. Let $\mathbf{CHaus}$ be the category of compact Hausdorff spaces. Define the forgetful functor $U : \mathbf{CHaus} \to \mathbf{Top}$, which essentially returns the underlying topological space and forgets the separation axiom and compactness. The ‘most efficient solution’ to this optimization is its left adjoint $\beta : \mathbf{Top} \to \mathbf{CHaus}$, the Stone–Čech compactification functor. $\beta$ satisfies the universal property that any continuous map $f : X \to Y$, where $Y$ is compact and Hausdorff, factors through $\beta X$.

Another extremely important adjunction is the Hom-Tensor adjunction. Let $R, S$ be rings and consider the categories of modules over these rings. Let $M$ be an $(R, S)$-bimodule and define $F = - \otimes_R M$ and $G = \mathrm{Hom}_S(M, -)$. We have a natural isomorphism $\mathrm{Hom}_S(X \otimes_R M, Y) \cong \mathrm{Hom}_R(X, \mathrm{Hom}_S(M, Y))$, which can be verified by constructing the unit and counit, a simple exercise. It is not free-forgetful in any obvious sense like the previous examples.

Again, a basic example arises from considering partially ordered sets. If $P$ and $Q$ are two posets, then they are naturally categories (with a unique morphism $x \to y$ whenever $x \le y$). A pair of adjoint functors $F : P \to Q$, $G : Q \to P$ in this case is a Galois connection, which means that $F(x) \le y$ if and only if $x \le G(y)$; in other words, a natural isomorphism $\mathrm{Hom}_Q(F(x), y) \cong \mathrm{Hom}_P(x, G(y))$, where each hom-set has at most one element. Again, this doesn’t look like a free-forgetful pair.
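A concrete Galois connection on the integers, as a quick sketch of my own (not from any particular reference): take $F(x) = 2x$ and $G(y) = \lfloor y/2 \rfloor$. Then $2x \le y$ if and only if $x \le \lfloor y/2 \rfloor$, which a brute-force check confirms on a window of integers:

```python
# F is the left adjoint, G the right adjoint; both are monotone maps Z -> Z.
F = lambda x: 2 * x
G = lambda y: y // 2   # Python's // is floor division, even for negatives

# The Galois connection condition: F(x) <= y  iff  x <= G(y).
ok = all((F(x) <= y) == (x <= G(y))
         for x in range(-20, 21)
         for y in range(-20, 21))
assert ok
```

Note that $G$ is only a one-sided inverse: $G(F(x)) = x$ always, but $F(G(y))$ can drop below $y$, which is exactly the unit/counit asymmetry of an adjunction.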

The best way to understand adjoint functors is to encounter more examples of adjunctions. As a guiding principle, you can expect to find adjunctions in ‘symmetric and inverse-like’ constructions. The left and right adjoints have many interesting properties, like preserving colimits and limits respectively. This can be proven quite easily once you establish that the Hom-functor preserves arbitrary limits. In fact, there is a class of theorems (the adjoint functor theorems) that allow one to establish left/right adjoints for a functor based on how it acts on limits and colimits.

If you aren’t aware, one month ago, the Sensitivity Conjecture, a 30-year-old problem, was proven by Hao Huang, an assistant professor at Emory University, in just a little more than 2 pages, using particularly simple methods, albeit cleverly implemented. The proof is very easy to understand and requires nothing more than basic linear algebra. You can find it here.

A few days ago, Donald Knuth simplified the already simple proof of H. Huang and could fit all his arguments in half a page. What really shocked me when I heard the news is that Knuth is still alive. I could have sworn that it was only a few days ago that I was skimming through a textbook on discrete mathematics and saw a bio with a black-and-white picture of him in one of the chapters related to computational complexity, in the same manner that I’d see pictures and mini-biographies of other 20th-century giants such as Grothendieck and Nash relegated to the margins of math textbooks and online articles romantically detailing the course of their fabled lives.

Now that I’ve realized that he’s well and alive, it shocks me equally to learn that at the age of 81, he is still able to contribute to research.

I’m not exactly going to regurgitate Knuth’s proof, but what I’ll present certainly uses the same ideas. I must note here that Knuth’s proof itself is inspired by a comment on Scott Aaronson’s blog which provided a method to prove the conjecture without using Cauchy’s Interlacing Theorem.

If you don’t know what the Sensitivity Conjecture is, I’ll state it below.

Let $f : \{0,1\}^n \to \{0,1\}$ be a boolean function whose domain is the vertex set of the $n$-cube, denoted $Q^n$. So, a particular input is just a string of 0s and 1s. The local sensitivity of $f$ at an input $x$ is the number of indices in the ‘string’ $x$ that you can flip so that the output changes. The local block sensitivity of $f$ at input $x$ is the maximum number of disjoint subsets (blocks) of the set of indices such that the output changes when every index in a block is flipped in the input string $x$. The sensitivity and block sensitivity of $f$ are defined to be the maxima of these corresponding local measures over all the inputs. The conjecture asserts that block sensitivity is bounded above by a polynomial in sensitivity.
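These definitions are small enough to compute by brute force. A sketch (my own helper names; block sensitivity is analogous, maximizing over disjoint blocks of flipped indices instead of single indices):

```python
from itertools import product

def local_sensitivity(f, x):
    """Number of coordinates of the tuple x whose flip changes f's output."""
    return sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x)
               for i in range(len(x)))

def sensitivity(f, n):
    """s(f): the maximum local sensitivity over all inputs in {0,1}^n."""
    return max(local_sensitivity(f, x) for x in product((0, 1), repeat=n))

OR = lambda x: int(any(x))
XOR = lambda x: sum(x) % 2
DICTATOR = lambda x: x[0]

assert sensitivity(OR, 3) == 3        # attained at the all-zeros input
assert sensitivity(XOR, 3) == 3       # every flip changes the parity
assert sensitivity(DICTATOR, 5) == 1  # only the first coordinate matters
```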

The title may seem like a contradiction, given that there is such a thing as the Universal Approximation Theorem, which simply states that a neural network with a single hidden layer of finite width (i.e., a finite number of neurons) can approximate any continuous function on a compact subset of $\mathbb{R}^n$, given that the activation function is non-constant, bounded and continuous.

Needless to say, I haven’t found any kind of flaw in the existing proofs (see Kolmogorov or Cybenko). However, I thought of something quite simple and scoured the internet for an answer.

What if we allow an arbitrary number of hidden layers and bound the dimension of the hidden layers making them ‘Deep, Skinny Neural Networks’? Would that be a universal approximator?

It is a well-known fact that there are multiple proofs of the Nullstellensatz which do not use Noether’s normalization lemma. Serge Lang’s proof (in his book) and Zariski’s proof both fall under this category. In fact, Daniel Allcock of UT Austin posted a proof which essentially builds on the edifice of Zariski’s proof (see that here). He claims that it is the simplest proof of the Nullstellensatz, and frankly this is quite true, considering the proof uses nothing more than basic field theory, is only one page long, and just requires some familiarity with transcendence bases and the transcendence degree. Yet in the true spirit of simplicity, T. Tao has presented a proof which uses nothing more than the extended Euclidean algorithm and “high school algebra”, albeit with a lengthy case-by-case analysis (see the proof here).

Most of these proofs (except Tao’s) go about proving the ‘Weak’ Nullstellensatz and obtain the ‘Strong’ Nullstellensatz through the famous Rabinowitsch trick.

But a few days ago, I found something truly magnificent: a proof by Enrique Arrondo in the American Mathematical Monthly which proves the Nullstellensatz using a weaker version of Noether normalization and techniques similar to those of Tao and Ritabrata Munshi. The proof is essentially a simplification of a proof by R. Munshi.

Here, I present a special case of the normalization lemma.

Lemma

If $k$ is an infinite field and $f \in k[x_1, \ldots, x_n]$ is a non-constant polynomial whose total degree is $d$, then there exist $a_1, \ldots, a_{n-1} \in k$ such that the coefficient of $x_n^d$ in

$f(x_1 + a_1 x_n, \; \ldots, \; x_{n-1} + a_{n-1} x_n, \; x_n)$

is non-zero.

Proof:

Let $f_d$ represent the homogeneous component of $f$ of degree $d$. The coefficient of $x_n^d$ in $f(x_1 + a_1 x_n, \ldots, x_{n-1} + a_{n-1} x_n, x_n)$ is then $f_d(a_1, \ldots, a_{n-1}, 1)$. As a non-zero polynomial in $a_1, \ldots, a_{n-1}$ over an infinite field, there is some point where $f_d(a_1, \ldots, a_{n-1}, 1)$ doesn’t vanish. Choose such a point to establish the $a_i$, and this guarantees a non-zero coefficient of $x_n^d$.
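A tiny worked instance of the lemma (my own example, not from Arrondo’s paper): take $f = x_1 x_2 \in k[x_1, x_2]$, so $d = 2$ and $f_2 = x_1 x_2$. Substituting $x_1 \mapsto x_1 + a_1 x_2$ gives

```latex
f(x_1 + a_1 x_2,\; x_2) = (x_1 + a_1 x_2)\,x_2 = x_1 x_2 + a_1 x_2^2,
```

so the coefficient of $x_2^2$ is $a_1 = f_2(a_1, 1)$, and any non-zero choice of $a_1$ works, exactly as the proof predicts.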

Reflection groups

In order to understand the intuition underlying the theory of Coxeter groups (and Weyl groups in particular), you can go through the theory of reflection groups, which I’ll only superficially treat preceding my exposition of Coxeter systems and the associated representation theory.

Consider some Euclidean space $V$ with inner product $(\cdot, \cdot)$ and a non-zero vector $\alpha \in V$. The reflection associated with $\alpha$ is the linear operator $s_\alpha$ which sends $\alpha$ to $-\alpha$ and point-wise fixes the orthogonal hyperplane $H_\alpha = \{ v \in V : (v, \alpha) = 0 \}$.

If the reflection is $s_\alpha$, then it can be represented as:

$s_\alpha(v) = v - \dfrac{2(v, \alpha)}{(\alpha, \alpha)}\,\alpha.$

Clearly $s_\alpha$ is an orthogonal transformation of order 2, so the set of all reflections in $V$ can be recognized inside $O(V)$ (the orthogonal group of $V$) among the elements of order 2 (though not every order-2 element is a reflection, e.g. $-\mathrm{id}$ in dimension $\ge 2$).
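The reflection formula is easy to test numerically. A minimal pure-Python sketch (helper names mine) checking the defining properties: $\alpha \mapsto -\alpha$, the hyperplane is fixed, and $s_\alpha$ is an involution:

```python
def reflect(v, alpha):
    """s_alpha(v) = v - 2 (v, alpha) / (alpha, alpha) * alpha."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    c = 2 * dot(v, alpha) / dot(alpha, alpha)
    return [vi - c * ai for vi, ai in zip(v, alpha)]

alpha = [1.0, 0.0]
assert reflect(alpha, alpha) == [-1.0, 0.0]       # alpha -> -alpha
assert reflect([0.0, 1.0], alpha) == [0.0, 1.0]   # orthogonal hyperplane fixed

# s_alpha has order 2: reflecting twice is the identity.
v = [3.0, 4.0]
assert reflect(reflect(v, alpha), alpha) == v
```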

Before I begin discussing the Hopf fibration of the 3-sphere, one of the simplest yet deeply profound examples of a non-trivial fiber bundle, I’d like to recall the definition of a fiber bundle.

Let $E$, $B$ and $F$ represent the total space, the base space and the fiber respectively, where all three are connected. If $\pi : E \to B$ is a continuous surjection onto the base space, then the structure $(E, B, \pi, F)$ is said to be a fiber bundle if for every $x \in B$, there exists an open neighborhood $U$ of $x$ and a homeomorphism $\phi : \pi^{-1}(U) \to U \times F$ such that $\mathrm{proj}_U \circ \phi = \pi$ on $\pi^{-1}(U)$.

What this basically means is that locally, the fiber bundle looks like the product $B \times F$, but globally, it may have different topological properties.

A trivial fiber bundle is a fiber bundle in which the total space is exactly $B \times F$ (with $\pi$ the projection). In fact, any fiber bundle over a contractible CW complex is trivial.
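The Hopf map itself can be written down explicitly, which makes a small numerical sanity check possible (a sketch, helper names mine): view $S^3 \subset \mathbb{C}^2$ and send $(z_1, z_2) \mapsto (2 z_1 \bar{z}_2, |z_1|^2 - |z_2|^2)$, which lands on the unit 2-sphere, and whole circles $(e^{it} z_1, e^{it} z_2)$ map to single points — these circles are the fibers.

```python
import math

def hopf(z1, z2):
    """Hopf map S^3 -> S^2: (z1, z2) with |z1|^2 + |z2|^2 = 1 goes to
    (Re(2 z1 conj(z2)), Im(2 z1 conj(z2)), |z1|^2 - |z2|^2)."""
    w = 2 * z1 * z2.conjugate()
    return (w.real, w.imag, abs(z1) ** 2 - abs(z2) ** 2)

# A point of S^3 and a rotated point on the same fiber.
z1, z2 = complex(0.6, 0.0), complex(0.0, 0.8)
t = 1.234
u = complex(math.cos(t), math.sin(t))

p = hopf(z1, z2)
q = hopf(u * z1, u * z2)

# The image lies on the unit 2-sphere...
assert abs(sum(c * c for c in p) - 1.0) < 1e-12
# ...and the circle {(e^{it} z1, e^{it} z2)} maps to the same point (the fiber S^1).
assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```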

The real projective space, represented by $\mathbb{RP}^n$, is the space obtained from $\mathbb{R}^{n+1} \setminus \{0\}$ under the equivalence relation $x \sim \lambda x$ for $\lambda \neq 0$. Basically, $\mathbb{RP}^n$ is the set of lines which pass through the origin in $\mathbb{R}^{n+1}$. It can also be understood by identifying antipodal points (points which lie on opposite ends of a diameter) of the unit $n$-sphere $S^n$.

One very basic yet deeply interesting example of these spaces is $\mathbb{RP}^2$, known as the real projective plane. While $\mathbb{RP}^0$ is a point and $\mathbb{RP}^1$ is homeomorphic to a circle (a line plus a point at infinity), the real projective plane turns out to be far more interesting indeed. It can’t be embedded in $\mathbb{R}^3$, and its immersions, such as Boy’s surface, the Roman surface and the cross-cap, have far stranger structures than a mere circle as in the case of $\mathbb{RP}^1$. In fact, I will delineate some of the calculations involved in the Kusner–Bryant 3-dimensional parametrization of Boy’s surface. It’s a little fascinating how much complexity can be added to mathematical structures upon generalization, especially in the case of the projective spaces, which I believe have a remarkably simple and ‘non-threatening’ definition.

Nets/Moore-Smith Sequences

Sequences are common objects in the field of topology. Often, sequences can help identify continuous functions, limit points and compact spaces in metric spaces.

Moore-Smith sequences (or nets) are essentially a generalization of sequences to arbitrary topological spaces, and we can see that many foundational theorems of general topology can be stated in terms of nets.

So first, let’s recall what a partial order and a directed set are:

A partial order is an order relation $\le$ satisfying:

$a \le a$ for every $a$ (reflexivity).

If $a \le b$ and $b \le c$, then $a \le c$ (transitivity).

If $a \le b$ and $b \le a$, then $a = b$ (antisymmetry).

A directed set is a set $A$ with a partial order relation $\le$ such that for any $a, b \in A$, there exists $c \in A$ with $a \le c$ and $b \le c$.

It’s best to think of a directed set as a sort of analogue to an indexing set.
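As a quick sanity check (my own encoding, nothing standard): the subsets of a finite set, ordered by inclusion, form a directed set, since any two subsets have their union as an upper bound. This is the same directedness that lets nets index over ever-larger finite pieces of a space.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the finite set s, as frozensets (ordered by inclusion)."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

P = subsets({0, 1, 2})

# Directedness: every pair A, B has an upper bound C in P (A <= C means subset).
directed = all(any(A <= C and B <= C for C in P) for A in P for B in P)
assert directed
# The union is an explicit witness for the upper bound.
assert all((A | B) in P for A in P for B in P)
```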

I thought of a random topological problem a week ago in my Analysis class and it has been bugging me for quite a while. I tried searching around but couldn’t find anything.

Consider a non-intersecting curve in the plane. It can even be a closed loop, but it shouldn’t be ‘too straight’, else the problem is trivial. Consider a ‘thickening’ of the curve: if $\gamma_t$ is the thickened curve at time $t$, let it be obtained by adjoining closed disks of radius $t$ at every point of the curve (I won’t even bother to formalize it). Now, consider the relative complement homology (a.k.a. the local homology at the subspace). The vague problem is to find nice examples of curves and see how their homology groups vary with the parameter $t$. For example, if one considers an X shape, except it doesn’t exactly intersect (instead, there is a small gap), then at $t = 0$ we obviously have the homology of a circle. If we thicken it a little bit, we’ll get a 4-torus, which gains information in $H_1$, and if we thicken it even more, the entire thing collapses to 0. The same thing works with a circle (you get a torus on thickening). For a trivial example, if you just take a line segment, then any thickening deformation retracts to the segment, which contracts to a point.

I asked my Analysis professor about it; unfortunately, he didn’t have much of an idea and directed me to faculty who may know something about it (and whom I’ve yet to contact). Then, I approached my linear algebra professor after class to ask if he knew anything about it (probably a bad idea since he works in representation theory); he just laughed and walked out. I have some stuff jotted down on the problem, making little (obvious) progress, but I wanted to gather more examples of classes of curves and some inkling of a theorem before I post anything.

I am currently working on the third part of the Derived Functors sequence, some interesting things in stable homotopy theory.