From Stanley Xiao: I am looking to organize a series of learning seminars on the topic of bounded gaps between primes. In particular we will focus on the paper of Yitang Zhang and the subsequent paper by the Polymath project headed by Terence Tao. We will also look at the preliminary work of Bombieri, Friedlander, and Iwaniec, as well as the results of Goldston, Pintz, and Yildirim. I would like to hold an organizational meeting on Wednesday, January 15th at 2:30 PM (time is subject to change), at MC 5046. At the meeting we will discuss:
- how often talks will be given, and on what subjects; and
- who will contribute talks (the former will depend on the latter).
Undergraduates should have read some books or taken some courses on analytic number theory and sieve theory before attending; I also assume familiarity with rudimentary material like real analysis and undergraduate algebra. Thank you for your interest!
is more or less how I would describe my life this term. I almost never stop doing assignments. I really need to at least write a statement of purpose so that I’m ready to apply to grad schools, but I haven’t even had the chance to give that any thought lately. Before I give my impressions of courses so far, I want to ask for your opinions. Next Winter is my last term as an undergraduate, and I’m wondering which courses I should take (of the ridiculous number that are listed on my “Plan” page). I feel mainly interested in algebraic geometry and functional analysis, and I probably have a decent enough analysis background already (for an undergrad), so I was thinking about focusing on geometry, with something like this:
- Riemann surfaces
- Index theorems (I will have all the prerequisites for this, except algebraic topology)
- Commutative algebra or algebraic topology (which one?)
- 2 CO courses: nonlinear and combinatorial designs (to finish CO major requirements)
- Some lame non-math course (since I need 0.25 more non-math units… sigh)
(If you want to see the official course codes/titles, again, check the “Plan” page). Thoughts?
Here’s a summary of how I’m finding my courses this term. I’m also sitting in on PMATH 955, but I won’t talk about that here. I ordered them by ascending difficulty/time commitment.
PMATH 445 (Representations of Finite Groups): So far, this course has been really easy. I never thought I would use that word to describe a 4th year PMATH course. The assignments are doable in a couple of hours tops, and the material seems pretty standard. Initially I thought it was just because I had experience with representation theory from when I took Lie Groups, but it seems like almost everyone is finding the course easy going. I can’t complain, since I honestly don’t think I would have the time to throw at this course if it were much more demanding. It meets at 9:30 am (my first class of the day), and notes are posted online, so admittedly I haven’t been attending very regularly (things became hectic in the past week or two).
PMATH 900 (Valued Fields): This course was also easy going in the beginning. It’s a bit more technical now, but still rather palatable since all the objects involved are just fields and certain kinds of rings, and other stuff we all know and love. There’s no highly sophisticated machinery to deal with (I’ll save that for last). The first assignment was very reasonable, but not something I would want to attempt doing in a single day. Algebraic cleverness comes in short bursts for me.
PMATH 465 (Riemannian Geometry/“Diff Geo 2”): Just like representation theory, the material here is pretty tame, or maybe it just seems that way since (having been through 753, 763, 441, 365, and just about every other course) I’m used to seeing linear algebra everywhere. The concepts seem natural enough to me, although I feel kind of uncomfortable when I have to get my hands dirty and think about solutions of ODEs. I don’t really know the first thing about differential equations. Lectures aren’t hard to follow for the most part, but I just zone out when he starts grinding through horrific tensor calculations, mainly because I have to focus so much on TeXing them, which is always a pain. Assignments are long and routinely absorb my weekends, including the current one.
CO 430 (Algebraic Enumeration): This is a serious course on enumeration. The operations on species are really cleverly chosen to make the generating functions behave as you’d expect, and they allow you to write down extremely concise formulas that capture some pretty nontrivial ideas. Even if I’m dealing with an equation involving 3 species, it can take me a few minutes to unwrap the definition and grok what it’s actually trying to say. Lectures aren’t the easiest to follow, and the assignments take me a while to solve and write up (he could probably make them significantly more difficult while staying within the margins of reason, though).
PMATH 822 (Operator Spaces): I don’t even know where to start. A lot of people who took Lie Groups with me last Winter would describe it as the hardest course they ever took. This course is at least 5 times more insane. In addition to the disfigured tensor products everywhere, there are now von Neumann algebras flying around, and we never even defined them. The other students (who are mostly PhD students) seem way better than I am at analysis, and math in general. I’m not blaming myself for this; after all, I’m just an undergrad, and hopefully I will be able to reach a similar level in a couple of years. To be honest, it feels like the lectures are aimed at the PhD students in analysis, and my background in functional analysis just feels inadequate (I also have no measure theory background beyond PMATH 450). Keep in mind that this is coming from someone who generally wrote perfect assignments in Functional Analysis and meticulously checked every detail. It seems like we are also expected to have a lot of time to spend thinking about the material outside of class, which is understandable since grad students usually only take two courses. Unfortunately, I have to deal with five. I can’t even emphasize this enough; if you can’t do functional analysis upside-down blindfolded while reciting the alphabet backwards in your sleep UNDERWATER, you probably barely have a chance of grokking this material during the lecture. I started the first assignment fairly late, and proceeded to sink an enormous amount of time into it. I still didn’t completely finish. I should have known better; operator theory is notoriously subtle. I did feel like I could solve the remaining things with a couple more days, but even so, there was no room in my schedule to continue working on it past today.
Most people have heard either about the golden ratio

$$\varphi = \frac{1 + \sqrt{5}}{2} = 1.6180339887\ldots$$
or about the Fibonacci numbers
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ….
They are intimately related, and I could write several enormous posts enumerating all of their amazing properties.
The golden ratio is an irrational number which satisfies the polynomial equation $x^2 - x - 1 = 0$, that is, we have $\varphi^2 = \varphi + 1$. In fact, since this is the lowest-degree (hence “simplest”) polynomial annihilating $\varphi$, we refer to it as the minimal polynomial of $\varphi$. Many surprising facts can be derived from this innocuous-looking relation. For example, $\varphi^2 = 1 + \varphi$ immediately yields that $\varphi = \sqrt{1 + \varphi}$, from which we get the “continued surd” expression

$$\varphi = \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}.$$
Stated differently, $\varphi$ is a fixed point of the mapping $x \mapsto \sqrt{1 + x}$. When you were a kid, if you were bored with a calculator, maybe you had the idea of starting with some number, adding 1 to it and taking its square root, and then repeating this process ad nauseam. If you did that, you would have found that eventually the numbers on your calculator stop changing at exactly the value $\varphi = 1.618\ldots$.
For something else, take our original equation $\varphi^2 = \varphi + 1$, and divide through by $\varphi$ to obtain $\varphi = 1 + \frac{1}{\varphi}$. This tells us $\varphi$ is also fixed by the map $x \mapsto 1 + \frac{1}{x}$. It follows immediately that the so-called “continued fraction expansion” of $\varphi$, which (in a precise sense) provides the data of the “best rational approximants” to $\varphi$, must look like:

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ddots}}}$$
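If you want to watch this convergence happen, here’s a quick numerical sketch (my own addition, not from the original post); the starting value 2.0 is arbitrary:

```python
# Iterate the two maps that fix the golden ratio:
#   x -> sqrt(1 + x)   (the "continued surd")
#   x -> 1 + 1/x       (the continued fraction)
from math import sqrt

PHI = (1 + sqrt(5)) / 2

x = 2.0
for _ in range(50):
    x = sqrt(1 + x)      # continued surd iteration
print(x, PHI)            # both are ~1.618033988749895

y = 2.0
for _ in range(50):
    y = 1 + 1 / y        # continued fraction iteration
print(y, PHI)            # same limit from the other map
```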
With a lot of irrational numbers, we get much less pretty continued fraction expansions; for example,

$$\pi = 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \ddots}}}}$$
One thing to note is that the appearance of a large number in the continued fraction expansion, like 292 above, is telling us something about Diophantine approximation: that is, how well we’re able to approximate our number by rationals of a given denominator. The rationals you obtain by truncating a number’s continued fraction expansion are provably always the “best” in this sense. Thus, if we look at $\varphi$, whose continued fraction is just all 1’s, we can say that in this precise sense, $\varphi$ is the number for which this “approximability” is the worst. It is as hostile as can be towards rational numbers.
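To see where the 292 comes from, here’s a small sketch (mine, not the post’s) that peels off continued fraction terms by repeatedly splitting off the integer part and reciprocating the rest; the 35-digit rational approximation of $\pi$ below is something I’m supplying just for illustration:

```python
# Compute continued fraction terms of a positive rational number.
# Fractions keep the arithmetic exact, so later terms aren't polluted
# by floating point error.
from fractions import Fraction

def cf_terms(x, n):
    """First n continued fraction terms [a0; a1, a2, ...] of x > 0."""
    terms = []
    for _ in range(n):
        a = int(x)          # integer part = next term
        terms.append(a)
        x = x - a
        if x == 0:
            break
        x = 1 / x           # reciprocate the fractional part
    return terms

# pi truncated to 35 decimal places (plenty for the first few terms)
pi = Fraction(314159265358979323846264338327950288, 10**35)
print(cf_terms(pi, 7))      # [3, 7, 15, 1, 292, 1, 1]
```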
Anyway, this is just a small taste of $\varphi$. Now let’s do something seemingly random: take the polynomial $x^2 - x - 1$ and “flip” the sequence of coefficients around, to obtain $1 - x - x^2$ (alternatively, replace each exponent $k$ in the expression with the new exponent $2 - k$). Now we’ll take a reciprocal:

$$\frac{1}{1 - x - x^2}.$$
We’re going to look at the series expansion of this thing, which we get from the geometric series formula:

$$\frac{1}{1 - (x + x^2)} = \sum_{m \ge 0} (x + x^2)^m = \sum_{m \ge 0} \sum_{j \ge 0} \binom{m}{j} x^{m+j}.$$

For what it’s worth, I don’t care at all about convergence (it’s the summer break; my operator theory course doesn’t start until 2 weeks from now!), so just work completely formally.
Stare at this thing for a while and you notice that the coefficient of $x^n$ in the above is given by

$$F_n = \sum_{k \ge 0} \binom{n - k}{k},$$
which is (of course) actually a finite sum since we nonchalantly ditch any terms with $k < 0$ or $k > n - k$. The sequence $(F_n)$ is the Fibonacci sequence (okay, except for being offset by one or something). It is not hard to show that as $n \to \infty$, we have $F_{n+1}/F_n \to \varphi$. In fact these common ratios are nothing more than the convergents of the continued fraction expansion of $\varphi$.
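Here’s a quick sanity check I’m adding (not in the original post): compute the coefficients by the binomial sum above, and watch the consecutive ratios creep toward $\varphi$:

```python
# Coefficient of x^n in 1/(1 - x - x^2) as a diagonal sum of binomials,
# which should reproduce the (offset) Fibonacci numbers.
from math import comb, sqrt

def coeff(n):
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

fibs = [coeff(n) for n in range(15)]
print(fibs)               # [1, 1, 2, 3, 5, 8, 13, 21, ...]
print(fibs[10] / fibs[9]) # 89/55 = 1.61818..., already close to phi
print((1 + sqrt(5)) / 2)  # 1.618033988749895
```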
One cool thing Wikipedia mentions is that you can tile the plane with a “spiral” of squares whose side lengths are given by the Fibonacci sequence (there’s a nice picture of this on the Wikipedia page for the Fibonacci numbers).
In the next post I will discuss why all this number-theoretic information related to $\varphi$ (root of the polynomial $x^2 - x - 1$) shows up in the power series expansion of the reciprocal of the “reversed” polynomial $1 - x - x^2$. I’ll also apply the same general procedure to $x^3 - x - 1$, which has the so-called “plastic constant” as its root. The sequence we’ll obtain is called the Padovan sequence, and you can perform a similar tiling of the plane using triangles of those side lengths. Once you look at $x^4 - x - 1$, the sequence you get from the series expansion is no longer nice and monotonic. This is odd, but in hindsight unsurprising, since by analogy we would expect it to correspond to some kind of “tiling of the plane by a spiral of 2-sided polygons”, which is absurd.
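As a teaser, here’s the same recipe applied to $x^3 - x - 1$ (a sketch of my own; reversing the coefficients gives $1 - x^2 - x^3$, and I’m glossing over the usual indexing offset of the Padovan numbers):

```python
# Series coefficients of 1/(1 - x^2 - x^3): from
# (1 - x^2 - x^3) * sum(c_n x^n) = 1 we get c_n = c_{n-2} + c_{n-3}.
def reversed_cubic_coeffs(n):
    c = [0] * n
    c[0] = 1  # constant term
    for i in range(1, n):
        c[i] = (c[i - 2] if i >= 2 else 0) + (c[i - 3] if i >= 3 else 0)
    return c

print(reversed_cubic_coeffs(14))
# [1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16] -- the Padovan numbers
```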
Since we’ve been discussing sheaf cohomology for the last few weeks of the algebraic geometry seminar, and I’m leaving Waterloo soon, I was thinking about possible topics for what will probably be the last seminar talk. I figured that having drudged through all this machinery, it would be nice to look at a cohomological characterisation of affine schemes: namely, the fact that a scheme $X$ is affine if and only if all quasi-coherent sheaves $\mathcal{F}$ of $\mathcal{O}_X$-modules are acyclic, i.e. $H^i(X, \mathcal{F}) = 0$ for $i > 0$. In this post I’ll go over the treatment in [Hartshorne III.3, "Cohomology of a Noetherian Affine Scheme"]. I will probably explain all this stuff more coherently in a video sometime down the road.
This is called Serre’s affineness criterion, and the key to the proof (or at least one direction of it) lies in the fact that if you start with an injective $A$-module $I$, and consider its associated sheaf $\widetilde{I}$ of $\mathcal{O}_X$-modules on $X = \operatorname{Spec} A$ (just defined by $\widetilde{I}(D(f)) = I_f$ on basic open sets), then in fact this is flasque. We saw before that flasque sheaves are acyclic for the global sections functor $\Gamma(X, -)$, so in particular we can use flasque resolutions to compute cohomology (this will be important later).
We also saw that injective sheaves are flasque, so one might be tempted to claim that the “key” we mentioned above is a mere triviality: indeed, why not just observe that (in view of the equivalence of categories) any injective $A$-module will give rise to an injective sheaf, and then finish? The problem with this argument is that the category of $A$-modules is equivalent to the category of quasicoherent sheaves of $\mathcal{O}_X$-modules, and not the full category of $\mathcal{O}_X$-modules. So yes, we will always have an injective of the former category, but we would need an injective of the latter category to conclude flasqueness — and in general this does not happen.
The starting point is a theorem of Krull from commutative algebra. The full statement concerns the $\mathfrak{a}$-adic topology on an $A$-module, and I don’t really know much (nor do I currently have time to read Atiyah–Macdonald) about completions. However, we only really need one containment:
Krull’s Theorem. Let $A$ be a Noetherian ring and $\mathfrak{a} \subseteq A$ be an ideal. If $M \subseteq N$ are finitely generated $A$-modules, then for any $n > 0$ there is $n' \ge n$ such that $\mathfrak{a}^n M \supseteq M \cap \mathfrak{a}^{n'} N$.
Now, define the following submodule of $I$:

$$J = \Gamma_{\mathfrak{a}}(I) = \{ x \in I : \mathfrak{a}^n x = 0 \text{ for some } n > 0 \}.$$
Before proceeding, let us mention a remark about injectives. We said an object $I$ of an abelian category was injective if the functor $\operatorname{Hom}(-, I)$ is exact. This (contravariant) functor is always left exact, so the important thing to take away is the following: “$I$ injective” means that if $M' \subseteq M$ is a submodule and $f \colon M' \to I$ is a morphism, then $f$ extends to a morphism $M \to I$.
Surprisingly, the above turns out to be equivalent to the following seemingly weaker condition (Baer’s criterion), namely: if $\mathfrak{b}$ is an ideal of $A$ and $f \colon \mathfrak{b} \to I$ is a morphism, then $f$ extends to a morphism $A \to I$. This equivalence is a basic result from commutative algebra.
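To make this concrete, here is a toy instance (my example, not from Hartshorne). Over $A = \mathbb{Z}$, every ideal has the form $(n)$, so Baer’s criterion asks: given $f \colon (n) \to I$ with $f(n) = q$, can we extend to $g \colon \mathbb{Z} \to I$? Setting $g(1) = r$ works precisely when $nr = q$ has a solution $r \in I$; that is, Baer’s criterion for $\mathbb{Z}$-modules is exactly divisibility, recovering the classical fact that divisible abelian groups (e.g. $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$) are injective.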
Baer’s criterion reminds me of a similar thing that came up when trying to formulate the universal property of the Stone–Čech compactification: in some sense the closed unit interval $[0,1]$ is a “good enough” representative of the class of all compact Hausdorff spaces (this is formalised in the fascinating notion of an injective cogenerator).
Lemma 1. Let $A$ be a Noetherian ring and let $\mathfrak{a} \subseteq A$ be an ideal. If $I$ is an injective $A$-module, then $J = \Gamma_{\mathfrak{a}}(I)$ is also an injective $A$-module.
To prove this, we only need to establish Baer’s criterion for $J$. This is done by observing that one can apply Krull’s theorem to an inclusion $\mathfrak{b} \subseteq A$, descend a given morphism $f \colon \mathfrak{b} \to J$ to $\mathfrak{b}/(\mathfrak{b} \cap \mathfrak{a}^{n'}) \subseteq A/\mathfrak{a}^{n'}$ (where it extends, by injectivity of $I$), and finally use the natural map $A \to A/\mathfrak{a}^{n'}$ to pull back to $A$ as required.
Lemma 2. Let $A$ be a Noetherian ring, and $I$ an injective $A$-module. Then for any $f \in A$, the natural map $\theta \colon I \to I_f$ to the localisation is surjective.
This lemma isn’t very difficult either. If $\mathfrak{b}_r$ is defined as the annihilator of $f^r$ in $A$, then you get an ascending chain of ideals $\mathfrak{b}_1 \subseteq \mathfrak{b}_2 \subseteq \cdots$ in $A$, but $A$ is Noetherian, so $\mathfrak{b}_r = \mathfrak{b}_{r+1} = \cdots$ for some $r$, yada yada. Then, letting $\theta \colon I \to I_f$ be the natural map, you take some $x \in I_f$, write $x = \theta(y)/f^n$ for some $y \in I$ and $n \ge 0$ (you can do this by definition of localisation), and define a map $(f^{n+r}) \to I$ by sending $f^{n+r} \mapsto f^r y$ (this turns out to be fine since $(f^{n+r}) \cong A/\mathfrak{b}_{n+r}$ as $A$-modules, and $\mathfrak{b}_{n+r} = \mathfrak{b}_r$ kills $f^r y$). Lift this to a map $\psi \colon A \to I$ by injectivity of $I$, and then let $z = \psi(1)$. Then $f^{n+r} z = f^r y$, so $\theta(z) = \theta(y)/f^n = x$. Magic.
Proposition. If $I$ is an injective $A$-module, then $\widetilde{I}$ is a flasque sheaf of $\mathcal{O}_X$-modules, where $X = \operatorname{Spec} A$.
To establish this, we use Noetherian induction on the support of the sheaf (call it $Y$). The basic idea is, for some open $U \subseteq X$, to choose some $f \in A$ and consider some open of the form $D(f) \subseteq U$. Noting that $\Gamma(D(f), \widetilde{I}) = I_f$, we can invoke the lemma above, and then the problem reduces to showing $\Gamma(X, \widetilde{J}) \to \Gamma(U, \widetilde{J})$ is surjective, where $J = \Gamma_{\mathfrak{a}}(I)$ with $\mathfrak{a} = (f)$. But this follows by induction (note $J$ is an injective $A$-module by the lemma, hence $\widetilde{J}$, whose support is strictly contained in $Y$, is flasque; at this point we win since $\Gamma(V, \widetilde{J}) \subseteq \Gamma(V, \widetilde{I})$ for all opens $V$).
Theorem. Let $X = \operatorname{Spec} A$ for some Noetherian ring $A$. Then $H^i(X, \mathcal{F}) = 0$ for all $i > 0$, for all quasicoherent sheaves $\mathcal{F}$ on $X$.
To see this, let $M = \Gamma(X, \mathcal{F})$. Take an injective resolution $0 \to M \to I^0 \to I^1 \to \cdots$ in the category of $A$-modules, and apply the Serre functor $M \mapsto \widetilde{M}$ to get a flasque resolution $0 \to \mathcal{F} \to \widetilde{I^0} \to \widetilde{I^1} \to \cdots$ of $\mathcal{F}$. Applying the global sections functor, we just get back the original resolution, so we’re done.
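Schematically (my own summary of this step, with $\sim$ denoting the functor $M \mapsto \widetilde{M}$):

$$\bigl(0 \to M \to I^0 \to I^1 \to \cdots\bigr) \;\overset{\sim}{\longmapsto}\; \bigl(0 \to \mathcal{F} \to \widetilde{I^0} \to \widetilde{I^1} \to \cdots\bigr) \;\overset{\Gamma(X,-)}{\longmapsto}\; \bigl(0 \to M \to I^0 \to I^1 \to \cdots\bigr),$$

and the complex we end with is exact, so all the higher cohomology groups vanish.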
Theorem (Serre). Let $X$ be a Noetherian scheme. Then TFAE:
- $X$ is affine.
- $H^i(X, \mathcal{F}) = 0$ for any quasicoherent sheaf $\mathcal{F}$ on $X$ and any $i > 0$.
- $H^1(X, \mathcal{I}) = 0$ for any coherent sheaf of ideals $\mathcal{I}$ on $X$.
We’ve already shown that (1) => (2), and (2) => (3) is easy. (3) => (1) can be proved using the following characterisation of affineness: $X$ is affine if and only if there are $f_1, \ldots, f_r \in A = \Gamma(X, \mathcal{O}_X)$ such that $(f_1, \ldots, f_r) = A$, each set $X_{f_i}$ is affine, and $X$ is covered by the $X_{f_i}$.
by Wei Xi Fan
R: What is $\frac{d}{dx}\, x^n$?
R: What is the quantum analogue of $\frac{d}{dx}\, x^n = n x^{n-1}$?
L: I have no clue.
R: Let’s try $x^n$. This is the sequence $0^n, 1^n, 2^n, 3^n, \ldots$
L: OK. $\Delta x^n = (x+1)^n - x^n$. I can’t get anywhere from here that would make it look remotely close to $n x^{n-1}$.
R: What do we do?
L: We call upon the fire of inspiration itself, Ignis.
IGNIS: The truth ye seek is but a cascading staircase, but one that ends.
L: What do you think that means?
R: Instead of trying $x^n = x \cdot x \cdots x$, let’s try $x(x-1)(x-2)\cdots(x-n+1)$;
there are $n$ terms in total just like in $x^n$, but here, each term is one less than the previous.
L: Like a factorial that suddenly ends. All right, so
$$\Delta \left[ x(x-1)\cdots(x-n+1) \right] = (x+1)\, x\, (x-1)\cdots(x-n+2) \;-\; x(x-1)\cdots(x-n+1).$$
Oh! The factors line up. The subtraction yields
$$x(x-1)\cdots(x-n+2) \left[ (x+1) - (x-n+1) \right] = n \cdot x(x-1)\cdots(x-n+2).$$
This is exactly what we are looking for.
R: Great! So we have the identity
$$\Delta\, x^{\underline{n}} = n\, x^{\underline{n-1}}, \qquad (*)$$
where $x^{\underline{n}}$ is shorthand for $x(x-1)\cdots(x-n+1)$: the exact analogue of $\frac{d}{dx}\, x^n = n x^{n-1}$ in the real calculus.
L: OK. What about when $n = 0$ or when $n$ is negative?
R: Well… what do we do to get from $x^{\underline{n}}$ to $x^{\underline{n-1}}$?
L: Well, we divide $x^{\underline{n}}$ by its last term, which is $(x - n + 1)$, to get to $x^{\underline{n-1}}$, because the only difference between them is that $x^{\underline{n-1}}$ is missing the $(x - n + 1)$ term.
R: What do we get when we go from $x^{\underline{1}}$ to $x^{\underline{0}}$?
L: Well, $x^{\underline{1}} = x$, and so we divide this by $x$ to get to $x^{\underline{0}}$, so I guess we should define $x^{\underline{0}} = 1$.
R: What about $x^{\underline{-1}}$?
L: Well, we would divide this by $(x + 1)$, so I guess it is $x^{\underline{-1}} = \frac{1}{x+1}$.
R: So what’s the general formula?
L: $x^{\underline{-n}} = \dfrac{1}{(x+1)(x+2)\cdots(x+n)}$.
R: Are you sure the identity (*) still holds for these new definitions?
R: For these situations, we should call upon the minion of non-illuminating bashing, Grunt.
GRUNT: ‘Tis done.
R: There, this fact has been verified.
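(An aside from the typist: if you’d rather not take Grunt’s word for it, here is a quick check of (*) under the extended definitions, using exact rational arithmetic; the helper `falling` is my own naming.)

```python
# Check Delta x^[n] = n * x^[n-1] for positive, zero, and negative n.
from fractions import Fraction

def falling(x, n):
    """Falling power x^[n], extended to n <= 0 as in the dialogue."""
    if n >= 0:
        out = Fraction(1)
        for k in range(n):
            out *= (x - k)       # x(x-1)...(x-n+1); empty product is 1
        return out
    out = Fraction(1)
    for k in range(1, -n + 1):
        out /= (x + k)           # 1 / ((x+1)(x+2)...(x+|n|))
    return out

for n in (3, 1, 0, -1, -2):
    for x in (Fraction(5), Fraction(7, 2)):
        lhs = falling(x + 1, n) - falling(x, n)   # Delta x^[n]
        rhs = n * falling(x, n - 1)
        assert lhs == rhs, (n, x)
print("(*) holds for every tested case")
```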
L: I shall call these Pochhammer symbols.
L: Or we can call them falling factorials. Or falling powers.
Another “soft post”. It has been a pretty long time…
I’m not really sure how to get started here, so let me just start with an observation. As I’ve mentioned many times in my posts, I tend to be very nocturnal; although I sometimes manage to “normalize” my sleep schedule, it never lasts for more than a week. For example, it is currently 6:00 in the morning. When I have classes to attend, or other daytime responsibilities, I usually compensate by sleeping twice a day for half as long.
Fortunately, my plight is shared by a fair number of mathematics students (although admittedly it seems to be shared by even more computer science students). The result is that I spend a lot of time talking to those people (mostly undergrads in lower years) about mathematics. You all know who you are.
What I find ironic is that lower-year students seem to assume they’re wasting my time, or that I will find their thoughts and questions boring. This couldn’t be further from the truth; in fact I’ve found all such conversations quite intriguing and useful. I feel like I probably learn as much from these people as they do from me. For example, I kind of understand the implicit function theorem now! And it’s been almost 3 years since I took Calculus 3. Anyone who knows me will agree that unless I’m honestly rushing to finish some work, I’m always happy to think about a problem, or explain some concept.
The other thing I want to mention is the burden of correcting others when they make a false statement. I choose the word “burden” carefully, because it is usually a feat (especially for people like me, whose body language tends to be pretty arbitrary) to execute this favour without rendering the atmosphere uneasy, or provoking a defensive reaction. The smallest difference in the tone of your voice could make the difference between the person laughing and thanking you, or becoming irritated as if you had stood up and straight out called them an imbecile.
Alas, such is the untrained human ego. Over the years, I’ve grown completely accustomed to being flat out wrong. It’s unavoidable. Mathematics is done with such acute precision, and such calculated deliberation, that recalling everything correctly all the time is practically impossible, and trust me, there are diminishing returns on trying to achieve that kind of perfection. Often I’ll state a flawed version of something I remember reading in a book, or miss an edge case in a definition. The whole point is that there are usually enough people in the room to tell me I’m wrong.
If you were the only person left in the world, doing mathematics would be utterly pointless. As they say: “a proof in a vacuum is no proof at all.” Others are rendering you an enormous service by correcting your mistakes. Whenever the primal ego-goose inside you starts ruffling its feathers, just give it a swift slap of rationality, and remind yourself of this fact. The alternative is that people remain silent and thereby leave you to spoil in the cesspool of your own ignorance for that much longer. Is that really desirable?
One thing you need to understand about doing mathematics is that you are working in a veritable exosphere of abstraction. You are standing on a magnificent, skillfully crafted mountain, and should you ever choose to trek down to its base, you will discover that this mountain was merely rooted on yet another mountain, double its size, and so on and so forth. It’s way too difficult to remember how to prove every single fact on the “dependency chain” to where you are now. That’s like remembering each and every stone you saw on your way up the mountain. Try to remember as much as you can, but don’t forget there are plenty of pretty yellow photo albums of all those stones anyway, should you ever forget one of them.
Okay, I think that’s all I wanted to say. I’ll try to type Chapter 2 of Wei Xi’s quantum calculus saga soon. Until next time, cheers.
Also, it looks like representation theory got slightly nerfed due to the fact that we now only have one course for both group and ring theory: the material on the Sylow theorems and semidirect products has been moved into rep theory instead of a group theory course, where it actually belongs (and where I saw it). Sigh… less time for actual rep theory. The old system was better; upper-year courses are being watered down more and more.
by Wei Xi Fan
Let us take a detour into the world of quantum calculus. <insert weird chromatic 8-bit beeps>
L: It is rather embarrassing that sums are not additive, isn’t it?
R: Hmm…you mean $\sum_{k=m}^{n} a_k + \sum_{k=n}^{p} a_k \ne \sum_{k=m}^{p} a_k$ for $m \le n \le p$?
L: Yes, precisely. Let’s fix it.
R: OK. How about for $\sum_{k=m}^{n} a_k$ let’s use $a_m + a_{m+1} + \cdots + a_{n-1}$, stopping just short of the top index?
L: Yes, that works! After this change, we will now have a sum that is additive. To wit, in
$$\sum_{k=m}^{n} a_k + \sum_{k=n}^{p} a_k = \sum_{k=m}^{p} a_k \qquad (*)$$
we now have in the left-hand side $(n - m) + (p - n)$ terms and on the right-hand side $p - m$; the last term of the first sum is $a_{n-1}$ while the first term of the second sum is $a_n$, thus matching up perfectly.
R: What about when $n = m$ or $n < m$?
L: Well, to make additivity work, we will have to define
$$\sum_{k=m}^{m} a_k = 0 \qquad \text{and} \qquad \sum_{k=m}^{n} a_k = -\sum_{k=n}^{m} a_k \;\text{ for } n < m.$$
R: Does additivity still hold with these definitions?
L: Why, yes, this is why we made such definitions in the first place.
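(An aside from the typist: the convention the dialogue just settled on is easy to machine-check; the helper `S` below is my own notation for the new half-open, oriented sum.)

```python
# S(a, m, n) = a[m] + a[m+1] + ... + a[n-1], with the sign convention
# S(a, m, n) = -S(a, n, m) when n < m, and S(a, m, m) = 0.
def S(a, m, n):
    if n < m:
        return -S(a, n, m)
    return sum(a[k] for k in range(m, n))   # empty sum when n == m

a = [1, 4, 9, 16, 25, 36, 49]
assert S(a, 0, 3) + S(a, 3, 6) == S(a, 0, 6)  # additivity (*), no double count
assert S(a, 2, 2) == 0                        # the n == m case
assert S(a, 0, 5) + S(a, 5, 2) == S(a, 0, 2)  # additivity even when n < m
```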
R: The fundamental theorem of calculus is nice.
L: Why did you bring that up?
R: Well, it’s nice. It says integration and differentiation are inverse operations.
L: Hmm. Wouldn’t it be nice if there were somehow an inverse operation to summation?
R: Suppose we had a sequence of numbers $a_0, a_1, a_2, \ldots$ and we transformed it into a new sequence $b_0, b_1, b_2, \ldots$ where $b_n = \sum_{k=0}^{n} a_k = a_0 + a_1 + \cdots + a_{n-1}$. (Using our new notation, of course.) What can we do to recover the original sequence $a_0, a_1, a_2, \ldots$?
L: Well, we can take the successive differences: $b_{n+1} - b_n$.
R: Which is?
L: Let’s see…
$$b_{n+1} - b_n = (a_0 + a_1 + \cdots + a_n) - (a_0 + a_1 + \cdots + a_{n-1}) = a_n.$$
Oh! This reminds me of the fundamental theorem of calculus. The analogy is that our sequence $(a_n)$ is like a function $f$ in calculus. When we transform it to a new, accumulative sequence $(b_n)$, this corresponds to us transforming $f$ into another, accumulative function $F(x) = \int_0^x f(t)\, dt$.
R: Exactly. The accumulation occurs discretely for our sequence, while the accumulation is infinitesimal for our function (a.k.a. integration).
L: But what does taking successive differences correspond to?
R: Hint: discrete versus infinitesimal.
L: I understand now. Taking successive differences corresponds to differentiation: given a sequence $(b_n)$ we can make a new sequence $(b_{n+1} - b_n)$, corresponding to making a new function $f'$ by differentiating some function $f$ at each point.
R: So does summing and then differencing cancel each other out?
L: Well let’s see. Let’s start with $a_0, a_1, a_2, a_3, \ldots$
Let’s sum it: $0, \; a_0, \; a_0 + a_1, \; a_0 + a_1 + a_2, \; \ldots$
Let’s difference it: $(a_0 - 0), \; (a_0 + a_1 - a_0), \; \ldots = a_0, \; a_1, \; a_2, \; \ldots$
We got back our original sequence. This tells us that differencing is the opposite of summation. Wait a minute… that’s what we did a minute ago.
R: If we really waited a minute, this is a tautology. What happens if we difference first, and then sum?
L: Well, let’s start with $a_0, a_1, a_2, \ldots$ again and first difference it: $a_1 - a_0, \; a_2 - a_1, \; a_3 - a_2, \; \ldots$
Let’s sum it. Watch this telescoping:
$$\sum_{k=0}^{n} (a_{k+1} - a_k) = (a_1 - a_0) + (a_2 - a_1) + \cdots + (a_n - a_{n-1}) = a_n - a_0.$$
Hmm…we got our original sequence, except each term had $a_0$ subtracted away from it.
R: What does this remind you of?
L: $\int_0^x f'(t)\, dt = f(x) - f(0)$…
R: That’s right: the second fundamental theorem of calculus.
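(An aside from the typist: here are both round trips from the dialogue, run mechanically; the helper names are mine.)

```python
# Round trips between a sequence and its accumulated version, using the
# half-open convention b[n] = a[0] + ... + a[n-1].
def summed(a):
    b, total = [], 0
    for x in a:
        b.append(total)      # b[n] accumulates a[0..n-1]
        total += x
    b.append(total)
    return b

def differenced(a):
    return [a[k + 1] - a[k] for k in range(len(a) - 1)]

a = [3, 1, 4, 1, 5, 9, 2, 6]
print(differenced(summed(a)))  # sum, then difference: the original sequence
print(summed(differenced(a)))  # difference, then sum: original minus a[0]
# -> [0, -2, 1, -2, 2, 6, -1, 3], i.e. each term is a[n] - a[0]
```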
L: Great! We should give the operation of finite difference a symbol so we can write down our fundamental theorems in a succinct form.
R: Given a sequence $(a_n)$, let us define a new sequence $\Delta a$ by $(\Delta a)_n = a_{n+1} - a_n$.
L: Our sequences are double-ended now?
R: Why not? Such sequences are just functions from $\mathbb{Z}$ into $\mathbb{R}$ anyway.
L: Oh. I guess all of the above remains unchanged, so this is fine. Using $0$ as a starting point was arbitrary anyway.
R: What are our fundamental theorems?
L: If $b_n = \sum_{k=0}^{n} a_k$, then $(\Delta b)_n = a_n$; and in the other direction, $\sum_{k=m}^{n} (\Delta a)_k = a_n - a_m$.
R: Let us call this shadow of the real calculus some kind of umbral calculus.
L: Why not quantum calculus? It is discrete, after all.
R: Why not finite calculus then?