Majoring in Mathematics

From Conservapedia


This article is intended to provide a brief introduction to what college mathematics is all about. It is aimed at high school students who have enjoyed mathematics in high school, and are starting to think about what to study in college. This article is a work in progress -- please expand and leave suggestions at the talk page. The theorems and ideas mentioned here aren't all things that you'll necessarily encounter in classes; indeed, most of them you probably won't. Instead, they are presented to indicate the sorts of thinking and methods that go into mathematics, and the sorts of problems that mathematicians work on today.

Mathematics is a strong and marketable major for students, relatively (but not completely) free of liberal bias.

Here is an outline of what I hope to write.


What College Mathematics is Not

Before we look at the sorts of problems that actually are dealt with by mathematics in college, let's look at some that are not. Rest assured, many of the most painful aspects of math classes you've taken before aren't so bad in college!

  1. Simply learning more advanced versions of mathematical ideas from high school. Tired of learning tricks for integration? Majoring in math won't just teach you new methods to integrate functions -- if you've taken an introductory calculus class, chances are you already know the important ones! What it will do is force you to think carefully about basic questions about what an integral is. Given some subset of space, what does it really mean to find its volume?
  2. Doing lots of computations. Chances are you'll have to do a few, but most mathematics in college is based around "proofs" -- very careful and rigorous arguments which show that mathematical statements are true. Many proofs won't involve any computations at all!
  3. Learning a bag of tricks to do Olympiad-type problems. If you've ever taken the AMC, Math League, or other similar competition tests, and not done well, don't worry! These tests often check nothing more than whether you're familiar with some collection of techniques for solving special problems. One math professor at Harvard keeps in his desk a copy of a math competition on which he scored a perfect 0/120. Mathematical theory goes far beyond these problems.

Instead, college mathematics is about developing new ways to rigorously approach a wide array of problems. More than aiming to find tricky techniques to solve certain problems, it is about building powerful approaches that solve many problems, and learning to think rigorously about them.

Liberal Bias

It is important to be aware that like many college degrees, mathematics has been subject to liberal bias. Fortunately, more conservative fields such as real analysis and differential geometry are perfectly viable paths for a student to take.

Major Topics of Study

Given that a math major won't just be getting more practice at the sorts of math encountered in high school, what is it about? Here's a list of some of the major fields that are the foundation of a mathematics education.

For each field, I plan to give a very general description and to describe a few problems/theorems that are nice results and capture some of the essence of a subject. I'll aim for a few paragraphs about each one. Then I will give a problem or two that is a much more open-ended question that has motivated the development of large amounts of theory. For now I have not written about all of these -- please leave feedback and suggestions on the talk page and I will try to arrive at the best set of motivating problems!

Algebra

  1. How many positions are there for a Rubik's cube? Brief discussion of group theory, and how a group acts on the Rubik's cube.
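The position count itself comes from a standard orientation-and-parity argument. A minimal sketch of that count in Python (assuming the usual constraints: corner and edge permutations must have matched parity, only seven corner twists and eleven edge flips are independent):

```python
from math import factorial

# Count Rubik's cube positions from the standard constraints:
# - 8 corners and 12 edges may be permuted, but only with matched parity (hence // 2);
# - corner twists must multiply to the identity (3^7 of the 3^8 twist patterns);
# - edge flips likewise (2^11 of the 2^12 flip patterns).
positions = (factorial(8) * factorial(12) // 2) * 3**7 * 2**11
print(positions)  # 43252003274489856000, about 4.3 * 10^19
```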

Bezout's theorem

It's a very basic fact that two distinct, non-parallel lines intersect at exactly one point. But we can ask a similar question about other shapes. At how many points do two parabolas intersect? A circle and a line? The graphs of two cubics? A circle and a line may intersect at either 0 points (for example, the unit circle x^2 + y^2 = 1 and the line y = 2), 1 point (if the line is tangent to the circle), or 2 points. It's easy to see that we can arrange for two parabolas to intersect at four points: just consider the examples y = x^2 − 10 and x = y^2 − 10. Similarly, we can come up with two cubics that intersect at nine points. You might be noticing a pattern here: a parabola is defined as the set of solutions to a polynomial equation in two variables whose largest exponent is 2: we say that a parabola has degree 2. For the same reasons, we say that a line has degree 1, and a cubic has degree 3. Bezout's theorem then gives a very rough answer to our question: a curve of degree m and a curve of degree n intersect in at most mn points.

We have seen that sometimes the number of intersections is less than this, but Bezout's theorem, properly generalized, says that the number of intersections is in fact equal to mn: in the case of the circle and the line given earlier, there are exactly two solutions as predicted, if we allow x and y to be complex numbers. Similarly, a point of tangency is in some sense a "double intersection", and should count twice: if we nudged the line just a little bit, the point of intersection would turn into two distinct points. After we make these adjustments (counting complex solutions, changing the setting to projective space, and counting points of tangency multiple times), Bezout's theorem provides a simple and elegant answer to our original motivating question: curves of degree m and degree n intersect at exactly mn points!
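The circle-and-line case can be checked directly. A minimal Python sketch (the helper function below is purely illustrative) intersecting the unit circle x^2 + y^2 = 1 with horizontal lines y = c:

```python
import math

def circle_line_intersections(c):
    """Real intersection points of the unit circle x^2 + y^2 = 1 with the line y = c."""
    disc = 1 - c * c          # substitute y = c into the circle: x^2 = 1 - c^2
    if disc > 0:
        r = math.sqrt(disc)
        return [(-r, c), (r, c)]
    elif disc == 0:
        return [(0.0, c)]     # tangent line: a "double" intersection
    return []                 # no real points; Bezout's count is rescued by complex ones

print(len(circle_line_intersections(0.5)))  # 2 -- the full Bezout count (1 * 2 = 2)
print(len(circle_line_intersections(1)))    # 1 -- tangency, counted twice by Bezout
print(len(circle_line_intersections(2)))    # 0 -- both intersections are complex
```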

  1. Is the quintic solvable in radicals? And what in the world does this problem have to do with group theory?
  2. The ancient Greeks were fascinated by geometric constructions with straightedge and compass. They could bisect angles and take square roots, but they couldn't trisect angles or take cube roots. Why not?

Algebra and Number Theory

The development of the modern field of algebra has been deeply influenced by research in number theory. An example of a problem from number theory that has provided far-reaching motivation for work in pure algebra has been the question of unique factorization.

It's a well-known fact, the fundamental theorem of arithmetic, that every integer greater than 1 can be factored in a unique way as the product of prime numbers. It's natural to wonder whether this unique factorization holds in other simple rings, for example, the set of complex numbers whose real and imaginary parts are both integers (for example, 3 + 7i). These are the so-called Gaussian integers. There is a good notion of "prime" for them, and every Gaussian integer can be uniquely factored into primes! For example, 7+0i is a Gaussian prime, while 5+0i is not: 5 + 0i factors as (2 + i)(2 − i), and both 2 − i and 2 + i are prime. In general, it turns out that a Gaussian integer a + bi is prime if either one of a or b is zero, and the other is a prime congruent to 3 mod 4, or if a and b are nonzero, and a^2 + b^2 is a prime (in the usual sense).
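The stated criterion is easy to test on these examples. A minimal Python sketch (the helper names below are ours, not from any library):

```python
def is_rational_prime(n):
    """Ordinary primality by trial division (fine for small n)."""
    n = abs(n)
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_gaussian_prime(a, b):
    """The criterion stated above for a + bi to be a Gaussian prime."""
    if a == 0 or b == 0:
        p = abs(a) or abs(b)
        return is_rational_prime(p) and p % 4 == 3
    return is_rational_prime(a * a + b * b)

print(is_gaussian_prime(7, 0))   # True: 7 is prime and 7 = 3 (mod 4)
print(is_gaussian_prime(5, 0))   # False: 5 = (2 + i)(2 - i)
print(is_gaussian_prime(2, 1))   # True: the norm 2^2 + 1^2 = 5 is prime
print((2 + 1j) * (2 - 1j))       # (5+0j): the factorization of 5 checks out
```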

So unique factorization works well for the Gaussian integers, but what about other settings? An example similar to the Gaussian integers is the set of real numbers of the form a+b\sqrt{7}: note that (a+b\sqrt{7})(c+d\sqrt{7}) = (ac+7bd)+(ad+bc)\sqrt{7}, so the numbers of this form are closed under multiplication. Here too, and in many other cases, unique factorization holds. It was once assumed that \mathbb Z[\sqrt{d}] would always have unique factorization, but this turns out not to be the case. Consider the ring of complex numbers of the form a+b\sqrt{-5}. We can write 6 = 2 \cdot 3 = (1-\sqrt{-5})(1+\sqrt{-5}). A bit more work shows that both of these are prime factorizations, and they're not equivalent! This discovery necessitated the careful investigation of unique factorization, a purely algebraic notion that had not previously been seriously considered. A famous proposed proof of Fermat's Last Theorem failed precisely because it wrongly assumed unique factorization in rings like these; Ernst Kummer's analysis of that failure led to a revolution in algebra.
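The failure of unique factorization in this ring can be checked by hand, or by a few lines of Python modeling a + b\sqrt{-5} as the pair (a, b) (the functions below are illustrative helpers):

```python
# In Z[sqrt(-5)], represent a + b*sqrt(-5) as a pair (a, b); its norm is a^2 + 5*b^2.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)   # (a+b√-5)(c+d√-5), using √-5 * √-5 = -5

def norm(x):
    a, b = x
    return a * a + 5 * b * b

print(mul((1, 1), (1, -1)))   # (6, 0): (1+√-5)(1-√-5) = 6, matching 2 * 3 = 6
# Norms are multiplicative, so a proper factor of 2 (norm 4) would need norm 2,
# but a^2 + 5b^2 = 2 has no integer solutions -- hence 2 cannot be factored further.
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])  # [4, 9, 6, 6]
```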

Linear Algebra

If you throw a beach ball into the air, letting it spin in any way whatsoever, and then catch it, the net effect of all its spinning will be a single rotation about some axis. That is, there will be two "poles" that end up exactly where they started, and every other point will simply have rotated around them. What does this have to do with orthogonal operators? What does it have to do with the fact that every cubic equation has at least one real root?

If you do the same with a ring (e.g. a Hula Hoop), spinning it and letting it fall on its original "footprint", there will be no points that land exactly on their original position (except in the case in which it didn't move at all.) What does this have to do with the fact that quadratic equations are not guaranteed to have any real roots?

(Something about eigenvectors of distinct eigenvalues of an orthogonal/Hermitian matrix being orthogonal, can't think of a cute example just now.)

Geometry

In college math, "geometry" does not refer to more work on Euclidean high school geometry. Instead, college geometry includes a challenging field known as "differential geometry," which encompasses tensor calculus, Riemannian geometry, covariant vector fields, Minkowskian manifolds, and general relativity.

Some problems include:

  1. The Theorema Egregium -- the fact that a surface has an intrinsic quality (its Gaussian curvature) that remains unchanged no matter how the surface is "bent" without stretching.
  2. What happens if we remove some of the Euclidean postulates taught in high school? Specifically, the parallel postulate?

The Double Bubble Conjecture

You've probably noticed that when you blow a soap bubble, it quickly stretches into a nearly spherical shape. But why should a bubble tend towards a spherical conformation, instead of a cube or an icosahedron? According to physics, the bubble will tend toward a shape with the lowest energy, and the physical effect of surface tension is that this lowest energy is achieved when the bubble is in the shape with the least possible surface area enclosing a given volume. For example, if we want to enclose one cubic inch of air, the smallest surface that could contain it is a perfect sphere. The proof of this case, for a simple bubble, involves techniques from the mathematical study of differential geometry, but turns out not to be too difficult. Related problems in finding minimal surfaces, however, turn out to be quite fiendish, and involve computations of "mean curvature" related to the notions of curvature defined above. The double bubble conjecture is one such.

  • insert a picture here!

What is the smallest possible surface area for two bubbles stuck together, each enclosing some fixed volume? Thinking back to childhood experience, we know such bubbles end up as two partial spheres, connected along a third spherical cap (which is perfectly flat when the two volumes are equal). But there are many other possibilities: why couldn't it be two tetrahedra, connected along some wavy surface? Quite sophisticated mathematical techniques are needed to prove this seemingly obvious result. The equal-volume case was settled in 1995, with computer assistance, by a team including Roger Schlafly; the general case was not proved until the early 2000s, by Michael Hutchings, Frank Morgan, Manuel Ritoré, and Antonio Ros.

Topology

The Bridges of Konigsberg

The earliest roots of the modern study of topology (and graph theory) lie in Leonhard Euler's investigation of a famous puzzle from the 1730s. The town of Konigsberg (now Kaliningrad) was bisected by a river containing two large islands. Seven bridges connected the islands and the banks, as indicated in the accompanying image. The problem asked for a walk through the town crossing each bridge exactly once -- every bridge must be used, and none more than once.

Layout of Konigsberg

Euler proved that no such path exists, with the following simple and elegant argument: observe that the path one takes while on an island is irrelevant to the problem at hand; all that matters is the order in which the bridges are traversed. Every time a walker enters a landmass by one bridge, he leaves it by another. This means that the number of bridges touching each landmass must be even, except possibly for the landmass where the walk starts and the one where it ends (since the walker does not both enter and exit these). However, in Konigsberg, each of the four landmasses is served by an odd number of bridges! This means that no tour of the desired sort is possible.

This argument is fundamentally a "topological" one because of the recognition that the rigid shapes and geometry of the problem make no difference. If the shape of the islands were changed, or a bridge were moved but stayed between the same two islands, the solution of the problem would not be affected. The only thing that matters is the number of islands and which islands are connected by bridges. The modern field of topology is at its heart the study of geometric objects whose rigid shape is not important.
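Euler's parity argument can be checked mechanically. A minimal Python sketch (the landmass labels N, S, A, B are ours; the seven edges match the historical bridge layout):

```python
from collections import Counter

# The seven bridges of Konigsberg as edges between the four landmasses:
# the north (N) and south (S) banks, and the two islands A and B.
bridges = [("N", "A"), ("N", "A"), ("N", "B"),
           ("S", "A"), ("S", "A"), ("S", "B"),
           ("A", "B")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [land for land, d in degree.items() if d % 2 == 1]
print(dict(degree))   # {'N': 3, 'A': 5, 'B': 3, 'S': 3}
# An Eulerian walk exists only if at most two landmasses have odd degree:
print(len(odd) <= 2)  # False -- all four are odd, so no walk crosses each bridge once
```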


  1. Connection between topological spaces and groups
  2. Ham sandwich theorem (easy to state, harder to motivate the solution; possibly there's something better)

How can we tell the difference between a sphere and a torus?

This is the sort of question that motivated the development of the field of topology for many years. Recall that a torus is the shape of an inner tube, or the surface of a donut: it is a two-dimensional surface with one hole. If a torus were made out of clay, one could stretch and squish it, but it is intuitively clear that no amount of stretching could possibly deform it into a sphere: it's simply impossible to get rid of the hole! Similarly, a sphere made out of clay can't be deformed into a torus without ripping it. The question of why the two shapes are not the same is fundamentally a topological one: we are interested in some intrinsic geometric properties of these objects, but not their specific rigid shapes.

It turns out that methods from algebra provide the easiest way to prove that it is impossible to deform a sphere into a donut. Observe that a sphere has the following property: given any loop drawn on the sphere (say, a rubber band stretched around it somehow), it is possible to "contract" that loop to a point (i.e., to scrunch up the whole rubber band at one place). On the other hand, if a rubber band is placed on a donut in such a way that it loops around the central hole, there is no way to contract it to a single point. A crucial observation in the field of algebraic topology is that if a space can be obtained by squeezing and stretching a sphere, then it will still have the property that every loop is contractible. Since a torus doesn't, we conclude that it must be fundamentally different from a sphere. Arguments like this are the basis for the field of algebraic topology, and this problem of contracting loops in a space is studied using the fundamental group.

Analysis

  1. Riemann mapping theorem. State this in terms of conformal mappings of the unit disk and draw lots of pictures!

Fixed points

Often in mathematics or physics, it is interesting to think about what happens when we apply the same function repeatedly. For example, one might take a calculator, punch in a random number, and then hit the "cos" button repeatedly. What happens when we do this? If you try it, you might see something that looks like this: 1.3, 0.267499, 0.964435, 0.569881, 0.841965, 0.665998, 0.7863, 0.706469, 0.760659, 0.724382, 0.748909, ...

It looks like these are getting closer and closer to some fixed number, and if you hit the button a few more times, you'd discover that indeed these terms approach a constant around C = 0.739085. This is the value of C for which cos(C) = C. There's no way to solve for this C algebraically, but the intermediate value theorem shows that such a C must exist: the function cos(x) − x equals 1 at x = 0 and is negative at x = 1 (since cos(1) < 1), so it must cross zero somewhere in between. Generally, a value x is called a fixed point of a function f if f(x) = x, and it is interesting to study when a function must have a fixed point. Although many functions have no fixed points, there are theorems that give conditions under which a function is guaranteed to have one.
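Here is that calculator experiment as a few lines of Python:

```python
import math

x = 1.3
for _ in range(100):          # repeatedly press the "cos" button
    x = math.cos(x)

print(round(x, 6))                                 # 0.739085
print(math.isclose(math.cos(x), x, abs_tol=1e-9))  # True: cos(C) = C at the limit
```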

One of the most famous such theorems is the Brouwer fixed-point theorem, which says in its simplest form that any continuous function f: D^2 \to D^2 (from a disk to itself) has a fixed point. This fact has a very intuitive description: if we take a sheet of graph paper (so we can label each point with coordinates), crumple it up without tearing it, and set it on top of an identical flat sheet so that it lies entirely within the flat sheet's outline, then some point of the crumpled sheet must sit directly above the corresponding point of the flat sheet! While this sounds like a simple fact, proving it rigorously relies on the techniques of algebraic topology.

Another celebrated example of a fixed point theorem is known as Sharkovsky's theorem. In addition to fixed points, some functions have points for which repeating the function several times yields the same result: for example, a value of x for which cos(cos(cos(x))) = x is called a 3-periodic point of the function cosine. Sharkovsky's theorem is the surprising result that if a continuous function f has points of period 3, then it has points of all other periods! For example, the quadratic f(x) = -\frac{3}{2}x^2 + \frac{5}{2}x + 1 has f(0) = 1, f(f(0)) = f(1) = 2, and f(f(f(0))) = f(f(1)) = f(2) = 0. So 0 is a point of period 3 of f(x), and Sharkovsky's theorem implies that there are points of every other period. For example, there is some x of period 7, satisfying f(f(f(f(f(f(f(x))))))) = x. Solving this explicitly would mean solving a polynomial equation of degree 2^7 = 128, an extremely difficult task that is not guaranteed to have a solution expressible in terms of ordinary arithmetic operations and taking nth roots (cf. the Abel-Ruffini theorem). But Sharkovsky's theorem guarantees that some such x exists.
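One quadratic with a period-3 orbit is f(x) = -(3/2)x^2 + (5/2)x + 1 (chosen so that 0 maps to 1, 1 to 2, and 2 back to 0); a quick check with exact rational arithmetic:

```python
from fractions import Fraction

def f(x):
    # f(x) = -(3/2)x^2 + (5/2)x + 1, with exact rational arithmetic
    return Fraction(-3, 2) * x * x + Fraction(5, 2) * x + 1

orbit = [Fraction(0)]
for _ in range(3):
    orbit.append(f(orbit[-1]))

print(orbit)  # [Fraction(0, 1), Fraction(1, 1), Fraction(2, 1), Fraction(0, 1)]
# 0 -> 1 -> 2 -> 0: a point of period 3, so Sharkovsky's theorem applies.
```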

Mathematical Billiards

Imagine that you're playing a game of pool on a standard rectangular table, but with a twist: there's no friction, so after you hit the ball, it continues along its trajectory forever. Obviously if you hit the ball so that its path is perpendicular to one of the walls of the table, it will bounce straight back, then off the opposite wall, straight back again, etc.: the ball will follow a repeating pattern of bouncing between the two walls forever. Similarly, if you aim just right, you can hit the ball so that it moves in a diamond-shaped path forever, bouncing off the midpoint of each of the four walls in turn. A trajectory of the cue ball is called "periodic" if it's like one of these, in that the ball eventually ends up back where it started, moving in the same direction. A trajectory is said to have "period p" if it hits the wall exactly p times in the course of this repeating cycle. The two examples given above have periods of two and four, respectively. One question we might ask is whether there is an orbit of period p for every possible choice of p. Can you come up with a path on the standard billiards table that has period 3? If not, can you prove that one doesn't exist?
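The "unfolding" trick -- reflecting the table instead of the ball, so the trajectory becomes a straight line folded back into the square -- makes such orbits easy to simulate. A minimal Python sketch on a unit-square table (the 45-degree shot below is the diamond orbit just described):

```python
def tri(u):
    """Fold the real line into [0, 1] by repeated reflections."""
    u = u % 2.0
    return u if u <= 1.0 else 2.0 - u

def position(x0, y0, vx, vy, t):
    # A frictionless billiard in the unit square: bouncing off the walls is the
    # same as traveling in a straight line and folding the plane back into [0,1]^2.
    return (tri(x0 + vx * t), tri(y0 + vy * t))

# The "diamond" orbit: start mid-bottom at 45 degrees; one full cycle takes t = 2.
print(position(0.5, 0.0, 1.0, 1.0, 0.5))  # (1.0, 0.5) -- first wall bounce
print(position(0.5, 0.0, 1.0, 1.0, 2.0))  # (0.5, 0.0) -- back where it started
```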

As is often the case in mathematics, we attempt to generalize these results on periodic orbits to other situations. What if the billiards table is not rectangular, but triangular? It's true, but extremely difficult to prove, that given any triangular billiards table, there exists a periodic orbit of some period. The existence of periodic orbits is an extremely subtle question. This can be generalized even further, to ask whether all polygons admit periodic orbits. What about polyhedra? Can you imagine some periodic billiards orbits on a cube-shaped "table" (no gravity -- think billiards in space)?

The orbits that aren't periodic turn out to be just as interesting as those that are. Imagine that you place the cue ball at random on your billiards table, and smack it in some random direction. Chances are it won't end up in a periodic orbit -- putting the ball in a periodic orbit requires a very careful shot. So the ball will bounce around forever without retracing its path. But will it ever come back to the point where it started, perhaps moving in a different direction? The answer to this question is "no", but the Poincare Recurrence Theorem gives a somewhat satisfactory replacement: given any distance (e.g., an eighth of an inch) and any angle (e.g., two degrees), after some amount of time the ball will be within that distance of its starting point, and directed within that angle of its initial path! This is a very surprising fact.

While this problem seems extremely elementary, sophisticated mathematical techniques from a range of fields of geometry and analysis have been brought to bear on it. Even so, many easily-stated questions remain open.

  1. Periodic orbits in billiards in polyhedra. I think it should be possible to say some interesting things about this but with some real content. Possibly there's a better idea.

Hard problem: What does an integral really mean? Babble a bit about integrating the characteristic function of Q.

Probability and Statistics

  1. Discussion of Benford's law, application to recognizing fraud.
  2. Understanding the concept of the "random variable"
  3. Proving the Central Limit Theorem
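As a concrete illustration of Benford's law (using our own example data: the powers of 2), the leading digit d of naturally occurring numbers tends to appear with frequency log10(1 + 1/d) rather than uniformly -- which is why fabricated figures, with their too-even digits, can stand out in fraud detection:

```python
import math

# Tally the leading digits of 2^n for n = 1..1000 and compare with Benford's law.
counts = [0] * 10
for n in range(1, 1001):
    counts[int(str(2 ** n)[0])] += 1

for d in range(1, 10):
    observed = counts[d] / 1000
    predicted = math.log10(1 + 1 / d)   # Benford's predicted frequency for digit d
    print(d, observed, round(predicted, 3))
```

The leading digit 1 appears about 30% of the time, and 9 less than 5% of the time.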

Motivating problem: Deriving conclusions from rarely occurring events, such as multiple no-hitters by the same pitcher. Specifically, how many no-hitters must a pitcher toss before one can conclude from that evidence alone that he is a great pitcher?

Partial Differential Equations

If you take a uniform circular metal disk and hold the temperatures at the periphery to some fixed pattern (say, lighting fires at some places and applying ice at others), and give it plenty of time to reach equilibrium, can you calculate the final temperature at every point inside? What does this have to do with Fourier series? (Hint: It has everything to do with Fourier series. This is the problem that Joseph Fourier was studying when he invented the series.)
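A sketch of the computation (assuming the standard setup: u(r, θ) is the equilibrium temperature on the unit disk, g(θ) the prescribed boundary pattern): the functions r^n cos nθ and r^n sin nθ solve Laplace's equation, and matching the boundary values term by term gives

```latex
% Steady-state heat on the unit disk: u solves Laplace's equation \Delta u = 0
% inside, with u(1, \theta) = g(\theta) prescribed on the rim.
\[
g(\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos n\theta + b_n \sin n\theta \right)
\quad\Longrightarrow\quad
u(r, \theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty} r^n \left( a_n \cos n\theta + b_n \sin n\theta \right).
\]
```

Each boundary term is simply damped by a factor of r^n as one moves toward the center of the disk.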

Other fields

  1. Logic: still need a good problem
  2. Number theory: connected to the other fields above in many ways. Prime number theorem?
  3. Set theory: some discussion of axioms, maybe talk about AoC.
  4. Applied mathematics, emphasizing the solution of partial differential equations, which are essential to mechanical engineering

Related fields

Many other subjects are often considered parts of mathematics as well. Depending on specific undergraduate programs, a math major may or may not be required to take courses in these areas, but many of the same ideas are used in these subjects as in the others.

  1. Computer science: something about algorithmic complexity: maybe the fact that primality testing may be done in polynomial time?
  2. Computer science: How can Fourier transforms be used to multiply enormous integers much faster than normal methods?

...

Interplay

None of these fields exists in a vacuum, and there is rich interplay between them. If you insert "algebraic" or "differential" before the name of just about any branch of math, you get a more specialized, but important field. Motivate some of these connections:

  1. Algebra+topology: discussion of fundamental group.
  2. Analysis+topology: discuss conservative fields, and hint at de Rham cohomology without actually saying those words.
  3. Algebra+analysis: The smooth vector fields on a manifold form a Lie algebra, connecting solutions of partial differential equations with algebra.

So What?

After finishing a major in mathematics, most people don't keep working in pure math. But the ways of thinking and skills that the study of mathematics provides open up a wide range of career opportunities.

Specifically, mathematicians seeking to remain connected to their field pursue career opportunities in:

  • actuary or insurance work
  • defense-related work
  • market trading analysis for investment banks
  • mathematical modeling

Less-related fields that are also of some interest to mathematicians are:

  • computer programming
  • accounting
  • K-12 teaching

Some mathematicians enter entirely unrelated fields. Math majors have the highest average scores of students in any field on the LSAT and GMAT, the tests required for entrance into law school and business school [1]: the skills acquired in studying mathematics are invaluable in the logic section of the LSAT and for subsequent careers in law.

Further Reading

Numerous expository books cover the topics mentioned on this page, along with many others, in a very accessible way. Below are a few recommended books whose content is comparable to that of this article.

  1. Mathematics: The New Golden Age, by Keith Devlin. Accessible discussions of many important ideas.
  2. Professor Stewart's Cabinet of Mathematical Curiosities, by Ian Stewart. A similar book covering the exciting side of mathematics and requiring little background.
  3. Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, by John Derbyshire. Focuses on the famous Riemann hypothesis and related material.
  4. Hexaflexagons, Probability Paradoxes, and the Tower of Hanoi: Martin Gardner's First Book of Mathematical Puzzles and Games. A collection of articles by one of mathematics' foremost expositors.

