Calc3.2
Revision as of 01:13, January 25, 2010
Vectors and Vector Functions
Vectors
A vector is a mathematical object which has magnitude like a number, but also direction. For example, if I wanted to describe how a person is driving at any particular time, I would need two numbers: their speed, and some measure of their direction (the angle it makes with true north, for example). These two quantities together constitute the motion vector of that person.
We also use vectors to describe position: for example, to locate an object, it suffices to know how far it is from you, and in what direction.
Vectors are the bread and butter of multivariable calculus because they pack multiple variables into a single, easily manipulated quantity. Let's look at some more specific examples.
In our first descriptions, we used a coordinate system you should be familiar with, called "polar coordinates." If you aren't familiar with this system, we will review it shortly. When using Cartesian coordinates, vectors are very, very familiar. The point (x,y) has a position vector, or, put more simply, is the vector written <math>\langle x, y \rangle</math>. See illustration at right. Unlike variables which may be numbers, like <math>x, y, z,</math> etc., vectors are usually written <math>\vec{v}</math> or <math>\mathbf{v}</math>. While the latter is common in print and online, we will use the former since it is more easily duplicated in handwriting.

In three dimensional space we use three variables to write down a vector. In Cartesian coordinates, this would look like <math>\langle x, y, z \rangle</math>.

We can, if we wish, talk about just the length of the vector - this is its absolute value, or <math>|\vec{v}|</math>. For a vector written in Cartesian coordinates, the Pythagorean theorem tells us this is just <math>|\langle x, y \rangle| = \sqrt{x^2 + y^2}</math>.
Vectors can be added and subtracted, just like numbers. See the illustration below.

In Cartesian coordinates, this is particularly easy: <math>\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle = \langle x_1 + x_2,\, y_1 + y_2 \rangle</math> and <math>\langle x_1, y_1 \rangle - \langle x_2, y_2 \rangle = \langle x_1 - x_2,\, y_1 - y_2 \rangle</math>.

Multiplication by a number is also simple: the vector's direction is unchanged, but its length is multiplied by that number. Again, in Cartesian coordinates, this becomes simple: <math>c \langle x, y \rangle = \langle cx, cy \rangle</math>.
There are two ways to multiply a vector by another vector. In two dimensions, we can perform only one, which is called the dot product or, in more advanced terminology, the inner product. Given two vectors, this dot product is how far one vector goes in the direction of the other, using the other as the "unit" of length. It is defined <math>\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}| \cos \theta</math>, where <math>\theta</math> is the angle between <math>\vec{a}</math> and <math>\vec{b}</math>. As before, this takes a particularly simple form in Cartesian coordinates: <math>\langle x_1, y_1 \rangle \cdot \langle x_2, y_2 \rangle = x_1 x_2 + y_1 y_2</math>. This expression is similar for the three dimensional case: <math>\langle x_1, y_1, z_1 \rangle \cdot \langle x_2, y_2, z_2 \rangle = x_1 x_2 + y_1 y_2 + z_1 z_2</math>.
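For students who like to check such formulas with a computer, here is a short, illustrative Python sketch (the helper names are our own invention, not standard library functions) implementing vector addition, length, and the dot product, and confirming that the geometric form <math>|\vec{a}|\,|\vec{b}| \cos \theta</math> agrees with the Cartesian formula:

```python
import math

def add(a, b):
    # componentwise addition: <x1,y1> + <x2,y2> = <x1+x2, y1+y2>
    return [ai + bi for ai, bi in zip(a, b)]

def length(a):
    # Pythagorean theorem: |<x,y>| = sqrt(x^2 + y^2)
    return math.sqrt(sum(ai * ai for ai in a))

def dot(a, b):
    # Cartesian form of the dot product
    return sum(ai * bi for ai, bi in zip(a, b))

a, b = [3.0, 4.0], [4.0, -3.0]
assert add(a, b) == [7.0, 1.0]
assert length(a) == 5.0

# |a||b|cos(theta) should equal the Cartesian dot product
# (here theta is the difference of the two polar angles)
theta = math.atan2(4.0, 3.0) - math.atan2(-3.0, 4.0)
assert abs(length(a) * length(b) * math.cos(theta) - dot(a, b)) < 1e-12
```

Note that the two vectors chosen here are perpendicular, so both forms of the dot product come out zero.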
While we often write <math>\langle x, y, z \rangle</math> to describe a vector, we often find it more convenient to define three vectors <math>\hat{i}, \hat{j},</math> and <math>\hat{k}</math>, unit vectors which point along the positive <math>x, y,</math> and <math>z</math> axes, respectively.

With these vectors, we can write our original vector as <math>x\hat{i} + y\hat{j} + z\hat{k}</math>. It means exactly the same thing as <math>\langle x, y, z \rangle</math>, but is a little friendlier to the printed page. As an additional notational convenience, we often write <math>i, j, k</math> without the hats, since keeping track of all the <math>\hat{i}</math>'s and <math>\hat{j}</math>'s would get unwieldy very fast if we are working with several vectors.
As we are about to see, these conveniences can make the calculation of the cross product relatively simple. Hopefully, the student will remember the definition of the determinant of a matrix, because this will make calculating cross products much, much easier. For those who are unfamiliar with the idea, we will review the concept.

For a matrix <math>\begin{pmatrix} a & b \\ c & d \end{pmatrix}</math>, we define the determinant <math>\begin{vmatrix} a & b \\ c & d \end{vmatrix}</math> to be <math>ad - bc</math>. To compute the determinant of a larger square matrix, we choose any row of the matrix and take its leftmost entry, and multiply that number by the determinant of the matrix formed by eliminating that entry's row and column from the original matrix. From this, we subtract the next entry in our row times the determinant of the matrix formed by eliminating THAT entry's row and column. Then we add the next term, and so on and so forth until we have a complete expansion.

Below is an example of this technique, expanding along the first row.

<math>\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}</math>

Examine this equation, and make sure you understand how each of the matrices on the right hand side were formed. Once we have reached this point, we can use our definition of a 2x2 determinant to arrive at a value.
We now arrive at a means of calculating the cross product of two vectors. It turns out that the definition we gave of cross product amounts to

<math>\vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}</math>

Normally, we don't allow vectors (like <math>\hat{i}, \hat{j}, \hat{k}</math>) to be elements of a matrix or a determinant, but in this case, we make an exception because the formula is so useful and easy to remember. We will encounter a similar exception to the rules when we give an easy way to remember the formula for curl.
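The determinant recipe translates directly into code. The following sketch (illustrative only; `det2` is our own helper) expands the symbolic determinant along its first row to compute a cross product, exactly as described above:

```python
def det2(a, b, c, d):
    # 2x2 determinant: | a b ; c d | = a*d - b*c
    return a * d - b * c

def cross(a, b):
    # expand the symbolic determinant | i j k ; a1 a2 a3 ; b1 b2 b3 |
    # along its first row; each unit vector picks up a 2x2 minor
    a1, a2, a3 = a
    b1, b2, b3 = b
    return [det2(a2, a3, b2, b3),
            -det2(a1, a3, b1, b3),
            det2(a1, a2, b1, b2)]

# i x j = k
assert cross([1, 0, 0], [0, 1, 0]) == [0, 0, 1]

# the cross product is perpendicular to both of its factors
a, b = [1, 2, 3], [4, 5, 6]
c = cross(a, b)
assert sum(x * y for x, y in zip(a, c)) == 0
assert sum(x * y for x, y in zip(b, c)) == 0
```

The perpendicularity check at the end uses the dot product from the previous section.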
Vector Functions and Vector Fields
Before, we described scalar functions as taking a point in space and ascribing a number to it - for example, <math>f(x,y) = x^2 + y^2</math>. With our new notation, we can describe this as <math>f(\vec{r}) = |\vec{r}|^2</math>, which is a scalar function written with vectors.
There is another class of functions, though, which return not numbers but vectors. Mathematicians define a function as anything that maps some set (the "domain") to some set (the "range"). For the functions we have seen so far, the range is the real numbers (scalars).
When the range is a vector space, it can be called a vector function. But the most common types of vector functions are those for which the domain is physical space. In that case, the function is called a vector field.
Vector fields are useful for describing many phenomena, most notably flows, winds, heat dispersion, and electric and magnetic forces. For example, at every point in the ocean, the water is moving with both a speed and a direction. If we have some vector <math>\vec{r}</math> which can be used to describe points in the ocean, we can write a function <math>\vec{v}(\vec{r})</math>, which will give us the velocity vector of the water at <math>\vec{r}</math>. Very often, we use <math>\vec{r}</math> as a variable vector describing a position or location, <math>\vec{v}</math> for velocity, and later we'll encounter <math>\vec{a}</math> for acceleration.
Coordinate Systems
We're all very familiar with Cartesian coordinates in two dimensions, and once the student has learned the "right hand rule" and committed it to memory, they have learned all there is to know about the Cartesian system in three dimensions.

Hopefully, the student will be familiar with polar coordinates as well, but we shall review them.

In the plane, it is frequently useful to describe shapes, not by their distance along two axes, but by their straight-line distance to a point, and their angle to a fixed line. We use <math>(r, \theta)</math> instead of the usual <math>(x, y)</math>, where <math>r</math> is the distance and <math>\theta</math> is the angle.
We can easily move between Cartesian and polar coordinates with these equations:

<math>x = r \cos \theta \qquad y = r \sin \theta</math>

<math>r = \sqrt{x^2 + y^2} \qquad \theta = \arctan(y/x)</math>

(The arctangent formula must be adjusted for points with <math>x \le 0</math>, since the arctangent only returns angles between <math>-\pi/2</math> and <math>\pi/2</math>.)
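The conversion equations (x = r cos θ, y = r sin θ in one direction; r = √(x² + y²) with the quadrant-aware arctangent in the other) are easy to check with a round trip. An illustrative sketch, with helper names of our own choosing:

```python
import math

def polar_to_cartesian(r, theta):
    # x = r cos(theta), y = r sin(theta)
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    # r = sqrt(x^2 + y^2); atan2 handles the quadrant adjustment
    # that a bare arctangent of y/x would get wrong
    return math.hypot(x, y), math.atan2(y, x)

x, y = polar_to_cartesian(2.0, math.pi / 6)
assert abs(x - math.sqrt(3)) < 1e-12 and abs(y - 1.0) < 1e-12

# converting back recovers the original (r, theta)
r, theta = cartesian_to_polar(x, y)
assert abs(r - 2.0) < 1e-12 and abs(theta - math.pi / 6) < 1e-12
```

Using `atan2` rather than `atan(y/x)` is exactly the quadrant adjustment mentioned above.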
Cylindrical coordinates are just polar coordinates, with a z-axis. They are frequently useful for describing phenomena which have circular motion in a plane, but are moving in some direction (like a spiral).
More common than cylindrical coordinates are spherical coordinates, which use distance from a center point together with "latitude and longitude" to describe points in three dimensional space.
In the spherical coordinate system, a point is described with its distance to a central point, <math>\rho</math>, the angle made in the <math>xy</math> plane between the point's projection into that plane and the x-axis, <math>\phi</math>, and the angle made between the point and the z-axis, <math>\theta</math>.
As with polar coordinates, there are equations which allow us to transfer between spherical and Cartesian coordinates:

<math>x = \rho \sin \theta \cos \phi \qquad y = \rho \sin \theta \sin \phi \qquad z = \rho \cos \theta</math>
Limits of Multivariate Functions
- See the limit article.
The limit of a multivariate function is almost identical to the single-variable limit in concept, but complicated by the realization that (x,y) can approach the target point from any direction, not just the left or right hand sides.
The definition is not dissimilar from the single-variable case. The limit

<math>\lim_{(x,y) \to (a,b)} f(x,y) = L</math>

is defined so that if, for any tiny little number <math>\epsilon</math> you can think of, there is some distance <math>\delta</math> such that every point within <math>\delta</math> of <math>(a,b)</math> is taken by the function to a number within <math>\epsilon</math> of <math>L</math>.
Nevertheless, the kinds of phenomena we see in the multivariate case are very different. Note that while it is still possible to define a limit from a direction, we don't bother to do so - unlike the one-variable case, when a one-sided derivative told us half of the function's behavior at a point, in higher dimensions such a limit tells us almost nothing. Indeed, the only kind of phenomenon that these kinds of limits are capable of describing are "point discontinuities," which can be "fixed" just by defining the function to be the limit at that point.
Partial Derivatives
- See the proof page for details.
When dealing with functions of more than one variable, one has to be careful about what a "derivative" is. The operative concept is the partial derivative, which is the derivative of the function with respect to a given argument while holding all the other arguments constant. It is written like a derivative, but with a different "d" character.
If

<math>f(x,y) = x^2 y^3</math>

we have

<math>\frac{\partial f}{\partial x} = 2xy^3 \qquad \frac{\partial f}{\partial y} = 3x^2 y^2</math>

Here are some alternative notations for partial derivatives:

<math>\frac{\partial f}{\partial x} = f_x = \partial_x f</math>

One can take higher-order partial derivatives as well, such as

<math>\frac{\partial^2 f}{\partial x^2}</math>

or

<math>\frac{\partial^2 f}{\partial x \partial y}</math>

which means

<math>\frac{\partial}{\partial x} \left( \frac{\partial f}{\partial y} \right)</math>

A question that arises is: do mixed partial derivatives commute? That is, do we have:

<math>\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x}</math>
The answer is yes, if the derivatives are continuous. In problems that arise in practice, the derivatives are always continuous, and switching derivative order is a staple of work in multivariate calculus.
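A quick numerical experiment makes the commuting of mixed partials plausible. This illustrative sketch (the sample function and helper name are our own choices) approximates the mixed partial with a symmetric difference quotient, which treats x and y identically and so approximates both orders at once:

```python
def f(x, y):
    # sample smooth function; for f = x^2 y^3, the mixed partial
    # d2f/dxdy = d2f/dydx = 6 x y^2
    return x ** 2 * y ** 3

def mixed_partial(f, x, y, h=1e-4):
    # symmetric central-difference approximation of d2f/dxdy
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

# at (1,1) the exact value is 6
assert abs(mixed_partial(f, 1.0, 1.0) - 6.0) < 1e-5
# at (2,1) the exact value is 12
assert abs(mixed_partial(f, 2.0, 1.0) - 12.0) < 1e-5
```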
The Jacobian
Suppose you have a pair of functions <math>x(u,v)</math> and <math>y(u,v)</math>. We define the Jacobian of these functions to be the determinant of the matrix of their partial derivatives:

<math>J(x, y / u, v) = \begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix}</math>

We'll be using Jacobians soon, so it was important to introduce them now. They are particularly useful for describing changes in coordinate systems.
Introduction to Integrals in Multivariable Calculus
In multivariable calculus, and in the related areas of physics, we will be extending the notion of the integral well beyond the simple case of the Riemann Integral from elementary calculus. These extensions will consist of
- Integration over higher-dimensional regions, such as surfaces and volumes, with sophisticated ways of specifying these regions.
- Integration of functions, typically representing physical quantities, such as integration of density to get total mass, or integration of a gravitational or electric field along a path to get potential energy or electric potential.
- Integration of vector fields, for example, a magnetic field, across a surface to get the total flux.
- Integration in coordinate systems other than the familiar Cartesian coordinates, such that the physically correct result is always obtained, independently of the coordinate system used.
In all cases, the additional sophistication in the integrals will just involve things that are done to set up the mathematical problem. The actual integration will always be the same—one-dimensional Riemann integration of some function. That is, the integration that we will do will always reduce to the familiar definite integration from elementary calculus:

<math>\int_a^b f(x)\, dx</math>
Recall that the solution to such a problem is easy to state, though maybe not easy to solve: Find a function <math>g</math> such that <math>f</math> is the derivative of <math>g</math>:

<math>\frac{dg}{dx} = f(x)</math>
Then the integral is the difference between the values of <math>g</math> at <math>a</math> and <math>b</math>:

<math>\int_a^b f(x)\, dx = g(b) - g(a)</math>
We will generalize the familiar integration of mathematical functions defined on intervals of the real numbers, to integration over geometric curves, surfaces, and volumes. First, we will look at the intuitive meaning in terms of dividing a surface into tiny squares or a volume into tiny cubes. Then we will see how the process may actually be carried out in terms of the techniques of integral calculus.
Double and Triple Integrals
Definition of the Double Integral
The definition of the double integral follows much the same pattern as the definition in the single variable case. As in the single variable case, we divide the part of the domain we're interested in into pieces, and multiply the size of each piece by a value of the function on that piece. Adding up all our products gives us a Riemann sum, and taking the limit as the pieces' size goes to zero gives us the integral.

In the case of a function of two variables, say, x and y, the domain will naturally be the xy-plane, or some region R in it. A region could be a disc (a circle plus its interior), a square and its interior, or any other shape you can think of.
While technically it doesn't matter too much what shapes we divide the region into, or that they be the same size, etc., we're going to use as simple and straightforward a definition as we can, to make our lives easier when we try to actually compute these things.
So we divide R into n square subregions <math>R_k, 1\leq k\leq n</math>, and in each region pick a point <math>(x_k,y_k)\in R_k</math>. Note that all the <math>R_k</math> might not cover all of our original region R; but as the size of our squares decreases, they approximate the region better and better - in the limit, they will cover the whole thing.

The Riemann sum for a function f is then simple enough to construct, using <math>A</math> to denote the area of one of our squares: <math>S= \sum_{k=1}^n f(x_k,y_k)A</math>. If we use <math>dA=dx\,dy</math> to denote that we are integrating with respect to area, we can define our double integral:

<math>\iint_R f(x,y)\,dA = \lim_{n\to\infty,A\to 0} \sum_{k=1}^n f(x_k,y_k)A</math>
Computation of the Double Integral
It would be difficult to actually use this definition to compute a double integral. Instead, we find it convenient to use our familiar techniques from single variable calculus to evaluate double integrals.
Let's examine a rectangular region first, whose sides are aligned with the x and y axes, with the lower x bound at x=a, the upper x bound at x=b, the lower y bound at y=c, and the upper y bound at y=d. See illustration.
The double integral of a function couldn't be simpler: we just integrate the function twice, each time treating one variable like a constant (just like we do when we take partial derivatives).

<math>\iint_{Rectangle} f(x,y)\,dA = \int_c^d \left( \int_a^b f(x,y)\,dx \right) dy</math>

Let's do an example problem on the square with corners at (1,1), (1,2), (2,1), (2,2):

<math>\iint_{Square} \frac{1}{x+y}\, dA = \int_1^2 \left( \int_1^2 \frac{1}{x+y}\, dx \right) dy = \int_1^2 \left( \log(2+y) - \log(1+y) \right) dy = \int_1^2 \log \left( \frac{2+y}{1+y} \right) dy</math>
which is a problem of single variable calculus.
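The example integrates 1/(x+y) over the square with corners (1,1) and (2,2); finishing the remaining single-variable integral (by parts, or with a table) gives 10 ln 2 − 6 ln 3 ≈ 0.3398. Here is an illustrative Riemann-sum sketch, in the spirit of the definition above, that confirms the value numerically (the helper name is our own):

```python
import math

def midpoint_double_integral(f, a, b, c, d, n=500):
    # midpoint-rule Riemann sum over the rectangle [a,b] x [c,d],
    # using n x n equal subrectangles, each of area dx * dy
    dx, dy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        for j in range(n):
            y = c + (j + 0.5) * dy
            total += f(x, y)
    return total * dx * dy

approx = midpoint_double_integral(lambda x, y: 1.0 / (x + y), 1, 2, 1, 2)
exact = 10 * math.log(2) - 6 * math.log(3)   # value of the iterated integral
assert abs(approx - exact) < 1e-4
```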
Definition of the Triple Integral
By now, defining this should be a simple matter for the student. Since for a function of three variables the domain is three dimensional space, we'll be considering regions in space and dividing them into cubes, just as we divided two dimensional regions into squares before.
So we divide R into n cubic subregions <math>R_k, 1\leq k\leq n</math>, and in each region pick a point <math>(x_k,y_k,z_k)\in R_k</math>. Just like before, it may be that all the <math>R_k</math> might not cover all of our original region R; but as the size of our cubes decreases, they approximate the region better and better - in the limit, they will cover the whole thing.

The Riemann sum for a function f is then simple enough to construct, using <math>V</math> to denote the volume of one of our cubes: <math>S= \sum_{k=1}^n f(x_k,y_k,z_k)V</math>. If we use <math>dV=dx\,dy\,dz</math> to denote that we are integrating with respect to volume, we can define our triple integral:

<math>\iiint_R f(x,y,z)\,dV = \lim_{n\to\infty,V\to 0} \sum_{k=1}^n f(x_k,y_k,z_k)V</math>
We will not give rigorous proofs of the various theorems here. Such proofs are taught in college-level analysis and differential geometry courses, and often dwell upon esoteric issues of singularities and infinities.
The first thing we do is develop more sophisticated ways of measuring areas of plane figures. Ordinary integration can give us the area of a figure with a straight horizontal bottom edge, straight vertical left and right edges, and an arbitrary function giving the top edge. Suppose we have a circle of radius R centered at (A, B), as shown in the figure at the right.
(We have placed the circle completely in the upper-right quadrant so that we won't have to deal with visualizing negative areas—this is just for illustrative purposes. The technique works in all cases.)
The blue area is given by:

<math>\int_{A-R}^{A+R} \left( B - \sqrt{R^2 - (x-A)^2} \right) dx</math>

The blue and yellow areas combined is:

<math>\int_{A-R}^{A+R} \left( B + \sqrt{R^2 - (x-A)^2} \right) dx</math>

The area of the circle is the yellow area alone, which is the difference between the two:

<math>\int_{A-R}^{A+R} 2\sqrt{R^2 - (x-A)^2}\; dx</math>

Before going any further, we use the change-of-variable theorem to shift x (which also shows that A and B don't matter; the area is the same anywhere):

<math>\int_{-R}^{R} 2\sqrt{R^2 - x^2}\; dx</math>
(We'll have a lot more to say about the change-of-variable theorem later, but now we are just using the version from elementary calculus, that helps us calculate ordinary integrals.)
- Of course we could evaluate this integral, perhaps with a table of integrals, or a computer program, or a web site, getting:

<math>\int_{-R}^{R} 2\sqrt{R^2 - x^2}\; dx = \pi R^2</math>
Let's look at the integrand, and its geometrical interpretation, more carefully. <math>2\sqrt{R^2 - x^2}</math> is the height of a thin vertical strip of the circle, at a given x. We could write that height as an integral in its own right:

<math>2\sqrt{R^2 - x^2} = \int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}} 1\; dy</math>

This means that the area of the circle is:

<math>\int_{-R}^{R} \left( \int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}} 1\; dy \right) dx</math>
The geometrical interpretation is that we have divided the region (circle, in this case) into thin vertical strips abutting each other from left to right, and then divided each strip into tiny squares, abutting each other from bottom to top, and added everything up.
In general, the area of a region is:

<math>\int_{x_{min}}^{x_{max}} \left( \int_{y_{min}(x)}^{y_{max}(x)} 1\; dy \right) dx</math>
This is two nested integrals, or a double integral. The quantity in parentheses is the integrand of the outer integral. As such, it is a function of x, so its limits are permitted to depend on x. In more complicated problems, such as finding the total electric charge, we might replace the inner integrand ("1" in the present example) with the density of electric charge, which could be a function of x and y.
This is the essence of how double (and later, triple) integrals are used to calculate areas, volumes, and other things over 2 or 3-dimensional regions. The big parentheses are usually omitted, but their implicit meaning must be followed. The innermost integral, and its limits of integration, relate to the innermost "d" symbol, and so on.
A triple integral could be written:

<math>\int_{x_{min}}^{x_{max}} \int_{y_{min}(x)}^{y_{max}(x)} \int_{z_{min}(x,y)}^{z_{max}(x,y)} f(x,y,z)\; dz\; dy\; dx</math>
Each successive integral may use the values of all the outer integration variables in specifying its limits.
Exercise [belongs in a separate section, no doubt]: Set up the triple integral giving the volume of a sphere of radius R, and evaluate same.
This use of double integrals to calculate areas may seem like excessive make-work, and using "1" as the integrand may seem boring, but this sort of analysis is the basis for everything we will do.
We could have performed the double integration in the other order, dividing the circle into horizontal strips first, and then subdividing those, so that dx is the "inner" integral and dy the "outer" one. This would have gotten the same result. The theorem that says that this is so is Fubini's theorem. Proving it is outside the scope of this course, or of conservapedia.
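Fubini's theorem is easy to test numerically. The sketch below (illustrative only; `midpoint` is our own one-dimensional helper) evaluates a sample iterated integral with dx innermost and then with dy innermost, and both orders agree with the exact value:

```python
def midpoint(g, a, b, n=200):
    # one-dimensional midpoint rule for the integral of g on [a,b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y * y

# integrate over x first (inner), then over y (outer)
v1 = midpoint(lambda y: midpoint(lambda x: f(x, y), 0, 2), 0, 1)
# integrate over y first (inner), then over x (outer)
v2 = midpoint(lambda x: midpoint(lambda y: f(x, y), 0, 1), 0, 2)

exact = 2.0 / 3.0   # (x^2/2 from 0 to 2) * (y^3/3 from 0 to 1)
assert abs(v1 - exact) < 1e-4 and abs(v2 - exact) < 1e-4
assert abs(v1 - v2) < 1e-9
```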
Now, given that Fubini's theorem says that the order of the integrations doesn't matter, we can "abstract away" that order, say that we are really just "integrating over a region", and use a more abstract notation. Once we have defined a 2-dimensional region R, we can just write two successive integral signs with the subscript R instead of the written-out limits of integration, and use the symbol "dA" to mean "infinitesimal piece of area". (Recall that "dx" means, in an informal way, "infinitesimal bit of length".) The integral would look like:

<math>\iint_R 1\; dA</math>

or, more generally:

<math>\iint_R f(\mathbf{r})\; dA</math>

where <math>\mathbf{r}</math> is some way of indicating a point in the region. We will always use a coordinate system, so, for example, if we are using polar coordinates, the integral might look like:

<math>\iint_R f(r, \theta)\; dA</math>

For triple integrals over a volume, we do something similar:

<math>\iiint_V f(\mathbf{r})\; dV</math>
The symbols dA and dV are often called the "area element" and "volume element", respectively.
Change of Variable
The change-of-variable theorem is an extremely important tool of integral calculus. It is even more important in multivariate calculus, since it is central to the concepts of coordinate systems and coordinate system change operations. It is implicit in nearly all integrals over curves, surfaces, and volumes.
We would like to be able to perform the integrations of the previous section in coordinate systems other than plain Cartesian coordinates. For example, the integral to find the area of a circle would be easier to set up if we were using polar coordinates, because a circle is trivial to describe in polar coordinates. To do this, we need to revisit the change-of-variable theorem from elementary calculus, and extend it to higher dimensions.
Basically, the change-of-variable theorem of elementary calculus says that algebraic substitution and manipulations actually work, even when the manipulations involve the "d" symbol. (Remember that things like "dx" are infinitesimals; their actual value would have to be considered to be zero. By a miracle of calculus notation, we can manipulate them anyway.)
Here is a simple example, not (yet) involving integrals. Suppose u is a function of y, and y is a function of x, as follows:

<math>u = y^2 \qquad y = \sin x</math>

We know that

<math>\frac{du}{dy} = 2y \qquad \frac{dy}{dx} = \cos x</math>

Can we get <math>\frac{du}{dx}</math> from this? The change-of-variable theorem says that we can multiply the derivatives and "cancel" the dy:

<math>\frac{du}{dx} = \frac{du}{dy}\,\frac{dy}{dx} = 2y \cos x</math>

We can remove y from this, getting the answer in terms of x:

<math>\frac{du}{dx} = 2 \sin x \cos x</math>

This is the same answer as the chain rule would have given:

<math>\frac{du}{dx} = u'(y(x))\, y'(x) = 2 \sin x \cos x</math>

It's easy to see why this is true—the multiplication in the chain rule is just the multiplication from which we canceled the dy.
We can do the same thing for integrals, observing why the "dx" symbol in an integral is such an important part of the notation that integrals use. Given:

<math>\int_0^1 2x \cos(x^2)\; dx</math>

we introduce a new variable u:

<math>u = x^2</math>

so the integral is

<math>\int_{x=0}^{x=1} \cos(u)\; 2x\; dx</math>

We have

<math>\frac{du}{dx} = 2x</math>

So, taking the usual liberties with the notation:

<math>du = 2x\; dx</math>

so the integral is:

<math>\int_{u=0}^{u=1} \cos(u)\; du = \sin(1)</math>

Tricks like this are staples of integration technique.
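A numeric check of this kind of substitution is reassuring. The illustrative sketch below evaluates the integral of 2x cos(x²) from 0 to 1 both before and after the substitution u = x² (which turns it into the integral of cos u from 0 to 1), and both agree with sin 1:

```python
import math

def midpoint(g, a, b, n=2000):
    # one-dimensional midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# the original integral, in terms of x
lhs = midpoint(lambda x: 2 * x * math.cos(x ** 2), 0.0, 1.0)
# after the substitution u = x^2, du = 2x dx, with limits u = 0 to 1
rhs = midpoint(lambda u: math.cos(u), 0.0, 1.0)

assert abs(lhs - math.sin(1.0)) < 1e-6
assert abs(rhs - math.sin(1.0)) < 1e-6
```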
What we have really done is a change of coordinate system. Whenever we change coordinate systems while calculating integrals, we are actually using the change-of-variable theorem.
Now consider what happens when we try to change the previous integral over a circle in Cartesian coordinates to the equivalent integral in polar coordinates. The earlier integral was:

<math>\int_{-R}^{R} \int_{-\sqrt{R^2-x^2}}^{\sqrt{R^2-x^2}} 1\; dy\; dx</math>

That would change to:

<math>\int_0^{2\pi} \int_0^{R} (\,?\,)\; dr\; d\theta</math>

The limits of integration are, as expected, easy to set up. But we need something that is the equivalent of <math>\frac{du}{dx}</math> to use as the coordinate-change multiplier.
The multiplier to use is called the Jacobian of the coordinate change. It is the determinant of the matrix of partial derivatives. That matrix is:

<math>\begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{pmatrix}</math>
The Jacobian is written <math>J(x, y / r, \theta)</math>. Working out the derivatives and the determinant, its value in this case is r.
So the area of a circle is:

<math>\int_0^{2\pi} \int_0^{R} r\; dr\; d\theta = 2\pi \cdot \frac{R^2}{2} = \pi R^2</math>
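We can confirm this with a Riemann sum carried out directly in polar coordinates, where each little piece of area contributes r dr dθ (an illustrative sketch; the function name is our own):

```python
import math

def polar_area(R, n=400):
    # midpoint Riemann sum of the Jacobian r over 0 <= r <= R,
    # 0 <= theta <= 2*pi: each piece contributes r * dr * dtheta
    dr, dtheta = R / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            total += r * dr * dtheta
    return total

assert abs(polar_area(1.0) - math.pi) < 1e-9
assert abs(polar_area(2.0) - 4 * math.pi) < 1e-8
```

Leaving out the factor of r (the Jacobian) would give 2πR instead of πR², which is one way to remember that the multiplier is not optional.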
Integration Over Parametrically Defined Regions
A variation of this method is used when integrating over curves in 2 or 3-dimensional spaces, or surfaces in 3-dimensional space. Recall that a parametric description of such a thing is closely related to a change of coordinate system, but with different numbers of coordinates in the two systems. This means that the matrix of partial derivatives is not square, so it has no determinant.
The general problem, in arbitrary dimensions, goes deeply into advanced topics of differential geometry. We will state the methods, without proof, for the cases of parametrically defined curves and surfaces.
Curve Integrals
For a curve in a 2-dimensional plane or 3-dimensional volume, let the parameter (the single coordinate in the 1-dimensional curve) be denoted by t. Work out the partial derivatives, and form the vector

<math>\left( \frac{\partial x}{\partial t}, \frac{\partial y}{\partial t} \right)</math>

for a curve in a plane, or

<math>\left( \frac{\partial x}{\partial t}, \frac{\partial y}{\partial t}, \frac{\partial z}{\partial t} \right)</math>

for a curve in a volume. Let J (not really the Jacobian, but we treat it that way) be the norm of that vector, that is, the square root of the sum of the squares of its 2 or 3 components.

Then

<math>\int f\, J\; dt</math>

is the integral over the curve, in terms of the parameter t.
Example:

Suppose a circle of radius R is described in terms of a parameter t, as

<math>x = R \cos t \qquad y = R \sin t</math>

We have

<math>J = \sqrt{(-R \sin t)^2 + (R \cos t)^2} = R</math>

So, to integrate any function over the full circle (with t running from 0 to <math>2\pi</math>), we have:

<math>\int_0^{2\pi} f\, R\; dt</math>

The length of the curve (that is, the circumference of the circle) is:

<math>\int_0^{2\pi} R\; dt = 2 \pi R</math>
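An illustrative sketch confirming the circumference numerically, by summing J dt along the parameterized circle (the function name is our own):

```python
import math

def circle_circumference(R, n=10000):
    # Riemann sum of J dt, where J is the norm of (dx/dt, dy/dt)
    # for the parameterization x = R cos t, y = R sin t
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += math.hypot(-R * math.sin(t), R * math.cos(t)) * dt
    return total

assert abs(circle_circumference(1.0) - 2 * math.pi) < 1e-9
assert abs(circle_circumference(3.0) - 6 * math.pi) < 1e-8
```

Here the norm works out to exactly R at every t, as in the hand computation above; the code computes it from the derivatives anyway, since that is what one would do for a less symmetric curve.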
Surface Integrals
For a surface in a 3-dimensional volume, let the parameters be u and v. Work out the six partial derivatives, and form these two vectors:

<math>\left( \frac{\partial x}{\partial u}, \frac{\partial y}{\partial u}, \frac{\partial z}{\partial u} \right) \qquad \left( \frac{\partial x}{\partial v}, \frac{\partial y}{\partial v}, \frac{\partial z}{\partial v} \right)</math>
Calculate their cross product, and let J be the norm of that. Use the cross product formula.
Then
is the integral over the surface, in terms of the parameters u and v.
Unless one is careful to choose the order of the parameters so the "outward" direction of the surface follows a right-hand rule, and use that order in forming the cross product, there may be ambiguity in the sign of J. In practice, one just figures out what to do.
Example:

Parameterize a sphere of radius <math>R</math> in terms of <math>\theta</math> and <math>\phi</math>. (These are two of the same coordinates that are used in 3-dimensional spherical coordinates—<math>r</math>, <math>\theta</math>, and <math>\phi</math>—but <math>r</math> is held constant at <math>R</math>, so it is no longer a coordinate.)
::<math>x = R\sin\theta\cos\phi \qquad y = R\sin\theta\sin\phi \qquad z = R\cos\theta</math>
::<math>\frac{\partial x}{\partial \theta} = R\cos\theta\cos\phi \qquad \frac{\partial x}{\partial \phi} = -R\sin\theta\sin\phi</math>
etc. Working out the cross product and taking its norm gives <math>J = R^2\sin\theta</math>.

So an integral over the entire surface would be:
::<math>\int_0^{2\pi}\int_0^{\pi} f \, R^2\sin\theta \, d\theta \, d\phi</math>
The area of a sphere is:
::<math>\int_0^{2\pi}\int_0^{\pi} R^2\sin\theta \, d\theta \, d\phi = 4\pi R^2</math>
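For this sphere parameterization the norm of the cross product works out to <math>J = R^2\sin\theta</math>, and that can be checked by summation. The sketch below (the function name <code>sphere_area</code> and the grid size are our own) sums <math>J \, d\theta \, d\phi</math> over the parameter rectangle:

```python
import math

def sphere_area(R, n=400):
    """Midpoint sums of J = R**2 * sin(theta) over the parameter
    rectangle 0 <= theta <= pi, 0 <= phi <= 2*pi."""
    dth = math.pi / n
    dph = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        J = R * R * math.sin(th)      # norm of the cross product
        for j in range(n):
            total += J * dth * dph
    return total

print(sphere_area(2.0))  # close to 4*pi*2**2 ≈ 50.27
```

Note that the parameter rectangle is flat even though the surface is curved; all of the geometry is carried by <math>J</math>.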
To find, for example, the area of a sphere only from <math>\theta = 0</math> to <math>\theta = \theta_0</math> (that is, the area of the Earth north of latitude <math>90^\circ - \theta_0</math>), we have
::<math>\int_0^{2\pi}\int_0^{\theta_0} R^2\sin\theta \, d\theta \, d\phi = 2\pi R^2(1 - \cos\theta_0)</math>
But how useful is such a technique, when it is so easy to think of regions that are ''not'' rectangles? Let's move on to consider a more general kind of region.
Suppose a region ''R'' happens to be such that we can identify its lowest extreme ''x'' value, which we'll call ''a'', and its highest ''x'' value, which we'll call ''b'', and that a vertical line drawn through the region intersects the boundary in exactly two points everywhere except those extremes, where a vertical line only intersects the boundary once. Then there is some function of ''x'' which will give the ''y'' value of the lower of all of those intersections of the boundary with vertical lines, and some function which will give the ''y'' value of the upper of those intersections. Let's call these functions <math>g_1(x)</math> and <math>g_2(x)</math> respectively. See the illustration to the right.
A region like this is very easily integrated, just like the rectangle. We simply use these boundary functions in place of the ''y''-extremes:
::<math>\iint_R f \, dA = \int_a^b\int_{g_1(x)}^{g_2(x)} f(x, y) \, dy \, dx</math>
Let's do an example of this. The region we'll be integrating over is the unit disc, so the ''x'' minimum and maximum values will be <math>a = -1, b = 1</math> and the ''y'' functions will be
::<math>g_1(x) = -\sqrt{1 - x^2} \qquad g_2(x) = \sqrt{1 - x^2}</math>
The integral of a function <math>f(x, y)</math> over this region is then
::<math>\int_{-1}^{1}\int_{-\sqrt{1 - x^2}}^{\sqrt{1 - x^2}} f(x, y) \, dy \, dx</math>
We have turned a double integral into a single-variable integral that is solved more with bookkeeping than intelligence: carry out the inner ''y'' integral first, treating ''x'' as a constant, and then integrate the result with respect to ''x''.
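The iterated setup over the unit disc can be mirrored directly in code. This is a minimal sketch (the name <code>disc_integral</code> and the grid size are our own) whose two nested loops correspond to the two nested integrals:

```python
import math

def disc_integral(f, n=400):
    """Iterated midpoint sums over the unit disc: x runs from -1 to 1,
    and y runs between the boundary curves -sqrt(1-x**2) and sqrt(1-x**2)."""
    dx = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        top = math.sqrt(1.0 - x * x)     # upper boundary; the lower one is -top
        dy = 2.0 * top / n
        inner = 0.0
        for j in range(n):               # inner integral, x held fixed
            y = -top + (j + 0.5) * dy
            inner += f(x, y) * dy
        total += inner * dx
    return total

print(disc_integral(lambda x, y: 1.0))  # close to pi, the disc's area
```

Integrating the constant function 1 gives the area of the region, which is a handy sanity check for any region you set up this way.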
==Computation of the Triple Integral==
In our computation of the double integral, we considered only regions in the plane such that the "top" and "bottom" of the boundary could be described as functions of ''x''. We then pointed out that all regions could be broken up into such shapes, and we could just perform each evaluation separately. It should come as no surprise, then, that we consider 3D regions where the "top" and "bottom" of the surface enclosing the region can be described with functions of the form <math>z = g(x, y)</math>; in other words, every line which is parallel to the ''z''-axis and intersects our region ''R'' intersects its surface boundary in at most two points.
Suppose ''R'' is such a region in space. Now, suppose that the "top" of the boundary is described by <math>z = g_2(x, y)</math>, and the "bottom", by <math>z = g_1(x, y)</math>. This is where it will get a bit tricky: for a given value of ''x'', let <math>h_2(x)</math> be the greatest value of ''y'' such that <math>(x, y, z)</math> is in ''R'' for any ''z'', and <math>h_1(x)</math>, the least value of ''y''. Finally, let the minimum ''x'' value for a point in ''R'' be ''a'', and the maximum, ''b''.

Then
::<math>\iiint_R f \, dV = \int_a^b\int_{h_1(x)}^{h_2(x)}\int_{g_1(x, y)}^{g_2(x, y)} f(x, y, z) \, dz \, dy \, dx</math>
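The three nested integrals become three nested loops. The following sketch (the function name and grid size are our own; the region chosen is the unit ball, where all the boundary functions are explicit square roots) approximates a triple integral this way:

```python
import math

def solid_integral(f, n=80):
    """Iterated midpoint sums over the unit ball: z runs between
    +/- sqrt(1-x**2-y**2), y between +/- sqrt(1-x**2), x in [-1, 1]."""
    dx = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        ytop = math.sqrt(1.0 - x * x)          # h2(x); h1(x) = -ytop
        dy = 2.0 * ytop / n
        for j in range(n):
            y = -ytop + (j + 0.5) * dy
            ztop = math.sqrt(max(0.0, 1.0 - x * x - y * y))  # g2(x, y)
            dz = 2.0 * ztop / n
            for k in range(n):
                z = -ztop + (k + 0.5) * dz
                total += f(x, y, z) * dz * dy * dx
    return total

print(solid_integral(lambda x, y, z: 1.0))  # close to 4*pi/3 ≈ 4.19
```

As with the double integral, integrating the constant 1 returns the volume of the region.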
==Polar Coordinates==
The definition of a double integral is always the same; we need only change <math>f(x, y)</math> for <math>f(r\cos\theta, r\sin\theta)</math>. But the area element <math>dA</math>, which before was <math>dx \, dy</math>, is going to be slightly different.
In Cartesian coordinates, it was a simple matter to describe the area of a small rectangle - it was simply the length times the width, hence the <math>dx \, dy</math>. But what is the area of a little piece of a region in polar coordinates? The ''r'' sides of such a "rectangle" will be <math>dr</math>, since ''r'' is measured in length. But <math>d\theta</math> is measured in angles, which is just a number!

Remember the formula for the circumference of a circle of radius ''r'': <math>C = 2\pi r</math>. Therefore, the length of an arc of angle <math>d\theta</math> at radius ''r'' is going to be <math>r \, d\theta</math>. Multiplying the length and width together gives the area element in polar coordinates: <math>dA = r \, dr \, d\theta</math>. Hence, the double integral in polar coordinates is
::<math>\iint_R f \, dA = \iint_R f(r\cos\theta, r\sin\theta) \, r \, dr \, d\theta</math>
For spherical coordinates in three dimensions, we can rely heavily on what we've just learned. Since the circumference of a circle in three dimensions is the same as in two, the same reasoning tells us that the sides of a small "box" have lengths <math>dr</math>, <math>r \, d\theta</math>, and <math>r\sin\theta \, d\phi</math>, and hence
::<math>dV = r^2\sin\theta \, dr \, d\theta \, d\phi</math>
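The spherical volume element can be sanity-checked by computing the volume of a ball. This sketch (the name <code>ball_volume</code> and the grid size are our own; the <math>\phi</math> integral is folded in as a flat factor of <math>2\pi</math>, since the element does not depend on <math>\phi</math>) sums <math>r^2\sin\theta</math> over the remaining two coordinates:

```python
import math

def ball_volume(R, n=400):
    """Sum the spherical volume element r**2 * sin(theta) * dr*dtheta*dphi
    over 0 <= r <= R and 0 <= theta <= pi; the phi integral contributes
    a constant factor of 2*pi."""
    dr = R / n
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            total += r * r * math.sin(th) * dr * dth * (2.0 * math.pi)
    return total

print(ball_volume(1.0))  # close to 4*pi/3 ≈ 4.19
```

This agrees with the iterated Cartesian computation of the same volume, but the integrand here is smooth everywhere, which is exactly why spherical coordinates are the better tool for this region.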
=Problems=
==Review Problems==
1. Find an antiderivative of
2. Find the Taylor series of at 0. What do you notice about this sum?
3. Sketch level curves of in the xy plane.
4. Describe the level surface of .
==Main Problems==
1. If and , find . Moving to three-dimensional space now, what can you say about before you even compute it? Carry out the computation. Was your prediction correct?

2. Let . Find a spherical coordinate expression for .
3. Let . Describe in words. Find a formula for in Cartesian coordinates.
4. Find the partial derivatives and of

5. Find the partial derivatives and of .
==Challenging Problems==
1. Prove that the geometric definition of the dot product matches our coordinate expression.

2. Prove that the geometric definition of the cross product matches our coordinate expression.

3. Suppose <math>f</math> has continuous derivatives of all orders. Prove <math>f_{xy} = f_{yx}</math>.