Calc3.5


Introduction to Integrals in Multivariable Calculus

In multivariable calculus, and in the related areas of physics, we will be extending the notion of the integral well beyond the simple case of the Riemann Integral from elementary calculus. These extensions will consist of

  • Integration over higher-dimensional regions, such as surfaces and volumes, with sophisticated ways of specifying these regions.
  • Integration of functions, typically representing physical quantities, such as integration of density to get total mass, or integration of a gravitational or electric field along a path to get potential energy or electric potential.
  • Integration of vector fields, for example, a magnetic field, across a surface to get the total flux.
  • Integration in coordinate systems other than the familiar Cartesian coordinates, such that the physically correct result is always obtained, independently of the coordinate system used.

In all cases, the additional sophistication in the integrals will just involve things that are done to set up the mathematical problem. The actual integration will always be the same: one-dimensional Riemann integration of some function. That is, the integration that we will do will always reduce to the familiar definite integration from elementary calculus:

\int_a^bf(x) dx

Recall that the solution to such a problem is easy to state, though maybe not easy to solve: Find a function g(x) such that f is the derivative of g:

f(x) = \frac{dg}{dx}

Then the integral is the difference between the values of g at a and b:

\int_a^bf(x) dx = g(b) - g(a)

We will generalize the familiar integration of mathematical functions defined on intervals of the real numbers to integration over geometric curves, surfaces, and volumes. First, we will look at the intuitive meaning in terms of dividing a surface into tiny squares or a volume into tiny cubes. Then we will see how the process may actually be carried out in terms of the techniques of integral calculus.

Double and Triple Integrals

Definition of the Double Integral

The definition of the double integral follows much the same pattern as the definition in the single variable case. As in the single variable case, we divide the part of the domain we're interested in into pieces, and multiply the size of each piece by a value of the function on that piece. Adding up all our products gives us a Riemann sum, and taking the limit as the pieces' size goes to zero gives us the integral.

In the case of a function of two variables, say, x and y, the domain will naturally be the xy-plane, or some region R in it. A region could be a disc (a circle plus its interior), a square and its interior, or any other shape you can think of.

While technically it doesn't matter too much what shapes we divide the region into, or that they be the same size, etc., we're going to use as simple and straightforward a definition as we can, to make our lives easier when we try to actually compute these things.

As the size of the squares decreases, they approximate the region better and better.

So we divide R into n square subregions R_k, 1\leq k\leq n, and in each region pick a point (x_k,y_k)\in R_k. Note that the R_k might not cover all of our original region R; but as the size of our squares decreases, they approximate the region better and better. In the limit, they will cover the whole thing.

The Riemann sum for a function f is then simple enough to construct, using A to denote the area of one of our squares: S= \sum_{k=1}^n f(x_k,y_k)A. If we use dA = dxdy to indicate that we are integrating with respect to area, we can define our double integral:

\iint_R f(x,y)dA = \lim_{n\to\infty,A\to 0} \sum_{k=1}^n f(x_k,y_k)A
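Here is a quick numerical sketch of this definition in Python (an illustrative aside; it assumes NumPy, and the choice of midpoints as sample points and of n = 400 is arbitrary):

import numpy as np

def riemann_double(f, a, b, c, d, n=400):
    # Sample f at the midpoint of each of n*n small squares and
    # multiply by the area A of one square, as in the definition.
    xs = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    ys = np.linspace(c, d, n, endpoint=False) + (d - c) / (2 * n)
    X, Y = np.meshgrid(xs, ys)
    A = ((b - a) / n) * ((d - c) / n)   # area of one small square
    return np.sum(f(X, Y)) * A

# Example: 1/(x+y) over the square [1,2] x [1,2], worked by hand below
print(riemann_double(lambda x, y: 1.0 / (x + y), 1, 2, 1, 2))   # approx 0.3398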

Computation of the Double Integral

It would be difficult to actually use this definition to compute a double integral. Instead, we find it convenient to use our familiar techniques from single-variable calculus to evaluate double integrals.

Let's examine a rectangular region first, whose sides are aligned with the x and y axes, with the lower x bound at x=a, the upper x bound at x=b, the lower y bound at y=c, and the upper y bound at y=d. See illustration.

The double integral of a function over such a rectangle couldn't be simpler: we just integrate the function twice, each time treating one variable like a constant (just like we do when we take partial derivatives).

\iint_{Rectangle} f(x,y)dA = \int_c^d  \left({   \int_a^b f(x,y)dx  }\right) dy.

Let's do an example problem on the square with corners at (1,1), (1,2), (2,1), (2,2):

\iint_{Square} \frac{1}{x+y} dA = \int_1^2 \left({\int_1^2 \frac{1}{x+y} dx}\right)dy = \int_1^2 \left({ \log(2+y) - \log(1+y) }\right) dy = \int_1^2 \log \left({ \frac{2+y}{1+y}  }\right)dy

which is a problem of single variable calculus.
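(As a purely illustrative check, assuming SciPy and NumPy are available, SciPy's quad routine can finish this single-variable problem numerically:)

from scipy.integrate import quad
import numpy as np

# Evaluate the remaining single-variable integral numerically
value, err = quad(lambda y: np.log((2 + y) / (1 + y)), 1, 2)
print(value)   # approx 0.3398, which is log(1024/729)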

Definition of the Triple Integral

By now, defining this should be a simple matter for the student. Since for a function of three variables the domain is three-dimensional space, we'll be considering regions in space and dividing them into cubes, just as we divided two-dimensional regions into squares before.

So we divide R into n cubic subregions R_k, 1\leq k\leq n, and in each region pick a point (x_k,y_k,z_k)\in R_k. Just like before, the R_k might not cover all of our original region R; but as the size of our cubes decreases, they approximate the region better and better. In the limit, they will cover the whole thing.

The Riemann sum for a function f is then simple enough to construct, using V to denote the volume of one of our cubes: S= \sum_{k=1}^n f(x_k,y_k,z_k)V. If we use dV = dxdydz to indicate that we are integrating with respect to volume, we can define our triple integral:

\iiint_R f(x,y,z)dV = \lim_{n\to\infty,V\to 0} \sum_{k=1}^n f(x_k,y_k,z_k)V

We will not give rigorous proofs of the various theorems here. Such proofs are taught in college-level analysis and differential geometry courses, and often dwell upon esoteric issues of singularities and infinities.

The first thing we do is develop more sophisticated ways of measuring areas of plane figures. Ordinary integration can give us the area of a figure with a straight horizontal bottom edge, straight vertical left and right edges, and an arbitrary function giving the top edge. Suppose we have a circle of radius R centered at (A, B), as shown in the figure at the right.

The blue area is the first integral, and the blue and yellow areas combined are the second integral.

(We have placed the circle completely in the upper-right quadrant so that we won't have to deal with visualizing negative areas—this is just for illustrative purposes. The technique works in all cases.)

The blue area is given by:

\int_{A-R}^{A+R}\left({B-\sqrt{R^2 - (x-A)^2}}\right) dx

The blue and yellow areas combined is:

\int_{A-R}^{A+R}\left({B+\sqrt{R^2 - (x-A)^2}}\right) dx

The area of the circle is the yellow area alone, which is the difference between the two:

\int_{A-R}^{A+R}\ 2\ \sqrt{R^2 - (x-A)^2}\ dx

Before going any further, we use the change-of-variable theorem to shift x (which also shows that A and B don't matter; the area is the same anywhere):

\int_{-R}^{R}\ 2\ \sqrt{R^2 - x^2}\ dx

(We'll have a lot more to say about the change-of-variable theorem later, but now we are just using the version from elementary calculus, that helps us calculate ordinary integrals.)

Of course we could evaluate this integral, perhaps with a table of integrals, or a computer program, or a web site, getting:
x \sqrt{R^2 - x^2} + R^2 \sin^{-1} (x/R)\ \Big|_{-R}^{R} = \pi R^2

Let's look at the integrand, and its geometrical interpretation, more carefully. 2\ \sqrt{R^2 - x^2} is the height of a thin vertical strip of the circle, at a given x. We could write that as an integral in its own right:

\int_{-\sqrt{R^2 - x^2}}^{\sqrt{R^2 - x^2}}\ 1\ dy

This means that the area of the circle is:

\int_{-R}^{R}\left(\int_{-\sqrt{R^2 - x^2}}^{\sqrt{R^2 - x^2}}\ 1\ dy\right) dx

The geometrical interpretation is that we have divided the region (circle, in this case) into thin vertical strips abutting each other from left to right, and then divided each strip into tiny squares, abutting each other from bottom to top, and added everything up.
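As a sanity check on this nested-integral picture, here is a sketch using SciPy's dblquad, whose inner limits are functions of the outer variable exactly as above (an illustrative aside; the radius R = 2 is an arbitrary choice):

from scipy.integrate import dblquad
import numpy as np

R = 2.0
# dblquad integrates f(y, x) with y-limits that may depend on x,
# mirroring the nested integral above.
area, err = dblquad(lambda y, x: 1.0,
                    -R, R,                             # outer: x from -R to R
                    lambda x: -np.sqrt(R**2 - x**2),   # inner lower limit at this x
                    lambda x: np.sqrt(R**2 - x**2))    # inner upper limit at this x
print(area, np.pi * R**2)   # both approx 12.566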

In general, the area of a region is:

\int_{\text{leftmost point of the region}}^{\text{rightmost point of the region}}\left(\int_{\text{bottom of the region at a given }x}^{\text{top of the region at a given }x}\ 1\ dy\right) dx

This is two nested integrals, or a double integral. The quantity in parentheses is the integrand of the outer integral. As such, it is a function of x, so its limits are permitted to depend on x. In more complicated problems, such as finding the total electric charge, we might replace the inner integrand ("1" in the present example) with the density of electric charge, which could be a function of x and y.

This is the essence of how double (and later, triple) integrals are used to calculate areas, volumes, and other things over 2 or 3-dimensional regions. The big parentheses are usually omitted, but their implicit meaning must be followed. The innermost integral, and its limits of integration, relate to the innermost "d" symbol, and so on.

A triple integral could be written:

\int_{A}^{B}\int_{C(x)}^{D(x)}\int_{E(x, y)}^{F(x, y)}\ G(x, y, z)\ dz dy dx

Each successive integral may use the values of all the outer integration variables in specifying its limits.

Exercise [belongs in a separate section, no doubt]: Set up the triple integral giving the volume of a sphere of radius R, and evaluate same.

This use of double integrals to calculate areas may seem like excessive make-work, and using "1" as the integrand may seem boring, but this sort of analysis is the basis for everything we will do.

We could have performed the double integration in the other order, dividing the circle into horizontal strips first, and then subdividing those, so that dx is the "inner" integral and dy the "outer" one. This would have given the same result. The theorem that says that this is so is Fubini's theorem. Proving it is outside the scope of this course, or of Conservapedia.

Now, given that Fubini's theorem says that the order of the integrations doesn't matter, we can "abstract away" that order, say that we are really just "integrating over a region", and use a more abstract notation. Once we have defined a 2-dimensional region R, we can just write two successive integral signs with the subscript R instead of the written-out limits of integration, and use the symbol "dA" to mean "infinitesimal piece of area". (Recall that "dx" means, in an informal way, "infinitesimal bit of length".) The integral would look like:

\iint_{R}\ 1\ dA

or, more generally:

\iint_{R}\ f(r)\ dA

where r is some way of indicating a point in the region. We will always use a coordinate system, so, for example, if we are using polar coordinates, the integral might look like:

\iint_{R}\ f(r, \theta)\ dA

For triple integrals over a volume, we do something similar:

\iiint_{R}\ f(r, \theta, \phi)\ dV

The symbols dA and dV are often called the "area element" and "volume element", respectively.

Change of Variable

The change-of-variable theorem is an extremely important tool of integral calculus. It is even more important in multivariate calculus, since it is central to the concepts of coordinate systems and coordinate system change operations. It is implicit in nearly all integrals over curves, surfaces, and volumes.

We would like to be able to perform the integrations of the previous section in coordinate systems other than plain Cartesian coordinates. For example, the integral to find the area of a circle would be easier to set up if we were using polar coordinates, because a circle is trivial to describe in polar coordinates. To do this, we need to revisit the change-of-variable theorem from elementary calculus, and extend it to higher dimensions.

Basically, the change-of-variable theorem of elementary calculus says that algebraic substitution and manipulations actually work, even when the manipulations involve the "d" symbol. (Remember that things like "dx" are infinitesimals; their actual value would have to be considered to be zero. By a miracle of calculus notation, we can manipulate them anyway.)

Here is a simple example, not (yet) involving integrals. Suppose u is a function of y, and y is a function of x, as follows:

y = \log u\,
u = \sin x\,

We know that

\frac{dy}{du} = \frac{1}{u}
\frac{du}{dx} = \cos x

Can we get \frac{dy}{dx} from this? The change-of-variable theorem says that we can multiply the derivatives and "cancel" the du:

\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx} = \frac{1}{u}\ \cos x

We can remove u from this, getting the answer in terms of x:

\frac{dy}{dx} = \frac{\cos x}{\sin x} = \cot x

This is the same answer as the chain rule would have given:

y = \log (\sin x)\,
\frac{dy}{dx} = \frac{1}{\sin x} \cos x

It's easy to see why this is true: the multiplication in the chain rule is just the multiplication from which we canceled the du.
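(For readers who like machine checking, here is a purely illustrative SymPy sketch confirming this derivative:)

import sympy as sp

x = sp.symbols('x')
y = sp.log(sp.sin(x))
print(sp.diff(y, x))   # cos(x)/sin(x), which is cot(x)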

We can do the same thing for integrals, and in doing so see why the "dx" symbol is such an important part of integral notation. Given:

\int \cot x\ dx

we introduce a new variable u:

u = \sin x\,

so the integral is

\int \frac{\cos x}{u} dx

We have

\frac{du}{dx} = \cos x

So, taking the usual liberties with the notation:

dx = \frac{du}{\cos x}

so the integral is:

\int \frac{\cos x}{u} \frac{du}{\cos x} = \int \frac{du}{u} = \log u = \log \sin x

Tricks like this are staples of integration technique.
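(SymPy will carry out the same substitution for us; again this is purely an illustrative check:)

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.cot(x), x))   # log(sin(x)), as found above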

What we have really done is a change of coordinate system. Whenever we change coordinate systems while calculating integrals, we are actually using the change-of-variable theorem.

Now consider what happens when we try to change the previous integral over a circle in Cartesian coordinates to the equivalent integral in polar coordinates. The earlier integral was:

\int_{-R}^{R}\int_{-\sqrt{R^2 - x^2}}^{\sqrt{R^2 - x^2}}\ 1\ dy dx

That would change to:

\int_{0}^{2\pi}\int_{0}^{R}\ 1\ ??\ dr d\theta

The limits of integration are, as expected, easy to set up. But we need something that is the equivalent of

\frac{d(x, y)}{d(r, \theta)}

to use as the coordinate-change multiplier.

The multiplier to use is called the Jacobian of the coordinate change. It is the determinant of the matrix of partial derivatives. That matrix is:

\begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta}\end{pmatrix}\,

The Jacobian is written J(x, y / r, \theta)\,. Working out the derivatives and the determinant, its value in this case is r.

So the area of a circle is:

\int_{0}^{2\pi}\int_{0}^{R}\ 1\ J(x, y / r, \theta) dr d\theta = \int_{0}^{2\pi}\int_{0}^{R}\ r dr d\theta = \int_{0}^{2\pi}\frac{R^2}{2} d\theta = \pi R^2
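Here is an illustrative SymPy sketch of this computation: build the matrix of partial derivatives, take its determinant to get the Jacobian, and integrate (assuming SymPy is available):

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Matrix of partial derivatives of (x, y) with respect to (r, theta)
M = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])
print(sp.simplify(M.det()))   # r, the Jacobian J(x, y / r, theta)

# Area of a circle of radius R, integrating the Jacobian over the region
R = sp.symbols('R', positive=True)
print(sp.integrate(r, (r, 0, R), (theta, 0, 2*sp.pi)))   # pi*R**2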

Integration Over Parametrically Defined Regions

A variation of this method is used when integrating over curves in 2- or 3-dimensional space, or surfaces in 3-dimensional space. Recall that a parametric description of such a thing is closely related to a change of coordinate system, but with different numbers of coordinates in the two systems. This means that the matrix of partial derivatives is not square, so it has no determinant.

The general problem, in arbitrary dimensions, goes deeply into advanced topics of differential geometry. We will state the methods, without proof, for the cases of parametrically defined curves and surfaces.

Curve Integrals

For a curve in a 2-dimensional plane or 3-dimensional volume, let the parameter (the single coordinate in the 1-dimensional curve) be denoted by t. Work out the partial derivatives, and form the vector

\left(\frac{\partial x}{\partial t}, \frac{\partial y}{\partial t}\right)

for a curve in a plane, or

\left(\frac{\partial x}{\partial t}, \frac{\partial y}{\partial t}, \frac{\partial z}{\partial t}\right)

for a curve in a volume. Let J (not really the Jacobian, but we treat it that way) be the norm of that vector, that is, the square root of the sum of the squares of its 2 or 3 components.

J = \Bigg\Vert\left(\frac{\partial x}{\partial t}, \frac{\partial y}{\partial t}, \frac{\partial z}{\partial t}\right)\Bigg\Vert = \sqrt{\left(\frac{\partial x}{\partial t}\right)^2 + \left(\frac{\partial y}{\partial t}\right)^2 + \left(\frac{\partial z}{\partial t}\right)^2}

Then

\int f(t)\ J dt

is the integral over the curve, in terms of the parameter t.

Example:

Suppose a circle of radius R is described in terms of a parameter t, as

x = R \cos t\,
y = R \sin t\,

We have

\frac{\partial x}{\partial t} = - R \sin t
\frac{\partial y}{\partial t} = R \cos t
J = \sqrt{R^2 \sin^2 t + R^2 \cos^2 t} = \sqrt{R^2} = R

So, to integrate any function over the full circle (with t running from 0 to 2\pi), we have:

\int_0^{2\pi} f(t)\ R dt

The length of the curve (that is, the circumference of the circle) is:

\int_0^{2\pi} 1\ R\ dt = 2 \pi R
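A numerical version of the same computation, sketched in Python (illustrative only; np.gradient approximates the derivatives dx/dt and dy/dt, and the trapezoid rule performs the final one-dimensional integration):

import numpy as np

R = 3.0
t = np.linspace(0, 2 * np.pi, 2001)
x = R * np.cos(t)
y = R * np.sin(t)

# J = norm of (dx/dt, dy/dt), estimated numerically
J = np.sqrt(np.gradient(x, t)**2 + np.gradient(y, t)**2)

# Circumference: the integral of 1 * J dt
print(np.trapz(J, t), 2 * np.pi * R)   # both approx 18.85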

Surface Integrals

For a surface in a 3-dimensional volume, let the parameters be u and v. Work out the six partial derivatives, and form these two vectors:

\stackrel{\textstyle{\rightarrow}}{U} = \left(\frac{\partial x}{\partial u}, \frac{\partial y}{\partial u}, \frac{\partial z}{\partial u}\right)
\stackrel{\textstyle{\rightarrow}}{V} = \left(\frac{\partial x}{\partial v}, \frac{\partial y}{\partial v}, \frac{\partial z}{\partial v}\right)

Calculate their cross product, and let J be the norm of that. Use the cross product formula.

J = ||\stackrel{\textstyle{\rightarrow}}{U} \times\stackrel{\textstyle{\rightarrow}}{V}||

Then

\iint f(u, v)\ J\ dv\ du

is the integral over the surface, in terms of the parameters u and v.

Unless one is careful to choose the order of the parameters so the "outward" direction of the surface follows a right-hand rule, and use that order in forming the cross product, there may be ambiguity in the sign of J. In practice, one just figures out what to do.

Example:

Parameterize a sphere of radius R in terms of θ and φ. (These are two of the same coordinates that are used in 3-dimensional spherical coordinates—r, θ and φ—but r is held constant, so it is no longer a coordinate.)

x = R \sin \theta \cos \phi\,
y = R \sin \theta \sin \phi\,
z = R \cos \theta\,
\frac{\partial x}{\partial \theta} = R \cos \theta \cos \phi

etc.

\stackrel{\textstyle{\rightarrow}}{U} = \left(R \cos \theta \cos \phi, R \cos \theta \sin \phi, - R \sin \theta\right)
\stackrel{\textstyle{\rightarrow}}{V} = \left(- R \sin \theta \sin \phi, R \sin \theta \cos \phi, 0\right)
\stackrel{\textstyle{\rightarrow}}{U} \times\stackrel{\textstyle{\rightarrow}}{V} = R^2 \left(\sin^2 \theta \cos \phi, \sin^2 \theta \sin \phi, \sin \theta \cos \theta\right)
J = ||\stackrel{\textstyle{\rightarrow}}{U} \times\stackrel{\textstyle{\rightarrow}}{V}|| = R^2 \sin \theta

So an integral over the entire surface would be:

\int_0^{\pi}\int_0^{2\pi} f(\theta, \phi)\ R^2 \sin \theta\ d\phi\ d\theta

The area of a sphere is:

\int_0^{\pi}\int_0^{2\pi} R^2 \sin \theta\ d\phi\ d\theta = \int_0^{\pi} R^2 \sin \theta \left(\int_0^{2\pi} d\phi\right) d\theta = 2 \pi R^2 \int_0^{\pi} \sin \theta\ d\theta = 4 \pi R^2

To find, for example, the area of a sphere only from \theta = 0\, to \theta = T\, (that is, the area of the Earth north of latitude \pi/2 - T\,), we have

\int_0^{T}\int_0^{2\pi} R^2 \sin \theta\ d\phi\ d\theta = 2 \pi R^2 \int_0^{T} \sin \theta\ d\theta = 2 \pi R^2\ (1 - \cos T)
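Here is an illustrative SymPy sketch of these surface computations: form the two vectors of partial derivatives, cross them, and integrate the norm (an aside assuming SymPy; note that SymPy reports the norm with an absolute value, which equals sin θ on [0, π]):

import sympy as sp

R, th, ph = sp.symbols('R theta phi', positive=True)
x = R * sp.sin(th) * sp.cos(ph)
y = R * sp.sin(th) * sp.sin(ph)
z = R * sp.cos(th)

# Vectors of partial derivatives with respect to theta and phi
U = sp.Matrix([sp.diff(c, th) for c in (x, y, z)])
V = sp.Matrix([sp.diff(c, ph) for c in (x, y, z)])
print(sp.simplify(U.cross(V).norm()))   # R**2*Abs(sin(theta)), i.e. R**2*sin(theta) on [0, pi]

# Area of the full sphere, using J = R**2*sin(theta)
print(sp.integrate(R**2 * sp.sin(th), (ph, 0, 2*sp.pi), (th, 0, sp.pi)))   # 4*pi*R**2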

But how useful is such a technique, when it is so easy to think of regions which are NOT rectangles? Let's move on to consider a more general kind of region.

The kind of region under discussion.

Suppose a region R happens to be such that we can identify its lowest x value, which we'll call a, and its highest x value, which we'll call b, and that a vertical line drawn through the region intersects the boundary in exactly two points everywhere except those extremes, where a vertical line intersects the boundary only once. Then there is some function of x which will give the y value of the lower of those intersections of the boundary with vertical lines, and some function which will give the y value of the upper of those intersections. Let's call these functions y1,y2 respectively. See the illustration to the right.

A region like this is very easily integrated, just like the rectangle. We simply use these boundary functions in place of the y-extremes:

\iint_{R} f(x,y)dA = \int_a^b \left({ \int_{y_1(x)}^{y_2(x)} f(x,y) dy     }\right) dx

Let's do an example of this. The region we'll be integrating over is the unit disc, so the minimum and maximum x values will be -1 and 1, and the y functions will be y_2 = \sqrt{1-x^2}, y_1=-\sqrt{1-x^2}. Let's integrate the function f(x,y) = (x + y)^2.

\iint_{R}(x+y)^2dA=\int_{-1}^1\left({\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}(x+y)^2dy}\right)dx=\int_{-1}^1\left({\frac{(x+\sqrt{1-x^2})^3}{3}-\frac{(x-\sqrt{1-x^2})^3}{3}}\right)dx

We have turned a double integral into a single-variable integral that is solved more with bookkeeping than intelligence:

\frac{1}{3}\int_{-1}^1\left({   (x+\sqrt{1-x^2})^3   -    (x-\sqrt{1-x^2})^3    }\right)dx

= \frac{2}{3}\int_{-1}^1  \left({    3x^2\sqrt{1-x^2}  + (1-x^2)^{3/2} }\right) dx

Using the standard integrals \int_{-1}^1 x^2\sqrt{1-x^2}\ dx = \frac{\pi}{8} and \int_{-1}^1 (1-x^2)^{3/2}\ dx = \frac{3\pi}{8}, this is

= \frac{2}{3}\left({ 3\cdot\frac{\pi}{8} + \frac{3\pi}{8} }\right) = \frac{2}{3}\cdot\frac{3\pi}{4} = \frac{\pi}{2}
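Since it is easy to drop a factor in a computation like this, a numerical spot check is reassuring; here is an illustrative sketch with SciPy's dblquad:

from scipy.integrate import dblquad
import numpy as np

# (x + y)^2 over the unit disc; dblquad takes f(y, x) with y-limits depending on x
val, err = dblquad(lambda y, x: (x + y)**2,
                   -1, 1,
                   lambda x: -np.sqrt(1 - x**2),
                   lambda x: np.sqrt(1 - x**2))
print(val, np.pi / 2)   # both approx 1.5708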

Computation of the Triple Integral

In our computation of the double integral, we considered only regions in the plane such that the "top" and "bottom" of the boundary could be described as functions of x. We then pointed out that all regions could be broken up into such shapes, and we could just perform each evaluation separately. It should come as no surprise, then, that we consider 3D regions where the "top" and "bottom" of the surface enclosing the region can be described with functions of the form z = g(x,y); in other words, every line which is parallel to the z-axis and intersects our region R intersects its surface boundary in at most two points.

Suppose R is such a region in space. Now, suppose that the "top" of the boundary is described by z = g1(x,y), and the "bottom", by z = g2(x,y). This is where it will get a bit tricky: for a given value of x, let h1(x) be the greatest value of y such that (x,y,z) is in R for any z, and h2(x), the least value of y. Finally, let the minimum x value for a point in R be a, and the maximum, b.

Then

\iiint_R f(x,y,z)dV  = \int_a^b \int_{h_2(x)}^{h_1(x)} \int_{g_2(x,y)}^{g_1(x,y)} f(x,y,z) dz dy dx
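SciPy's tplquad mirrors this setup exactly: the z limits may depend on x and y, and the y limits on x. Here is an illustrative sketch; the particular region (below the paraboloid z = 1 - x^2 - y^2, above the unit disc) is our own choice of example:

from scipy.integrate import tplquad
import numpy as np

# Volume between z = 0 and z = 1 - x^2 - y^2 over the unit disc.
# tplquad takes f(z, y, x); each inner limit may depend on the outer variables.
vol, err = tplquad(lambda z, y, x: 1.0,
                   -1, 1,                          # x limits: a, b
                   lambda x: -np.sqrt(1 - x**2),   # y lower: h2(x)
                   lambda x: np.sqrt(1 - x**2),    # y upper: h1(x)
                   lambda x, y: 0.0,               # z lower: g2(x, y)
                   lambda x, y: 1 - x**2 - y**2)   # z upper: g1(x, y)
print(vol, np.pi / 2)   # both approx 1.5708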

Polar Coordinates

The definition of a double integral is always the same; we need only replace (x,y) with (r,\theta). But the area element dA, which before was dxdy, is going to be slightly different.

In Cartesian coordinates, it was a simple matter to describe the area of a small rectangle: it was simply the length times the width, hence the dxdy. But what is the area of a little piece of a region in polar coordinates? The sides of such a piece in the r direction will have length dr, since r is measured in units of length. But θ is measured in angles, which is just a number!

Remember the formula for the circumference of a circle of radius r: 2\pi r. Therefore, the length of an arc of angle dθ at radius r is going to be rdθ. Multiplying the length and width together gives the area element in polar coordinates: dA = rdrdθ. Hence, the double integral in polar coordinates is

\iint_R f(r,\theta)dA = \iint_R f(r,\theta)rdrd\theta.

For spherical coordinates in three dimensions, we can rely heavily on what we've just learned. The arc in the θ direction at radius r has length rdθ, just as before; but a point at polar angle θ lies on a circle of radius r\sin\theta about the z-axis, so the arc in the φ direction has length r\sin\theta\,d\phi. The same reasoning then tells us dV = (dr)(rd\theta)(r\sin\theta\,d\phi) = r^2 \sin\theta\, dr\, d\theta\, d\phi \ (note that this agrees with the factor R^2\sin\theta we found for the surface of the sphere earlier) and hence

\iiint_R f(r,\theta,\phi)dV = \iiint_R f(r,\theta,\phi)\,r^2\sin\theta\, dr\, d\theta\, d\phi.
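As an illustrative check of this volume element (assuming SymPy), integrating it over a ball of radius R recovers the familiar volume:

import sympy as sp

r, th, ph, R = sp.symbols('r theta phi R', positive=True)

# Volume of a ball of radius R using dV = r^2 sin(theta) dr dtheta dphi
print(sp.integrate(r**2 * sp.sin(th), (r, 0, R), (ph, 0, 2*sp.pi), (th, 0, sp.pi)))
# 4*pi*R**3/3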

Line Integrals

Definition of the Line Integral

We've integrated single variable functions along lines, and two variable functions over regions of the plane, and even three variable functions over regions of space. But lines, and curves, can be contained in the plane, and surfaces can be contained in space. Why have we been restricting ourselves to integrating functions over regions which have the same dimensions as their domains?

Well, because it is simpler, and useful to have under our belts before we expand our repertoire. But there is no reason we can't integrate a function over a region in its domain which is of lower dimension than the domain itself.

We'll start with the line integral. Technically, it should be called the curve integral, but let's let that slide for now and get to the definition. We're trying to integrate over a curve, say, C. At first, we'll let this curve lie in the xy plane; it's not difficult to generalize to three dimensions. So we proceed as before: we divide this region into n pieces C_k, 1\leq k \leq n, and we pick a point (x_k,y_k)\in C_k for each piece. Let the length of C_k be L_k. The definition follows as expected:

\int_C f(x,y)dL = \lim_{L_k\to 0, n\to\infty} \sum_{k=1}^n f(x_k,y_k)L_k

Computation of the Line Integral

As the length goes to zero, the Pythagorean theorem provides a more and more precise result for the length element dL.

As before, we're not going to be using the Riemann sum definition to compute line integrals. Let x = x(t),y = y(t) be a parametric description of C, so that C starts at (x(t_0),y(t_0)) and ends at (x(t_1),y(t_1)). This allows us to compute the length element, dL:

dL = \sqrt{dx^2+dy^2}

But we don't have any formulas for dx or dy! The closest we've got are formulas for dx / dt,dy / dt; we'll have to plug these in and see if we can patch things up:

dL = \sqrt{ \left({  \frac{dx}{dt} }\right)^2 + \left({  \frac{dy}{dt} }\right)^2 } \sqrt{dt^2} = \sqrt{ \left({  \frac{dx}{dt} }\right)^2 + \left({  \frac{dy}{dt} }\right)^2 }\ dt

Now we can just plug in our formula for dL into the integral to get an easy-to-use formula for a line integral:

\int_C f(x,y)dL = \int_{t_0}^{t_1} f(x,y)\sqrt{ \left({  \frac{dx}{dt} }\right)^2 + \left({  \frac{dy}{dt} }\right)^2 } dt
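Here is a small numerical sketch of this formula (the curve and integrand are our own choices for illustration):

import numpy as np
from scipy.integrate import quad

# Integrate f(x, y) = x*y along the quarter circle x = cos t, y = sin t, 0 <= t <= pi/2
f = lambda t: np.cos(t) * np.sin(t)                  # f(x(t), y(t))
J = lambda t: np.sqrt(np.sin(t)**2 + np.cos(t)**2)   # sqrt((dx/dt)^2 + (dy/dt)^2), = 1 here
val, err = quad(lambda t: f(t) * J(t), 0, np.pi / 2)
print(val)   # 0.5, the exact answer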

Line Integrals of Vector Fields

Consider a vector field \vec{F}(x,y,z) = P(x,y,z)\vec{i}+Q(x,y,z)\vec{j}+R(x,y,z)\vec{k}, and suppose we wished to take the integral of this field over some curve in space

C= \left\{{ (x,y,z):x=x(t),y=y(t),z=z(t),a\leq t\leq b }\right\}.

We are only interested in the component of \vec{F} tangent to the curve. We get this by taking the dot product with the unit tangent: \vec{F}\cdot\vec{T}.

In physical applications, it is not terribly useful to compute the integral of \vec{F} directly. If, for example, we wished to know the work done on a particle traveling along C by a flowing fluid or an electromagnetic field \vec{F}, we need to consider only what \vec{F} is doing along the direction of the curve. Luckily, the dot product exists, so it won't be hard to find the projection of \vec{F} in the direction of the curve: we simply take \vec{F}\cdot\vec{T}, where \vec{T} is the tangent to the curve at a point. We of course have

\vec{T}=\begin{pmatrix}\   \frac{dx}{dt} \\\ \frac{dy}{dt}\\\ \frac{dz}{dt}\end{pmatrix}

Let's pause for a moment and note that if we've adjusted our parametrization of C so that a particle following it always has unit speed, then the length along the curve will just be time, i.e., L=t \longrightarrow dL=dt, and the tangent \vec{T} will always have unit length.

But then look what happens:

\int_C (\vec{F}\cdot\vec{T})dL = \int_C \left({  P\frac{dx}{dt} + Q\frac{dy}{dt} + R\frac{dz}{dt} }\right) dt .

An expression like this is usually simplified in notation:

\int_C P\,dx + \int_C Q\,dy + \int_C R\,dz = \int_C P\,dx + Q\,dy + R\,dz.
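Here is a numerical sketch of such an integral (the field is our own example for illustration):

import numpy as np
from scipy.integrate import quad

# Work done by F = (-y, x, 0) around the unit circle x = cos t, y = sin t, z = 0:
# the integrand is P dx/dt + Q dy/dt + R dz/dt = (-sin t)(-sin t) + (cos t)(cos t) + 0
work, err = quad(lambda t: np.sin(t)**2 + np.cos(t)**2, 0, 2 * np.pi)
print(work)   # approx 6.2832, i.e. 2*pi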

Surface Integrals

Definition of the Surface Integral

Now we turn to the idea of integrating a three-variable function over a surface, S. This is called a surface integral. As we always do, we divide this region into n pieces S_k, 1\leq k \leq n, and we pick a point (x_k,y_k,z_k)\in S_k for each piece. Let the area of S_k be A_k. The definition follows as expected:

\iint_S f(x,y,z)dA = \lim_{A_k\to 0, n\to\infty} \sum_{k=1}^n f(x_k,y_k,z_k)A_k

Computation of the Surface Integral

Let x = x(u,v),y = y(u,v),z = z(u,v) be a parametric description of S, so that S will be defined as the image of some region R of the uv plane under these three functions x, y, z.

The surface area element and the vectors \vec{u},\vec{v}.

As before, the trick to finding a convenient formula for a surface integral is finding a formula for that surface area element, dA. What does such a surface element even look like?

You'll remember that in our second lecture, we defined the cross product of two vectors \vec{u}\times\vec{v} as having length equal to the area of the parallelogram bordered on two sides by \vec{u} and \vec{v}, and direction perpendicular to both vectors, with a right-handed orientation. We can use this to our advantage now: all we need to do is figure out two vectors which will border a surface area element, say, \vec{u}, \vec{v}, and we'll be able to compute the surface integral as

\iint_S f(x,y,z)dA = \iint_R f(x(u,v),y(u,v),z(u,v)) |\vec{u}\times\vec{v}| dudv

It's easy enough to do this: each border vector will have some small x component, some small y component, and some small z component. By choosing our surface elements S_k \ to be in line with the curves of constant u and curves of constant v on the surface, these small components will then be

\vec{u}=\begin{pmatrix}\partial_u x\\\partial_u y\\\partial_u z\end{pmatrix}, \vec{v}=\begin{pmatrix}\partial_v x\\\partial_v y\\\partial_v z\end{pmatrix}

Surface Integrals of Vector Functions

Like the line integral case, we wish to pick out a component, but this time it is the component normal to the surface, \vec{F}\cdot\vec{n}.

Frequently we will find we wish to know how much fluid is passing through a surface (like a mesh), or how much pressure a force field is exerting on a surface (like a dam). This time, it is not the component of the vector in the tangent direction we're interested in, but the component of the vector in the normal direction to the surface. So, we must compute

\iint_S (\vec{F}\cdot\vec{n}) dS

Recall that for a surface defined parametrically as \vec{r}(u,v) = x(u,v)\vec{i} + y(u,v)\vec{j}+ z(u,v)\vec{k}, the vector \vec{r}_u \times \vec{r}_v is normal to the surface. Hence the surface normal \vec{n}(u,v) is just

\vec{n}(u,v) = \frac{\vec{r}_u \times \vec{r}_v}{\left\|{\vec{r}_u \times \vec{r}_v}\right\|}

If D \ is the region in the uv plane such that \vec{r}(D) = S \ , then for a vector function in space \vec{F}(\vec{r}) = f_1(\vec{r})\vec{i}+f_2(\vec{r})\vec{j} +f_3(\vec{r})\vec{k}, we get a formidable equation indeed:

\iint_S (\vec{F}\cdot\vec{n}) dS  = \iint_D  \frac{(y_u z_v - y_v z_u)f_1(\vec{r}) - (x_uz_v - x_v z_u)f_2(\vec{r})+(x_u y_v - x_v y_u)f_3(\vec{r})}{\left\|{\vec{r}_u \times \vec{r}_v}\right\|} \left\|{\vec{r}_u \times \vec{r}_v}\right\|dudv

Fortunately, an interesting thing happens when the surface is not defined parametrically, but as a function z=g(x,y) \ . We can still use the parametric results above, with x(u,v)=u, y(u,v)=v, z(u,v)=g(u,v) \ . All of a sudden, the partial derivatives simplify: x_v = y_u =0, x_u=y_v=1 \ . Plugging these values into the monstrous equation above yields something much simpler:

\iint_S (\vec{F}\cdot\vec{n}) dS  = \iint_D  (( - z_x)f_1(\vec{r}) - (z_y)f_2(\vec{r})+(1)f_3(\vec{r})) dudv

This is a much simpler form to work with.
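As an illustrative sketch of this simpler form (the field and surface are our own example), take \vec{F} = (x, y, 0) through the piece of the paraboloid z = x^2 + y^2 over the unit disc:

from scipy.integrate import dblquad
import numpy as np

# z = g(x, y) = x^2 + y^2, so z_x = 2x, z_y = 2y, and the integrand is
# (-z_x)f1 + (-z_y)f2 + (1)f3 = -2x*x - 2y*y + 0
flux, err = dblquad(lambda y, x: -2 * x**2 - 2 * y**2,
                    -1, 1,
                    lambda x: -np.sqrt(1 - x**2),
                    lambda x: np.sqrt(1 - x**2))
print(flux, -np.pi)   # both approx -3.1416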

Integral Applications

Work Done by a Force Field

Pressure on a Surface

Problems

Review Problems

Main Problems

1. We discussed what happened to a surface integral of a vector function when that surface is described as z=g(x,y) \ (rather than parametrically). Describe what happens to a surface integral of a scalar function under those same circumstances.

Challenging Problems

1. In the first of the main problems, you came up with a particular form for \left\|{\vec{r}_u \times \vec{r}_v}\right\| when z=g(x,y) \ . Consider now the angle \gamma \ between the surface normal at a point, \vec{n}(\vec{r}) \ , and the positive z-axis. Can you find a relation between \gamma \ and your formula for \left\|{\vec{r}_u \times \vec{r}_v}\right\| when z=g(x,y) \ ? This can sometimes work for surfaces defined parametrically as well; but why not always?
