Fundamental Theorem of Calculus

From Skulepedia
The '''fundamental theorem of calculus''' specifies the relationship between the two central operations of [[calculus]]: [[derivative|differentiation]] and [[integral|integration]].
 
  
The first part of the theorem, sometimes called the '''first fundamental theorem of calculus''', shows that [[antiderivative|indefinite integration]]<ref>
 
More exactly, the theorem deals with [[integral|definite integration]] with a variable upper limit and an arbitrarily selected lower limit. This particular kind of definite integration allows us to compute one of the infinitely many [[antiderivatives]] of a function (except for those which do not have a zero). Hence, it is almost equivalent to [[antiderivative|indefinite integration]], defined by most authors as an operation which yields any one of the possible antiderivatives of a function, including those without a zero.</ref> can be reversed by differentiation. The first part is also important because it guarantees the existence of [[antiderivative]]s for [[continuous function]]s.<ref>{{Citation |last=Spivak|first=Michael|year=1980|title=Calculus|edition=2nd|publication-place=Houston, Texas|publisher=Publish or Perish Inc.}}</ref>
 
 
The second part, sometimes called the '''second fundamental theorem of calculus''', allows one to compute the [[definite integral]] of a function by using any one of its infinitely many [[antiderivative]]s. This part of the theorem has invaluable practical applications, because it markedly simplifies the computation of [[definite integral]]s.
 
 
The first published statement and proof of a restricted version of the fundamental theorem was by [[James Gregory (astronomer and mathematician)|James Gregory]] (1638–1675).<ref>
 
See, e.g., Marlow Anderson, Victor J. Katz, Robin J. Wilson, ''Sherlock Holmes in Babylon and Other Tales of Mathematical History'', Mathematical Association of America, 2004, [http://books.google.com/books?vid=ISBN0883855461&id=BKRE5AjRM3AC&pg=PA114&lpg=PA114&ots=Z01TZKrQXY&dq=%22james+gregory%22+%22fundamental+theorem%22&sig=6xDqL0oNAhWw66IqPdI5fQX7euA p. 114].
 
</ref> [[Isaac Barrow]] (1630–1677) proved the first completely general version of the theorem,<ref>http://www.archive.org/details/geometricallectu00barruoft</ref> while Barrow's student [[Isaac Newton]] (1643–1727) completed the development of the surrounding mathematical theory. [[Gottfried Leibniz]] (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities.
 
{{TOC limit|3}}
 
 
==Physical intuition==
 
Intuitively, the theorem simply states that the sum of [[infinitesimal]] changes in a quantity over time (or over some other quantity) adds up to the net change in the quantity.
 
 
In the case of a particle traveling in a straight line, its position, ''x'', is given by ''x''(''t'') where ''t'' is time and ''x''(''t'') means that ''x'' is a [[function (mathematics)|function]] of ''t''.  The derivative of this function is equal to the infinitesimal change in quantity, d''x'', per infinitesimal change in time, d''t'' (of course, the derivative itself is dependent on time).  This change in displacement per change in time is the velocity ''v'' of the particle. In [[Leibniz notation|Leibniz's notation]]:
 
 
:<math>\frac{dx}{dt} = v(t). </math>
 
 
[[differential (infinitesimal)|Rearranging this equation]], it follows that:
 
 
:<math>dx = v(t)\,dt. </math>
 
 
By the logic above, a change in ''x'' (or Δ''x'') is the sum of the infinitesimal changes d''x''.  It is also equal to the sum of the infinitesimal products of the derivative and time.  This infinite summation is integration; hence, the integration operation allows the recovery of the original function from its derivative. It can be concluded that this operation works in reverse; the result of the integral can be differentiated to recover the original function.
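This summation argument can be sketched numerically. In the illustrative check below (not part of the original argument), the velocity ''v''(''t'') = 3''t''<sup>2</sup> is chosen arbitrarily; its position function is ''x''(''t'') = ''t''<sup>3</sup>.

```python
# Sum the infinitesimal products v(t)*dt over many small steps and
# compare with the net change in position x(t) = t**3.

def net_change(v, t0, t1, n=100_000):
    """Approximate the sum of v(t)*dt from t0 to t1 (left Riemann sum)."""
    dt = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        total += v(t0 + i * dt) * dt
    return total

v = lambda t: 3 * t ** 2
approx = net_change(v, 0.0, 2.0)  # sum of v(t) dt over [0, 2]
exact = 2.0 ** 3 - 0.0 ** 3       # x(2) - x(0) = 8
```

The sum of the products approaches the net change Δ''x'' as the step d''t'' shrinks.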
 
 
==Geometric intuition==
 
[[Image:FTC geometric.svg|500px|thumb|right|The area shaded in red stripes can be computed as ''h'' times ''&fnof;''(''x''). Alternatively, if the function ''A''(''x'') were known, it could be estimated as ''A''(''x''&nbsp;+&nbsp;''h'')&nbsp;&minus;&nbsp;''A''(''x''). These two values are approximately equal, particularly for small ''h''.]]
 
For a continuous function {{nowrap|1=''y'' = ƒ(''x'')}} whose graph is plotted as a curve, each value of ''x'' has a corresponding area function ''A''(''x''), representing the area beneath the curve between 0 and ''x''. A formula for ''A''(''x'') may not be known, but it is given that it represents the area under the curve.
 
 
The area under the curve between ''x'' and ''x''&nbsp;+&nbsp;''h'' could be computed by finding the area between 0 and ''x''&nbsp;+&nbsp;''h'', then subtracting the area between 0 and ''x''. In other words, the area of this “sliver” would be {{nowrap|''A''(''x'' + ''h'') − ''A''(''x'')}}.
 
 
There is another way to ''estimate'' the area of this same sliver: multiply ''h'' by ƒ(''x'') to find the area of a rectangle that is approximately the same size as the sliver. Intuitively, the approximation improves as ''h'' becomes smaller.
 
 
At this point, ''A''(''x''&nbsp;+&nbsp;''h'')&nbsp;&minus;&nbsp;''A''(''x'') is approximately equal to ƒ(''x'')·''h''. In other words, {{nowrap|ƒ(''x'')·''h'' ≈ ''A''(''x'' + ''h'') − ''A''(''x'')}}, with this approximation becoming an equality as ''h'' approaches 0 in the [[limit of a function|limit]].
 
 
When both sides of the equation are divided by ''h'':
 
 
: <math>f(x) \approx \frac{A(x+h)-A(x)}{h}.</math>
 
 
As ''h'' approaches 0, it can be seen that the right hand side of this equation is simply the [[derivative]] ''A''’(''x'') of the area function ''A''(''x''). The left-hand side of the equation simply remains ƒ(''x''), since no ''h'' is present.
 
 
It can thus be shown, in an informal way, that {{nowrap|1 = ƒ(''x'')  = ''A''’(''x'')}}. That is, the derivative of the area function ''A''(''x'') is the original function ƒ(''x''); or, the area function is simply the [[antiderivative]] of the original function.
 
 
Computing the derivative of a function and “finding the area” under its curve are "opposite" operations. This is the crux of the Fundamental Theorem of Calculus. Most of the theorem's proof is devoted to showing that the area function ''A''(''x'') exists in the first place.
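The geometric argument lends itself to a quick numerical sketch (illustrative, with the arbitrary choice ƒ(''x'') = ''x''<sup>2</sup>): build ''A''(''x'') by brute-force numerical integration and compare the difference quotient with ƒ(''x'').

```python
# Build the area function A(x) for f(x) = x**2 by a crude Riemann sum,
# then check that its difference quotient approximates f(x) itself.

def area(f, x, n=10_000):
    """Area under f between 0 and x (left Riemann sum)."""
    dx = x / n
    return sum(f(i * dx) * dx for i in range(n))

f = lambda x: x ** 2
x, h = 1.5, 1e-3
quotient = (area(f, x + h) - area(f, x)) / h  # should be near f(1.5) = 2.25
```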
 
 
==Formal statements==
 
There are two parts to the Fundamental Theorem of Calculus. Loosely put, the first part deals with the derivative of an [[antiderivative]], while the second part deals with the relationship between antiderivatives and [[definite integral]]s.
 
 
=== First part ===
 
 
This part is sometimes referred to as the '''First Fundamental Theorem of Calculus'''.<ref>{{harvnb|Apostol|1967|loc=§5.1}}</ref>
 
 
Let ''&fnof;'' be a continuous real-valued function defined on a [[Interval (mathematics)#Terminology|closed interval]] [''a'', ''b'']. Let ''F'' be the function defined, for all ''x'' in [''a'', ''b''], by
 
:<math>F(x) = \int_a^x f(t)\, dt\,.</math>
 
Then, ''F'' is continuous on [''a'', ''b''], differentiable on the open interval (''a'',&nbsp;''b''), and
 
 
:<math>F'(x) = f(x)\,</math>
 
 
for all ''x'' in (''a'', ''b'').
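As a numerical sketch of the first part (illustrative choices: ƒ(''t'') = cos ''t'' and ''a'' = 0), define ''F'' by numerical integration and check ''F''&prime;(''x'') = ƒ(''x'') with a central difference:

```python
import math

# F(x) = integral of cos(t) from 0 to x, computed by the trapezoidal rule;
# the theorem predicts F'(x) = cos(x).

def F(x, n=20_000):
    """Trapezoidal approximation of the integral of cos from 0 to x."""
    dt = x / n
    total = 0.5 * (math.cos(0.0) + math.cos(x))
    for i in range(1, n):
        total += math.cos(i * dt)
    return total * dt

x, h = 1.0, 1e-4
derivative = (F(x + h) - F(x - h)) / (2 * h)  # central difference for F'(1)
```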
 
 
===Corollary===
 
 
The fundamental theorem is often employed to compute the definite integral of a function ''&fnof;'' for which an antiderivative ''g'' is known.  Specifically, if ''ƒ'' is a real-valued continuous function on [''a'',&nbsp;''b''], and ''g'' is an antiderivative of ''ƒ'' on [''a'',&nbsp;''b''], then
 
 
:<math>\int_a^b f(x)\, dx = g(b)-g(a).</math>
 
 
The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following theorem.
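The corollary can be illustrated numerically. For the arbitrary choice ƒ(''x'') = 1/''x'' on [1, e], with antiderivative ''g''(''x'') = ln ''x'', the definite integral should equal ''g''(e) &minus; ''g''(1) = 1:

```python
import math

# Compare a direct (midpoint-rule) approximation of the definite integral
# of 1/x over [1, e] with the antiderivative difference ln(e) - ln(1) = 1.

def midpoint_integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

numeric = midpoint_integral(lambda x: 1.0 / x, 1.0, math.e)
exact = math.log(math.e) - math.log(1.0)  # g(b) - g(a) = 1
```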
 
 
=== Second part ===
 
 
This part is sometimes referred to as the '''Second Fundamental Theorem of Calculus'''<ref>{{harvnb|Apostol|1967|loc=§5.3}}</ref> or the '''Newton-Leibniz Axiom'''.
 
 
Let ''&fnof;'' be a real-valued function defined on a [[closed interval]] [''a'', ''b''] that admits an [[antiderivative]] ''g'' on [''a'',&nbsp;''b'']. That is, ''ƒ'' and ''g'' are functions such that for all ''x'' in [''a'',&nbsp;''b''],
 
 
:<math>f(x) = g'(x).\ </math>
 
 
If ''&fnof;'' is integrable on [''a'',&nbsp;''b''] then
 
 
:<math>\int_a^b f(x)\,dx\, = g(b) - g(a).</math>
 
 
Notice that the second part is somewhat stronger than the corollary because it does not assume that ''&fnof;'' is continuous.
 
 
Note that when an antiderivative ''g'' exists, then there are infinitely many antiderivatives for ''ƒ'', obtained by adding to ''g'' an arbitrary constant. Also, by the first part of the theorem, antiderivatives of ''ƒ'' always exist when ''ƒ'' is continuous.
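That the added constant is irrelevant can be seen directly: in the difference ''g''(''b'') &minus; ''g''(''a''), any constant cancels. A minimal sketch with the arbitrary choice ƒ(''x'') = 2''x'', whose antiderivatives are ''x''<sup>2</sup> + ''C'':

```python
# Every antiderivative g(x) = x**2 + C of f(x) = 2*x gives the same value
# of g(b) - g(a), since the constant C cancels in the difference.

def g(x, C):
    return x ** 2 + C

a, b = 1.0, 3.0
differences = [g(b, C) - g(a, C) for C in (-5.0, 0.0, 2.5, 100.0)]
# every entry equals b**2 - a**2 = 8
```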
 
 
==Proof of the first part==
 
 
For a given ''f''(''t''), define the function ''F''(''x'') as
 
:<math>F(x) = \int_{a}^{x} f(t) \,dt\,.</math>
 
 
For any two numbers ''x''<sub>1</sub> and ''x''<sub>1</sub> + Δ''x'' in [''a'', ''b''], we have
 
:<math>F(x_1) = \int_{a}^{x_1} f(t) \,dt</math>
 
and
 
:<math>F(x_1 + \Delta x) = \int_{a}^{x_1 + \Delta x} f(t) \,dt\,.</math>
 
 
Subtracting the two equations gives
 
:<math>F(x_1 + \Delta x) - F(x_1) = \int_{a}^{x_1 + \Delta x} f(t) \,dt - \int_{a}^{x_1} f(t) \,dt. \qquad (1)</math>
 
 
It can be shown that
 
:<math>\int_{a}^{x_1} f(t) \,dt + \int_{x_1}^{x_1 + \Delta x} f(t) \,dt = \int_{a}^{x_1 + \Delta x} f(t) \,dt. </math>
 
:(The sum of the areas of two adjacent regions is equal to the area of both regions combined.)
 
Manipulating this equation gives
 
:<math>\int_{a}^{x_1 + \Delta x} f(t) \,dt - \int_{a}^{x_1} f(t) \,dt = \int_{x_1}^{x_1 + \Delta x} f(t) \,dt. </math>
 
 
Substituting the above into (1) results in
 
:<math>F(x_1 + \Delta x) - F(x_1) = \int_{x_1}^{x_1 + \Delta x} f(t) \,dt. \qquad (2)</math>
 
 
According to the [[mean value theorem]] for integration, there exists a ''c'' in [''x''<sub>1</sub>, ''x''<sub>1</sub> + Δ''x''] such that
 
:<math>\int_{x_1}^{x_1 + \Delta x} f(t) \,dt = f(c) \Delta x \,.</math>
 
 
Substituting the above into (2) we get
 
:<math>F(x_1 + \Delta x) - F(x_1) = f(c) \Delta x \,.</math>
 
 
Dividing both sides by Δ''x'' gives
 
:<math>\frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = f(c). </math>
 
:Notice that the expression on the left side of the equation is Newton's [[difference quotient]] for ''F'' at ''x''<sub>1</sub>.
 
 
Take the limit as Δ''x'' → 0 on both sides of the equation.
 
:<math>\lim_{\Delta x \to 0} \frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = \lim_{\Delta x \to 0} f(c). </math>
 
 
The expression on the left side of the equation is the definition of the derivative of ''F'' at ''x''<sub>1</sub>.
 
:<math>F'(x_1) = \lim_{\Delta x \to 0} f(c). \qquad (3) </math>
 
 
To find the other limit, we will use the [[squeeze theorem]]. The number ''c'' is in the interval [''x''<sub>1</sub>, ''x''<sub>1</sub> + Δ''x''], so ''x''<sub>1</sub> ≤ ''c'' ≤ ''x''<sub>1</sub> + Δ''x''.
 
 
Also, <math>\lim_{\Delta x \to 0} x_1 = x_1</math> and <math>\lim_{\Delta x \to 0} (x_1 + \Delta x) = x_1\,.</math>
 
 
Therefore, according to the squeeze theorem,
 
:<math>\lim_{\Delta x \to 0} c = x_1\,.</math>
 
 
Substituting into (3), we get
 
:<math>F'(x_1) = \lim_{c \to x_1} f(c)\,.</math>
 
 
The function ''f'' is continuous at ''x''<sub>1</sub>, and ''c'' approaches ''x''<sub>1</sub>, so the limit can be taken inside the function. Therefore, we get
 
:<math>F'(x_1) = f(x_1) \,.</math>
 
which completes the proof.
 
 
<small>(Leithold et al., 1996)</small>
 
 
==Proof of the corollary==
 
Let&nbsp; <math>F(x) = \int_a^x f(t)\, dt,</math>&nbsp; with ''&fnof;'' continuous on [''a'',&nbsp;''b''].  If ''g'' is an antiderivative of ''ƒ'', then ''g'' and  ''F'' have the same derivative, by the ''first part''&thinsp; of the theorem.  It follows by the mean value theorem that there is a number ''c'' such that {{nowrap|''F''(''x'') {{=}} ''g''(''x'') + ''c''}}, for all ''x'' in [''a'',&nbsp;''b''].  Letting {{nowrap|''x'' {{=}} ''a''}},
 
 
:<math>F(a) = \int_a^a f(t)\, dt = 0 = g(a) + c\,,</math>
 
 
which means ''c''&nbsp;= &minus;&nbsp;''g''(''a''). In other words ''F''(''x'')&nbsp;= {{nowrap|''g''(''x'') &minus; ''g''(''a'')}}, and so
 
 
:<math>\int_a^b f(x)\, dx = g(b)-g(a).</math>
 
 
==Proof of the Second Part==
 
This is a limit proof by [[Riemann integral|Riemann sums]].
 
Let ''&fnof;'' be (Riemann) integrable on the interval [''a'',&nbsp;''b''], and let ''ƒ'' admit an antiderivative ''F'' on&nbsp;[''a'',&nbsp;''b''].  Begin with the quantity {{nowrap|''F''(''b'') &minus; ''F''(''a'')}}.  Let there be numbers ''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>
 
such that
 
 
:<math>a = x_0 < x_1 < x_2 < \ldots < x_{n-1} < x_n = b\,.</math>
 
 
It follows that
 
 
:<math>F(b) - F(a) = F(x_n) - F(x_0) \,.</math>
 
 
Now, we add each ''F''(''x''<sub>''i''</sub>) along with its additive inverse, so that the resulting quantity is equal:
 
 
:<math>\begin{matrix} F(b) - F(a) & = & F(x_n)\,+\,[-F(x_{n-1})\,+\,F(x_{n-1})]\,+\,\ldots\,+\,[-F(x_1) + F(x_1)]\,-\,F(x_0) \, \\

& = & [F(x_n)\,-\,F(x_{n-1})]\,+\,[F(x_{n-1})\,-\,F(x_{n-2})]\,+\,\ldots\,+\,[F(x_1)\,-\,F(x_0)] \,. \end{matrix}</math>
 
 
The above quantity can be written as the following sum:
 
 
:<math>F(b) - F(a) = \sum_{i=1}^n \,[F(x_i) - F(x_{i-1})]\,. \qquad (1)</math>
 
 
Next we will employ the [[mean value theorem]].  Stated briefly,
 
 
Let ''F'' be continuous on the closed interval [''a'', ''b''] and differentiable on the open interval (''a'', ''b''). Then there exists some ''c'' in (''a'', ''b'') such that
 
 
:<math>F'(c) = \frac{F(b) - F(a)}{b - a}\,.</math>
 
 
It follows that
 
 
:<math>F'(c)(b - a) = F(b) - F(a). \,</math>
 
 
The function ''F'' is differentiable on the interval [''a'',&nbsp;''b'']; therefore, it is also differentiable and continuous on each interval {{nowrap|[''x''<sub>''i''&thinsp;&minus;1</sub>, ''x''<sub>''i''</sub>&thinsp;]}}.  According to the mean value theorem (above),
 
 
:<math>F(x_i) - F(x_{i-1}) = F'(c_i)(x_i - x_{i-1}) \,.</math>
 
 
Substituting the above into (1), we get
 
 
:<math>F(b) - F(a) = \sum_{i=1}^n \,[F'(c_i)(x_i - x_{i-1})]\,.</math>
 
 
The assumption implies <math>F'(c_i) = f(c_i).</math>  Also, <math>x_i - x_{i-1}</math> can be expressed as <math>\Delta x_i</math>, the width of the <math>i</math>-th subinterval of the partition.
 
 
:<math>F(b) - F(a) = \sum_{i=1}^n \,[f(c_i)(\Delta x_i)]\,. \qquad (2)</math>
 
 
[[Image:Riemann.gif|right|thumb|A converging sequence of Riemann sums. The numbers in the upper right are the areas of the grey rectangles.  They converge to the integral of the function.]]
 
 
Notice that we are describing the area of a rectangle, as the width times the height, and we are adding the areas together.  Each rectangle, by virtue of the [[Mean Value Theorem]], describes an approximation of the curve section it is drawn over.  Also notice that <math>\Delta x_i</math> need not be the same for all values of&nbsp;''i'', or in other words that the widths of the rectangles can differ.  What we have to do is approximate the curve with ''n'' rectangles.  Now, as the subintervals get smaller and ''n'' increases, resulting in more rectangles to cover the region, we get closer and closer to the actual area under the curve.
 
 
By taking the limit of the expression as the norm of the partition approaches zero, we arrive at the [[Riemann integral]]. We know that this limit exists because ''&fnof;'' was assumed to be integrable. That is, we take the limit as the widest subinterval shrinks to zero in width, so that every other subinterval is narrower and the number of subintervals approaches infinity.
 
 
So, we take the limit on both sides of (2). This gives us
 
 
:<math>\lim_{\| \Delta \| \to 0} F(b) - F(a) = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n \,[f(c_i)(\Delta x_i)]\,.</math>
 
 
Neither ''F''(''b'') nor ''F''(''a'') is dependent on ||Δ||, so the limit on the left side remains ''F''(''b'') &minus; ''F''(''a'').
 
 
:<math>F(b) - F(a) = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n \,[f(c_i)(\Delta x_i)]\,.</math>
 
 
The expression on the right side of the equation defines the integral over ''&fnof;'' from ''a'' to ''b''. Therefore, we obtain
 
 
:<math>F(b) - F(a) = \int_{a}^{b} f(x)\,dx\,,</math>
 
 
which completes the proof.
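The limiting process in this proof can also be sketched numerically (an illustration, not part of the proof), using random uneven partitions and arbitrary tags ''c''<sub>''i''</sub>, for the arbitrary choice ƒ(''x'') = 3''x''<sup>2</sup> with ''F''(''x'') = ''x''<sup>3</sup> on [0, 2]:

```python
import random

# Riemann sums over uneven random partitions of [a, b], with an arbitrary
# sample point c_i in each subinterval, approach F(b) - F(a) as the
# partition norm shrinks.  Here f(x) = 3*x**2 and F(x) = x**3.

def riemann_sum(f, a, b, n, rng):
    """Riemann sum over a random uneven partition with random tags."""
    cuts = sorted(rng.uniform(a, b) for _ in range(n - 1))
    points = [a] + cuts + [b]
    total = 0.0
    for left, right in zip(points, points[1:]):
        c = rng.uniform(left, right)    # arbitrary c_i in [x_{i-1}, x_i]
        total += f(c) * (right - left)  # f(c_i) * (x_i - x_{i-1})
    return total

rng = random.Random(0)
approx = riemann_sum(lambda x: 3 * x ** 2, 0.0, 2.0, 50_000, rng)
exact = 2.0 ** 3 - 0.0 ** 3  # F(b) - F(a) = 8
```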
 
 
It almost looks like the first part of the theorem follows directly from the second, because the equation&nbsp; <math>g(x) - g(a) = \int_a^x f(t) \, dt,</math>&thinsp;  where ''g'' is an antiderivative of ''ƒ'', implies that&nbsp; <math>F(x) = \int_a^x f(t)\, dt\,</math>&thinsp; has the same derivative as ''g'', and therefore {{nowrap|''F''&thinsp;&prime; {{=}} ''&fnof;''}}. This argument only works if we already know that ''ƒ'' has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the Fundamental Theorem.<ref>{{Citation |last=Spivak|first=Michael|year=1980|title=Calculus|edition=2nd|publication-place=Houston, Texas|publisher=Publish or Perish Inc.}}</ref>
 
For example, if ''ƒ''(''x'')&nbsp;= e<sup>&minus;''x''<sup>2</sup></sup>, then ''ƒ'' has an antiderivative, namely
 
 
:<math>g(x) = \int_0^x f(t) \, dt\,</math>
 
 
and there is no simpler expression for this function.  It is therefore important not to interpret the second part of the theorem as the definition of the integral.  Indeed, there are many functions that are integrable but lack antiderivatives that can be written as an [[elementary function]].  Conversely, many functions that have antiderivatives are not Riemann integrable (see [[Volterra's function]]).
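The function ''g'' above can still be evaluated numerically; in fact it agrees, up to a constant factor, with the standard [[error function]], via the known identity ∫<sub>0</sub><sup>''x''</sup> e<sup>&minus;''t''²</sup> d''t'' = (√π/2)&thinsp;erf(''x''), which names the antiderivative using a ''non''-elementary special function:

```python
import math

# g(x) = integral of exp(-t**2) from 0 to x has no elementary formula,
# but numerically it matches (sqrt(pi)/2) * erf(x).

def g(x, n=100_000):
    """Midpoint-rule approximation of the integral of exp(-t**2) over [0, x]."""
    dt = x / n
    return sum(math.exp(-(((i + 0.5) * dt) ** 2)) for i in range(n)) * dt

x = 1.3
numeric = g(x)
named = math.sqrt(math.pi) / 2 * math.erf(x)
```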
 
 
==Examples==
 
As an example, suppose the following is to be calculated:
 
 
:<math>\int_2^5 x^2\, dx. </math>
 
 
Here, <math>f(x) = x^2 \,</math> and we can use <math>F(x) = {x^3\over 3} </math> as the antiderivative. Therefore:
 
 
:<math>\int_2^5 x^2\, dx = F(5) - F(2) = {125 \over 3} - {8 \over 3} = {117 \over 3} = 39.</math>
 
 
Or, more generally, suppose that
 
 
:<math>{d \over dx} \int_0^x t^3\, dt </math>
 
is to be calculated. Here, <math>f(t) = t^3 \,</math> and <math>F(t) = {t^4 \over 4} </math> can be used as the antiderivative. Therefore:
 
 
:<math>{d \over dx} \int_0^x t^3\, dt = {d \over dx} F(x) - {d \over dx} F(0) = {d \over dx} {x^4 \over 4} = x^3.</math>
 
 
Or, equivalently,
 
 
:<math>{d \over dx} \int_0^x t^3\, dt = f(x) {dx \over dx} - f(0) {d0 \over dx} = x^3.</math>
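Both worked examples can be verified mechanically (the difference quotient below is a numerical stand-in for the derivative):

```python
# Example 1: integral of x**2 over [2, 5] via the antiderivative x**3/3.
def F(x):
    return x ** 3 / 3

first = F(5) - F(2)  # 125/3 - 8/3 = 117/3 = 39

# Example 2: d/dx of the integral of t**3 from 0 to x, checked at x = 2
# with a central difference on a midpoint-rule integral.
def area(x, n=50_000):
    """Midpoint-rule approximation of the integral of t**3 over [0, x]."""
    dt = x / n
    return sum(((i + 0.5) * dt) ** 3 for i in range(n)) * dt

x, h = 2.0, 1e-4
second = (area(x + h) - area(x - h)) / (2 * h)  # should be near x**3 = 8
```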
 
 
==Generalizations==
 
We don't need to assume continuity of ''ƒ'' on the whole interval.  Part I of the theorem then says: if ''ƒ'' is any [[Lebesgue integration|Lebesgue integrable]] function on {{nowrap|[''a'', ''b'']}} and ''x''<sub>0</sub> is a number in {{nowrap|[''a'', ''b'']}} such that ''ƒ'' is continuous at ''x''<sub>0</sub>, then
 
 
:<math>F(x) = \int_a^x f(t)\, dt</math>
 
 
is differentiable for ''x''&nbsp;= ''x''<sub>0</sub> with ''F<nowiki>'</nowiki>''(''x''<sub>0</sub>)&nbsp;= ''ƒ''(''x''<sub>0</sub>). We can relax the conditions on ''ƒ'' still further and suppose that it is merely locally integrable.  In that case, we can conclude that the function ''F'' is differentiable [[almost everywhere]] and ''F<nowiki>'</nowiki>''(''x'')&nbsp;= ''ƒ''(''x'') almost everywhere. On the real line this statement is equivalent to [[Lebesgue differentiation theorem|Lebesgue's differentiation theorem]].  These results remain true for the Henstock–Kurzweil integral which allows a larger class of integrable functions {{harv|Bartle|2001|loc=Thm. 4.11}}.
 
 
In higher dimensions Lebesgue's differentiation theorem generalizes the Fundamental theorem of calculus by stating that for almost every ''x'', the average value of a function ''ƒ'' over a ball of radius ''r'' centered at ''x'' will tend to ''ƒ''(''x'') as ''r'' tends to 0.
 
 
Part II of the theorem is true for any Lebesgue integrable function ''ƒ'' which has an antiderivative ''F'' (not all integrable functions do, though).  In other words, if a real function ''F'' on [''a'',&nbsp;''b''] admits a derivative ''ƒ''(''x'') at ''every''&thinsp; point ''x'' of {{nowrap|[''a'', ''b'']}} and if this derivative ''ƒ'' is Lebesgue integrable on [''a'',&nbsp;''b''], then
 
 
:<math>F(b) - F(a) = \int_a^b f(t) \, \mathrm{d}t.</math> &nbsp;&nbsp;&nbsp;{{harvtxt|Rudin|1987|loc=th. 7.21}}
 
 
This result may fail for continuous functions ''F'' that admit a derivative ''ƒ''(''x'') at almost every point ''x'', as the example of the [[Cantor function]] shows.  But the result remains true if ''F'' is [[Absolute continuity|absolutely continuous]]: in that case, ''F'' admits a derivative ''ƒ''(''x'') at almost every point ''x'' and, as in the formula above, {{nowrap|''F''(''b'') &minus; ''F''(''a'')}} is equal to the integral of ''ƒ'' on [''a'',&nbsp;''b''].
 
 
The conditions of this theorem may again be relaxed by considering the integrals involved as [[Henstock-Kurzweil integral]]s.  Specifically, if a continuous function ''F''(''x'') admits a derivative ''ƒ''(''x'') at all but countably many points, then ''ƒ''(''x'') is Henstock-Kurzweil integrable and {{nowrap|''F''(''b'') &minus; ''F''(''a'')}} is equal to the integral of ''ƒ'' on [''a'',&nbsp;''b''].  The difference here is that the integrability of ''ƒ'' does not need to be assumed. {{harv|Bartle|2001|loc=Thm. 4.7}}
 
 
The version of [[Taylor's theorem]] which expresses the error term as an integral can be seen as a generalization of the Fundamental Theorem.
 
 
There is a version of the theorem for [[complex number|complex]] functions: suppose ''U'' is an open set in '''C''' and ''ƒ''&nbsp;: ''U''&nbsp;→ '''C''' is a function which has a [[holomorphic function|holomorphic]] antiderivative ''F'' on&nbsp;''U''. Then for every curve γ&nbsp;: [''a'',&nbsp;''b'']&nbsp;→ ''U'', the [[curve integral]] can be computed as
 
 
:<math>\int_{\gamma} f(z) \,dz = F(\gamma(b)) - F(\gamma(a))\,.</math>
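A numerical sketch of the complex version (illustrative choices: ƒ(''z'') = ''z''<sup>2</sup> with holomorphic antiderivative ''F''(''z'') = ''z''<sup>3</sup>/3, along the quarter circle γ(''t'') = e<sup>i''t''</sup> for ''t'' in [0, π/2]):

```python
import cmath

# Approximate the curve integral of f(z) = z**2 along gamma(t) = exp(i*t)
# as a sum of f(gamma(t)) * gamma'(t) * dt, and compare it with
# F(gamma(b)) - F(gamma(a)) for F(z) = z**3 / 3.

def F(z):
    return z ** 3 / 3

def curve_integral(n=100_000):
    a, b = 0.0, cmath.pi / 2
    dt = (b - a) / n
    total = 0j
    for i in range(n):
        t = a + (i + 0.5) * dt
        gamma = cmath.exp(1j * t)        # point on the curve
        dgamma = 1j * cmath.exp(1j * t)  # gamma'(t)
        total += gamma ** 2 * dgamma * dt
    return total

numeric = curve_integral()
exact = F(cmath.exp(1j * cmath.pi / 2)) - F(1.0)  # F at the curve's endpoints
```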
 
 
The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on [[manifold]]s. One such generalization offered by the [[calculus of moving surfaces]] is the [[time evolution of integrals]].
 
 
One of the most powerful statements in this direction is [[Stokes' theorem]]: Let ''M'' be an oriented [[piecewise]] smooth [[manifold]] of [[dimension]] ''n'' and let <math>\omega</math> be a [[compactly supported]] (''n''&minus;1)-[[differential form|form]] on ''M'' of class C<sup>1</sup>. If ∂''M'' denotes the [[manifold|boundary]] of ''M'' with its induced [[Orientation (mathematics)|orientation]], then
 
 
:<math>\int_M \mathrm{d}\omega = \oint_{\partial M} \omega\,.</math>
 
 
Here <math>\mathrm{d}\!\,</math> is the [[exterior derivative]], which is defined using the manifold structure only.
 
 
The theorem is often used in situations where ''M'' is an embedded oriented submanifold of some bigger manifold on which the form <math>\omega</math> is defined.
 

Revision as of 20:34, 22 January 2011