{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Introduction to Scientific Computing\n", "### Lecture 11: Numerical Derivatives\n", "#### J.R. Gladden, Spring 2018, Univ. of Mississippi" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Evaluating, or more accurately estimating, a derivative of a function is a common task for scientists and engineers. In words, the derivative is the slope of a tangent line to (or rate of change of) a function at a particular point.\n", "\n", "The formal mathematical definition of a derviative is:\n", "$$ \\frac{df}{dx} = \\lim_{\\Delta x \\rightarrow 0} \\left(\\frac{\\Delta f}{\\Delta x} \\right) $$\n", "This can be estimated using the **Central Difference Approximation** based on Taylor expansions about a point $x_0$ (see the slides for the full derivation).\n", "$$ f'(x_0) = \\frac{f(x_0+h) - f(x_0 -h)}{2h} - \\mathcal{O}(h^2) $$\n", "where $h$ is the evaluation point above and below $x_0$.\n", "\n", "Here is a function to compute the **CDA** for a function:\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "def cda(f,x,h):\n", " return (f(x+h)-f(x-h))/(2*h)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/plain": [ "3.999999999999999" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def funct(x): return x**2\n", "cda(funct,2.0,0.3)\n", "#Exact value here is 2x for x=2.0 -> dfdx=4.000" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "## Also works for arrays of x values\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib wx\n", "\n", "x=np.linspace(-2.0,2.0,50)\n", "dfdx=cda(funct,x,0.1)\n", "plt.plot(x,funct(x),label='$f(x)$')\n", "plt.plot(x,dfdx,label=\"$f'(x)$\")\n", "plt.grid()\n", "plt.legend()\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Simialarly ** Forward ** and **Backward Difference Approximations** can be defined (respectively) as:\n", "$$ f'(x_0) = \\frac{f(x_0+h) - f(x_0)}{h} - \\mathcal{O}(h) $$\n", "$$ f'(x_0) = \\frac{f(x_0) - f(x_0 -h)}{h} - \\mathcal{O}(h) $$\n", "Note the error is now worse - of order $h$ rather than order $h^2$. Here's the code defintions:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "##Forward difference approximation\n", "def fda(f,x,h):\n", " '''\n", " Computes the derivative of f wrt x with interval of h\n", " using the forward difference approximation (fda).\n", " '''\n", " return ( f(x+h) - f(x) )/h\n", "\n", "#Backward difference approximation\n", "def bda(f,x,h):\n", " '''\n", " Computes the derivative of f wrt x with interval of h\n", " using the backward difference approximation (fda).\n", " '''\n", " return ( f(x) - f(x-h) )/h" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Here's a usage example with comparison to an exact solution." 
] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%matplotlib wx\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "\n", "#set up parameters for function\n", "a,b,c=2.,1.,0.\n", "\n", "def func(x):\n", " return a*np.sin(b*x+c)\n", "\n", "def exact(x):\n", " return a*b*np.cos(b*x+c)\n", " \n", "#The x-value at which to compute the derivative\n", "x=2.0\n", "#an initial h-value\n", "h=0.2\n", "\n", "#Compute f' (df/dx)\n", "fp=fda(func,x,h)\n", "\n", "#The actual df/dx\n", "real=a*b*np.cos(b*x+c)\n", "\n", "# Plot the function and it's derivative over a range of x values\n", "x2=np.linspace(0,4*np.pi,200)\n", "plt.plot(x2,func(x2),label = 'function')\n", "plt.plot(x2,exact(x2),label = 'Exact Derivative')\n", "plt.plot(x2,cda(func,x2,h),label = 'CDA Method')\n", "plt.xlabel('x')\n", "plt.ylabel('f(x) and df/dx(x)')\n", "plt.title('Test Numerical Derivative using CDA with h = %2.2e' % h)\n", "plt.legend()\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now let's investigate the effect of of the value of $h$ on the accuracy of the estimation. Note the use of np.logspace which is line np.linspace ut uses logarithmic spacing between values. We also see here the first time the issue of machine precision." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('The approximate value is:', -0.47941754821851368)\n", "('The true value is: ', -0.47942553860420301)\n", "The difference is: 7.990e-06 (-1.667e-03 %)\n" ] } ], "source": [ "#Try a range of h values to see how much better the approximation gets\n", "# Using logspace rather than linspace to cover a wide span (10^-13 to 10^-2 in 20 steps)\n", "\n", "def f(x): return np.cos(x)\n", "\n", "hvalues=np.logspace(-10,-2,20)\n", "errors=[]\n", "real = -np.sin(0.5)\n", "\n", "for h in hvalues:\n", " fp=cda(f,0.5,h)\n", " errors.append(abs(fp-real)/abs(real)*100.0)\n", "\n", "errors=np.array(errors)\n", "\n", "plt.figure()\n", "\n", "plt.loglog(hvalues,errors,'b-o')\n", "plt.show()\n", "plt.xlabel(\"Stencil Value (h)\")\n", "plt.ylabel(\"Error (%)\")\n", "print(\"The approximate value is:\",fp)\n", "print(\"The true value is: \",real)\n", "print(\"The difference is: %2.3e (%2.3e %%)\" % (abs(real-fp), abs(real-fp)/real*100.))\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "**Machine precision**: Note that all computers represent numbers to a *finite* precision. An easy way to think of this is that the precision is the largest value of $\\epsilon$ such that $1.0 - \\epsilon = 1.0$ as far as the computer evaluates it. Numpy has a built in function to test this on a given system:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/plain": [ "2.2204460492503131e-16" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.finfo(np.float64).eps" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "**Class Exercise:** Here's the code for the class exercise in the slides." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "plt.figure()\n", "def f(x): return np.cos(x)\n", "\n", "hvals=[]\n", "errs=[]\n", "errs2=[]\n", "h=1e-12\n", "while h <= 1e-1:\n", " fp=cda(f,0.5,h)\n", " fp2=fda(f,0.5,h)\n", " hvals.append(h)\n", " errs.append(abs(fp-(-np.sin(0.5))))\n", " errs2.append(abs(fp2-(-np.sin(0.5))))\n", " h*=10 #effectively same as using logspace\n", "\n", "plt.loglog(hvals,errs,'b-o',label='CDA Error')\n", "plt.loglog(hvals,errs2,'g-s',label='FDA Error')\n", "plt.xlabel(\"h\")\n", "plt.ylabel('Error')\n", "plt.grid()\n", "plt.legend()\n", "plt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 0 }