{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Introduction to Scientific Computing\n", "### Lecture 12: Numerical Derivatives 2\n", "#### J.R. Gladden, Spring 2018, Univ. of Mississippi" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Higher order derivatives can get tricky when computing / estimating numerically. One option is to simply call the first prder derivative functions repeatedly. But the errors will compund with each call. The second order derivative can be found by adding the 2 Taylor series expansion instead of subtracting. This leads to the estimate:\n", "$$f''(x) = \\frac{f(x_0-h) -2f(x_0)+f(x_0+h)}{h^2} + \\mathcal{O}(h^2) $$\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "%matplotlib wx\n", "#First order derivatives\n", "#Central Difference Approximation\n", "def cda(f,x,h):\n", " return (f(x+h)-f(x-h))/(2*h)\n", "\n", "#Second order derivative\n", "def cda2(f,x,h):\n", " return ( f(x-h) - 2*f(x) + f(x+h) )/(h**2)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "# Test for f(x) = 2 sin(2x)\n", "x=np.linspace(-np.pi,np.pi,100)\n", "def f(x): return 2*np.sin(2*x)\n", "dfdx=cda(f,x,0.2)\n", "df2dx2=cda2(f,x,0.2)\n", "dfdx_true=4.*np.cos(2*x)\n", "\n", "plt.plot(x,f(x),'b-',lw=2,label='Function')\n", "plt.plot(x,dfdx_true,'r-',lw=2,label='1st Derivative (True)')\n", "plt.plot(x,dfdx,'k-',label='1st Derivative (CDA)')\n", "plt.plot(x,df2dx2,'g-',lw=2,label='2nd Derivative (CDA)')\n", "plt.legend(loc=2)\n", "plt.title('Derivatives of: $f(x)=2\\sin(2x)$',fontsize=20)\n", "plt.grid()\n", "plt.ylim(-10,10)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Derivtives on a discrete mesh\n", "Here, the function relating the (x,y) points [ $f(x)$ ] is unknown. Also, the stencile $h$ is no longer arbitrary. It is dictated by the seperation of the points - (h = x[1]-x[0]). Here is a function to compute a derivative on a discrete set of data points." 
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Derivatives on a discrete mesh\n", "Here the function relating the $(x,y)$ points [ $f(x)$ ] is unknown. Also, the stencil $h$ is no longer arbitrary: it is dictated by the separation of the points (h = x[1]-x[0]). Here is a function to compute a derivative on a discrete set of data points." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def meshderiv(x,y):\n", "    '''\n", "    Evaluates dy/dx over a grid of points.\n", "    Grid spacing can be variable.\n", "    Usage: dydx = meshderiv(x,y)\n", "    Inputs: x -> array, y -> array (same length)\n", "    Output: yp -> array of same length as y\n", "    Uses central difference on interior points\n", "    and forward/backward difference on edge points.\n", "    '''\n", "    yp=np.zeros(len(y))\n", "    #Use forward difference for the first point\n", "    yp[0] = (y[1]-y[0])/(x[1]-x[0])\n", "    #Loop over interior points using the CDA method\n", "    for i in range(1,len(y)-1):\n", "        yp[i]=( y[i+1] - y[i-1])/(x[i+1] - x[i-1])\n", "    #Use backward difference for the last point\n", "    yp[-1] = (y[-1]-y[-2])/(x[-1]-x[-2])\n", "    return yp" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## Test it with some noisy data...\n", "\n", "plt.close('all')\n", "\n", "noise = 30.0\n", "xlow,xhigh,N=(0,10,30)\n", "h = (xhigh - xlow)/float(N-1)\n", "x=np.linspace(xlow,xhigh,N)\n", "y=x**3.+4*x + noise*np.random.rand(len(x))\n", "dydx_true=3.*x**2.+4\n", "dydx_approx=meshderiv(x,y)\n", "error=abs(dydx_true - dydx_approx).sum() / abs(dydx_true).sum() *100\n", "\n", "plt.plot(x,dydx_approx,'b-o',lw=2)\n", "plt.plot(x,dydx_true,'r-')\n", "plt.plot(x,y,'g-o')\n", "plt.legend(('CDA','Exact','Data'),loc=2)\n", "#Put in some LaTeX just to be fancy\n", "plt.text(2,500,'$ y(x)=x^3+4x $',fontsize=20,color='blue')\n", "plt.text(2,400,'$ y^\\\\prime(x)=3x^2+4 $',fontsize=20,color='blue')\n", "plt.xlabel('$x$',fontsize=20)\n", "plt.ylabel('$y^\\\\prime$',fontsize=20)\n", "plt.title('Meshgrid: h = %2.3e and error = %2.3f %%' % (h,error),fontsize=18,color='blue')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Smoothing noisy data\n", "Sometimes it is helpful to smooth experimental data you want to differentiate. Even a small amount of noise in the data can lead to very wild swings in its derivative. This is a **very** simple smoothing function (a moving average of adjacent points) which allows for multiple passes." ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "## Simple smoothing: each pass replaces the data with\n", "## midpoint averages of adjacent points (length shrinks by 1).\n", "\n", "def smooth(x,y,level=2):\n", "    for i in range(level):\n", "        xp=np.zeros(len(x)-1)\n", "        yp=np.zeros(len(y)-1)\n", "        for j in range(len(xp)):\n", "            xp[j]=(x[j]+x[j+1])/2.0\n", "            yp[j]=(y[j]+y[j+1])/2.0\n", "        x=xp\n", "        y=yp\n", "    return x,y" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.figure()\n", "noiselevel=8.0\n", "xdata=np.linspace(0,10,80)\n", "ydata = 20*np.exp(-xdata)+ noiselevel*(0.5-np.random.rand(len(xdata)))\n", "plt.plot(xdata,ydata,'-o',label='Original')\n", "\n", "levels = [1,2,3,6,10,20]\n", "for level in levels:\n", "    xsmooth,ysmooth = smooth(xdata,ydata,level=level)\n", "    plt.plot(xsmooth,ysmooth,'-o',label='Smoothing level: '+str(level))\n", "plt.legend()\n", "plt.show()" ] }
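, { "cell_type": "markdown", "metadata": {}, "source": [ "To see the payoff for differentiation, the sketch below compares the mesh derivative of the raw data with that of the smoothed data. Since the underlying signal is $y(x)=20e^{-x}$, the true derivative is $-20e^{-x}$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Derivative of raw vs. smoothed data, against the exact result.\n", "plt.figure()\n", "plt.plot(xdata,meshderiv(xdata,ydata),'g-s',label='Derivative of raw data')\n", "xs,ys = smooth(xdata,ydata,level=6)\n", "plt.plot(xs,meshderiv(xs,ys),'b-o',label='Derivative after 6 smoothing passes')\n", "plt.plot(xdata,-20*np.exp(-xdata),'r-',lw=2,label='True derivative')\n", "plt.legend()\n", "plt.show()" ] }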
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Richardson Extrapolation\n", "Richardson extrapolation is a neat trick to increase the accuracy of a numerical estimate whose error scales in a power-law fashion (such as the CDA method). The slides show the derivation, but the result is that for $h_2 = h_1/2$\n", "$$ f'(x) = \\frac{2^p \\, cda(h_2) - cda(h_1)}{2^p-1} + \\mathcal{O}(h^4)$$\n", "\n", "Since the error in the CDA method scales like $h^2$, we take $p=2$." ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def cda(f,x,h):\n", "    return (f(x+h)-f(x-h))/(2*h)\n", "\n", "def richardson(deriv,f,x,h,p=2):\n", "    '''\n", "    Performs a Richardson extrapolation step on the\n", "    derivative estimator deriv (e.g. cda) applied to f,\n", "    using h2 = h1/2 and error ~ h^p.\n", "    '''\n", "    return (2.0**p*deriv(f,x,h/2.) - deriv(f,x,h))/(2.0**p-1)\n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Step size (h) = 0.10000000\n", "True value = 64.00000000\n", "CDA approximation / error = 64.16000000 / 2.5000e-03 \n", "Richardson Approximation / error = 64.00000000 / -2.5535e-15 \n" ] } ], "source": [ "def f(x): return 2*x**4\n", "\n", "# Test with x0 = 2.0: df/dx = 8*x**3, so f'(2) = 64\n", "xval=2.0\n", "trueval=8.0*xval**3\n", "h=0.1\n", "cda_val=cda(f,xval,h)\n", "rich_val=richardson(cda,f,xval,h)\n", "\n", "print('Step size (h) = %2.8f' % h)\n", "print('True value = %2.8f' % trueval)\n", "print('CDA approximation / error = %2.8f / %2.4e ' % (cda_val, (cda_val-trueval)/trueval) )\n", "print('Richardson Approximation / error = %2.8f / %2.4e ' % (rich_val, (rich_val-trueval)/trueval) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "Estimate $f'(0.5)$ for $f(x)=\\cos x$ with Richardson extrapolation and plot the relative error as a function of the step size $h$." ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def f(x): return np.cos(x)\n", "xval = 0.5\n", "trueVal = -np.sin(0.5)\n", "hvals = np.logspace(-5,-1,10)\n", "errors = []\n", "for h in hvals:\n", "    richVal = richardson(cda,f,xval,h)\n", "    error = abs(richVal - trueVal)/abs(trueVal)\n", "    errors.append(error)\n", "errors = np.array(errors)\n", "\n", "plt.loglog(hvals,errors,'b-o')\n", "plt.xlabel('$h$',fontsize=16)\n", "plt.ylabel('Relative error',fontsize=16)\n", "plt.grid()\n", "plt.show()" ] }
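, { "cell_type": "markdown", "metadata": {}, "source": [ "For comparison, the sketch below overlays the plain CDA error on the same log-log axes. The Richardson curve should fall off with a steeper slope ($\\sim h^4$ versus $\\sim h^2$) until roundoff error takes over at the smallest step sizes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Compare CDA and Richardson errors for f(x) = cos(x) at x = 0.5.\n", "cda_errors = np.array([abs(cda(f,xval,h) - trueVal)/abs(trueVal) for h in hvals])\n", "plt.figure()\n", "plt.loglog(hvals,cda_errors,'r-s',label='CDA error ($\\\\sim h^2$)')\n", "plt.loglog(hvals,errors,'b-o',label='Richardson error ($\\\\sim h^4$)')\n", "plt.xlabel('$h$',fontsize=16)\n", "plt.ylabel('Relative error',fontsize=16)\n", "plt.legend(loc=2)\n", "plt.grid()\n", "plt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 0 }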