## Gain a deeper understanding of derivatives with Python

Mathematics is a notoriously difficult subject for many people. Due to its cumulative and abstract nature, students can find it difficult to make connections and understand how math is applied. In my ongoing journey, I’ve found it extremely helpful to reinforce my understanding of math concepts by implementing them in code.

Coding demands precision because computers can only execute an exact set of instructions. This necessity for exactness requires a logical and systematic approach to problem-solving, which is incredibly helpful for understanding the underlying concepts. Through coding, we develop a deeper intuition for the concept being implemented. Furthermore, coding allows us to experiment with, visualize, and automate mathematical ideas in an interactive and hands-on way, which can bring theoretical concepts to life and enhance our learning experience.

In this article, we’ll work toward a deeper understanding of derivatives by implementing the “forward difference quotient” approximation in Python. While this is a straightforward implementation requiring little code, it gets to the heart of what derivatives represent.

Let’s begin by stating the definition of the derivative. Because there are many freely available resources on derivatives, this explanation is not going to be comprehensive. The derivative of a function f(x) with respect to x is defined as:

f'(x) = lim (h → 0) [f(x + h) - f(x)] / h

The derivative tells us, at a single point, the *direction* and *rate* at which a function is changing. By picking two points x and x + h, computing the slope of the function between the two points (i.e. (f(x + h) - f(x)) / h), and allowing h to get infinitely close to 0, we recover the instantaneous rate of change (the derivative) of the function at x.
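To make this concrete, here is a quick numeric sketch (not part of the original article) using f(x) = x², whose derivative at x = 3 is 6: as h shrinks, the secant slope between x and x + h approaches that value.

```python
# Secant slopes of f(x) = x^2 between x = 3 and x = 3 + h:
# as h shrinks, the slope approaches the true derivative f'(3) = 6
f = lambda x: x**2
slopes = {h: (f(3 + h) - f(3)) / h for h in [1.0, 0.1, 0.01]}
print(slopes)  # slopes shrink toward 6 as h shrinks
```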

Probably the most abstract and perhaps most difficult-to-understand component of the derivative is that h gets infinitely close to 0 without actually reaching 0. We can write a function in Python that approximates this concept:


```python
import numpy as np
from typing import Callable, Union

def derivative(f: Callable[[float], float],
               x: Union[float, np.ndarray],
               h: float = 0.001) -> Union[float, np.ndarray]:
    """
    Approximate the derivative of a function f at point x using
    the forward difference quotient definition.

    Parameters
    ----------
    f : callable
        A function that takes a float as input and returns a float.
    x : float or ndarray
        The point(s) at which to compute the derivative of f.
    h : float, optional
        The step size used in the finite difference approximation.
        Default is 0.001.

    Returns
    -------
    float or ndarray
        The derivative of f at point x, approximated using the forward
        difference quotient.
    """
    # If h gets too small, the computation becomes unstable
    if h < 1e-5:
        raise ValueError('h values lower than 1e-5 are unstable. Consider increasing h')
    return (f(x + h) - f(x)) / h
```

This function takes in a pure function of a single variable and approximates the derivative at the point(s) specified in the x argument. The actual logic of this function resides in a single line of code, yet it approximates many derivatives within a tolerable error.

To see this in action, let’s approximate the derivative of the quadratic function f(x) = x². By the power rule (or by computing the limit of the difference quotient), we know that:

f'(x) = 2x

For instance, the derivative of the function at x = 3 is 2*3 = 6. The following code approximates the derivative of the quadratic function at x = 3:


```python
# Define the quadratic function
def f(x):
    """
    The quadratic function f(x) = x^2
    """
    return x**2

# Define the input(s) to compute derivatives for
x = 3

# Define the value of h used to approximate the derivative
h = 0.001

# Approximate the derivative
print(derivative(f, x, h))
# Output: 6.000999999999479
```

By setting h to be a small positive number near 0, we extract an approximation of the derivative that is close to the true value. As h gets smaller (up to a certain tolerance), the approximation gets more accurate:


```python
# Define a smaller h value to get a more accurate approximation
h = 1e-5

# Take the derivative
print(derivative(f, x, h))
# Output: 6.000009999951316
```

We can visualize this behavior for decreasing values of h.
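The original plot is not reproduced here, but as a stand-in, here is a minimal sketch (not from the original article) that tabulates the approximation error for f(x) = x² at x = 3. For this particular function, ((x + h)² - x²)/h simplifies to 2x + h, so the forward-difference error equals h exactly, and the errors shrink in lockstep with h.

```python
def fd_derivative(f, x, h):
    # Same forward difference quotient as the derivative() function above
    return (f(x + h) - f(x)) / h

# Approximation error for f(x) = x^2 at x = 3 (true derivative: 6)
h_values = [0.1, 0.05, 0.01, 0.005, 0.001]
errors = [abs(fd_derivative(lambda t: t**2, 3.0, h) - 6.0) for h in h_values]
for h, err in zip(h_values, errors):
    print(f"h = {h:<6} -> error = {err:.2e}")
```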

Another interesting example involves the trigonometric functions. From the definition of the derivative, we know the following:

d/dx sin(x) = cos(x)

Using the derivative function, we can approximate this:


```python
import numpy as np
import matplotlib.pyplot as plt

# Define the h value
h = 1e-5

# Define the domain
x = np.linspace(-10, 10, 1000)

# Approximate the derivative of sin(x) (should be close to cos(x))
f_prime = derivative(np.sin, x, h)

# Plot sin(x) vs the approximated derivative
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(x, np.sin(x), label='f(x) = sin(x)')
ax.plot(x, f_prime, label="f'(x) ~= cos(x)")
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Approximated derivative of sin(x) (close to cos(x))')
ax.legend()
plt.show()
```

The derivative approximation implemented in this article is known as the “forward difference quotient” and is one of many ways to perform numerical differentiation. It’s important to note that this approximation is imperfect, as it tends to break down for small values of h. Furthermore, in practice, we can compute exact derivatives of closed-form functions, removing the need for approximations. The aim of this article was to help the reader see how the math plays out through code and hopefully strengthen their intuition for derivatives. I would encourage the reader to test the code on different functions, explore other derivative approximations, and understand their respective strengths and weaknesses. Thanks for reading!
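As one example of an alternative approximation worth exploring, here is a sketch (not part of the original article) of the central difference quotient, which evaluates the function on both sides of x and is generally more accurate than the forward difference for the same h.

```python
def central_derivative(f, x, h=0.001):
    # Central difference quotient: error shrinks like h^2,
    # versus h for the forward difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

# Compare both approximations for f(x) = x^2 at x = 3 (true derivative: 6)
f = lambda t: t**2
forward = (f(3.0 + 0.001) - f(3.0)) / 0.001
central = central_derivative(f, 3.0, 0.001)
print(forward)  # roughly 6.001
print(central)  # roughly 6.0 (exact for quadratics, up to rounding)
```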
