Maths on a Chalk Plain

A maths and science blog by Rory Phillips.

Partial Fractions I: A Case of Mistaken Identity?

Posted 6th February 2025.
Previous Post ||| Next Post

Partial Fractions, more formally known (rather contrastingly) as both 'Partial Fraction Decomposition' and 'Partial Fraction Expansion', are the subject of this post. The contradiction between the two names aside, they describe the process of splitting an algebraic fraction with multiple factors in the denominator into a sum of other fractions, whose denominators contain powers of those same factors. Discovered simultaneously by Johann Bernoulli and Gottfried Leibniz in 1702, it's a widespread technique that's especially useful for integration.

The addition of fractions is a process learned from an early age, and we are interested in a particular characteristic of it here, namely that the sum's denominator contains the product of the addends' denominators. This can be illustrated with a simple non-algebraic example, such as \[\frac{1}{2} + \frac{1}{3} = \frac{5}{2\times 3} \mathrm{.}\] This process applies in exactly the same way to algebraic fractions, in that \[\frac{1}{x} + \frac{1}{x+1} = \frac{2x+1}{x \times (x+1)} \mathrm{.}\] Partial fraction decomposition can be thought of as this process in reverse.
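Both additions can be checked mechanically with exact rational arithmetic; here's a small sketch using Python's standard-library `fractions` module (the sample points for the algebraic case are arbitrary):

```python
from fractions import Fraction

# Numeric case: 1/2 + 1/3 = 5/(2*3) = 5/6.
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 2 * 3)

# Algebraic case, spot-checked at a few sample points:
# 1/x + 1/(x+1) = (2x+1)/(x*(x+1)).
for x in [Fraction(2), Fraction(7), Fraction(-3)]:
    lhs = 1 / x + 1 / (x + 1)
    rhs = (2 * x + 1) / (x * (x + 1))
    assert lhs == rhs
```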

A typical 'textbook' problem on partial fractions would look like \[\frac{1}{x(x+1)} \equiv \frac{A}{x} + \frac{B}{x+1} \textrm{,}\] where the goal is to find the constants \(A\) and \(B\). Notice the \(\equiv\) symbol. Though similar, it has a different meaning from \(=\): it denotes an identity, which means that the statement is true for all values of the variable, making partial fractions useful as an algebraic method. There are two methods of solving partial fraction problems: substitution and comparing coefficients. For brevity, I will only cover substitution here.

To find \(A\) and \(B\), we first multiply both sides of the identity by the left-hand denominator, and then substitute values of \(x\) in. This gives us simultaneous equations, which we may solve for \(A\) and \(B\). In some cases, we may eliminate one or more of the constants with our choice of \(x\), making our work easier. For the above problem, the working is as follows: \[1 \equiv A(x+1) + B(x)\] \[\textrm{let} \quad x=0\] \[1 = A(0+1) + B(0)\] \[A=1\] \[\textrm{let} \quad x=-1\] \[1 = A(-1+1) + B(-1)\] \[B=-1 \textrm{.}\] To verify this, you can substitute these values of \(A\) and \(B\) back into the original identity.
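The substitution working above can also be verified numerically; a minimal Python check using the standard-library `fractions` module (the sample points are arbitrary, chosen to avoid the poles at \(0\) and \(-1\)):

```python
from fractions import Fraction

# Values found by substitution: x = 0 kills the B term, x = -1 kills A.
A = Fraction(1)
B = Fraction(-1)

# Check 1/(x*(x+1)) == A/x + B/(x+1) at several points.
for x in [Fraction(2), Fraction(5), Fraction(-3)]:
    lhs = 1 / (x * (x + 1))
    rhs = A / x + B / (x + 1)
    assert lhs == rhs
```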

Beyond this, there are a couple of additional rules, the first of which is that when the denominator contains a repeated factor (one of the factors is to a power greater than one), the decomposition contains all the powers of that factor up to and including its original value. For example, \[\frac{1}{x(x+1)^2} \equiv \frac{A}{x} + \frac{B}{x+1} + \frac{C}{(x+1)^2} \textrm{.}\] These can be solved in the same way as before. The second rule is that the order of the numerator must be less than the order of the denominator. Since the numerator and denominator are polynomials, by order we mean the highest power of the variable which occurs. For example, \(x^7-1\) has order 7, and \((x+3)^4\) has order 4. Where the second rule isn't obeyed, the expression can still be decomposed, but we must first use polynomial division.
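Working the repeated-factor example by the same substitution method (these values aren't derived in the text: \(x=0\) gives \(A=1\), \(x=-1\) gives \(C=-1\), and \(x=1\) then gives \(B=-1\)), we can spot-check the result in Python:

```python
from fractions import Fraction

# Constants for 1/(x*(x+1)^2) == A/x + B/(x+1) + C/(x+1)^2,
# found by substituting x = 0, x = -1, then x = 1.
A, B, C = Fraction(1), Fraction(-1), Fraction(-1)

for x in [Fraction(2), Fraction(3), Fraction(-5)]:
    lhs = 1 / (x * (x + 1) ** 2)
    rhs = A / x + B / (x + 1) + C / (x + 1) ** 2
    assert lhs == rhs
```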

These rules, especially the one about repeated factors, seem a bit arbitrary, so let's try breaking them! We'll take an expression with a repeated factor, and decompose it to \[\frac{1}{x(x+1)^2} \equiv \frac{A}{x} + \frac{B}{(x+1)^2} \textrm{.}\] Multiplying this out we get \(1 \equiv A(x+1)^2 + B(x)\). Using the method above, and the values \(x=0\) and \(x=-1\), we once again get \(A=1\) and \(B=-1\). We have been forewarned that this shouldn't work, so let's try some other values of \(x\) as well. Using \(x=-2\) and \(x=1\), we get a pair of simultaneous equations \[A-2B=1 \textrm{,}\] \[4A+B=1 \textrm{.}\] Solving for \(A\) by elimination (doubling the second equation, then adding the first) we get \[8A + 2B = 2\] \[9A = 3\] \[A = \frac{1}{3} \textrm{.}\] There's no need to take this any further: we've already obtained a value of \(A\) which differs from our earlier one. What could've gone wrong?
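The clash can be reproduced programmatically. The sketch below (the `solve_pair` helper is my own, not from the post) solves the \(2 \times 2\) linear system produced by any two substitution choices via Cramer's rule, and shows that the two pairs of choices disagree about \(A\):

```python
from fractions import Fraction

def solve_pair(x1, x2):
    """Solve 1 = A*(x+1)**2 + B*x at the two points x1, x2 (Cramer's rule)."""
    a1, b1 = (x1 + 1) ** 2, x1
    a2, b2 = (x2 + 1) ** 2, x2
    det = a1 * b2 - a2 * b1
    A = (b2 - b1) / det
    B = (a1 - a2) / det
    return A, B

A1, _ = solve_pair(Fraction(0), Fraction(-1))  # the post's first pair
A2, _ = solve_pair(Fraction(-2), Fraction(1))  # the post's second pair
assert A1 == 1 and A2 == Fraction(1, 3)  # different "constants"!
```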

To go over what has happened, we used different values of \(x\), and got different values of \(A\). This seems to me quite strong evidence that \(A\) and \(B\) are in fact functions of \(x\), and therefore better expressed as \(A(x)\) and \(B(x)\). From this perspective, our attempts to solve for \(A\) and \(B\) earlier are complete nonsense. We actually compared 4 different values of the function \(A(x)\), acting like they were all equal! This leads to the conclusion that the earlier (rule-breaking) statement that \[ \frac{1}{x(x+1)^2} \equiv \frac{A}{x} + \frac{B}{(x+1)^2} \] is correct if and only if \(A\) and \(B\) are functions of \(x\). If we insist that they must be constants, then the identity is quite simply incorrect.
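To make the function-valued view concrete: choosing \(B(x)=0\) in \(1 \equiv A(x)(x+1)^2 + B(x)x\) forces \(A(x) = 1/(x+1)^2\), and that pair really does satisfy the rule-breaking identity at every point (a Python spot check; the choice of \(B\) is mine):

```python
from fractions import Fraction

# One function-valued solution: B(x) = 0 forces A(x) = 1/(x+1)^2.
def A(x):
    return 1 / (x + 1) ** 2

def B(x):
    return Fraction(0)

# Check 1/(x*(x+1)^2) == A(x)/x + B(x)/(x+1)^2 at sample points.
for x in [Fraction(2), Fraction(5), Fraction(-3)]:
    lhs = 1 / (x * (x + 1) ** 2)
    rhs = A(x) / x + B(x) / (x + 1) ** 2
    assert lhs == rhs
```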

This realisation is hugely fruitful, as we now know what we need to show in order to prove that partial fraction decompositions always exist. Constants are simply a special case of functions. Therefore, assuming that a fraction will always decompose where the numerators are permitted to be functions, we need to prove that constant values of these functions always exist. Or, formally, constant \(A\)s always exist, where \[ \frac{ \sum_{r=1}^m b_rx^{r-1}}{\prod_{r=1}^l(x+a_r)^{n_r}} \equiv \sum_{r=1}^l \sum_{p=1}^{n_r} \frac{A_{rp}}{(x+a_r)^p} \] \[ n_r,m,l \in \mathbb{N} \quad a_r,b_r \in \mathbb{R} \quad m = \sum_{r=1}^ln_r \textrm{.} \] Over the next few posts I will present my proof for this statement.
P.S. Don't worry if you don't know all those symbols yet.
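For any concrete case, the decomposition in the statement above can be computed with a computer-algebra system; SymPy's `apart` performs exactly this expansion (a sketch, and SymPy is my choice of tool here, not something the post relies on):

```python
import sympy as sp

x = sp.symbols('x')
expr = 1 / (x * (x + 1) ** 2)

# apart() returns the partial fraction decomposition of expr in x.
decomposed = sp.apart(expr, x)

# The decomposition agrees with the original expression identically.
assert sp.simplify(decomposed - expr) == 0
```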
