This question was recently asked on StackOverflow:
I know this might seem like an absolutely silly question to ask, yet I am too curious not to ask…
Why did “i” and “j” become THE variables to use as counters in most control structures?
The question has generated many answers, from scholarly to spurious — but the thing that has struck me is that no one has attempted to cite their sources or do any research. Why is this, when we live in a time when primary sources are more widely available than ever?
Let’s start with the claim that FORTRAN was the original source of their use in programming languages: while perhaps not the ultimate origin, it may well be the reason they became widespread in the programming community.
The original manual for Fortran^{1} for the IBM 704 is readily available online. The first thing I notice is the glorious cover:
And sure enough, we can find the definition for the integral variables:
Unfortunately, the trail stops here. I can’t find any explanation by Backus (or anyone else) of why they chose IJKLMN as the integer variables. However, because integer variables in Fortran “are somewhat restricted in their use and serve primarily as subscripts or exponents”,^{2} I am forced to conclude that they were chosen in imitation of the corresponding usage in mathematics. I don’t think we’ll ever know exactly who introduced them to Fortran itself, or when.
What we can do, however, is have a look at when they arose in mathematics. The usual place that i, j, etc. arise is in ‘sigma notation’, using the summation operator Σ. For example, if we write:
$$\sum_{i=1}^{100} i$$

we mean $i\,(=1) + i\,(=2) + i\,(=3)$, and so on until $i=100$, and we can calculate the answer as $1+2+3+4+\cdots+100=5050$. So where did this notation itself come from?
The standard work on the history of mathematical notations is A History of Mathematical Notations by Florian Cajori.^{3} He states that Σ was first used by Euler, in his Institutiones calculi differentialis (1755). We can see the part in question here:
This reads (translation by Ian Bruce, from 17centurymaths.com):
26: Just as we have been accustomed to specify the difference by the sign Δ, thus we will indicate the sum by the sign Σ; evidently if the difference of the function y were z, there will be z = Δy; from which, if y may be given, the difference z is found we have shown before. But if moreover the difference z shall be given and the sum of this y must be found, y = Σz is made and evidently from the equation z = Δy on regressing this equation will have the form y = Σz, where some constant quantity can be added on account of the reasons given above; […]
Evidently this is not the Σ we are looking for, as Euler uses it only in opposition to Δ (for finite differencing). In fact, Cajori notes that Euler’s Σ “received little attention”, and it seems that only Lagrange adopted it. Here is an excerpt from his Œuvres (printed MDCCCLXIX):
Again, we can see Σ is only used in opposition to Δ. Cajori next states that Σ to mean “sum” was used by Fourier, in his Théorie Analytique de la chaleur (1822), and here we find what we’re looking for:
The sign $\Sigma$ affects the number $i$ and indicates that the sum must be taken from $i=1$ to $i=\frac{1}{0}$. One can also include the first term $1$ under the sign $\Sigma$, and we have:
$$2\pi\,\varphi(x,t)=\int d\alpha\, f\alpha \sum_{-\frac{1}{0}}^{+\frac{1}{0}} \cos i(\alpha-x)\, e^{-i^2kt}$$

$i$ must then take all its integral values from $-\frac{1}{0}$ up to $+\frac{1}{0}$; that is what one indicates by writing the limits $-\frac{1}{0}$ and $+\frac{1}{0}$ next to the sign $\Sigma$, one of the values of $i$ being $0$. This is the most concise expression of the solution.^{4}
Since Fourier explains Σ several times in the book, and not just once, we can assume that the notation is either new or unfamiliar to most readers.^{5} In any case, it doesn’t really matter who invented it, because while we have found our Σ, Fourier doesn’t explain why he uses $i$. In fact, since he uses it to index sequences in other places, it appears it must be an already-existing usage.^{6}
A quick glance at the text by Euler above shows that he uses indexing very rarely (despite the subject of the text being a prime candidate!), and when he does, he uses $m$.
And this is as far as I got. Time to publish.
Footnotes

It isn’t written FORTRAN here. I’m not sure of the nuances of its capitalization. ↩

J.W. Backus, R.J. Beeber, S. Best, R. Goldberg, L.M. Haibt, H.L. Herrick, R.A. Nelson, D. Sayre, P.B. Sheridan, H.J. Stern, I. Ziller, R.A. Hughes, and R. Nutt, “The FORTRAN automatic coding system”, pages 188–198 in Proceedings of the Western Joint Computer Conference, Los Angeles, California, February 1957. ↩

Unfortunately, only the first volume appears to be readily available online. You can see some of the second volume on Google Books. ↩

Note that Fourier has no qualms about writing $-\frac{1}{0}$ and $+\frac{1}{0}$ to represent infinities! ↩

Knuth also states that the notation arrived with Fourier, so I guess I’m not in bad company. ↩

While $i$ is often used as (one of) the indices for a matrix, true matrices weren’t developed until after Fourier’s book was published, so we must look elsewhere. ↩