How Did IQ Measurements Evolve over Time?

In 1904, Alfred Binet was the director of the psychology laboratory at the Sorbonne in Paris. The Minister of Public Education commissioned Binet to develop tests to identify less capable students who should be provided some form of special education. To this end, Binet set out to develop a series of tests tied to everyday cognitive processes such as counting coins, ordering numbers, comprehending readings, and identifying patterns. His intent was to construct tests that measured innate intelligence and were relatively knowledge-free. Between 1904 and his death in 1911, Binet designed a sequence of tests that he normed against the average performance of students at each age up to 16 years. He wrote:

It is a specially interesting feature of these tests that they permit us, when necessary, to free a beautiful native intelligence from the trammels of the school.

Alfred Binet (1857–1911)

Each student worked through the battery of tests until reaching the first test at which he was unsuccessful. Binet called the age assigned to that test the student's mental age. By subtracting the student's mental age from his chronological age, Binet obtained a single number that became his measure of the student's intelligence. In 1912, the German psychologist William Stern modified Binet's measure by dividing the mental age by the chronological age and multiplying by 100 to obtain a whole number. With this, the concept of IQ (intelligence quotient) as a measure of intelligence was born.

IQ = Mental age ÷ Chronological age × 100
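
To make the quotient concrete, here is a minimal Python sketch; the function name and the sample ages are purely illustrative, not part of Binet's or Stern's actual procedure:

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# A 10-year-old who performs like an average 12-year-old:
print(ratio_iq(mental_age=12, chronological_age=10))   # 120.0
# A 10-year-old who performs like an average 8-year-old:
print(ratio_iq(mental_age=8, chronological_age=10))    # 80.0

A child whose mental age exceeds his chronological age therefore scores above 100, and one whose mental age lags behind scores below 100.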

From the early 1930s onward, IQ tests were periodically restandardized: researchers administered the tests to a large number of people and set that group's average performance at 100. With each restandardization, however, it became clear that people were scoring substantially higher than 100 on the same tests on which earlier groups had been standardized. This upward drift in IQ of about one standard deviation (15 points) every two generations is now known as the Flynn effect, in recognition of James R. Flynn's discussion of its potential causes and implications.
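
As an illustration of what restandardization involves, the following Python sketch (an illustrative simplification, not any test publisher's actual procedure) rescales a set of hypothetical raw scores so that the norming sample's mean becomes 100 and its standard deviation becomes 15:

import statistics

def restandardize(raw_scores: list[float]) -> list[float]:
    """Rescale raw scores so the sample mean is 100 and the standard deviation is 15."""
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [100 + 15 * (score - mean) / sd for score in raw_scores]

# Hypothetical raw scores from a norming sample:
sample = [38, 42, 45, 47, 50, 52, 55, 58, 63]
print([round(iq) for iq in restandardize(sample)])

If a later cohort answers more items correctly on the same test, its scores look inflated against the old norms until the test is renormed, which is exactly the drift the Flynn effect describes.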

In his TED Talk, presented on September 26, 2013, Flynn hypothesized that new technologies, including radio, television, and computers, demand more abstract thought than the environment of the early 20th century did. He noted that in 1900 only 3% of jobs were cognitively demanding, in occupations such as doctor, lawyer, teacher, and accountant, compared with the 35% of jobs today that require higher-order thinking skills. Flynn asserted that the high-tech environment stimulates a latent capacity for abstract thought that was not as strongly demanded in previous generations. In his seminal publication, Principles of Psychology, Herbert Spencer defined the intelligence of an organism as "an adjustment of inner to outer relations," meaning the organism's ability to adapt to its environment.

The rapid increase in IQ over two generations is too fast to be attributed to genetic mutation and suggests that intelligence can be increased by appropriate environmental influences. In fact, research in the newly emerging field of epigenetics is revealing that environmental influences can "switch on" and "switch off" certain genes, while research in neuroplasticity has shown how the brain, after suffering injury, can "rewire" its synaptic connections. To further complicate the issue, recent studies have revealed that while IQ scores increased between 1930 and 2000 (a positive Flynn effect), they have declined in subsequent decades in some countries (a negative Flynn effect), a decline that some researchers attribute to environmental factors such as declining educational standards and passive technologies.
