In 1904, Alfred Binet was the director of the psychology laboratory at the Sorbonne in Paris. The Minister of Public Education commissioned Binet to develop tests to identify less capable students who should be provided some form of special education. To this end, Binet set out to develop a series of tests based on everyday cognitive tasks such as counting coins, ordering numbers, comprehending written passages, and identifying patterns. His original intent was to assess a student’s readiness for instruction. Eventually, however, he began to construct relatively knowledge-free tests intended to measure innate intelligence. Between 1904 and his death in 1911, Binet designed a sequence of tests that he normed against the average performance of students at each age up to 16 years. He wrote:
It is a specially interesting feature of these tests that they permit us, when necessary, to free a beautiful native intelligence from the trammels of the school.
Each student worked through the battery of tests until reaching the first test at which he was unsuccessful. Binet called the age assigned to this test the student’s mental age. By subtracting the student’s mental age from his chronological age, Binet obtained a single number that became his measure of the student’s intelligence. In 1912, the German psychologist William Stern modified Binet’s measure by dividing the mental age by the chronological age and multiplying by 100 to obtain a convenient whole number. With this, the concept of IQ (intelligence quotient) as a measure of intelligence was born.
IQ = (mental age ÷ chronological age) × 100
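To make Stern’s quotient concrete, here is a minimal sketch in Python; the function name and the sample ages are hypothetical, chosen only for illustration.

```python
def iq(mental_age: float, chronological_age: float) -> float:
    """Stern's intelligence quotient: the ratio of mental age to
    chronological age, scaled by 100."""
    return mental_age / chronological_age * 100

# A ten-year-old performing at the level of a typical twelve-year-old:
print(iq(12, 10))  # 120.0
# A ten-year-old performing at the level of a typical eight-year-old:
print(iq(8, 10))   # 80.0
```

By construction, a student performing exactly at his or her age level scores 100.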
While Binet’s original intent was to measure a student’s readiness for instruction, expressed as his or her “mental age,” he eventually saw IQ not as a measure of a student’s level of mental maturity, but rather as a measure of a student’s innate intelligence relative to other students of the same age. In 1904, the British psychologist Charles Spearman gave 23 boys at a preparatory school near Oxford a variety of achievement tests in classics, French, English, mathematics, pitch discrimination, and music. On analyzing the test results, he asked, “What pervasive cognitive faculty accounts for the fact that a student who does well on any of these tests, usually, but not always, does well on the others?” Spearman hypothesized that the cognitive abilities brought to bear on each test consisted of a general ability, common to all the tests, and a specific ability unique to that test. He called this general factor of cognitive ability the g factor. Spearman’s mathematical procedure for establishing the g factor spawned the technique now known as factor analysis.
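Spearman’s one-factor hypothesis has a simple algebraic signature: if every test score reflects a shared g plus test-specific noise, the off-diagonal correlations should be well approximated by the outer product of a single vector of g loadings. The sketch below illustrates this with an invented correlation matrix for six tests (the values are not Spearman’s data), extracting the loadings via the leading eigenvector as a modern stand-in for his original calculations.

```python
import numpy as np

# Invented correlation matrix for six tests (classics, French, English,
# mathematics, pitch discrimination, music): illustrative values only.
R = np.array([
    [1.00, 0.78, 0.70, 0.66, 0.63, 0.51],
    [0.78, 1.00, 0.64, 0.54, 0.51, 0.40],
    [0.70, 0.64, 1.00, 0.45, 0.50, 0.38],
    [0.66, 0.54, 0.45, 1.00, 0.38, 0.42],
    [0.63, 0.51, 0.50, 0.38, 1.00, 0.30],
    [0.51, 0.40, 0.38, 0.42, 0.30, 1.00],
])

# One-factor model: off the diagonal, R is approximated by g g^T, where g
# holds each test's loading on the general factor. The leading eigenvector
# of R, scaled by the square root of its eigenvalue, gives those loadings.
eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues sorted ascending
g = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # take the largest
g = g * np.sign(g.sum())                   # fix the arbitrary sign

print(np.round(g, 2))               # loading of each test on g
print(np.round(np.outer(g, g), 2))  # reproduced off-diagonal correlations
```

If a single common factor fits, the printed outer product tracks the off-diagonal entries of R closely, and each entry of g estimates how saturated that test is with the general factor.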
In the revised (1996) edition of The Mismeasure of Man, Stephen Jay Gould argued strongly against the concept of a g factor:
The [hereditarian] argument [for g] begins with one of the fallacies–reification, or our tendency to convert abstract concepts into entities … we therefore give the word “intelligence” to this wondrously complex and multifaceted set of human capabilities. This shorthand symbol is then reified and intelligence achieves its dubious status as a unitary thing.
Gould argued that g is a mathematical fiction: factor analysis rests on factoring a matrix of correlations, and that factorization is not unique. In some factorizations of the same matrix, g disappears, its variance redistributed across several group factors, as the sketch below shows. In essence, Gould was arguing that g is an artifact of one particular factorization rather than a real, unitary entity.
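Gould’s non-uniqueness point can be demonstrated in a few lines: if a loading matrix L reproduces the correlations through L Lᵀ, then so does L T for any orthogonal rotation T, since (L T)(L T)ᵀ = L Lᵀ. In the hypothetical two-factor loadings below (invented values), the first column looks like a strong general factor, yet a 45° rotation that leaves the reproduced correlations untouched disperses it into two group factors.

```python
import numpy as np

# Invented two-factor loadings in which the first column resembles a
# strong general factor (illustrative values only).
L = np.array([
    [0.8,  0.2],
    [0.7,  0.3],
    [0.6, -0.4],
    [0.5, -0.5],
])

# Any orthogonal rotation T leaves the reproduced correlations unchanged:
# (L T)(L T)^T = L T T^T L^T = L L^T.
theta = np.pi / 4
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ T

print(np.allclose(L @ L.T, L_rot @ L_rot.T))  # True: identical correlations
print(np.round(L_rot, 2))  # the dominant "general" column is now dispersed
```

Both loading matrices explain the data equally well; whether a single dominant factor appears depends on which rotation one chooses, which is the crux of Gould’s objection.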
At the root of the controversy were two dueling ideologies: the hereditarian school, arguing that intelligence is innate and has a hereditary component, and the nurture school, arguing that all humans begin life with equal potential and develop cognitive skills from their environment. The battle between these two schools of thought became quite vitriolic, with some psychologists arguing from emotion rather than scientific evidence. More information on this issue can be accessed at: What is intelligence? Is it measurable? – Intelligence and IQ