Were IQ tests created by the Nazis?

No, the Nazis did not develop IQ tests; they actually emerged from early efforts to identify students requiring special instruction. In 1904, Alfred Binet was the director of the psychology laboratory at the Sorbonne in Paris. The Minister of Public Education commissioned Binet to develop tests to identify less capable students who should be provided some form of special education. To this end, Binet set out to develop a series of tests built around everyday cognitive tasks such as counting coins, ordering numbers, comprehending readings, and identifying patterns. His intent was to construct tests that measured innate intelligence and were relatively knowledge-free. Between 1904 and his death in 1911, Binet designed a sequence of tests that he normed against the average performance of students at each age up to 16 years. He wrote:

It is a specially interesting feature of these tests that they permit us, when necessary, to free a beautiful native intelligence from the trammels of the school.

Each student worked through the battery of tests until reaching the first test at which he was unsuccessful. Binet called the age assigned to that test the student’s mental age. By subtracting the student’s chronological age from his mental age, Binet obtained a single number that became a measure of the student’s intelligence. In 1912, the German psychologist William Stern modified Binet’s measure by dividing the mental age by the chronological age and multiplying by 100 to obtain a whole number. With this, the concept of the IQ (intelligence quotient) as a measure of intelligence was born.

IQ = (Mental age ÷ Chronological age) × 100
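Stern’s ratio can be sketched as a one-line computation. The ages below are invented examples, not figures from Binet’s or Stern’s records:

```python
# Stern's ratio IQ: mental age divided by chronological age, times 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of an average 12-year-old
# scores above 100; one performing at an 8-year-old level scores below it.
print(ratio_iq(12, 10))  # → 120.0
print(ratio_iq(8, 10))   # → 80.0
```

A child exactly at par with his age group scores 100, which is why 100 became the conventional midpoint of the scale.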

In the same year that Binet set out to develop his tests, Charles Spearman attempted to legitimize psychology as a hard science by mathematizing measures of intelligence. In 1904, he tested 23 boys in a preparatory school near Oxford on a variety of achievement tests in classics, French, English, mathematics, pitch discrimination, and music. On analyzing the results, he asked, “What pervasive cognitive faculty accounts for the fact that a student who does well on any of these tests usually, but not always, does well on the others?” Spearman hypothesized that the cognitive abilities brought to bear on each test consisted of a general ability, common to all the tests, and a specific ability unique to that test. He called this general factor of cognitive ability the g factor. Spearman’s procedure for establishing the g factor employed a mathematical technique now known as factor analysis. He was able to show that the score on each test could be computed as g plus some small constant unique to that test. Since all the tests were highly correlated, a student’s performance on one test could be used to predict his performance on the others. Furthermore, general intelligence, g, or its proxy IQ, was also a good predictor of how well a student would perform on any cognitively complex task.
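The idea behind Spearman’s analysis can be illustrated with a minimal one-factor simulation. The data below are synthetic, generated purely for illustration; they are not Spearman’s 1904 measurements. Each of six test scores is modeled as a shared ability plus test-specific noise, and the leading eigenvector of the correlation matrix then recovers a single dominant factor, the role Spearman assigned to g:

```python
import numpy as np

# Synthetic one-factor model: each of six test scores is a shared
# ability g plus independent test-specific noise.
rng = np.random.default_rng(0)
n_students, n_tests = 200, 6
g = rng.normal(size=n_students)  # the common "general ability"
scores = np.column_stack(
    [g + 0.5 * rng.normal(size=n_students) for _ in range(n_tests)]
)

# All pairwise test correlations come out strongly positive, and a
# single factor accounts for most of the variance across the tests.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues, _ = np.linalg.eigh(corr)        # ascending order
variance_share = eigenvalues[-1] / n_tests   # share explained by one factor
print(f"one factor explains {variance_share:.0%} of the variance")
```

This is the modern principal-components view of the computation; Spearman’s own 1904 procedure worked from the pattern of correlations directly, but the conclusion is the same: one common factor dominates.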

In the decades that followed, IQ became embedded in the public perception as a proxy for intelligence. In 1955, American psychologist David Wechsler published a new intelligence test that became known as the Wechsler Adult Intelligence Scale (WAIS). He defined intelligence as “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment.” To measure this faculty, he created two sub-tests: one measuring “verbal intelligence” and the other, “non-verbal (performance) intelligence.” Assuming that intelligence is normally distributed, Wechsler mapped his test scale onto a normal distribution with mean 100 and standard deviation 15. Today, IQ tests are used as a rough approximation of a person’s suitability for certain kinds of employment.
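Wechsler’s mapping can be sketched as a deviation score: a raw result is converted to a z-score against the norming sample, then placed on the mean-100, SD-15 scale. The raw-score statistics below (mean 50, SD 15) are hypothetical; a real test derives them from a large standardization sample:

```python
from statistics import NormalDist

# Deviation IQ: place a raw score on a normal scale with
# mean 100 and standard deviation 15.
def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

iq = deviation_iq(65, 50, 15)              # one SD above the norming mean
share_below = NormalDist(100, 15).cdf(iq)  # fraction of population below it
print(iq, round(share_below, 2))           # → 115.0 0.84
```

One consequence of the normal-distribution assumption is that IQ scores translate directly into percentiles: a score of 115 sits one standard deviation above the mean, ahead of roughly 84% of the population.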
