Most of you are probably aware of the GRE, or Graduate Record Examination. Applicants to graduate programs are required to report scores from this standardized test. The GRE is evaluated for acceptance alongside the resume or curriculum vitae illustrating depth and breadth of experience, GPA, letters of recommendation, and an essay. Although the weight given to an applicant’s GRE varies among institutions, nearly all schools require it and set minimum scores for application or acceptance. Typically, top institutions and programs set a high minimum GRE score to reduce the number of applicants and to ensure a highly accomplished applicant pool. But to what extent does the GRE predict success in graduate school? Does it accurately measure an applicant’s ability to synthesize and apply knowledge, to acquire and assimilate new information, or their familiarity with mathematics, vocabulary, biology, history, chemistry, etc.?
Critics of the GRE, myself included, feel that the GRE evaluates none of these things but simply provides a metric for how well a person has mastered GRE test-taking procedures. ETS, the nonprofit that administers the test, states that “the tests are intended to measure a portion of the individual characteristics that are important for graduate study: reasoning skills, critical thinking, and the ability to communicate effectively in writing in the General Test, and discipline-specific content knowledge through the Subject Test.” However, the Princeton Review guide states, “ETS has long claimed that one cannot be coached to do better on its tests. If the GRE were indeed a test of intelligence, then that would be true. But the GRE is NOT [emphasis original] a measure of intelligence; it’s a test of how well you handle standardized tests.” Indeed, training for the test, whether in a GRE course or with a book, involves learning test-taking strategies rather than acquiring or applying specific knowledge, i.e., “cracking the system.” Before the days of computerized exams, the test included more math questions than the average mathematically literate undergraduate could solve in the designated time. To prep for the math section, I, like others, learned not geometry or algebra, or even logic, but rather tricks and shortcuts to move quickly through the questions. Of course, these tricks provide little help when math is encountered in academia or the real world.
Who writes and evaluates the questions on the GRE? The Princeton Review states, “You might be surprised to learn that the GRE isn’t written by distinguished professors, renowned scholars, or graduate school admissions officers. For the most part, it’s written by ordinary ETS employees, sometimes with freelance help from local graduate students.” Unfortunately, I was unable to find information about the question writers online, either from ETS or other sources, so there is little more I can say. However, the lack of transparency and accessible information should give all of us pause.
I am from the old guard: I took my GRE with a No. 2 pencil, spending hours filling in little circles. Modern tests are computerized and use a computer-adaptive methodology. Simply put, the difficulty of the questions is adjusted automatically as the test taker answers correctly or incorrectly. A complicated formula, based on the difficulty of the questions presented and how many were answered correctly, determines the “real GRE score.” There is substantial criticism of the computer-adaptive methodology. First, if a test taker suddenly encounters an easy question mid-exam, they may deduce they have not been performing well, affecting their performance through the rest of the exam; test takers may likewise be discouraged if relatively difficult questions appear early. Second, questions are weighted unequally: earlier questions receive more weight and determine which later questions are presented, biasing the test against takers who become more comfortable as it continues. ETS is aware of these issues. In 2006, they announced plans to radically redesign the test’s structure but later announced that “Plans for launching an entirely new test all at once were dropped, and ETS decided to introduce new question types and improvements gradually over time.”
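The adaptive mechanism can be sketched in a few lines. To be clear, this is a hypothetical simplification for illustration, not ETS’s proprietary algorithm: the test starts at a middle difficulty, steps up after a correct answer, steps down after an incorrect one, and the running record of (difficulty, correctness) pairs is what the final scoring formula would consume.

```python
def run_adaptive_test(answers_correct, min_level=1, max_level=5, start_level=3):
    """Simulate a computer-adaptive test (illustrative rule, not ETS's formula).

    answers_correct: list of booleans, one per question, in order.
    The difficulty of question i+1 depends on the response to question i.
    Returns the list of (difficulty presented, answered correctly) pairs.
    """
    level = start_level
    history = []
    for correct in answers_correct:
        history.append((level, correct))
        # Correct answer -> harder next question; incorrect -> easier.
        level = min(level + 1, max_level) if correct else max(level - 1, min_level)
    return history

# A taker who answers the first three questions correctly, then misses one,
# sees difficulty climb and then drop -- the mid-exam "easy question" signal
# critics warn about.
print(run_adaptive_test([True, True, True, False, True]))
# -> [(3, True), (4, True), (5, True), (5, False), (4, True)]
```

Note how the sequence of difficulties itself leaks information to the test taker, which is exactly the first criticism above.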
So who is ETS? It is a 501(c)(3) nonprofit created in 1947 and located in Princeton, New Jersey. From an article in Business Week…
“Mention the Educational Testing Service, and most people think of the SAT, the Scholastic Aptitude Test that millions of high schoolers sweat over each year in hopes of lofty scores to help ease their way into colleges. But if ETS President Kurt M. Landgraf has his way, Americans will encounter the testing giant’s exams throughout grade school and right through their professional careers. Landgraf, a former DuPont executive brought to the organization in 2000 to give ETS a dose of business-world smarts, has a grand vision for the cerebral Princeton (N.J.) nonprofit. Worried that the backlash against college testing means a lackluster future for the SAT and other higher-ed ETS exams, Landgraf has been trying to diversify into two growth markets: tests and curriculum development for grade schools, where the federal No Child Left Behind Act has spurred national demand for testing, and the corporate market, where Landgraf sees potential growth in testing for managerial skills. By 2008, he hopes expansion in these two areas will more than double ETS’s $900 million anticipated 2004 revenue. ‘My job is to diversify ETS so we are no longer reliant on one or two major tests,’ he says.”
Does this sound like language applied to a nonprofit? From the New York Times…
It has quietly grown into a multinational operation, complete with for-profit subsidiaries, a reserve fund of $91 million, and revenue last year of $411 million… Its lush 360-acre property is dotted with low, tasteful brick buildings, tennis courts, a swimming pool, a private hotel and an impressive home where its president lives rent free… E.T.S. has come under fire not only for its failure to address increased incidents of cheating and fraud, but also for what its critics say is its transformation into a highly competitive business operation that is as much multinational monopoly as nonprofit institution, one capable of charging hefty fees, laying off workers and using sharp elbows in competing against rivals… “E.T.S. is standing on the cusp of deciding whether it is an education institution or a commercial institution,” said Winton H. Manning, a former E.T.S. senior vice president who retired two years ago. “I’m disappointed in the direction they have taken away from education and public service.”
In response to growing criticism of its monopoly, New York state passed the Educational Testing Act, a disclosure law which required ETS to make available certain test questions and graded answer sheets to students.
For all practical purposes, ETS has grown into a for-profit institution trading on its nonprofit status to maintain a monopoly (read the New York Times and Business Week articles for more alarming revelations). For example, Duke received 8,303 graduate applications for the fall of 2009; at $190 per test, that is $1,577,570 for just one school in one year. Across all schools, ETS pulled in $880 million in 2007, with $94 million in profit after expenses. ETS also markets, through one of its for-profit subsidiaries, a test-prep book at $23.10 (on Amazon). “We prepare the tests-let us help prepare you!”
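The back-of-the-envelope figure above is easy to check. Both numbers come from the text; the sketch assumes, for simplicity, that every applicant pays the full fee exactly once:

```python
applicants = 8303   # Duke graduate applications, fall 2009
fee = 190           # GRE fee in dollars

revenue = applicants * fee
print(f"${revenue:,}")  # -> $1,577,570
```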
But what of the original question: to what extent does the GRE predict success in graduate school? A 2001 meta-analysis by Kuncel et al. demonstrated that correlations between GRE scores and multiple metrics of graduate performance were low. The correlation with graduate GPA ranged from 0.34 to 0.36; with performance as evaluated by departmental faculty, from 0.35 to 0.42; with time to degree completion, from -0.08 to 0.28; with citation count, from 0.17 to 0.24; and with degree attainment, from 0.11 to 0.20. While it is encouraging that these correlations are (in most cases) positive, and even granting that the GRE is supposed to be evaluated in the context of other materials (though it often is not), the correlations are not impressive: a correlation of 0.36, for example, means GRE scores account for only about 13% of the variance in graduate GPA. Do we need GRE scores to evaluate applicants at all? Interestingly, the same study demonstrated that undergraduate GPA performed as well as the GRE. Why, then, do we need a $190 test loaded with faults and biases when GPA is sufficient?
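One simple way to gauge how weak these correlations are is the coefficient of determination, r², the fraction of variance in an outcome that the predictor explains. A quick sketch using the ranges reported by Kuncel et al. as cited above:

```python
# Correlation ranges between GRE scores and graduate outcomes,
# from the 2001 Kuncel et al. meta-analysis cited in the text.
correlations = {
    "graduate GPA": (0.34, 0.36),
    "faculty ratings": (0.35, 0.42),
    "time to degree": (-0.08, 0.28),
    "citation count": (0.17, 0.24),
    "degree attainment": (0.11, 0.20),
}

for outcome, (lo, hi) in correlations.items():
    # r^2 = share of variance in the outcome explained by GRE scores.
    print(f"{outcome}: {lo**2:.0%} to {hi**2:.0%} of variance explained")
```

Even at the top of the best range (faculty ratings, r = 0.42), the GRE explains less than a fifth of the variance in the outcome.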
I end by saying good luck to my wife who faces the GRE this fall.