Slightly off-topic: Grades vs. SAT as college performance predictor

Discussion in 'General Distance Learning Discussions' started by Chip, Mar 4, 2001.

  1. Chip

    Chip Administrator

    Rich's post on another thread about the SATs (which *were* Scholastic Aptitude Tests, when I took them a hundred years ago, but aren't any more) raised some interesting questions.

    I find it fascinating that the reliability of the SAT as a predictor of first-year college GPA is so low. And that's probably why the University of California's president recently came out very strongly in favor of eliminating it as a consideration.

    The most plausible argument I've heard in the past is that the SAT serves as a standardization factor for interpreting GPAs from different schools.

    So let me lay out my scenario. Let's assume that you have a rural or inner-city school where the percentage of college-bound seniors is, say, 20%. Now, contrast this with, say, one of the exclusive college-prep boarding schools in New England, where the college-bound percentage is maybe 90-95%, and a decent public school in an upper middle class area with a college-bound percentage of maybe 60%.

    What happens when you have a valedictorian from each school? And, for grins, let's assume that each school has some semblance of an AP program, so that each valedictorian has a GPA above 4.0, maybe 4.15 or 4.2.

    Does anyone know whether high school GPA predicts college GPA equally well regardless of the school at which the GPA was earned?
    The admission folks at Oberlin claimed that the SAT was the great leveling factor there. They claimed to be able to look at an unknown high school from somewhere, see what the average SATs from that school are, and use that as a kind of index to determine how a 4.2 GPA from East Chickenscratch High compared with a 4.0 (or 3.8) from, say, Andover.

    Without that sort of standardization, would the admissions staff be worse off in making good decisions?

    I'm really curious about this, because, while I believe that the admissions process is inherently flawed, I'm also not sure what better means could be used to pick students... particularly when a school receives 10,000 applications for maybe 500 spaces.

    I know that in the 80s, at least one very selective school (Brown) used the SAT as a baseline cutoff: anyone with a combined SAT below, say, 1300 was essentially set aside, and then one or two people reviewed those applications briefly to make sure they hadn't overlooked someone amazing who happened to score low. Fair? Absolutely not. But faced with over 10,000 applications to read meaningfully in under two months, with a small staff, I'm not sure any other process would be better.

    Thoughts??
     
  2. Tom Head

    Tom Head New Member

    People always laugh at me when I say this, but why not use the ACT? It comes closer to predicting first-year GPAs than the SAT does (if I remember correctly), and I consider it less culturally biased (because all of the questions are "straight").


    Peace,
     
  3. Chip

    Chip Administrator

    And you have a problem with gay questions?

     
  4. brunetmj

    brunetmj New Member

    I am unsure how you could measure the success (predictability) of the SAT or GRE at a particular school if they only take people with a certain (magical) score in the first place. The only way to tell is to let everyone in and see what happens.
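
    A minimal sketch of that selection problem in Python, with made-up numbers (a true score-to-GPA correlation of 0.5 across all applicants, and a cutoff at 1200):

        # Why predictive validity is hard to measure among admits alone.
        # All numbers here are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 100_000, 0.5
        ability = rng.standard_normal(n)
        gpa = r * ability + np.sqrt(1 - r**2) * rng.standard_normal(n)
        sat = 1000 + 200 * ability          # rough SAT-like scale

        everyone = np.corrcoef(sat, gpa)[0, 1]
        admits = sat >= 1200                # the "magical" cutoff
        restricted = np.corrcoef(sat[admits], gpa[admits])[0, 1]

        print(f"correlation if everyone enrolls: {everyone:.2f}")    # ~0.50
        print(f"correlation among admits only:   {restricted:.2f}")  # much weaker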
     
  5. Bruce

    Bruce Moderator

    Waaaaayyyy back when I was in high school (I graduated in 1983) I was given the choice of taking the SAT, ACT, or both (I took both). If I remember correctly, colleges in the South were far more accepting of the ACT than other parts of the country, though that may well have changed by now.

    Bruce
     
  6. Andy Borchers

    Andy Borchers New Member

    Chip - Your question is an interesting one. I've done some statistical analysis on this topic and read up on the topic.

    The sad fact is that no standardized test can predict academic success with a high level of certainty. In the case of graduate business schools, the GMAT explains about 38% of the variation in first-year MBA grades. Using undergrad grades and the GMAT together explains about 41%.

    What about the other 59%? There must be other factors, like motivation, that make a difference.
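
    As a back-of-the-envelope check, "percent of variation explained" is an R-squared figure, so the underlying correlations are more modest than the percentages sound. A quick sketch using the figures above:

        # "Variation explained" is R-squared; the correlation is its square root.
        import math

        r2_gmat = 0.38        # GMAT alone vs. first-year MBA grades
        r2_combined = 0.41    # GMAT plus undergrad grades

        print(f"GMAT alone:       r = {math.sqrt(r2_gmat):.2f}")      # ~0.62
        print(f"GMAT + UG grades: R = {math.sqrt(r2_combined):.2f}")  # ~0.64
        print(f"left unexplained: {1 - r2_combined:.0%}")             # 59%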

    In my experience, standardized tests do an ok job at the "gross level". That is to say, someone with a 420 GMAT is wasting their time applying to Harvard. The odds that they'd be successful are pretty low. In undergrad terms, an ACT of 15 won't cut it at a selective school. But at the "fine level", the tests fail miserably. Admitting someone with a 27 ACT and denying someone with a 26, in and of itself, doesn't make sense.

    The point about standardized tests is that they aren't used to admit students - they are used to deny students entrance. If you were the admissions director at Harvard or a major medical school, you'd need some easy (and hopefully valid) way to say "no". Standardized tests are that tool.

    Undergrad grades (for grad admission) and high school grades (for undergrad admission) appear to be a somewhat better tool. They have to be considered in context, weighing both the school attended and the rigor of the courses taken.

    Thanks - Andy


     
  7. Rich Douglas

    Rich Douglas Well-Known Member

    But the test isn't "standardized." Kids from families with money take a far different test than poor kids. The SAT is highly coachable. Families that can afford the hundreds of dollars for test prep courses give their kids a far greater advantage. Instead of being a leveling force, the SAT actually widens the privilege gap. Also, the SAT serves as a pacifying tool: if you don't have good educational opportunities, ETS says, don't worry, you can make up for it with good test scores. But admissions officials are highly skeptical of kids with low grades and high test scores. And they should be. Admissions officials know what best predicts college success: high school success.

    There is a solid correlation between income and test scores. Race and test scores, and gender and test scores, are huge issues, too. The notion of deciding who goes to what school based upon the answers given to a few badly written test questions by a confused teenager on a cold Saturday morning is dumb.

    College admissions officials realize the variability in high school grades: they're subject to inflation, they vary widely from school to school, and so on. But college grades suffer from the same problems.

    The notion that you can measure "aptitude" (or "potential," or "ability," or whatever) with a badly written multiple choice test is not only misguided, it is misleading. The ETS has been misleading the public for more than 50 years. The colleges go along with it because they don't have to pay for it. There are a few (very few) situations where test scores actually help, but this is at the most selective colleges, and only when used with several other, more accurate, methods. Is it worth the millions of dollars spent on these tests (not to mention the millions more spent on preparing for them)?

    And another thing: ETS denied for years that coaching was effective. They don't anymore, because they started looking ridiculous in the face of so much evidence (including their own, suppressed, studies). So what does ETS do now? They sell... coaching materials! Never one to miss a revenue opportunity.

    Because coaching is so effective, many (most?) high schools now offer some form of coaching to prepare students for the SAT. Effective coaching focuses not on math and written skills, but on test taking itself. Not only is this cynical, it draws already thin resources away from actually teaching kids what they'll need to know. Don't teach a kid to write. Teach her to pass a test that tries to predict how well she will write (but doesn't actually require her to write!).

    When I pursued my three Regents degrees, I took 35 standardized examinations, getting credit for 30 of them. Many were in subjects about which I had no education. None. Life experience? Hardly. I took them when I was 19 and 20 years old. No, I got very good at taking these exams (almost all of which were prepared and marketed by ETS). CLEP, DANTES, GRE Subject, you name it. I took (and passed) the GRE Subject Exam in Sociology without ever having taken a sociology class, or even having opened a textbook. (I didn't study for any of the others, either.) My score was worth (at the time) 39 semester hours; it earned me my second bachelor's from Regents (a BA, Liberal Arts, concentration in Sociology). I felt like I was picking the pockets of those poor slobs who did it the hard way by going to night school the rest of their lives. And I should know. I was an education specialist in the Air Force at the time, counseling thousands of military and civilian employees about educational opportunities, traditional and nontraditional.

    Nothing would please me more than to see the UC system dump the SAT I.

    Rich Douglas, Ph.D. (Candidate)
    Centro de Estudios Universitarios
    Monterrey, NL, Mexico
     
  8. BillDayson

    BillDayson New Member

    I support the SAT. Among my reasons:

    1. It apparently is a pretty good indicator of future academic success if you do as Andy says and look at large differences in scores. The kid with a 1400 SAT will be more likely to do well in a university than a kid with an 800 SAT. But I doubt there is a significant difference between a 980 and a 1020.

    2. As Chip suggested, the SAT serves to calibrate GPAs from secondary schools that grade to different standards.

    3. The SAT is objective. Everyone knows what it requires and can prepare for it, as opposed to letter grading by a million different teachers at tens of thousands of secondary schools. The SAT doesn't care how kids behave in class, their attendance, what clothes they wear, whether they agree with the teacher's taste in music or literature, their race, gender, ethnicity or religion, their parents, their politics, their apparent effort, and so on. All the subjective stuff of questionable relevance that colors grades is removed.
    True, some students go to SAT cram classes and others don't, but eliminating the "standardized" from standardized testing would only *increase* those kinds of disparities, not eliminate them.

    4. The SAT is a godsend to a certain kind of student (like me). There are smart kids out there who don't care a whole lot about high school. They aren't motivated to put great effort into every assignment. They may read a tremendous amount, but most of it isn't assigned reading.

    Perhaps they don't like the structure of a class, don't like feeling held back by all the slower students, or study for the pure intellectual curiosity of it. But for whatever reason, they follow their own interests, not those of the teacher.

    In my case, my high school GPA was a C+/B-. A 2.6 or something. But my SATs were quite good (about 1300). I never even bothered to study for them, BTW, and just took them cold. I was accepted by every college I applied to, but almost certainly wouldn't have been if it hadn't been for that SAT score.
     
  9. Rich Douglas

    Rich Douglas Well-Known Member

    Sorry, Bill, but this is straight out of the ETS bible.

    The margin of error on the Math portion of the SAT is about 64. The margin of error on the Verbal section is even higher. Because ETS abandoned the third digit in its score reporting many years ago, the margin of error is more like 70 and 90. That means there is no statistical difference between a kid getting 1400 and one getting a 1260. But which one looks better? Admissions officers don't know this stuff, because ETS doesn't tell them.
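
    A rough sketch of the arithmetic behind that claim, taking the ~70 (Math) and ~90 (Verbal) figures at face value, and assuming independent errors add in quadrature:

        # Error of a combined score, and of a difference between two students.
        # Per-section figures are taken from the claim above, not from ETS.
        import math

        sem_math, sem_verbal = 70.0, 90.0
        sem_combined = math.hypot(sem_math, sem_verbal)   # ~114 for one student
        sem_diff = math.sqrt(2) * sem_combined            # ~161 for two students

        gap = 1400 - 1260
        print(f"SEM of a combined score:   {sem_combined:.0f}")
        print(f"SEM of a score difference: {sem_diff:.0f}")
        print(f"a {gap}-point gap is only {gap / sem_diff:.1f} standard errors")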

    I've already argued about the "calibration" ETS purportedly provides: it simply doesn't happen. Admissions officers are highly suspicious of kids with low grades and high test scores. (And they should be; kids with low high school grades tend to have low college grades, too. The two correlate much better than the SAT and college grades do.)

    The SAT is NOT objective! Scoring it is, but only because it is a multiple-choice test; anyone of reasonable ability can score it correctly. The development of the test questions, though, is anything but objective. For one, the test discriminates against women. Women tend to do worse on the SAT than men, but women tend to have higher college GPAs than men! Also, many minority groups have better grades than their collective SAT scores would predict. It is as subjective a process as high school grading. (More so, because the test isn't tied to a defined curriculum; it tests whatever ETS decides it tests.)

    If kids aren't motivated in high school, they aren't going to be motivated (in general) in college. There are always exceptions, but is this why we have an SAT (and the burdensome costs associated with it), so we can cut a few slackers a break? No, low high school grades tend to mean low college grades.

    I'm very glad you're the exception to the rule. In fact, it is just this kind of person (driven, motivated, etc.) that the SAT is NOT designed to identify. But I'm not talking about anecdotes, but rather, the ETS's own numbers. And those statistics are as objective as ETS will tolerate.

    The SAT Math test doesn't require test takers to do much math. The Verbal section doesn't require them to write. That any college would place any emphasis on a flawed, discriminatory, expensive (not to the colleges, of course), vague, unfair test not tied to any identifiable curriculum is bizarre. It takes a lot of strength to break away from the herd, as some colleges have done. Now the mighty UC system is about to do the same. I hope it's the death knell of standardized testing for "aptitude." It's long overdue.

    (On a humorous note, some scientists identified an obscure blood test that, coincidentally, correlated with freshman year grades better than the SAT! Needlestick, anyone?)


    Rich Douglas
     
  10. BillDayson

    BillDayson New Member

    I was unaware that the ETS *had* a Bible.

    What's a "margin of error"? A margin of error in measuring what, exactly?

    Are you talking about a correlation between scores and subsequent academic performance? If so, how is the latter measured? By GPA? In what classes, in what majors, at what universities?

    At Caltech, most math and physics majors score between 700 and 800 on the math section. If we accept your figure of 64 for the purposes of argument, then a student at 700 or below is going to perform significantly worse, on average, than one at 764 or above. That seems to be a valuable piece of information.

    But the margin of error is probably considerably less in real life. That's because those with lower math SATs are less likely to study math or physics at Caltech. They will go to other universities with less powerful student bodies, where they will fall higher on the curve and get better grades than they would have received at Caltech. That will distort the figures.
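
    A minimal simulation of that distortion, with made-up numbers: students sort into two tiers by score, each tier grades on its own curve, and the pooled score-to-GPA relationship flattens:

        # Within-school curving weakens the pooled SAT-to-GPA correlation.
        # All numbers here are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        n, r = 100_000, 0.5
        sat = rng.standard_normal(n)
        performance = r * sat + np.sqrt(1 - r**2) * rng.standard_normal(n)

        top_tier = sat >= np.quantile(sat, 0.8)   # top 20% attend the selective school
        gpa = np.empty(n)
        for school in (top_tier, ~top_tier):      # curve grades within each school
            m = school.sum()
            gpa[school] = performance[school].argsort().argsort() / (m - 1)

        print(f"SAT vs. raw performance: {np.corrcoef(sat, performance)[0, 1]:.2f}")
        print(f"SAT vs. curved GPA:      {np.corrcoef(sat, gpa)[0, 1]:.2f}")  # lower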

    Women perform just about the same as men on the verbal SAT, perhaps even slightly better. Where women do significantly worse than men is in the math SAT. That disparity continues on in university work, where women (on average) do worse than men and are underrepresented in highly numerate fields like the hard sciences and engineering. The reason for that is poorly understood and is the subject of a vast literature. But it is ridiculous to accuse the messenger of discrimination because it delivers a message you don't want to hear.

    If the design of the SAT can be justified, then the fact that it isn't tied to a defined curriculum might be an advantage. That's because it provides a way of comparing students that have been taught under a variety of different curricula.

    It is also rather upfront and open about what it demands. Kids know what kind of questions are on the SAT, so all kids are more or less on the same page in preparing for it. Compare that to grading that depends on pleasing particular teachers in ways that are often poorly defined.

    I think that the world is filled with individuals that went on to academic success after lackluster high school careers.

    And yes, it *is* one reason that the SAT exists. Many universities use an "eligibility index" to make admissions decisions. If a student has high grades, then he or she can get in with lower SATs. But if a student has lower grades, then higher SATs are required. The whole point of that seems to be to provide more than one variable, and to provide a possible correcting factor for low performance in one of the variables.

    In other words, the combination of grades and SATs is a better indicator of future performance than is either one individually.
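
    A toy version of such an index in Python; the weights and threshold below are invented for illustration, not any particular university's formula:

        # Toy eligibility index: strong grades can offset a weaker SAT
        # and vice versa. Weights and threshold are made up.
        def eligibility_index(gpa: float, sat: int) -> float:
            """Combine a 0.0-4.0 GPA and a 400-1600 combined SAT into one number."""
            return gpa * 800 + sat

        THRESHOLD = 2900   # hypothetical cutoff

        for gpa, sat in [(3.9, 700), (2.6, 1300), (2.2, 900)]:
            idx = eligibility_index(gpa, sat)
            verdict = "eligible" if idx >= THRESHOLD else "not eligible"
            print(f"GPA {gpa}, SAT {sat:4d} -> index {idx:.0f} ({verdict})")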

    I think that I have provided good arguments to the contrary. Namely the use of eligibility indices, and the opportunity that use of such an index gave me.
     
  11. mlomker

    mlomker New Member

    I would fall into this category. High school is a liberal arts environment with a vast number of required courses and very little freedom of choice. In college you get to choose your major and have great latitude in course selection, even toward fulfilling your distribution requirements.

    The ability to choose subjects and courses that interest you can do wonders for your performance. My high school GPA was 2.4; my undergrad GPA is currently 3.7.
     
  12. Rich Douglas

    Rich Douglas Well-Known Member

    Anecdotally, I fall into the same category. I was kicked out of three high schools, yet graduated from Regents with an A.A. and two bachelor's, all earned before I turned 21. But a few data points do not stand up to the large numbers that make up ETS's own correlations regarding the efficacy of the SAT, and they're not good.

    Rich Douglas
     
  13. Rich Douglas

    Rich Douglas Well-Known Member

  14. Neil Hynd

    Neil Hynd New Member

    Hi,

    Of possible interest to some: while doing an in-service teacher training course as a UK FE college lecturer in the late '70s, I did a statistical analysis comparing high school leaving Maths results with FE college day-release Maths results (I was one of the teachers of the Maths classes).

    There was no significant correlation - but I was not able to take the subject any further.
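
    The sort of check involved, sketched in Python with made-up marks (the originals aren't to hand):

        # Pearson correlation between two sets of exam marks, with a p-value.
        # The marks below are invented placeholders, not the original data.
        from scipy import stats

        school_maths  = [45, 62, 58, 71, 50, 66, 39, 74, 55, 60]
        college_maths = [52, 48, 70, 55, 61, 49, 58, 63, 47, 66]

        r, p = stats.pearsonr(school_maths, college_maths)
        print(f"r = {r:.2f}, p = {p:.3f}")
        print("significant at 0.05" if p < 0.05 else "no significant correlation")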

    One Maths set led into another with no more than a three-month gap... but with a different, college-like environment and motivation (e.g. employment), there was no shortage of variables.

    Cheers,

    Neil

     
  15. Kizmet

    Kizmet Moderator

  16. copper

    copper Active Member

    Blood alcohol levels are a good indicator!
     
