Tuesday, July 12, 2016



The "new" SAT was administered to juniors across the state of Idaho in April. The test has been revised substantially. The major changes are:
  • alignment with the Common Core
  • two subtests (Math and Evidence-Based Reading and Writing) instead of three (Reading, Math, and Writing)
  • new resources to support the improvement of instruction
  • new "cut" scores indicating probability of success in college (EBRW = 480, Math = 530)
  • a broadened definition of college readiness that includes both two- and four-year colleges

The College Board, which administers the SAT, has for a number of years released the actual test after each year's administration is complete. However, though it has offered an item analysis for the PSAT (Preliminary SAT) for several years, this year was the first time the College Board has provided question-level analysis for the SAT.

The item analysis will prove extremely valuable for the improvement of instruction. Here's why.

For each of the 154 questions on the SAT (96 EBRW and 58 Math), the College Board provides access to the answer given by each student who took the test; district, school, and state performance on that item; the percentages of students who chose each distractor and the correct answer; the difficulty level of the problem; and the general question category.

The item analysis is available for each high school, as well as for the district and for the state as a whole. The College Board also provides an explanation of the correct answer for each question. 

Here is the display of the information for question 9 on this year's Math subtest:

How does the distribution of student answers help us to improve instruction?

Well, 28% of District juniors answered "D". While we might speculate as to why they chose that answer, the fact is that √(55 + 9) does not equal √55 + √9. Can we improve instruction so that more students understand this principle? Sure. And the item analysis gives us the tools to improve instruction and student performance.
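For readers who want to see the misconception concretely, here is a quick numerical check (a sketch using the two values above; the exact wording of question 9 is not reproduced here) showing that the square root does not distribute over addition:

```python
import math

# Correct evaluation: take the square root of the sum.
lhs = math.sqrt(55 + 9)                 # sqrt(64) = 8.0

# The misconception: take the square root of each term, then add.
rhs = math.sqrt(55) + math.sqrt(9)      # about 7.416 + 3 = 10.416

print(lhs)   # 8.0
print(rhs)   # roughly 10.416 -- not the same number
```

A student who applies the "distribute the radical" shortcut lands on a different value entirely, which is exactly the kind of error pattern the distractor percentages in the item analysis can surface.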

College and university personnel can also use the statewide item analysis to isolate performance on each question and then use performance on a series of questions to determine placement in Math or Language Arts courses.


By comparison, the SBAC gives only top-level information that is of little use in improving instruction. Here's an example of what teachers receive as feedback from the SBAC:

It's clear that the SAT provides a much more useful examination of student responses, one that can help us improve instruction in high school, than the SBAC does. We need an assessment at the elementary and junior high level that provides the same level of feedback.

States are required under federal law (ESSA) to test all students in math and language arts in grades 3-8 and once in high school. At each decision point about the adoption of assessments, we must first consider how the data can be used to improve instruction. Then and only then will we adopt an appropriate assessment.