What is the relationship between student performance on the California High School Exit Exam (CAHSEE) and the California Standards Test?
Daniel J Collins
The California Department of Education (CDE) instituted the California High School Exit Exam (CAHSEE) “to significantly improve pupil achievement in public high schools and to ensure that pupils who graduate from public high schools can demonstrate grade level competency in reading, writing, and mathematics” (CDE, 2008). It became a requirement for graduation beginning with the class of 2006. The California Standards Test (CST) is used by the CDE to “measure how well students in California public schools are learning the knowledge and skills identified in the California content standards” (CDE, 2008). The question remains as to what relationship exists between these two tests, which seemingly assess the same knowledge. Analysis of the data demonstrates that the CAHSEE is not an accurate predictor of mastery of the state content standards as measured by the CST.
California State University, San Bernardino
5500 University Parkway
San Bernardino, California 92407
Currently, California high school students are assessed by two state measures: the California Standards Test (CST) of the STAR program and the California High School Exit Exam (CAHSEE). These assessments are similar in purpose. The CST’s purpose is to measure a student’s progress in attaining the knowledge and skills outlined in the California State Standards (CDE, 2008). The CAHSEE’s purpose is to ensure that a student who receives a diploma has demonstrated grade level competency in the subject areas of mathematics, reading, and writing (CDE, 2008).
While both of these tests have similar stated goals, they do not assess the same knowledge, or rather, do not assess that knowledge at the same level. The CST assesses a student’s progress toward mastery of the state standards for each grade level from grade two through eleven. The CAHSEE, on the other hand, assesses a student’s knowledge of Language Arts standards at the 10th grade level and Mathematics standards at the 6th and 7th grade levels, as well as standards covered in Algebra I (CDE, 2008).
If the public is to assume that these tests are appropriate for assessing the success of the school system and the students who are educated therein, it must also be assumed that there is some connection between what concepts are being taught and what concepts are being tested (Brown, 1992). Given this, it must be determined what relationship is present between performance on the CST and the CAHSEE, to assist in judging the value of the tests as evidence of success (or failure) of the California school system, as well as the use of a student’s performance on the CAHSEE as a graduation requirement.
After determining that local educational standards were not at a high school level, the California State Legislature proposed the use of a high school exit exam to improve student achievement, and Education Code 60850 approved the development of the CAHSEE in line with state academic standards. The test was first administered in 2001 but was not a graduation requirement until the 2005-2006 academic year (CDE, 2008). Performance on the CAHSEE is used as part of the calculation of a school’s Adequate Yearly Progress (AYP) score.
The California Standards Test is a component of the Standardized Testing and Reporting (STAR) Program. The STAR Program was approved in 1997 and is authorized for use until 2011. The California Department of Education states that the tests included in the STAR Program are used to “measure how well students in California public schools are learning the knowledge and skills identified in the California content standards” (2008). The CST assesses a student’s mastery of the English-language arts, mathematics, science, and history (social science) standards for grades two through eleven. Performance on the CST is used to calculate a school’s Academic Performance Index (API) score.
Consequences of High-Stakes Testing
The research related to high-stakes testing is often negative. While researchers often acknowledge some gains as a result of high-stakes testing, it is common for the same researchers to qualify those positive results by exploring other, unintended consequences. For example, in an article discussing the consequences of high-stakes testing, the researchers note that “instructional changes related to improving student performance had increased since the implementation of accountability systems” (Christenson et al., 2007). However, later in the same article, the researchers qualify this finding by noting that “it is unknown whether these changes are being implemented with effectiveness in ways that will truly affect student performance” (2007).
One of the findings of previous studies concerns the effect, or lack thereof, of high-stakes testing on graduation rates. One study found that while there was no negative correlation between high-stakes testing and graduation rates, there was evidence that high-stakes testing does not systematically raise graduation rates (Carnoy, 2005). This is important when one considers that one of the stated goals of the CAHSEE was to “significantly improve pupil achievement in public high schools” (CDE, 2008).
Another study did find a negative correlation between high-stakes testing and graduation rates. With the introduction of high-stakes testing, non-traditional exit certificates must be offered, such as the certificate of attendance a student receives upon completing all requirements for graduation except passing the CAHSEE. Gaumer-Erickson and Kleinhammer-Tramill (2007) found a negative correlation between the availability of these non-traditional exit certificates and the number of students who earn diplomas: when non-traditional exit certificates were an option, the number of students who completed traditional graduation requirements decreased.
Along with the effect on graduation rates, a potential effect on student motivation has been observed. Jones (2007) distinguishes between extrinsic and intrinsic motivation. He defines extrinsic motivation as that where the subject sees the action as a means to an end, whereas intrinsic motivation comes from an interest in or enjoyment of the task at hand. Jones cites a study by DeBard and Kubow (2002) in which students reported increasing their study time in anticipation of standardized testing. Jones identifies this as extrinsic motivation and acknowledges that, on the surface, this seems to be a positive result of high-stakes testing. However, Jones cites further studies (Deci, 1971; Lepper, Greene, & Nisbett, 1973) showing that extrinsic rewards decrease intrinsic motivation in the long term. It seems that an unintended consequence of students being motivated extrinsically by testing (either by rewards for good performance or fear of the consequences of poor performance) is a long-term reduction in their interest in or enjoyment of learning, and therefore a decrease in intrinsic motivation (Jones, 2007). If all a student does with the knowledge gained from school is take a test, it is not surprising that their interest or enjoyment would wane.
An additional study found that high-stakes testing had an effect on the scope and sequence of content taught in the classroom, as well as on the teaching methodologies used. Brown (1992) found that teachers working in schools with high-stakes testing were reluctant to use non-traditional teaching methodologies for fear that they would not be as effective as others in preparing students for testing. In the same study, teachers reported narrowing their curriculum to focus on the content covered on the tests. Brown comments on these findings by noting, “When teachers abandon innovative instructional strategies or are reluctant to begin using them, the children are most affected through alterations in the learning environment” (1992).
For the purposes of determining the nature of the relationship between performance on the CAHSEE and performance on the CST, we proposed the following research question:
Is the California High School Exit Exam (CAHSEE) a predictor of mastery of state content standards as measured by the California Standards Test (CST)?
For the purposes of this study, we will use the definition of high-stakes testing provided by the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999), which states,
When significant educational paths or choices of an individual are directly affected by test performance, such as whether a student is promoted or retained at grade level, graduated, or admitted or placed into a desired program, the test is said to have high stakes. (p. 139)
Considering the negative correlation between the option of non-traditional exit certificates and the number of students completing traditional graduation requirements observed by Gaumer-Erickson and Kleinhammer-Tramill (2007), it is important that educators and administrators further study the CAHSEE and its relationship to other forms of standardized testing. An examination of these relationships can be a tool for determining whether the CAHSEE is a valid measure of student performance and whether it should remain a hurdle to be overcome for graduation.
Other issues that give this study significance are the effects of high-stakes testing on both students with disabilities and English language learners (ELLs). Gaumer-Erickson and Kleinhammer-Tramill (2007) found that more than 75% of the non-traditional exit certificates issued during the scope of their research went to students with disabilities. In a study of high-stakes testing in Massachusetts and North Carolina, Horn (2003) found that “non-White and non-Asian students are among the groups most affected by …high-stakes testing.” Furthermore, Horn speculated that as many as 84% of ELL students in the class of 2003 may not receive a high school diploma (2003).
Our subjects were all high school sophomores in the state of California from 2005 to 2008. Though valuable information resides in the minute differences among county, district, and school performances, our study looks only to establish a relationship, or the lack thereof, between CAHSEE results and content mastery, both carefully measured and reported annually by the state. We chose to strengthen the validity of our results by using the whole state, making the population and our sample one and the same.
As mentioned previously, we wanted accurate numbers representing as large a sample as possible, so we obtained test results for the whole state from the California Department of Education (CDE) website. First, we looked only at 10th grade students taking the CAHSEE for the first time; including CAHSEE attempts from students in other grades would introduce an unknown number of additional variables that would weaken the validity of our results. Specifically, we recorded the pass/fail rate for both the ELA and Math sections; a score of 350 out of a possible 450 is considered passing. For the purposes of our study, we made no distinction between degrees of passing or failing. We were not attempting to split hairs regarding who deserved to pass the CAHSEE and who did not; rather, we simply wanted to identify the point at which that distinction is made by the State of California.
We then collected scores from 10th grade ELA and Math performances on the CSTs. The ELA portion of the 10th grade CST is standardized, so no special methods were needed to collect this data. The Math portion of the CST, however, varies depending on the course in which each individual student is enrolled. For this data we collected only the scores of 10th grade students taking the Algebra I or Geometry CSTs. These subjects (and levels of advancement) were targeted because they are most congruent with the Math section of the CAHSEE and therefore provide the most equitable comparison. To measure content mastery, we separated scores into two categories as defined by the CDE: 1) Proficient/Advanced, and 2) Basic and below.
Once we had obtained all of the data we needed from the different sources, we began organizing it to answer our research question. We first computed the empirical probability of passing each test, for each subject, so that we could compare the different results; since we are interested in how many students pass each test, the empirical probability observed in the data is the most direct measure. After deciding to treat the two tests as independent events, we then calculated the probability of passing both, that is, the percentage of students who pass both the CAHSEE and the CST. Comparing the individual pass probabilities with each other, and then with these joint probabilities, gave us additional figures to examine; the more data and figures we had, the better positioned we were to answer our question.
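The calculation described above can be sketched in a few lines of Python. The counts below are hypothetical placeholders for illustration only, not the study’s actual data; the product rule in the last step reflects the study’s assumption that the two tests can be treated as independent events.

```python
# Sketch of the empirical-probability calculation described above.
# All counts are hypothetical placeholders, not the study's data.

def empirical_probability(passed, total):
    """Empirical probability of passing: observed passes / test takers."""
    return passed / total

# Hypothetical counts for one year of 10th grade test takers
p_cahsee_math = empirical_probability(380_000, 470_000)
p_cst_math = empirical_probability(60_000, 470_000)

# Under the study's independence assumption, the probability of
# passing both tests is the product of the individual probabilities.
p_both = p_cahsee_math * p_cst_math

print(f"P(pass CAHSEE math)          = {p_cahsee_math:.3f}")
print(f"P(pass CST math)             = {p_cst_math:.3f}")
print(f"P(pass both, if independent) = {p_both:.3f}")
```

In practice the study compared these per-test probabilities with the observed percentage of students passing both, which is where the gap between the two tests becomes visible.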
We conducted this study to test the validity of the CAHSEE as a measure of content mastery. By looking at this one specific measure and exploring its relationship with others, we can then deduce whether the use of other measures similar to the CAHSEE is also valid. Once established, this deduction may lead us to accept tests like the CAHSEE and extend their scope, importance, and implementation. Or it may cause us to rethink the answers we seek, or better yet, the questions we ask.
We found that, on average, the difference between the CAHSEE passing percentage and the CST passing percentage was 67.575 percentage points; that is, on average roughly 67% more students passed the CAHSEE than passed the CST, even though the two tests are supposed to be based on the same content and we looked at the same group of students taking each test. For the ELA portion, the difference was about 40 percentage points. Both are enormous gaps. How can we hold students accountable to both tests when we see such a large drop from one test to the other? We should be seeing similar results between them, and we did not. Notably, the results for each individual test do not differ much from year to year; the pass rates stay roughly constant, so roughly the same proportions of students pass each test each year, which means the gap between the tests also recurs year after year. We therefore looked at the probability that a student passes both the CAHSEE and the CST. What we found was that, on average, only 6% of students pass both mathematics tests; in English the number is somewhat better, at 29%. Both of these numbers are still extremely small. Taken together, these figures tell us that the CAHSEE is not a valid measure of content mastery.
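The year-over-year comparison described above amounts to averaging the gap in pass percentages across years. The sketch below illustrates the arithmetic with hypothetical pass percentages, not the study’s actual figures.

```python
# Sketch of the year-over-year gap calculation described above.
# The pass percentages are hypothetical placeholders, not study data.

years = [2005, 2006, 2007, 2008]
cahsee_pass_pct = [75.0, 76.0, 77.0, 76.5]  # hypothetical CAHSEE pass %
cst_pass_pct = [9.0, 9.5, 10.0, 9.5]        # hypothetical CST pass %

# Percentage-point gap between the two tests in each year
gaps = [a - b for a, b in zip(cahsee_pass_pct, cst_pass_pct)]

# Average gap across the years studied
avg_gap = sum(gaps) / len(gaps)

print(f"average gap = {avg_gap:.3f} percentage points")  # → 66.625
```

Because the per-test pass rates are roughly stable from year to year, the gap computed this way is similarly stable, which is the pattern noted above.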
Fig. 1-Percentage of Passing Scores
A limitation of the design is that, although we looked at the probability of passing each test and the probability of passing both, we did not test whether the conclusions we drew from the data are statistically supportable. We could have performed a chi-squared test to look for support for our research question; this would have given us a measure of statistical significance against which to gauge what the numbers are telling us. Statistically speaking, there are more answers we could have gathered had we set up the data differently, or gathered additional data. We never examined demographic differences to see whether any patterns relevant to our study would emerge. The biggest limitation of our design, then, is the depth of our research: there is a whole world of data out there that could be gathered and interpreted in many different ways, each equally important and equally time consuming.
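The chi-squared test of independence mentioned above could be carried out on a 2x2 table of student counts (CAHSEE pass/fail versus CST proficient-or-above/basic-or-below). The sketch below computes the statistic by hand for hypothetical counts; the numbers are placeholders, not the study’s data.

```python
# Minimal sketch of a chi-squared test of independence for a 2x2
# table of hypothetical counts (not the study's actual data).
# Rows: passed / failed CAHSEE. Columns: CST proficient+ / basic-.

observed = [
    [55_000, 325_000],  # passed CAHSEE
    [5_000, 85_000],    # failed CAHSEE
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts under the null hypothesis of independence:
# E[i][j] = (row total i) * (column total j) / grand total
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-squared statistic: sum of (O - E)^2 / E over all cells
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(2)
    for j in range(2)
)

# With (2-1)*(2-1) = 1 degree of freedom, chi2 > 3.84 rejects
# independence at the 5% significance level.
print(f"chi-squared = {chi2:.1f}")
```

A test like this would indicate whether CAHSEE performance and CST performance are statistically related at all, which is a different (and complementary) question from how far apart the two pass rates sit.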
Given the incongruence of the test results, we can safely report that passing scores on the CAHSEE give very little indication of a student’s mastery of state content standards, yet we continue to use the exam as one of many roadblocks to graduation. For students who pass, it translates into an achievement award of little real value. For those who do not pass, it has a profound lifelong impact through the resulting lack of opportunities afforded them. We are not the first to point out problems with the CAHSEE, but most critics focus on the claim that its performance requirements are too low to answer the underlying question: Do these students know enough of what we want them to know to deserve a diploma? We quickly denounce flawed answers such as the CAHSEE without wondering whether the underlying question is worth asking. That question is deeply rooted in an essentialist point of view, one repeatedly demonstrated to resonate less and less with generations X, Y, and beyond. Is it any surprise the answers are unsatisfactory to us? How much time and energy will be spent seeking better answers before we think to ask a different, more appropriate question?
We believe the case against the CAHSEE could be strengthened by correlating the CAHSEE scores of high school graduates with the rates at which incoming college and university students need remedial English and Math courses. We expect these correlations would be weak, which would further indicate the relative meaninglessness of a passing CAHSEE score.
That being said, further bashing of the CAHSEE might not be what corrects our course. Perhaps a qualitative study could be undertaken to determine true student potential, unconfined by our preconceptions about what all students “ought to know”. What geniuses await discovery if only we remove our conformity-based awards & punishments? We already know most new students are not like us and don’t want to be. Should we continue to dig in our heels and try to force them to comply, or should we try to find out what will help them succeed and keep our communities, economies, and country strong? That would be worth researching.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Brown, D. F. (1992, April). Altering curricula through state-mandated testing: Perceptions of teachers and principals. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
California Department of Education. (2008). Overview of the California High School Exit Examination (CAHSEE). Retrieved December 1, 2008, from California Department of Education-Testing and Accountability Web site: http://www.cde.ca.gov/ta/tg/hs/overview.asp
California Department of Education. (2008). ^ (32 pages). Sacramento: CDE.
Carnoy, M. (2005). Have state accountability and high-stakes tests influenced student progression rates in high school? Educational Measurement: Issues and Practice, Winter, 19-31.
Christenson, S. L., Decker, D. M., Triezenberg, H. L., Ysseldyke, J. E., & Reschly, A. (2007). Consequences of high-stakes assessment for students with and without disabilities. Educational Policy, 21, 662-690.
DeBard, R., & Kubow, P. K. (2002) From compliance to commitment: The need for constituent discourse in implementing test policy. Educational Policy, 16(3), 387-405.
Deci, E. L. (1971) Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology, 18, 105-115.
Gaumer-Erickson, A. S., & Kleinhammer-Tramill, J. (2007). An analysis of the relationship between high school exit exams and diploma options and the impact on students with disabilities. Journal of Disability Policy Studies, 18, 117-128.
Horn, C. (2003, Winter). High-stakes testing and students: Stopping or perpetuating a cycle of failure? Theory Into Practice, 42(1), 30. Retrieved December 2, 2008, from Academic Search Premier database.
Jones, B. D. (2007). The unintended outcomes of high-stakes testing. Journal of Applied School Psychology, 23, 65-86.
Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children’s intrinsic interest with extrinsic rewards: A test of the “overjustification hypothesis.” Journal of Personality and Social Psychology, 28, 129-137.