One teacher's attempt to connect with other educators across Massachusetts and beyond and provide relevant, up-to-date, and sometimes helpful information on next generation assessments, Common Core State Standards implementation, and college and career ready initiatives
After listening to and reflecting on the PARCC assessments with other teachers in Chicago, I believe the new tests do hold a number of promises worth considering. I will focus in depth on the possibilities of technology-delivered assessment in a later post, but the fact that these tests are planned to be delivered via technology, and that they form an assessment system, really forces districts (and states) to think about what they should do with PARCC. First, however, I want to address a few misconceptions. (NOTE: This post assumes that assessment is not the end toward which education is directed, but that it is a component. In addition, given that federal and state accountability exams are a component of U.S. education policy, this is not intended to be a critique of assessment; rather, it asks what should be done with these assessments given that they exist.)
The PARCC assessment is sometimes dismissed as just another test, or is seen as a testing regimen that encroaches on classroom time and significantly expands the time spent on testing. Neither claim is, as far as I can tell, true.
The first belief or suspicion, that these assessments will be more of the same, will be addressed in due time as prototype items and pilots are run during the 2012-2013 school year. In the meantime, it is worthwhile to look around at what others have done with technology and assessment and to start imagining what technology-enhanced assessment might mean. Here is a list of places to go (sorry, ELA folks, but my eyes are usually looking for math stuff!). These tests will be different, and not solely because of the technology, but a look in this direction is enough to start thinking more deeply about how different these assessments will be (more on this in a later post, when I reflect on the specific technology enhancements built into the assessment using UDL principles and the enhancements that are optional for ELL students and students with disabilities).
1. ONPAR (Obtaining Necessary Parity through Academic Rigor) from the University of Wisconsin.
2. CBAL (Cognitively Based Assessment of, for, and as Learning) from Educational Testing Services (ETS).
3. Research conducted at Shell Centre for Mathematical Education at the University of Nottingham.
The second misconception concerns time spent on assessment. The true part is that PARCC is developing a comprehensive system of five assessments; the misconception, however, is the assumption that schools, districts, and states must implement them all. States that choose to adopt PARCC and replace their existing statewide assessments in the coming years will only be required to administer two components for accountability purposes. These two assessments, the performance-based assessment (PBA) and the end-of-year assessment (EOY), will be “free” to districts, just as, in Massachusetts, the MCAS is a state-funded exam with no cost to districts. (The speaking and listening assessment is also required, but it is not used for accountability and can be given at any point in the year.) These two assessments do not in and of themselves represent a drastic increase in testing time, though they may represent some increase. Together they reflect the belief that high-stakes assessment should contain a broad set of assessment experiences that go beyond multiple-choice testing.
The interesting question is: should a particular school or district choose to adopt the entire assessment system?
The image above shows the entire assessment system with all of its components. Typically, districts and individual schools have cobbled together an assessment system of their own, including their state assessments. Individual teachers, schools, or districts may have a diagnostic assessment that serves as an early indicator to inform instruction, supports, and professional development. Teachers may also have developed or purchased interim assessments to monitor student performance, using the results to inform instruction. Lastly, schools and districts hope these early assessments also provide information to prepare students and faculty for the state assessments. While not universal, most districts have some assessments, beyond curriculum-embedded assessments, that inform what they do.
So, what is the potential power of the PARCC assessment system as a whole? Below is a list, in no particular order, of what this system promises to offer districts, schools, and states:
Districts usually have multiple assessment products, some of which claim to be aligned to their accountability exams. In this case, the non-accountability components are directly aligned with the assessments used for accountability. The experience and information from the diagnostic and midyear exams give students, families, and teachers a clear sense of how a student is progressing toward end-of-year mastery, as well as the student’s particular growth from year to year. This internal alignment provides districts, schools, teachers, and parents a set of assessment tools that clearly align with end-of-year goals while retaining the ability to inform instruction, adjust programming, reallocate resources, and modify professional development during the school year.
The push to deliver assessments on computers has a variety of rationales, but one relevant to districts is time. The turnaround time on state assessments is usually measured in months, which does little to inform teacher practice, support interventions for struggling students, or guide decisions about what is and is not working. A technology-delivered diagnostic assessment near the beginning of the year (with an estimated two-week turnaround) and timely scoring of the performance-based assessment at midyear (not two weeks, but returned in time to plan for the summative exams) could be hugely valuable for schools that want timely data on which to act, plan, and respond to student needs.
Quality assessments that are reliable and valid, as well as solidly aligned to the standards, are crucial if they are to be useful in any way. So, will these tests be high quality? Time will obviously tell, but there is reason to believe they will be. Here are a few reasons, in no particular order. (1) PARCC is made up of 23 states, all of which are involved in vetting and overseeing the development of these tests for accountability purposes (remember, these states already do this individually for their own state exams). All of these member states and their associated education department personnel are working hard to make these tests of higher quality than their current assessments. (2) These tests also bring higher education into the vetting process; the higher education community wants tests of such quality that students will be ready for college-level coursework. (3) Vendor contracts for assessment delivery and development will be awarded based on the quality of the items produced and will involve a multitude of vendors in the process. Vendors are currently competing for their future contracts. (4) Given the scope and stakes involved in the project, leading specialists on English Language Learners and students with disabilities are actively informing the development of items to ensure that all students can be successful within this assessment system (more on accommodations and UDL in a later post). (5) Lastly, PARCC has committed in its design blueprint to assessments that measure the full range of student performance, in order to provide meaningful feedback on both high- and low-performing students. All of these parameters point toward these exams being of high quality. We will have to see!
Within schools and within districts, this whole system would provide grade-by-grade (course-by-course in high school) diagnostic exams around which to share data and build action plans for best practices, supports, interventions, and professional development. Some districts and schools already do this, but given the alignment, timeliness, and quality this system promises, it could make these assessments a powerful tool for learning what is working in your school and determining which interventions are proving successful.
Given the broad range of districts that will potentially be part of PARCC, it is conceivable that districts, schools, and states could learn from each other and start sharing what is working as judged by these assessments. This would obviously be a whole new world to many districts and schools, but districts or schools within the same state, or in different states, could collaborate and learn from each other if they are getting information from the same assessment system. This sort of sharing was limited with a single state assessment that occurred at the end of the year and provided feedback months later; a more comprehensive system that isn’t solely linked to accountability could offer an easier model for sharing lessons learned with other schools and districts.
So, what should districts (and states) do? Keep watching. See whether these assessments are coherent, timely, and of high quality. If they are, the patchwork of assessments that schools and districts typically employ may give way to a really powerful replacement. The net result would be little increase in overall testing time, higher-quality assessments, and assessment tools that are meaningful and useful for teachers and districts. In the meantime, if you are against assessment altogether, keep writing and speaking out; but if you think assessment has a meaningful role in your schools and districts, now is the time to be vocal about what we (you) may want from PARCC.