505 – Post #7 – Rubrics

Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of the products and/or processes of students’ efforts (Brookhart, 1999; Moskal & Leydens, 2000). Some controversy surrounds the use of rubrics in the classroom; however, the majority of educators seem to believe that they are a valuable tool for assessment when designed correctly. Rubrics can be deemed successful when they are both valid and reliable.

Validity is assessed by looking closely at content-related evidence, such as whether an assessment accurately determines a student’s knowledge of a specific question, or whether the question itself poses difficulties that would invalidate the degree to which it assesses the student’s knowledge in that area. Construct-related evidence gives an indication of the reasoning process an individual uses when responding to an evaluation. To be useful, a valid rubric must strive to identify the internal process a learner goes through when responding. “When the purpose of an assessment is to evaluate reasoning, both the product (i.e., the answer) and the process (i.e., the explanation) should be requested and examined” (Brookhart, 1999; Moskal & Leydens, 2000). Criterion-related evidence in rubrics works to assess learning in relation to external factors, such as the application of knowledge in “real-world” settings. Like any well-designed lesson, a rubric with a high level of validity should start with clearly defined objectives, and each element of the assessment should work to measure the level of learning within those objectives.
Another criterion for a useful rubric is its reliability. A learner’s score should be consistent no matter when the rubric is applied or who applies it. One suggested strategy for increasing reliability is the use of anchor papers: a set of sample responses that raters score with the rubric before it is used for the actual assessment. If discrepancies exist between raters, then the rubric should be revised. This process would be time-consuming and perhaps impractical in a busy public school setting, but it would nonetheless increase reliability.
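
To make the idea of rater consistency concrete, here is a minimal sketch in Python of one way agreement on a set of anchor papers could be checked. The raters, scores, and the 0.6 cut-off are all invented for illustration; this is not a procedure prescribed by the articles cited here, just one common agreement statistic (Cohen’s kappa):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(rater_a)
    # Observed agreement: the fraction of papers given the same score.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's score distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (1-4) that two raters gave ten anchor papers.
rater_a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 2]

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.72 here
# Illustrative rule of thumb: if kappa falls much below ~0.6, revise the
# rubric descriptors and re-score the anchor papers before real grading.
```

The raw percentage of identical scores can flatter a rubric, since two raters using a four-point scale will agree some of the time by sheer chance; kappa subtracts that chance agreement out, which is why it is a more honest check when revising a rubric.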

Like any type of assessment, rubrics have their drawbacks. Teachers are human beings, and many times it is very difficult to be completely objective during evaluation. In Understanding Scoring Rubrics: A Guide for Teachers, Carol Boston outlines some factors that are at play when scoring, such as positive or negative leniency error, where the evaluator tends to be too hard or too easy on everyone. Personal bias and teacher–student relationships should not be a factor in assessment, but human nature is not so easily beaten. Other considerations outlined by Boston include being swayed by the appearance of a document at the expense of assessing its content. Length, fatigue, and order effects can also alter the validity of an assessment. A well-designed rubric should work to eliminate many of these factors; however, some detractors suggest that the rubric is too prescriptive and can reduce a process such as writing to something so mechanical and prescribed that it takes away from the process itself. Personally, I have used rubrics and will continue to do so, especially as a way to establish criteria for learners before they embark on the learning. One successful strategy I have used in the classroom has been to have the students develop the categories and criteria for an assessment partway through a unit, once they have had an opportunity to understand some of the content in the area of study.

Here are some samples of the rubrics used in my classroom, created with the website Rubistar:

MyHero Presentation Rubric
MY HERO Project – Website Rubric
Video Production Rubric
Persuasive Writing Rubric
Blog/Discussion Rubric

References:

Brookhart, S. M. (1999). The art and science of classroom assessment: The missing part of pedagogy. ASHE-ERIC Higher Education Report, 27(1). Washington, DC: The George Washington University.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved November 1, 2011, from http://PAREonline.net/getvn.asp?v=7&n=10

Boston, C. (2002). Understanding scoring rubrics: A guide for teachers. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation, University of Maryland.

505 – Post #4

Driving home every day, I have about forty minutes to get caught up on the news of the world. At home I am bombarded by Sesame Street from the three-year-old and *cough* Jersey Shore from the seventeen-year-old. I was thinking about the collection of data and the statistical analysis of that data, and how both the validity of the data and the analysis can be wildly different based on bias and perspective. NPR had a story on President Obama’s announcement of the introduction of waivers to the No Child Left Behind law. It seems too many American schools are failing according to the standards set by the NCLB law, and school districts can now apply for waivers in order to maintain funding. As a Canadian educator I do not know the details of the criteria established in the law to achieve a passing grade, but what struck me is the question of how you can accurately measure the success or failure of a school based on statistics derived from exams; moreover, how can you assess the effectiveness of a teacher based on the achievement of her students on an exam? Basing teacher salaries on the statistical achievement of her students can only result in “teaching to the test”.

In Canada, a national think tank called The Fraser Institute releases a yearly ranking of all schools in a given province. This ranking is said by the institute to promote healthy competition between schools and to encourage teachers and administrators to do better for their students. The institute also claims that parents love the ranking system because they can make informed decisions about where to send their children to school. The collection of data for this ranking is done strictly by gathering government exam results, Foundation Skills Assessments (FSAs), and graduation rates. There are no “boots on the ground” in the schools themselves; there are no interviews with children, staff, administration, or parents. It is strictly a numerical evaluation of a school’s achievement, which lacks any insight into socio-economic factors, ESL, or special-needs populations. There is no evaluation of school culture, or of the school as the only place a child can feel safe and get fed in a neighborhood of poverty and danger. The Fraser Institute ranking of schools only serves to demoralize the staff and students of schools that are working hard every day to educate children, foster curiosity and creativity, and keep children safe. The statistics in this evaluation are skewed and biased. The report compares inner-city schools to private schools in the richest neighborhoods in the country. The results favor private schools because of their elitist enrollment policies, and they offer an extremely narrow view of school achievement.
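
To make the statistical objection concrete, here is a toy sketch in Python. The school names and numbers are entirely invented (this is not Fraser Institute data), but it shows how a ranking based on raw exam averages can flip once you adjust for even a single demographic variable:

```python
# Hypothetical schools: (average exam score, % of students who are ESL/low-income).
schools = {
    "Private Prep": (88.0, 3.0),
    "Suburban High": (79.0, 15.0),
    "Inner City High": (68.0, 55.0),
    "Harbour Secondary": (72.0, 40.0),
}

scores = [s for s, _ in schools.values()]
needs = [x for _, x in schools.values()]
n = len(scores)

# Ordinary least-squares fit of exam score against the demographic variable.
mean_s, mean_x = sum(scores) / n, sum(needs) / n
slope = (sum((x - mean_x) * (s - mean_s) for x, s in zip(needs, scores))
         / sum((x - mean_x) ** 2 for x in needs))
intercept = mean_s - slope * mean_x

print("Raw ranking:     ", sorted(schools, key=lambda k: -schools[k][0]))

# Residual = actual score minus the score predicted from demographics alone,
# i.e. how far a school sits above or below what its population would predict.
residual = {k: s - (intercept + slope * x) for k, (s, x) in schools.items()}
print("Adjusted ranking:", sorted(residual, key=lambda k: -residual[k]))
```

With these invented numbers, the inner-city school jumps from last place to second once its student population is taken into account, while the suburban school drops to the bottom. The point is not this particular adjustment, but that a ranking built only on raw results cannot distinguish a school’s effectiveness from the population it serves.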

Here is a link to the ranking of my school, which comes in at 112 out of 274, down from 81. (We have had a massive influx of international and ESL students in recent years, so naturally our school has worsened!) There are three high schools ranked higher in our city; all three are private schools with exclusive enrollment.

No Child Left Behind: Obama introduces waivers.

obama-announces-no-child-left-behind-state-waivers

Have a look at this propaganda video released by the Fraser Institute:

The connection I am trying to make here today is directly related to the arguments made about the lack of validity in media comparison studies. The Fraser Institute has an agenda, and the strategies and tools it uses to measure school success and compare schools do not factor in the variables necessary for conducting a thorough analysis and evaluation.