505 – Formative/Summative Assessment etc.

My final evaluation product is a formative evaluation in that it assesses the degree to which the Mount Douglas Challenge Program is successful in achieving its stated objectives. The program is continuous and ongoing; therefore the assessment is formative rather than summative in nature. The evaluation is an objectives-based design, thereby decreasing the subjectivity present in other forms of evaluation (Dick & Carey). My evaluation is a variation on the discrepancy model of evaluation, which “…establishes priority educational goals and objectives (of the program), selecting variables and measures of attainment, establishing standards of acceptable performance, actually measuring performance, and finally, identifying any discrepancies between the established standards and the actual performance levels obtained” (Dick & Carey). My evaluation is designed to evaluate the effectiveness of instruction within the targeted learning group of students enrolled in the Challenge Program. According to the text, for this evaluation to be truly formative it must include subsequent trials in order to assess the success or failure of modifications made based on the initial recommendations. As I am part of this program at Mount Douglas Secondary, I will be able to oversee the extension of my initial evaluation into a second round of assessments, adding to the validity and relevance of the entire process.

The text suggests that evaluating instruction typically proceeds through three sequential stages: one-to-one trials, small-group trials, and field trials. After each stage of evaluation, revisions are made to the instruction based on its findings and recommendations. This process is highly effective in an instructional design setting; however, it does not apply to my evaluation because the program in question is an existing one.

For my evaluation I have collected all the necessary data and am currently involved in the analysis process, which, as described by the text, can be “…similar to Sherlock Holmes or Miss Marple in a mystery.” I will be using the initial statement of goals and objectives laid out by the program as a framework for assessing its success. All the data collection tools (survey, interviews, observations) were designed to inquire directly about the respondents’ perceived level of success in relation to the objectives.
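To illustrate what this objective-by-objective analysis could look like, here is a minimal sketch in Python in the spirit of the discrepancy model described above: it tallies Likert-scale survey responses for each objective and flags any discrepancy between a standard of acceptable performance and the mean response actually obtained. The objective names, the standard, and the scores are all hypothetical placeholders, not actual program data.

from statistics import mean

# Likert responses (1 = strongly disagree ... 5 = strongly agree), grouped by objective.
# All values below are invented for the example.
responses = {
    "Objective 1: students feel appropriately challenged": [4, 5, 4, 3, 5],
    "Objective 2: program supports enrichment activities": [3, 3, 4, 2, 3],
}

STANDARD = 4.0  # hypothetical standard of acceptable performance

for objective, scores in responses.items():
    actual = mean(scores)
    discrepancy = STANDARD - actual
    status = "meets standard" if discrepancy <= 0 else f"discrepancy of {discrepancy:.1f}"
    print(f"{objective}: mean = {actual:.1f} ({status})")

A summary like this would only cover the survey portion of the data; the interview and observation findings would still be weighed qualitatively against the same objectives.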

Reference

Dick, W., & Carey, L. Formative evaluation (Chapter 10). Florida State University and University of South Florida.

506 – Final Project Draft

This is the last of a series of learning log journal entries for this course. It has been a very interesting journey of discovery. I have particularly enjoyed finding out the reasons why images can be effective and looking at the psychology behind human visual perception. I have spent a great deal of time this week organizing my website and unit of instruction to be submitted in draft form. The unit I have designed is one that I am currently delivering in a Language Arts class, and I have found the use of graphics to aid instruction highly effective. I’m pretty proud of this work and look forward to hearing some feedback from my peers and instructor.

506 – White space

This week’s image deals with the concepts of white space and symmetry. Positioning graphics and text for maximum impact must take the empty space into account as well. “Space can direct the eye to important information by chunking and separating instructional elements…” (Lohr, 2003). Space can also work against a designer; if not carefully planned, space can trap the viewer’s eye in an area of negative space that detracts from the direction of the image. This week it has become clear that space can be used to clarify text quickly, can influence a viewer’s perception of time, and contributes to an image’s sense of symmetry. My image this week outlines the requirements of the e-portfolio students must prepare for their final assessment.

The image presents the essential criteria for the e-portfolio surrounded by a graphic representing an electronic portfolio. I used careful alignment, colour, contrast, and white/grey space to maximize impact.

references:

Lohr, L. L. (2003). Creating graphics for learning and performance: Lessons in visual literacy. Prentice Hall Press

“Folder” image source:http://tinyurl.com/cwqv67e (Retrieved November 8, 2011)

506 – Organization

The intent of this week’s assignment was to use organizational elements such as hierarchy to create an image that represents an introduction or overview of the contents of our unit of instruction. The concept of hierarchy draws on many of the elements we have already examined in the course, such as size, shape, contrast, proximity, alignment, figure-ground, and chunking. Organizing a graphic allows the designer to communicate quickly to the viewer/learner what is important about the image and guides the viewer’s eye through the image, emphasizing its most important elements. Tables and graphs are often used to communicate information, and this chapter offered a great deal of guidance on how to convey information in a concise, impactful manner.

My introductory image is a parody of a “Time” magazine cover. At the time of this assignment I was teaching a unit that involved the creation of a fictitious magazine cover in a Photography class, and I thought that a cover would be a perfect way to introduce the concepts to the learners and pique their curiosity at the same time. The cover uses size, colour, contrast, and alignment to draw the viewer into the image and lead the eye from the top down.

references:

Lohr, L. L. (2003). Creating graphics for learning and performance: Lessons in visual literacy. Prentice Hall Press

“People” image source:http://www.freepik.com (Retrieved November 2, 2011)

“Light bulb” image source: http://tinyurl.com/3wbes8v (Retrieved November 1, 2011)

505 – Post #7 – Rubrics

Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of the products and/or processes of students’ efforts (Brookhart, 1999; Moskal, 2000). Some controversy surrounds the use of rubrics in the classroom; however, the majority of educators seem to believe that they are a valuable tool for assessment when designed correctly. Rubrics can be deemed successful when they are both valid and reliable. Validity is assessed by looking closely at content-related evidence, such as whether an assessment accurately determines a student’s knowledge of a specific question, or whether the question itself poses difficulties that would invalidate the degree to which it assesses the student’s knowledge in that area. Construct-related evidence gives an indication of the reasoning process an individual uses when responding to an assessment. In order to be useful, a valid rubric must strive to identify the internal process a learner goes through when responding. “When the purpose of an assessment is to evaluate reasoning, both the product (i.e., the answer) and the process (i.e., the explanation) should be requested and examined” (Brookhart, 1999; Moskal, 2000). Criterion-related evidence in rubrics works to assess learning in relation to external factors, such as the application of knowledge in “real-world” settings. Like any well-designed lesson, a rubric with a high level of validity should start with clearly defined objectives, and each element of the assessment should work to define the level of learning within those objectives.
Another criterion for a useful rubric is its reliability. The scoring of a learner should be consistent regardless of when the rubric is applied or who the evaluator is. One suggested strategy for increasing reliability is the use of anchor papers: a reference set of scored test responses that raters work through prior to administering the assessment rubric. If discrepancies exist between responses and raters, then the rubric should be revised. This process would be time-consuming and perhaps impractical in a busy public school setting, but it would nonetheless increase reliability.
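As a rough illustration of what checking that consistency could look like, here is a minimal sketch in Python. It assumes two raters have scored the same set of anchor papers on a hypothetical four-point rubric and reports simple percent agreement, both exact and within one level; the readings do not prescribe a particular statistic, and the scores shown are invented for the example.

# Hypothetical scores from two raters on the same eight anchor papers (1–4 rubric).
rater_a = [4, 3, 2, 4, 1, 3, 2, 4]
rater_b = [4, 3, 3, 4, 2, 3, 2, 3]

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)              # identical scores
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # within one level

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")
# Low agreement would suggest the rubric descriptors need revision
# before the assessment is administered.

In practice a more formal statistic such as Cohen’s kappa could be used, but even a quick check like this would flag descriptors that raters are interpreting differently.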

Like any type of assessment, rubrics have their drawbacks. Teachers are human beings, and it is often very difficult to be completely objective during evaluation. In Understanding Scoring Rubrics: A Guide for Teachers, Carol Boston outlines some factors that are at play when scoring, such as positive/negative leniency error, where the evaluator tends to be too hard or too easy on everyone. Personal bias and teacher-student relationships should not be factors in assessment, but human nature is not so easily beaten. Another consideration outlined by Boston is being swayed by the appearance of a document at the expense of assessing its content. Length, fatigue, and order effects can also alter the validity of an assessment. A well-designed rubric should work to eliminate many of these factors; however, some detractors suggest that rubrics are too prescriptive and can reduce a process such as writing to something so mechanical that it takes away from the process itself. Personally, I have used rubrics and will continue to use them, especially as a way to establish criteria for learners before they embark on the learning. One successful strategy I have used in the classroom has been to have the students develop the categories and criteria for an assessment partway through a unit, once they have had an opportunity to understand some of the content in the area of study.

Here are some samples of the rubrics used in my classroom, created with the website Rubistar:

MyHero Presentation Rubric
MY HERO Project – Website Rubric
Video Production Rubric
Persuasive Writing Rubric
Blog/Discussion Rubric

References:

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved November 1, 2011 from http://PAREonline.net/getvn.asp?v=7&n=10

Boston, C. (2002). Understanding scoring rubrics: A guide for teachers. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD.

506 – Colour

This week’s assignment was to create an image using concepts of colour and depth to enhance instruction and learning. I chose to create an image that would help learners in my unit of instruction identify some risk factors when choosing a KIVA loan candidate. The topic deals with making a financial decision, so I chose to base the concept of the image around a financial data chart. When working with colour, it is clear that choices must be made carefully; they can be inspired by the colour wheel, by nature, by art, or by a template of complementary colours. Colour can also be chosen for psychological impact. I chose to use a bold red font to grab the viewer’s attention and to signify the importance of the decision involved in the process of risk analysis. The red indicator line in the image is also symbolic of importance. I used white space and black icons, which represent three factors in risk analysis, to contrast with the red. It was interesting to note in this week’s reading how colour can influence decision making and selection and can also be used to signify importance. Also emphasized in this week’s reading was using space in a positive way to de-clutter an image and emphasize particular elements.

references:

Lohr, L. L. (2003). Creating graphics for learning and performance: Lessons in visual literacy. Prentice Hall Press

“man” image source:http://tinyurl.com/3r7fjae (Retrieved October 22, 2011)

“bank” image source:http://tinyurl.com/3mw3z5c (Retrieved October 22, 2011)

“Africa” image source:http://www.flickr.com/photos/theartguy/2444535728/ (Retrieved October 22, 2011)