505 – Post #4

Driving home every day, I have about forty minutes to get caught up on the news of the world. At home I am bombarded by Sesame Street from the three-year-old and *cough* Jersey Shore from the seventeen-year-old. I was thinking about the collection of data and the statistical analysis of that data, and how both the validity of the data and the analysis can differ wildly based on bias and perspective. NPR had a story on President Obama’s announcement of the introduction of waivers to the No Child Left Behind law. It seems too many American schools are failing according to the standards set by the NCLB law, and school districts can now apply for waivers in order to maintain funding. As a Canadian educator I do not know the details of the criteria established in the law to achieve a passing grade, but what struck me is the question of how you accurately measure the success or failure of a school based on statistics derived from exams; moreover, how can you assess the effectiveness of a teacher based on the achievement of her students on an exam? Basing teacher salaries on the statistical achievement of their students can only result in “teaching to the test”.

In Canada a national think tank called The Fraser Institute releases a yearly ranking of all schools in a given province. The institute says this ranking promotes healthy competition between schools and encourages teachers and administrators to do better for their students. The institute also claims that parents love the ranking system because they can make informed decisions about where to send their children to school. The data for this ranking is collected strictly from government exam results, Foundation Skills Assessments (FSAs), and graduation rates. There are no “boots on the ground” in the schools themselves, and there are no interviews with children, staff, administration, or parents; it is strictly a numerical evaluation of a school’s achievement, with no insight into socio-economic factors, ESL, or special-needs populations. There is no evaluation of school culture, or of the school as the only place a child can feel safe and get fed in a neighborhood of poverty and danger. The Fraser Institute ranking of schools only serves to demoralize the staff and students of schools that are working hard every day to educate children, foster curiosity and creativity, and keep children safe. The statistics in this evaluation are skewed and biased. The report compares inner-city schools to private schools in the richest neighborhoods in the country. The results favor private schools because of their elitist enrollment policies and offer a wildly narrow view of school achievement.
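To make the point about unmeasured variables concrete, here is a minimal sketch in Python. The school names, scores, enrollment figures, and the adjustment factor are all invented for illustration; this is not the Fraser Institute’s methodology, just a toy example of how a ranking built on raw exam averages alone can reverse once differences in student population are counted:

```python
# A hypothetical sketch, not Fraser Institute methodology: all numbers are invented.
schools = [
    # (name, raw exam average, percent high-needs enrollment: ESL / special needs)
    ("Private Academy", 85.0, 2),
    ("Inner-City High", 71.0, 45),
]

# Raw ranking: sort by exam average alone, the way a league table does.
raw_ranking = sorted(schools, key=lambda s: s[1], reverse=True)
print("Raw ranking:", [name for name, _, _ in raw_ranking])

# A crude "adjusted" view: credit each school for every percentage point of
# high-needs enrollment. The 0.4 factor is purely illustrative.
ADJUSTMENT_PER_POINT = 0.4
adjusted = sorted(
    ((name, avg + ADJUSTMENT_PER_POINT * needs) for name, avg, needs in schools),
    key=lambda s: s[1],
    reverse=True,
)
print("Adjusted view:", [(name, round(score, 1)) for name, score in adjusted])
# With these invented numbers the order flips (85.8 vs 89.0): which school looks
# "better" depends entirely on whether the unmeasured variable is counted.
```

The exact figures do not matter; the point is that a league table that ignores variables like high-needs enrollment ends up measuring the intake as much as the school.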

Here is a link to the ranking of my school, which comes in at 112 out of 274, down from 81. (We have had a massive influx of international and ESL students in recent years, so naturally our school has worsened!) There are three high schools ranked higher in our city; all three are private schools with exclusive enrollment.

No Child Left Behind: Obama introduces waivers.

obama-announces-no-child-left-behind-state-waivers

Have a look at this propaganda video released by the Fraser Institute:

The connection I am trying to make here is directly related to the arguments about the lack of validity in media comparison studies. The Fraser Institute has an agenda, and the strategies and tools it uses to measure school success and compare schools do not account for the variables necessary for a thorough analysis and evaluation.


EdTech 506 – Post #1


Graphics: As a photography and media arts teacher I discuss principles of design, and what makes images effective, with my students all the time. What I am learning from this course is how to apply good graphics to increase understanding in learning. I am also intrigued by the connection to instructional design and learning theory presented in the text this week. Using well-designed images in conjunction with the principles of cognitive load theory to address learning styles allows instructional designers to produce images for optimum impact. Many people seem to have an intrinsic ability to create something aesthetically pleasing when designing images; however, a designer who can combine aesthetic appeal with an understanding of concepts such as chunking, cognitive load theory, and information processing theory is much more likely to design an image with lasting impact on the learner. The designer must use visual strategies that appeal to the viewer through recognition, and find ways for the information contained in the image to be processed through short-term memory and retained in long-term memory. Images created for learning need to connect with both verbal and visual memory and translate into understanding through meaning.

References:
Lohr, L. (2008). Creating Graphics for Learning and Performance (2nd ed.). Upper Saddle River, NJ: Pearson Education.
Image: http://www.mdpi.com/1424-8220/9/9/7374/

505 – Entry #3

This week we looked at research and evaluation and found similarities and differences between the two. What it boils down to is that research is done for the benefit of individuals outside of the participant group, while an evaluation is for the benefit of the participants.

I found this week’s discussion and information on sampling and bias particularly interesting. In my position as staff committee chair at our school I am often charged with gathering the opinions of staff on issues of concern. I have used Google Forms, SurveyMonkey, and other similar online polling applications many times. Inevitably the same staff members respond to the survey and the same group choose to ignore it. The text suggests following up with personal contact with those individuals who do not respond, to find out their reasons for not responding. I had never really thought about following up with individuals who choose not to respond. It is clear that information gathered from the same sample of teachers every time will not be an accurate representation of the thoughts and feelings of the staff. The results may not be a representative sample, both because of the inadequate number of responses and because those who choose to respond may have a particular bias toward the topic of the questionnaire. Another factor may be the wide range of technical abilities on our staff. We have an aging population of teachers who may be reluctant to use technology, not only in their classrooms but for themselves as well. If they do not regularly open emails, they will effectively be denied access to the survey/questionnaire document altogether.
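As a rough sketch of why the same few respondents can mislead, here is a small, hypothetical Python example. The staff size, opinions, and response counts are all invented; the sketch only illustrates how non-response bias can make a survey summary look very different from the view of the whole staff:

```python
# A hypothetical sketch of non-response bias; staff size, opinions, and
# response counts are invented for illustration.

full_staff_opinions = [1] * 12 + [0] * 18   # 30 teachers: 1 = supports a proposal, 0 = opposed
respondents         = [1] * 10 + [0] * 2    # only 12 reply, mostly the keen technology users

def support_rate(votes):
    """Fraction of 'support' answers in a list of 0/1 votes."""
    return sum(votes) / len(votes)

print(f"True support among all staff:    {support_rate(full_staff_opinions):.0%}")  # 40%
print(f"Support among survey responders: {support_rate(respondents):.0%}")          # 83%
print(f"Response rate: {len(respondents)}/{len(full_staff_opinions)}")
# The survey alone suggests strong support; only by following up with the
# eighteen non-responders would we learn that most of the staff disagrees.
```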
Based on this week’s materials and topics I will be rethinking the process of gathering and analyzing data.

EdTech 505 – Post #2

Over the course of this past week in EdTech 505, much of the content has centered on data. We have looked at methods for collecting and analyzing data and how this all fits into the planning of an evaluation. I have never had much success with numbers, but the explanation in the text has helped to demystify the process of analyzing data. It is interesting to consider how the same results can yield different interpretations depending on the technique used to analyze the data. An evaluator can choose a method for looking at the numbers based on the intention of the question. It has become obvious to me over the past week that one must consider very carefully the techniques used to gather information, based on the particular evaluation model one chooses to use. It has also become clear that the various evaluation models are, in many ways, techniques that teachers use all the time. It is helpful, however, to identify the characteristics of each model in order to draw accurately on the strengths of each when developing an evaluation. This week also helped to solidify for me the differences between research and evaluation.
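For example, here is a small, hypothetical illustration (the quiz scores are invented) of how the same numbers can support different interpretations depending on whether an evaluator reports the mean or the median:

```python
# A hypothetical illustration; the quiz scores below are invented.
from statistics import mean, median

# Post-workshop quiz scores for ten teachers: most did well, two struggled badly.
scores = [9, 9, 8, 8, 8, 7, 7, 7, 2, 1]

print(f"Mean score:   {mean(scores):.1f}")   # 6.6 -- pulled down by the two low scores
print(f"Median score: {median(scores):.1f}") # 7.5 -- reflects the typical participant

# An evaluator asking "did the typical participant benefit?" might report the
# median, while one reporting an overall average would quote the mean.
# Same data, different technique, different story.
```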