Wieman article (Part 3): Engagement, test results and attendance

A look at the Wieman study (continued).  Here is one anonymous comment from the Chronicle page on this article: “I have tried most of the teaching methods out there in the course of over 20 years of teaching. Many “experimental” methods are effective, but they ALL result in less material being covered. Moral of the story. A good lecture is the BEST means of conveying many kinds of knowledge and methods to GOOD students. For the not-so-good, it’s not so good. Who do you want to teach to?”

The good old “I’m a filter, not a pump” approach, keeping the not-so-good students down where they belong: “I’ll just cater for the good students”.

A rather cynical comment from Bernard Pliers, actually about maths education:

It’s not used to elevate students, it’s used to thin them out.
And that’s done by the Socratic-hide-the-ball teaching style, with graded homework that excuses the teacher from, you know teaching, and separates the class into haves and have nots.
A’s are for people that didn’t need to take the class in the first place.

It is interesting to note that many “active engagement” (insert some of the other buzz words) teaching trials show benefit for the huge number of students in the middle.  Teaching, not telling.  (Off soapbox now)

Evaluation of the trial in the Wieman study had three dimensions:

  • Student engagement
  • Post-test
  • Attendance

ENGAGEMENT

This fascinated me, so I reproduce it in full from the supporting notes:

The engagement measurement is as follows. Sitting in pairs in the front and back sections of the lecture theatre, the trained observers would randomly select groups of 10-15 students that could be suitably observed. At five minute intervals, the observers would classify each student’s behavior according to a list of engaged or disengaged behaviors (e.g. gesturing related to material, nodding in response to comment by instructor, text messaging, surfing web, reading unrelated book). If a student’s behavior did not match one of the criteria, they were not counted, but this was a small fraction of the time. Measurements were not taken when students were voting on clicker questions because for some students this engagement could be too superficial to be meaningful as they were simply voting to get credit for responding to the question. Measurements were taken while students worked on the clicker questions when voting wasn’t underway. This protocol has been shown by E. Lane and co-workers to have a high degree of inter-rater reliability after the brief training session of the observers

E. Lane is referred to but not referenced; she is surely Erin Lane.

There is a diagram from one of her studies, which looks at an Earth and Ocean Science class.  Physics is not the only discipline seeking approaches to improve engagement:

From: www.cwsei.ubc.ca/SEI_research/files/Geo_Ocean/Lane_QuantifyingStudentBehavioralEngagement_poster.pdf
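
To make the sampling protocol above concrete, here is a minimal sketch of how one five-minute snapshot of observations could be reduced to an engagement fraction. It is an illustration only: the behaviour labels, the function name and the group size of 12 are my assumptions, not details taken from the paper or from Lane's protocol.

    # Hypothetical sketch of the interval-sampling idea: at each five-minute
    # mark an observer labels each student in the sampled group, and only
    # behaviours matching the engaged/disengaged lists are counted.
    import random

    ENGAGED = {"gesturing related to material", "nodding at instructor"}
    DISENGAGED = {"text messaging", "surfing web", "reading unrelated book"}

    def engagement_fraction(observations):
        """observations: behaviour labels recorded for one group at one interval."""
        engaged = sum(1 for b in observations if b in ENGAGED)
        disengaged = sum(1 for b in observations if b in DISENGAGED)
        counted = engaged + disengaged  # unmatched behaviours are not counted
        return engaged / counted if counted else None

    # One simulated snapshot of a 12-student group.
    behaviours = sorted(ENGAGED | DISENGAGED | {"unclassified"})
    snapshot = [random.choice(behaviours) for _ in range(12)]
    print(engagement_fraction(snapshot))

The per-interval fractions, averaged over a lecture, would then give a single engagement figure for each section, which is presumably the kind of number behind the “nearly doubled” claim.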

From the report:

In the experimental section, student engagement nearly doubled

THE TEST

The test questions for this topic were agreed on after the week of teaching, with both instructors agreeing it was a good test of the objectives. (Whew!!) From the paper:

The average scores were 41 (+/- 1%) in the control section and 74 (+/- 1%) in the experimental section. Random guessing would produce a score of 23%, so the students in the experimental section did more than twice as well on this test as those in the control section
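
The “more than twice as well” phrasing presumably refers to scores measured above the 23% guessing baseline rather than the raw averages; a quick check of the arithmetic (mine, not the paper's):

    guess = 23         # expected score from random guessing (%)
    control = 41       # control section average (%)
    experimental = 74  # experimental section average (%)

    print(experimental / control)                      # ~1.8 on raw scores
    print((experimental - guess) / (control - guess))  # ~2.8 above the guessing baseline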

ASIDE: all the questions are included in the online report. They are HARD questions.

ATTENDANCE

During the week of the experiment, engagement and attendance remained unchanged in the control section. In the experimental section, student engagement nearly doubled and attendance increased by 20% (Table 1). The reason for the attendance increase is not known

WHAT IS SIGNIFICANT

It seems obvious: pay more attention and come to class more often and you learn better.  Maybe.  There is a complex relationship between interest, motivation, effort, time on task, the right kind of task, and so on.

In summary: two teachers taught a well-defined subject to two groups that were, to all intents and purposes, the same.  Different approaches.  They both tried hard.  One group’s results were far superior to the other’s.

What does this mean? you may ask.
