TOPIC: students now chipping away at USM
printzhead



The two candidates for SGA prez have some interesting proposals.  One of them wants to have GPAs earned at other institutions merged into the overall GPA after transferring to USM.  The other wants to put the results of student evaluations of teaching online, and have the administration place "bad teachers" on some sort of academic probation list.

__________________
He Said/He Said


The idea that anyone would listen to an SGA candidate is laughable. How smart could they be? After all, they chose to come to USM even with all the negative publicity about Shelby.

Yes, one candidate is stumping for rule changes that will allow transfer students to count their transfer GPA in with their USM GPA for academic eligibility, etc. This means that if I get a high GPA at Jones, I can slack off at USM and still stay off academic suspension. It also suggests that a 4.0 at Jones will get me closer to graduating with a Latin designation. This is one step in the process of turning USM into a two-year (jr/sr) college.

The other candidate is advocating that evals be made public and the good/bad list built. Here's the problem with this: there is no accountability in student evals. A student can slam me on an anonymous eval by making statements that are untrue, and I have no recourse. We all know the pitfalls of the types of student evals administered at USM, and this is something that we should be against. Having your name on the "bad list" might simply indicate that you teach a rigorous required (core) course and that non-majors who enter your classroom on Day One with a bad attitude about your subject can affect your job status. I can see it now: if you're on the Bad List, you are ineligible for raises. If you're on the Bad List, you have to do a development plan or you're fired.

When we start letting 18-22 year-old students (especially the kind of students USM currently has) dictate the atmosphere in academia, then we are truly letting the inmates run the asylum.

__________________
Whatcha Gonna Do When They Come For You


Not to mention the validity of the instrument used and the tactics used to get student responses.

Let's remember that part of being young and idealistic is being ignorant of certain things. These platform planks sound good to other students, but hopefully not to anyone else.



__________________
Longhorn Eagle


I don't understand the knee-jerk fear of releasing the results of student evaluations.  At Texas, the cumulative evals are posted right next to the course catalog to make selection easier.  Faculty are rated on 1) ability to organize the course, 2) ability to communicate information effectively, 3) workload/reading load, 4) encouragement of student expression, and 5) interest in student improvement.


The evals are done just before the exam, so no one will slam a prof because of a bad grade.  Also, the comments are not posted, so if someone wanted to make up some kind of charge, that would go directly to the dean and not online. 


What's wrong with this system? 


 



__________________
Outside Observer



Longhorn Eagle wrote:

I don't understand the knee-jerk fear of releasing the results of student evaluations.  At Texas, the cumulative evals are posted right next to the course catalog to make selection easier.  Faculty are rated on 1) ability to organize the course, 2) ability to communicate information effectively, 3) workload/reading load, 4) encouragement of student expression, and 5) interest in student improvement.
The evals are done just before the exam, so no one will slam a prof because of a bad grade.  Also, the comments are not posted, so if someone wanted to make up some kind of charge, that would go directly to the dean and not online.
What's wrong with this system?




The problem is...just because it says "ability to organize the course" on the form does NOT mean that is what is being rated. The issue is validity...especially predictive validity...show me that these ratings are significantly and substantially related to some accepted measure of student learning, and I have no problem with them. For example, how many undergraduates can evaluate a faculty member's "interest in student improvement"? What exactly is "interest in student improvement"? Can you observe it? Of course not, yet when the form is in front of the student/evaluator, they must circle some number...so what do you suppose their rating is based upon? Whatever is in their mind at the moment, or whatever is most salient to them...which could be about anything.

__________________
Longhorn Eagle


Outside Observer wrote:


The problem is...just because it says "ability to organize the course" on the form does NOT mean that is what is being rated. The issue is validity...especially predictive validity...show me that these ratings are significantly and substantially related to some accepted measure of student learning, and I have no problem with them. For example, how many undergraduates can evaluate a faculty member's "interest in student improvement"? What exactly is "interest in student improvement"? Can you observe it? Of course not, yet when the form is in front of the student/evaluator, they must circle some number...so what do you suppose their rating is based upon? Whatever is in their mind at the moment, or whatever is most salient to them...which could be about anything.


I think your whole approach is just a thinly veiled attack on the students.  Either they are


1) too malevolent to participate honestly, and will only use surveys as a vehicle to inflict professional damage on their professors


2) too stupid to understand the questions, and will be unable to address the questions as presented


3) too petty to settle small grievances through appropriate channels, and will use the evaluation to try to wreck someone's reputation to settle some vendetta


4) too apathetic to answer the questions, and will just circle randomly


If I remember right, our evals say "demonstrated" interest in student improvement.  I would argue that such a demonstration is observable, and even if it occurs during office hours, students will tell other students they got positive feedback, etc...


I don't think anyone says that evaluations are perfect empirical measurements of the criteria sought to be understood.  They are just one tool in the toolbox. Throwing them out because of ambiguity that is inherent in all survey-method research is just ridiculous.


This is not meant to be a personal attack, but if you have as little faith in the student body as your post indicates, I sincerely hope your job does not involve regular contact with students in an educational setting. 


 



__________________
alumna


Outside Observer wrote:


Longhorn Eagle wrote: I don't understand the knee-jerk fear of releasing the results of student evaluations.  At Texas, the cumulative evals are posted right next to the course catalog to make selection easier.  Faculty are rated on 1) ability to organize the course, 2) ability to communicate information effectively, 3) workload/reading load, 4) encouragement of student expression, and 5) interest in student improvement.  The evals are done just before the exam, so no one will slam a prof because of a bad grade.  Also, the comments are not posted, so if someone wanted to make up some kind of charge, that would go directly to the dean and not online.  What's wrong with this system?

The problem is...just because it says "ability to organize the course" on the form does NOT mean that is what is being rated. The issue is validity...especially predictive validity...show me that these ratings are significantly and substantially related to some accepted measure of student learning, and I have no problem with them. For example, how many undergraduates can evaluate a faculty member's "interest in student improvement"? What exactly is "interest in student improvement"? Can you observe it? Of course not, yet when the form is in front of the student/evaluator, they must circle some number...so what do you suppose their rating is based upon? Whatever is in their mind at the moment, or whatever is most salient to them...which could be about anything.



This just confirms the suspicion I had all along:  student evaluations of teachers are a load of fecal matter. I used to just pick the number that happened to catch my fancy that weekday and assign that value as an answer to all the questions --except the tricky ones, of course.  That allowed me to swiftly complete the forced, ridiculous labor that those forms signify. Now that I think about it, if students were allowed to view these scores and actually felt that they make a difference by participating in this process, I would have made more of an effort...


Anyway, unless you are some socially reclusive freshman, student campus gossip does well enough in letting one know the quality of a full-time teacher. You can see it on their faces, the dried tears after exams, etc.  Student evaluations might be good for the evaluation of adjuncts who read PowerPoints to us, those disgusting pigs. Don't they know we are forced by law to sit through their torturous lectures?!



 



__________________
Tired of being sick


LE:

It is more like years of experience seeing the results from student evaluations. "All" students do not do anything, nor do "no" students. For example, for years one evaluation instrument had a question that asked how many times the class did not meet. Routinely, students would answer with how many times they did not attend; the variance in the answers was huge. This is the kind of thing that professors see that drives them crazy. I do not run my classes according to whether or not the students like it, think it is hard, or think it is a lot of work. However, if there are consistent comments or scores in a particular area, that should - and does - cause me to examine that area.

TBS

__________________
LLMF


Listing the instructor's eval next to his or her classes will only serve to further the inequity at USM. Students who register first will use the eval as an indicator of easiness and will register accordingly. Students who register later will be stuck with the harder profs. LE, if you don't think this is how students assign eval scores (not worst to best instructor but hardest to easiest instructor), then you're naive.

__________________
Jimmy Jam & Terry Lewis



Longhorn Eagle wrote:

Outside Observer wrote:
The problem is...just because it says "ability to organize the course" on the form does NOT mean that is what is being rated. The issue is validity...especially predictive validity...show me that these ratings are significantly and substantially related to some accepted measure of student learning, and I have no problem with them. For example, how many undergraduates can evaluate a faculty member's "interest in student improvement"? What exactly is "interest in student improvement"? Can you observe it? Of course not, yet when the form is in front of the student/evaluator, they must circle some number...so what do you suppose their rating is based upon? Whatever is in their mind at the moment, or whatever is most salient to them...which could be about anything.

I think your whole approach is just a thinly veiled attack on the students.  Either they are
1) too malevolent to participate honestly, and will only use surveys as a vehicle to inflict professional damage on their professors
2) too stupid to understand the questions, and will be unable to address the questions as presented
3) too petty to settle small grievances through appropriate channels, and will use the evaluation to try to wreck someone's reputation to settle some vendetta
4) too apathetic to answer the questions, and will just circle randomly
If I remember right, our evals say "demonstrated" interest in student improvement.  I would argue that such a demonstration is observable, and even if it occurs during office hours, students will tell other students they got positive feedback, etc...
I don't think anyone says that evaluations are perfect empirical measurements of the criteria sought to be understood.  They are just one tool in the toolbox. Throwing them out because of ambiguity that is inherent in all survey-method research is just ridiculous.
This is not meant to be a personal attack, but if you have as little faith in the student body as your post indicates, I sincerely hope your job does not involve regular contact with students in an educational setting.
 




Can I vote for "5. All of the above"?

__________________
Longhorn Eagle


My guess is that the real problem is a combination of fear of accountability and arrogance. 


Allowing students to compile information on a professor enables them to vote with their feet.  It also holds the prof to some (albeit tiny) level of accountability in their relationship with their students.  You can't summarily piss on the people who eventually have some input into how you are perceived on campus.


Which leads me to my second point.  Why should professors be "above the law," so to speak?  If you are a serious underachiever in the classroom, then you should move out of teaching.  Maybe stay holed up in the lab or the library.  Are you somehow entitled to a position in the classroom, no matter if you are consistently rated poorly by your students?


Obviously, not every single student takes evals seriously.  But I can tell you that students factor a lot more than just workload into the ratings.  Profs at Texas with workloads that fall between high and excessive (4 & 5 on a 1-5 scale) will still teach a packed section because they have such a good reputation at the school.


 



__________________
Outside Observer


I'm basing my comments on the performance rating literature, as well as procedures for establishing the validity of measurement. Students are not qualified raters for many of the items on typical evaluation rating instruments. For example, a common item is "The faculty member is up to date in his/her field." How in the world would an undergraduate student have any idea whether the instructor is up to date in his or her field? Make a big point of discussing several recent Business Week stories in class...and remind students how recent they are...and I'll bet you'll get a good rating on this item...of course you may not read any scientific journals in your field. Subjective rating is a problem, not just in student evaluations, but in performance rating in general...survey the literature...

If a measure is being used in a performance evaluation, it should be validated...it should predict desirable outcomes...show me some evidence...I'll bet you a significant amount of money that the faculty who receive the highest ratings in many departments are the most entertaining, the easiest, etc., which suggests that there is an inverse or perhaps inconsistent relationship between student rating score and student learning.
So...do you want me to do what it takes to get really high student ratings? Or do you want me to maintain some standards so that students have to do a little work and actually have to learn something? You seem to be arguing that the higher the student rating, the better the instructor. I don't believe it. My experience and my research suggest that in some cases, it's actually the opposite.
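
As a minimal sketch of the predictive-validity check being asked for here, assuming purely hypothetical section-level data (simulated eval means and a common exam score; the numbers are illustrative, not real USM or UT data), the correlation could be tested like this in Python:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: mean eval score per section (1-5 scale) and a
# section-level learning outcome such as a common final-exam mean.
eval_means = rng.uniform(2.5, 5.0, size=40)
exam_means = 60 - 4 * eval_means + rng.normal(0, 5, size=40)  # toy inverse relation

# Predictive validity question: do eval scores predict learning?
r, p = pearsonr(eval_means, exam_means)
print(f"r = {r:.2f}, p = {p:.4f}")  # r < 0 here would match the "inverse" claim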

Also...I don't believe that validity evidence is transferable from campus to campus. GA State developed a supposedly multidimensional student rating instrument several years ago...6 dimensions of "teaching effectiveness," if I remember correctly. They claimed that it exhibited reliability and convergent and discriminant validity...yet when we examined it on our own campus, we could produce only one dimension. In an exploratory factor analysis, only one factor emerged. I have since examined the data using confirmatory factor analysis with the same result. Now...what could produce just one factor in a long rating instrument? Again, if memory serves, there were 36 or so items which supposedly broke down into 5 or 6 dimensions. Students were not discriminating between various supposedly different aspects of instruction..."how up to date is your instructor" was not perceived any differently than "my instructor explains things clearly" or "my instructor meets class at assigned times." So there is one issue or thought driving ratings on all items...wonder what that ONE issue could be? Clearly, an overall or global judgment of the instructor...then the question becomes, what drives that overall global judgment...I suspect for different students it is different things...for some it might be their best judgment about how much that instructor caused them to learn...for others, it was "that instructor made me work too hard"...or "that instructor won't give me a C even though I came to class half the time."
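
To illustrate what that one-factor result looks like, here is a minimal simulation (entirely made-up data, not the GA State instrument): if a single global judgment drives responses to all 36 items, the item correlation matrix shows one dominant eigenvalue, which is exactly the single-factor EFA outcome described above.

import numpy as np

rng = np.random.default_rng(1)
n_students, n_items = 500, 36

# Simulate one latent "global judgment" per student that drives every item.
g = rng.normal(size=(n_students, 1))
loadings = rng.uniform(0.6, 0.9, size=(1, n_items))
items = g @ loadings + rng.normal(0, 0.5, size=(n_students, n_items))

# Eigenvalues of the item correlation matrix (descending): one dominant
# eigenvalue means students are not discriminating among the items.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(np.round(eigvals[:5], 2))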

__________________
Outside Observer



Longhorn Eagle wrote:

My guess is that the real problem is a combination of fear of accountability and arrogance. 
Allowing students to compile information on a professor enables them to vote with their feet.  It also holds the prof to some (albeit tiny) level of accountability in their relationship with their students.  You can't summarily piss on the people who eventually have some input into how you are perceived on campus.
Which leads me to my second point.  Why should professors be "above the law," so to speak?  If you are a serious underachiever in the classroom, then you should move out of teaching.  Maybe stay holed up in the lab or the library.  Are you somehow entitled to a position in the classroom, no matter if you are consistently rated poorly by your students?
Obviously, not every single student takes evals seriously.  But I can tell you that students factor a lot more than just workload into the ratings.  Profs at Texas with workloads that fall between high and excessive (4 & 5 on a 1-5 scale) will still teach a packed section because they have such a good reputation at the school.
 




That's all I'm asking for...accountability. I have not been able to produce any evidence that student ratings exhibit convergent and discriminant validity, let alone predictive validity...predictive validity should be established for any measure of someone's job performance. Show me evidence that this measure does what you claim. If an organization found itself defending its performance appraisal rating scale against a discrimination charge, it would have to produce such evidence.

How about I come up with a rating scale for your boss to use in rating you? Do you want some evidence that the ratings are actually related somehow to outcomes or job performance? Or are you just willing to take my word for it?



__________________
foot soldier


I'm with Outside Observer, who has been making very good comments about student evals. While student evals might indicate that there is a problem, they aren't generally a good indication of good teaching or learning going on in a classroom. I saw that as a recent member of a search committee. The students rated most highly a candidate who taught very little actual material but showed a cute video. Or, in another favorite example of mine, one of my colleagues used to teach at a school where the evals rated "faculty member shows respect for students." She got bad ratings on that question--the reason was that she had responded to student requests that they not have class because it was raining by suggesting that they get an umbrella!

It is, in my opinion, highly irresponsible to base any evaluation of teaching strictly on student evals. Do most USM departments have regular visits by other faculty to evaluate teaching? Other schools I have worked at do, while at USM there was no established procedure, and classroom visits were haphazard at best. These were also undertaken not by a regular established committee, but by any tenured faculty member.

__________________
LVN


As both a TA and an adjunct at different universities, I welcomed my students' evaluations, and even made up one of my own for them to complete anonymously. Especially when I taught adult students, I took their comments to heart and appreciated the feedback.
Then I taught freshmen at USM last year. No way would I want that group of young people to evaluate me. This was a group that considered homework a good idea, not a requirement. "Turn off your cell phone" was an unreasonable request. When I cancelled classes to conduct personal meetings, at least 20% failed to show up for their meeting. There were possibly five students in each section who would have been capable of a meaningful evaluation. Sorry, but that's the way it was.

__________________
Tired of being sick



LLMF wrote:

Listing the instructor's eval next to his or her classes will only serve to further the inequity at USM. Students who register first will use the eval as an indicator of easiness and will register accordingly. Students who register later will be stuck with the harder profs. LE, if you don't think this is how students assign eval scores (not worst to best instructor but hardest to easiest instructor), then you're naive.



Tell the tale, LLMF. And BTW LE, "at Texas" is not impressive or persuasive. You will not want to hear it, but that does not alter the truth of it: UT-Austin students, as a whole, are superior to USM students. Perhaps they read evaluation documents and answer honestly and correctly.

__________________
Invictus



foot soldier wrote:

It is, in my opinion, highly irresponsible to base any evaluation of teaching strictly on student evals. Do most USM departments have regular visits by other faculty to evaluate teaching? Other schools I have worked at do, while at USM there was no established procedure, and classroom visits were haphazard at best. These were also undertaken not by a regular established committee, but by any tenured faculty member.



A good evaluation system should include student evals, of course, but it should also include peer evaluations, self-evaluation, and supervisor (chair) evaluations as well. The entire program should yield a matrix that will allow the instructor, working with his/her chair, to develop a plan for continual improvement. I believe that no matter how good a teacher one is, one can always get better, perfection being a piece of real estate that only deities can own. My institution includes the above components in a total system that also includes planned professional development programs, again developed by the faculty member in consultation with the chair or lead instructor. For new hires, the planned program includes basic orientation to departmental & college operations. We spent several years researching & developing this system. It isn't perfect, of course (see above real estate reference), but it is working pretty well for us. I'm sure we'll tweak it every year, because a good system can never be allowed to become static.

The concept of an evaluation system as something used solely for punitive purposes is archaic & somewhat, um, barbaric.

BTW, Ole Miss allows a student with a valid current-student PIN to access instructor evaluations via their web services.

__________________
Charasmatic teacher


Longhorn Eagle wrote:


 What's wrong with this system?


One thing that's wrong is that a student can't know who the best (and worst) teachers were until several years after graduation.



__________________