Journal Articles: CVu 132, Journal Columns > Editorial
Title: Editorial
Author: Administrator
Date: Mon, 09 April 2001 13:15:44 +01:00
Summary:
On Evaluations
Body:
We live and work in societies that are becoming increasingly preoccupied with measuring things. One of the ironies is that, while these measurements are mostly chosen to be objective, in the case of human beings we often use entirely subjective criteria. At the same time we give no guidance as to how the assessment should be made.
For example, when you apply for a loan from your bank you will be required to fill in a standard questionnaire that a computer will assess. The loans manager has very little room, if any, for applying his personal judgement. Look around you and you will find a myriad of other examples of objective measurements that deny human beings the right to make a judgement call. However, how is the quality of such metrics monitored? Often you will find that a metric is used simply because it seems like a good idea, or because it appears to measure something worthwhile.
Most of us have come across measuring a programmer's performance by the number of lines of code written. We know that this makes no sense, but it is something that can be measured. It is rather like measuring the quality of an artist by the number of brush strokes he makes per hour; we can measure that, but we all know that it is no way to measure the quality of the resulting picture.
Now let us put aside these ludicrous forms of assessment and focus on the kind of evaluation that some of us endure on a regular basis. Those of us who attend training courses and conferences are asked to fill in evaluation forms at the end of the event, or even on a session-by-session basis. But what guidance are we given? Very little, if any.
For example, at the end of each week's training I present, those attending are asked to fill in an evaluation form covering everything from the food to the presenter's technical knowledge and presentation skills. How can you legitimately give anything more than three for food (on a five-point scale) when gourmet cooking is hardly likely? Of course delegates assess the food against what they normally have. So if your employer's staff canteen is pretty mediocre you give the food on the course a high rating, whereas if you normally eat at an excellent little restaurant you know, you probably rate the food pretty poorly.
In the case of the food, it probably does not matter that much. But when it comes to such things as the quality of the presenter's skills, it can be far more relevant. Let me be blunt: anyone who rates my technical knowledge of C++ at less than a five (out of five) either has a rather extreme requirement for how much a presenter should know, or has not understood the question.
When it comes to presentation skills, I clearly rate well below such experts as Herb Sutter or Dan Saks, so should never really deserve a five. Yet my employer expects me to average well over four. You will recognise that that cannot be achieved unless a substantial minority grade me at five.
Actually, I have no great problem with such end-of-course evaluations as long as all concerned recognise them for what they are: subjective judgements based on ill-defined criteria. The question I have for you is 'What guidelines would you suggest to those evaluating a course or a conference?'