
Table 5 Evaluation methodology

From: Integrating staff well-being into the Primary Health Care system: a case study in post-conflict Kosovo

Evaluation methodology

Evaluation instruments

The evaluation consisted of the following elements: (i) a stakeholder analysis; (ii) a desk review of relevant programme documents; (iii) an online survey using “Survey Monkey©” (https://www.surveymonkey.com) among 100 randomly selected staff trained under the programme; (iv) in-depth interviews with key informants (members of the steering committee, taskforce and programme staff), trainers and beneficiaries; (v) a preliminary analysis and stakeholder validation meeting; and (vi) compilation of the report.

Sample selection on-line survey

The 100 individuals for the survey were randomly drawn from the list of all 840 people who had participated in the stress awareness training during the previous two years. The starting point was a randomly selected number (generated at http://www.random.org), and subsequent numbers were selected systematically using sampling intervals alternating between 8 and 9. The questionnaire consisted of 30 closed questions (some with room to elaborate) and three open questions.
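For illustration, a minimal Python sketch of this systematic selection procedure is given below. The population size (840), sample size (100) and the alternating intervals of 8 and 9 are taken from the table; alternating 8 and 9 gives an average step of 8.5, close to the sampling fraction of 840/100 = 8.4. The pseudo-random start (standing in for the number obtained from http://www.random.org) and the wrap-around at the end of the list are assumptions, as the table does not specify how the end of the list was handled.

```python
import random

def systematic_sample(population_size=840, sample_size=100, seed=None):
    """Systematic sample with intervals alternating between 8 and 9.

    Sizes and intervals mirror the figures reported in the table; the
    random start and the wrap-around at the end of the list are
    assumptions made for this sketch.
    """
    rng = random.Random(seed)
    start = rng.randrange(population_size)  # random starting index (0-based)

    indices, offset = [], 0
    for step in range(sample_size):
        indices.append((start + offset) % population_size)
        offset += 8 if step % 2 == 0 else 9  # alternate intervals of 8 and 9
    return indices

# Example: a reproducible draw of 100 indices into the list of 840 trainees
sample = systematic_sample(seed=1)
assert len(set(sample)) == 100  # all selected indices are distinct
print(sorted(sample)[:10])
```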

Comparison of the characteristics of the survey respondents with the statistics gathered by KRCT on training participants revealed that respondents were fairly representative of the trained staff in terms of gender and professional background, but less so in terms of place of origin. The latter may be due to non-response. Although 89 % of the sampled individuals responded, only 77 % of the questionnaires were sufficiently complete to allow processing.

Limitations

At the time of the evaluation, approximately 75 % of the staff of family health centres had been trained in stress management. Nearly 300 additional staff members were scheduled to be trained during the last six months of the programme. It was therefore too early to measure the impact of the training on staff (their professional and personal performance), let alone on institutional development. A challenge related to the survey was the lack of universal internet access and, to a certain extent, limited computer literacy. As a consequence, some people filled in the questionnaire together, which is reflected in some of the answers. In addition, there were indications that some staff were inclined to provide “preferred” rather than honest answers. It is important to reiterate that the objectives of the needs assessments and those of the evaluation survey were different, and therefore so were the questions. Indeed, we did not intend to treat the needs assessment findings as baseline data to be compared later with (near) end-line data.