To understand KCSE results, look at the teachers and how they teach

Kenya National Examinations Council Chairman Prof. George Magoha oversees a KCSE examination at Sawagongo High School in Siaya on November 13, 2017. PHOTO | ONDARI OGEGA | NATION MEDIA GROUP

What you need to know:

  • National exams are believed to motivate our children, lift some students to world-class standards, help increase national productivity and contribute to the restoration of our global competitiveness.
  • Assessment measures should be concerned with evaluating a candidate’s abilities across the whole spectrum of Bloom’s taxonomy of learning, not just knowledge acquisition.
  • We must know that the final results are only the tail-end of a process and that other variables such as infrastructure, teacher capacity, learners who are ready to learn and societal norms come into play.

The administration of the Kenya Certificate of Primary Education (KCPE) and the Kenya Certificate of Secondary Education (KCSE) examinations is considered by most Kenyans to be essential to the development of a credible education system.

Such national exams are believed to motivate our children, lift some students to world-class standards, help increase national productivity and contribute to the restoration of our global competitiveness.

Contrary to these beliefs, the examinations are largely based on a simplistic stimulus-response view of learning. The two examinations essentially evaluate knowledge on the basis of a candidate’s recall of what he or she had previously learnt.

Strictly speaking, assessment measures should be concerned with evaluating a candidate’s abilities across the whole spectrum of Bloom’s taxonomy of learning, not just knowledge acquisition but also application, synthesis and creativity.


The trend of KCSE and KCPE results over the last three years has been of interest to those of us in research and statistics due to the perceived drop in performance.

A statistical evaluation of the validity of the 2017 KCSE examination papers in mathematics, biology, chemistry, Kiswahili and CRE, for example, reveals startling results that critics of the performance should consult before making unsubstantiated criticisms.

All these papers received high statistical indices in both content and construct validity.

They revealed high correlations between the substance of the papers and the expected learning outcomes as per the Kenya Institute of Curriculum Development syllabus. 

In other words, the examinations measured what they were supposed to measure. The difference with the 2017 examinations from the previous years was that they challenged students to use the knowledge learnt to apply, synthesise and even postulate on possible happenings in a given context.
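For readers who want the mechanics behind such a content-validity check, the following sketch is purely illustrative and not drawn from the KNEC evaluation: it implements Lawshe’s content validity ratio (CVR), one standard index by which a panel of curriculum experts rates whether an exam item measures what the syllabus intends. The function name and the panel figures are hypothetical.

```python
# Illustrative sketch only (not from the KNEC study): Lawshe's content
# validity ratio for a single exam item, given expert panel ratings.

def content_validity_ratio(essential_votes: int, total_panelists: int) -> float:
    """CVR = (n_e - N/2) / (N/2), ranging from -1 (no agreement that the
    item is essential to the syllabus) to +1 (unanimous agreement)."""
    half = total_panelists / 2
    return (essential_votes - half) / half

# Hypothetical panel of 10 curriculum experts rating one mathematics item:
print(content_validity_ratio(9, 10))   # 0.8  -> strong syllabus alignment
print(content_validity_ratio(5, 10))   # 0.0  -> split panel, weak alignment
```

A CVR near +1 across the items of a paper is the kind of high content validity the evaluation reports; values near zero or below would flag items that stray from the syllabus.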


The 2016 KCSE papers were replicas of the previous papers, only that the variable of cheating was eliminated. Our teachers ‘teach to the test’ and students routinely go through numerous past paper questions knowing that the examination will have a close semblance to previous papers. This is not learning.

It is mere memorisation, hence the catastrophic results in biology and CRE in 2017 for example. This year’s examinations were based on the high cognitive constructs of Bloom’s taxonomy.

In fact, these results bring to the surface the flawed pedagogical approaches in the Kenyan classroom and the need for major reforms in learning dynamics in our schools.

I use the performance of these subjects and the validity of their test items to encourage all of us to embrace research to guide our pronouncements on such weighty matters confronting our education system.


Quite often, validity is sacrificed for reliability and this mistake usually results in measures being only concerned with the behaviour of scores rather than the intellectual value of the results.

Being concerned only with the acceptability of candidates’ scores does not tell us whether such candidates have the capacities to engage constructively with the expectations of the 21st century economy.
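The distinction drawn here between reliability and validity can be made concrete with a short sketch, which assumes nothing about KNEC’s actual procedures. Cronbach’s alpha is a standard reliability coefficient: a high alpha certifies only that candidates’ scores behave consistently across items, not that the test measures anything worthwhile. The scores below are invented for illustration.

```python
# Illustrative sketch only: Cronbach's alpha, a common reliability index.
# A test of trivial recall items can score a near-perfect alpha while
# having little validity for 21st-century skills.

def cronbach_alpha(scores):
    """scores: list of candidates, each a list of per-item scores."""
    k = len(scores[0])                      # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([cand[i] for cand in scores]) for i in range(k)]
    total_var = var([sum(cand) for cand in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four hypothetical candidates, three items each:
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1]]
alpha = cronbach_alpha(data)   # roughly 0.99: highly reliable scores,
                               # yet alpha alone says nothing about validity
```

This is why a sound assessment programme must report validity evidence alongside reliability coefficients rather than letting the behaviour of scores stand in for the intellectual value of the results.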

 In evaluating KCSE results, the following factors ought to be taken into consideration:

The historical context: The development of our education system does not tally with the practicability of our examination system.

This is especially so for the use of national examinations to certify that an individual has successfully completed a given level of education and/or for making a decision about an individual’s entrance into a tertiary institution or the job market.

Considering the wide gap between the poor and rich, urban and rural schools, good and bad schools, the acceptability or practicability of a common examination system is certainly questionable.

Equity issues: It is a basic fact that no common education procedures, such as national examinations, can produce truly just measures until policy makers put in place appropriate national delivery standards for social, educational resource and other support systems for all.

What justifications are there, for instance, to give a common examination to candidates in Alliance High School, Kenya High School, Maseno School, and the candidates in Lumino Secondary School in Kakamega County, Masii Secondary school in Machakos County, Viyalo Secondary School in Vihiga County and Siburi Mixed Day Secondary School in Homa Bay County? First, we need an educational playing field that is at least seemingly equitable for all students.

Application of technology: Here, the focus is on the standard and quality of test designers at all educational levels.

Assessment today is seen as a technical art, a complex of standardised means for attaining a predetermined academic end.

Today in Kenya, the quality of personnel involved in test designs cannot be compared with what obtains in the developed world. In the developed world, the standard of assessment procedures is highly specialised in terms of the entire test development processes, the cultural background, the material chosen for inclusion, the language and idioms used and the validation processes.

In Kenya, a large percentage of the test designers are not up to speed with the statistical procedures involved in the educational assessment measures and the advances made with the use of computers.

Screening/categorisation: The administration of national examinations also faces the challenge of using test results to screen out certain groups through mere categorisation of grades. What is magical about requiring a C+ to join university? And why adopt subject clusters only after one scores a minimum of C- in the examination?

Considering that we have now eliminated cheating and are now testing high order cognitive skills, why can’t we rationalise the admission criteria to universities? 

The challenge here is to inquire whether these groups of people have been justifiably or unjustifiably screened out and furthermore to ask what constitutes a fair and equitable use of test scores in personnel selection, placement and classification.


The philosophical and political implications of this challenge lie more in the values a society accepts as desirable goals for a system of personnel allocation and utilisation. We need to ask, therefore, how much weight the Kenyan society should give to maximising the productivity of her educational system. How much weight should be given to balancing equal opportunities among different ethnic groups to eliminate educational and occupational inequalities?

Psychometric research results may be able to provide some measure of answers, but the cost of forcing numerical equality among groups of unequal access and opportunity for selection and placement decisions through test processes poses a great challenge.

Right of script: In the administration of national examinations, candidates do not know how their tests are being interpreted and how decisions have been arrived at.

Given the less-than-perfect reliability of test scores and the possibility of human error in handling test results, it is desirable that candidates be given the privilege of seeing their scripts if they remain unconvinced by the final results communicated to them. This, apart from benefiting the candidate, can improve the reliability of scoring and the handling of scores.

It can also improve upon the assessment procedures. Information available from the KCSE 2017 marking exercise indicates that in most of the subjects, many of the candidates submitted blank papers, giving the examiners an easy time in going through them. Wouldn’t it be prudent to avail samples of these papers to the public to demonstrate the decaying civilisation our education system represents and dispel the unfounded assertions that KNEC deliberately failed the students?


We must know that the final results are only the tail-end of a process and that other variables such as infrastructure, teacher capacity, learners who are ready to learn and societal norms come into play. There is, therefore, need for all these factors to be constantly monitored through empirical research and evaluation processes. 

Instead of wholesale blaming of poor performance on unsubstantiated devious occurrences, we should seek solutions to systemic problems related to delivery: teaching methods; the shortage of teachers in schools; absenteeism and poor commitment to teaching; the lack of teaching and learning facilities such as books, laboratory equipment and libraries; and poor working conditions for teachers. We must also address the gross inequalities in school resources and the weak social support system.

Even when all these receive serious attention, it is the teacher — and not the assessment measures — that must be the cornerstone of educational reforms.

Empirical evidence has identified two variables that have led to the current level of performance in national examinations: (i) greater vigilance against examination fraud and (ii) increased validity of the test items in the examinations.


Until now, however, there has been no attempt to systematically examine the content of examinations in relation to the implemented curriculum, with a view to assessing the validity of these examinations.

The key question addressed in this rapid assessment of last year’s KCSE results has been: Do students fail examinations, or is it that the examinations fail students?

A systematic content analysis carried out on selected science, language and social science subject syllabi — and their corresponding examination papers — shows a strong relationship between what is covered in the examination papers and the content of the subject syllabi, suggesting that the examinations themselves may not be a cause of students’ poor performance.

Instead, students’ poor performance in the 2017 KCSE national examinations is attributable to the teaching and learning processes among the other variables mentioned earlier.

 Prof. Laban P. Ayiro is the acting Vice-Chancellor of Moi University. [email protected] 

