Cyprus Mail

Can an aptitude test predict the outcome?

By Tracy Phillips

Yellis tests, now used in certain schools in Cyprus to predict GCSE success, should only be viewed as part of an all-round assessment of ability 

School reports used to include lots of teacher comments about pupil progress, while grades and predictions were often vague. Comments were subjective, sometimes woolly, sometimes encouraging and sometimes brutally harsh: “Could do better.” But in an age when the very idea of expertise, and not just in the teaching profession, has been devalued in so many ways, evidence-based teaching and reporting is the norm. And evidence in teaching includes a variety of types of assessment data which, taken together and interpreted correctly, can help build a clearer picture of pupil progress over time.

Standardised tests, like the Yellis tests, are now being used in some international schools in Cyprus. Developed in 1994 at the Centre for Evaluation and Monitoring (CEM) at Durham University, when arguments were raging about grade inflation and exam standards, Yellis is a reference system; the name stands for Year 11 Information System. The system analyses GCSE and IGCSE results globally against student test scores; while GCSE exams change over time, the Yellis test is a constant. This information is used in different ways. From a parent’s point of view, what is interesting is how schools use the test to set target grades for pupils and to make exam grade predictions. The Yellis test, usually taken at the start of Year 10, is an aptitude test that scores pupils in three separate areas: vocabulary, mathematics and non-verbal reasoning. This provides a baseline assessment from which to measure progress. By drawing on CEM’s global data and comparing the IGCSE or GCSE success of students with similar test scores, the system predicts a student’s chances of success in different subjects.

Put simply, there is a correlation between pupils achieving a particular test score in Yellis and a particular GCSE grade in a range of subjects like English, Maths or French. This does not mean that the Yellis score determines the GCSE result. What a pupil gets on the GCSE exam is mainly down to hard work, and sometimes a bit of luck on the day. But this information can be helpful to schools and pupils if it allows them to have a meaningful conversation about progress and preparing for exams. It is particularly helpful if it suggests that students are likely to underachieve compared to their own or their teachers’ expectations, and it may even spur some on to work harder. Like any school report, it can be demotivating if the scores are not what the pupil or parent expects. Of course, like any aptitude test, it rests on assumptions, so it may be more helpful in measuring trends than individuals. And some individuals always buck the trend.

So, how is this information really useful to schools? In a number of different ways. Because the test measures knowledge of vocabulary, it shows schools how readily pupils can access the IGCSE curriculum they offer. This matters given that the IGCSE curriculum will be delivered in English and many of the pupils may be native speakers of other languages. It also allows schools to measure how well they are doing in different subject areas against the predicted results. Some schools may also use it to identify pupils who score highly as ‘more able’. It is then easy to see where interventions would help to boost the scores of those seen as underachieving. Pupils are also asked questions about their educational aspirations, which may show where there is a gap between a pupil’s ‘likelihood of staying in education’ and his or her ‘ability’.

Most schools in the UK now make pupils sit some sort of cognitive ability test, or CAT. Since school league tables were introduced in the UK in 1992, there has been an obsession with measuring ability, attainment and progress. This type of data has as much to do with measuring schools as it has to do with measuring individual progress, and it helps to feed the market in education when it is used to support claims about pupil attainment. Most parents are well aware, though, that standardised assessment data, even if accurate, can only ever provide a snapshot of their child’s skills on a particular day, and drawing generalised conclusions from that data may involve a number of unhelpful assumptions. However, looking at a graph that tells a pupil they have a 60 per cent chance of achieving a top grade in IGCSE Maths may be useful as a way of opening a dialogue about how hard that pupil wants to work towards that outcome.

As the head of English in a UK state school, my role is often as much about data as it is about curriculum planning and teaching. My team don’t call me the spreadsheet queen for nothing. But while my spreadsheets usually include cognitive ability data if I have it, it is hardly ever the most useful information on the sheet. For an assessment to be useful to me as a predictive tool, the content and style of the test that students take has to have something in common with the exam I am predicting a grade for; if there is no overlap, it is simply not helpful. This is why tracking the results of teacher assessments designed to measure the skills that will be tested on the exam is more useful to me than analysing the results of cognitive ability tests. If I want to know how well a pupil will answer a question asking them to analyse how a writer creates tension in a literary text, a vocabulary test and a non-verbal reasoning test won’t tell me. Evidence of reading ability, wide reading and the right attitude in the classroom tells me more about a pupil’s likely success in GCSE English Language than any test of ‘natural’ ability.

Yellis is a tool that offers schools a baseline measure. Like all aptitude tests, the information it produces has its place and can be useful if understood and applied appropriately. And like all data, it needs to be handled with care. Even if aptitude tests can measure ‘natural’ ability, which is debatable, GCSEs do not: they measure skills that have to be taught, and hard work. Evidence in educational planning is helpful, but no single piece of data tells the whole story. And the best data comes from teachers’ professional expertise and teacher assessment.
