Creative Learners
Learning Brief


JET

How can teachers be assisted to use learner results to inform effective teaching?

Category: Creative Learners | Education system improvement | 23 July 2013


Assessment for learning, or formative assessment, is based on the idea that significant improvements in learning can take place when teachers and learners are able to obtain and use relevant information to help them establish where learners are in their learning, where they need to go and how best to close any gaps. The success of this process depends on the effective gathering, use and flow of information, or feedback. Although effective feedback loops are often difficult to create, educationists tend to agree that when formative feedback is used effectively within a system, it leads to positive changes in learner achievement.

The Annual National Assessment (ANA)

Following this trend, the Department of Basic Education (DBE) introduced the Annual National Assessment (ANA) in 2008 as part of the Foundations for Learning Campaign. The ANA may be described as a common task assessment: a system-wide, uniform assessment tool administered by teachers in specific grades within a given time frame. The value of the ANA as a common task assessment lies in the fact that, if used effectively, it assists teachers and heads of departments to identify areas in which remediating strategies are needed. Furthermore, it communicates the expected standard of learning at a specific time to teachers and school management teams (SMTs) alike.

The ANA report (DBE, 2011a) describes how the national database of ANA results should be able to provide information to assist those working in teacher development, textbook and workbook development or school support, and to target the areas most in need of intervention. To this end, the DBE published Guidelines for the Interpretation of the ANA 2011 Results (DBE, 2011c). This document, which describes how the ANA results should be analysed at class, school, district, provincial and national level, was circulated to schools and districts. Even though the guideline document is quite specific regarding the steps to be taken by teachers and SMTs in interpreting the results, teachers require high levels of expertise and experience to interpret assessment data successfully in ways that benefit their learners. Teachers need to be able to administer the assessment, link the assessment items to specific skills and/or knowledge, correctly interpret the learners’ responses and results, provide relevant and constructive feedback and then adjust their own teaching to meet the needs of the learners. Only if teachers succeed in achieving all these steps can effective formative feedback be said to have occurred.

However, because it is a large-scale summative assessment, the ANA suffers from what Looney (2011) describes as the central drawbacks of standardised testing, namely:

  • Standardised testing is designed to ensure that the testing process is reliable, valid and generalisable, so that valid comparisons and claims can be made regarding the effect or impact of an intervention. As a result, standardised tests cannot capture the minutiae of performance on complex tasks such as problem solving. Standardised tests can provide some diagnostic detail, but not nearly as much as formative assessments can.
  • Feedback loops in standardised testing generally suffer from a time delay of several months, whereas formative testing requires almost immediate feedback.
  • There is a strong association between standardised testing and high-stakes consequences[1], which may lead to “teaching-to-the-test” and thus the narrowing of the curriculum by only teaching what is being tested.

The strategies the DBE has adopted since 2008 counteract these concerns to a certain extent:

  • In an effort to shorten the time delay between testing and receiving feedback, teachers administered the tests in their own schools, but not in their own classes. They then marked the tests and reported the results of their own classes. Thus, some feedback to the teacher through test administration and marking did take place almost instantly. However, how such feedback was understood and the extent to which it was used by teachers to structure their teaching is largely unknown.

  • With regard to the analysis of the results, as mentioned above, the DBE assumed that by distributing the guidelines for analysis, teachers would be able to analyse the results at class, grade and school level, thus shortening the feedback time. However, JET’s school visits suggest that there was no consistent implementation of the guidelines.
  • The DBE encouraged schools to produce learner reports based on the ANA results. Again, while this is an admirable attempt to encourage feedback to parents, the implementation was not monitored consistently and the quality of reports varied from single scores per subject per learner to more substantive reports detailing learners’ scores per subject in relation to class and grade averages (a simple sketch of such a report calculation follows this list). The ideal of reporting on the weaknesses and strengths of every learner was most definitely not reached.
  • Because it is extremely difficult to test higher order skills (e.g. problem solving and creativity) and ’soft skills’ such as conflict resolution and time management through standardised tests, the DBE recommended that the ANA should be supplemented with day-to-day formative assessment in the classroom. However, as is usually the case with large-scale testing, formative classroom assessment tends to be overshadowed by the quantitative results when the ANA results are reported.
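
To illustrate the kind of “more substantive” report referred to above, the sketch below shows one way a learner’s subject scores could be reported alongside class and grade averages. It is a minimal illustration with made-up data and hypothetical field names, not the DBE’s prescribed reporting format.

```python
# Minimal sketch (hypothetical data): report a learner's score per subject
# alongside the class average and the grade average.

def average(scores):
    """Arithmetic mean of a list of percentage scores."""
    return sum(scores) / len(scores)

# Made-up ANA-style results: {subject: {class_name: {learner: score}}}
results = {
    "Mathematics": {"1A": {"Thabo": 45, "Lerato": 60, "Sipho": 52},
                    "1B": {"Anele": 70, "Zanele": 38}},
    "Home Language": {"1A": {"Thabo": 55, "Lerato": 72, "Sipho": 49},
                      "1B": {"Anele": 66, "Zanele": 58}},
}

def learner_report(learner, class_name):
    """Print the learner's score, class average and grade average per subject."""
    for subject, classes in results.items():
        class_scores = list(classes[class_name].values())
        grade_scores = [s for c in classes.values() for s in c.values()]
        print(f"{subject}: learner {classes[class_name][learner]}%, "
              f"class avg {average(class_scores):.0f}%, "
              f"grade avg {average(grade_scores):.0f}%")

learner_report("Thabo", "1A")
```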

How can teachers’ formative use and analyses of the ANA data improve?

Despite these limitations, the ANA has been instituted as one of the tools to improve learning and teaching, and it is therefore important to establish whether educators are in fact able to use the tool effectively. In this context, JET carried out a pilot project to determine whether teachers could analyse and use common task assessment data, leading them to adapt their teaching strategies to improve their learners’ achievement. Teachers in 11 schools were given a common task assessment to administer to their learners. The common task assessment, which was developed by JET, included questions to assess learners’ understanding of the concepts taught during a specific period of time as prescribed in the DBE’s Curriculum and Assessment Policy Statements (CAPS).

The study method

The study entailed first training the participants in assessment procedures and techniques. An initial six-hour workshop was conducted during which teachers and heads of departments (HODs) were instructed on the general characteristics of formative and summative assessments and common tasks. This was followed by a discussion of every item in the common task assessment and its link to the CAPS, as well as training on the administration, scoring, moderation and recording of the learners’ results by item on an assessment grid. Participants were also trained to use a curriculum monitoring tool and to follow the required procedures for its completion. The anticipated outcome of the workshop was that participants would be able, in practice, to administer, score, moderate and record their classes’ results on the common task assessment and to examine critically their level of curriculum implementation by completing the curriculum monitoring tool.

In a second six-hour workshop, participants were trained to analyse learners’ results using a colour-coding system as well as item difficulty rankings (expressed as the percentage of correct answers for a particular item). A paper-based system was used since the participants were all from remote rural communities with limited access to computers and electricity; it could, however, easily be adapted for use in Excel, and the principles of analysis remain the same. The analysis assisted participants to draw conclusions about the state of learning in their classes. This information was linked to both the findings regarding their own curriculum implementation and their schools’ academic improvement plans. Practical demonstrations of remediation strategies and adaptations of teaching methods were then given and discussed.
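
As an illustration of the principle described above, the sketch below computes an item difficulty ranking (the percentage of learners answering each item correctly) from a simple assessment grid and assigns a colour band to each item. The grid layout, the item names and the colour thresholds are assumptions for illustration only; they are not the exact system used in the workshops.

```python
# Minimal sketch (assumed grid layout and thresholds): rank items by the
# percentage of correct answers and colour-code them for quick scanning.

# Assessment grid: one row per learner, 1 = correct, 0 = incorrect per item.
grid = {
    "Thabo":  {"item1": 1, "item2": 0, "item3": 0},
    "Lerato": {"item1": 1, "item2": 1, "item3": 0},
    "Sipho":  {"item1": 1, "item2": 0, "item3": 1},
}

def item_difficulty(grid):
    """Return {item: percentage of learners who answered the item correctly}."""
    items = next(iter(grid.values())).keys()
    n = len(grid)
    return {item: 100 * sum(row[item] for row in grid.values()) / n
            for item in items}

def colour_band(pct_correct):
    """Map an item's percentage correct to an illustrative colour band."""
    if pct_correct >= 70:
        return "green"   # most learners coped with the item
    if pct_correct >= 40:
        return "amber"   # partial mastery; revisit the concept
    return "red"         # widespread difficulty; re-teach and remediate

# List items from most to least difficult, with their colour bands.
for item, pct in sorted(item_difficulty(grid).items(), key=lambda kv: kv[1]):
    print(f"{item}: {pct:.0f}% correct -> {colour_band(pct)}")
```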

What did we learn from the workshops?

Several important observations were made during the workshops which help to answer the question: How can teachers be assisted to effectively use learner results to inform their teaching?

1.      During the discussion of the common task assessment questions, participants could easily identify which topic area in the CAPS a question related to, but struggled to identify the exact skill or concept being tested. Understanding of the finer details of what was being tested by each question was lacking: for example, teachers could identify that a particular question tested pattern recognition, but not that the pattern was a geometric one involving three different shapes.

Since teachers struggled to pin down these details, it was almost impossible for them to decide where in the hierarchy of difficulty a particular question lay; for example, teachers found it difficult to decide whether one question was more or less difficult than another, and why.

This might provide a clue as to why teachers generally teach at a specific difficulty level and not beyond and why teachers often seemingly teach skills at random.  This also indicates that some teachers might find the analysis and interpretation of the ANA results difficult; they may merely be using the DBE’s guideline document without understanding the link between each question in a test and the specific concept or skill being tested and how it fits into the CAPS. A document containing this information could easily be produced by the DBE from the test frameworks used during the ANA test development phase.

2.      The practical analysis of the results during the workshop was time-consuming, but well worth it. Participants very easily pointed out their classes’ weaknesses and strengths, as well as learners in need of additional support. Participants were forthcoming in providing information on whether these problematic topics had been taught in class, as well as whether they had been taught at the correct difficulty level. This was an indication that most participants experienced the workshop as a safe environment in which mutual growth and learning were taking place, as opposed to an environment focused on compliance. To ensure that the ANA results are used to maximum effect, their analysis should always be done in an atmosphere of mutual respect and trust. This has implications for how HODs and principals approach the analysis of the ANA results at class, grade and school level, and also for how districts use the results to structure school-level interventions. If the benefit of an open and honest analysis of the results is not apparent to teachers, or if the use of the results is viewed as punitive, there is a high risk that teachers will view the ANA as having high-stakes consequences. This will almost certainly lead to more “teaching-to-the-test” and the subsequent narrowing of the curriculum that has been associated with other large-scale testing programmes.

3.      A further significant observation during the workshops was that, in general, participants expressed genuine surprise at the difficulty level required in the second term of grade 1. With the exception of one or two teachers, all the teachers indicated that they teach well below the level expected in the CAPS. This is a disturbing indication that teachers are not yet well versed in the expected learning standards and levels per term as prescribed in the CAPS. However, analysis of the common task assessment results during the workshop had the beneficial effect of making teachers aware of the gap between what they teach and what is expected; this is confirmation that common task assessments such as the ANA could be used to communicate the expected standard of teaching. This emphasises the great responsibility that lies with the DBE and all developers of common task assessments to ensure that the correct information is communicated to schools.

4.      It was also observed that teachers were much more competent in identifying what their learners know and how they came to have this knowledge than in coming up with strategies to close the gaps between learners’ knowledge and the expectations in the CAPS. Teachers tended to answer the question of what to do to change the situation very generally, by saying they would ‘re-teach’ or ‘teach differently’. Details regarding the teaching strategies they would adopt were lacking, even when teachers were prompted to provide them. This led the workshop facilitator to demonstrate examples of teaching strategies in areas where most learners experienced problems. This situation again highlights that teachers need practical demonstrations of how to teach certain content through different methodologies, as well as more practical training and support on the CAPS.

5.      Some of the more disturbing evidence that emerged was that teachers struggled most with teaching counting, number concept, patterns and time. Counting, number concept and patterns form the foundation for many of the other concepts taught in mathematics, and weak teaching strategies in these areas can thus have devastating consequences for learners in later years. Although these observations are based on a small-scale study, they raise the questions: Are we training our teachers well? Is practical teaching experience used sufficiently in pre-service training? Do in-service teachers get sufficient in-class, practical support? Teachers’ comments certainly implied that they found the practical demonstrations offered during the workshops useful.

6.      An attendant benefit of the workshop was the illustration of the use of practical, cheap and accessible (mostly home-made) teaching aids. Teachers acknowledged that they learned that no fancy, expensive equipment is needed to teach mathematics. It was surprising how many of the participants expressed the belief that effective teaching depends on the use of elaborate, costly materials.

7.      With regard to curriculum monitoring through book control and class visits, the participants were very aware that this is part and parcel of the HOD’s job and a requirement in all schools. However, among both teachers and HODs there seemed to be uncertainty regarding why and how to do this practically. There was also an initial hesitancy to implement curriculum monitoring due to a preconception that it involves “someone checking up on someone else”. Once the why and how of curriculum monitoring had been explained and the supportive role of the HOD discussed, participants demonstrated an increased willingness to play a role in the process. They then freely engaged with the curriculum monitoring tool and were able to link their findings to the learners’ results. Once the learners’ results had been analysed and the links with the findings of the curriculum monitoring made, the next task was to capture this information in the academic improvement plan and stipulate the steps to be taken to improve the current state of learning and teaching. At this stage, the workshop participants again struggled to pin down the exact details of what needed to be done. This finding is consistent with the general lack of detail in, and vagueness of, school improvement plans and academic improvement plans in schools more broadly. It is another indication that educators require practical training and support, as well as assistance in establishing the links between the gaps in teaching and how to go about fixing them. It is definitely not enough to instruct teachers to develop academic improvement plans (a compliance approach) without training and supporting them in the nitty-gritty of coming up with remedial actions (a developmental approach).

Conclusion

Would teachers benefit from training on the use of learner results to inform their teaching? The only possible answer to this question is a resounding “YES!” Not only would teachers benefit from training on the use of common task assessments (such as the ANA) and the effective use of assessment results, they also need practical support and guidance in this regard. This study demonstrated that some teachers struggle to link specific items in an assessment to a detailed understanding of the skill being tested and to understand how these items fit into the sequence of skills described in the CAPS; that some teachers struggle to come up with alternative teaching strategies to address gaps in learner competency; and that some teachers struggle to summarise the gaps identified through a common task assessment and link them to specific action steps for remediation in schools’ academic improvement plans. Such teachers are not able to use common task assessment data in a formative way: feedback is not taking place successfully. Teacher development efforts should take cognisance of the need to remedy this situation, especially in the light of the rollout of the ANA by the DBE.

There is a need for higher education institutions, non-governmental organisations and the DBE to join hands to ensure that the rich data generated through the ANA are used to full effect by all teachers. This can be achieved by developing assessment grids for the ANA tests to be completed by teachers, detailing the specific skill or concept tested in each item. However, this on its own would not be sufficient to assist teachers to address learners’ weaknesses. Information about the common mistakes made by learners in specific questions, as well as possible teaching strategies to remedy them, also needs to be made available to teachers.

Furthermore, all content training for teachers should make provision for the analysis of concepts and skills with regard to the curriculum, including which concepts and skills precede and follow the specific concept or skill in the curriculum, common misconceptions, and how best to teach these concepts and skills. This would ensure that teacher development goes beyond content training by making the pedagogical aspects of learning a specific concept or skill clear, preferably through demonstration lessons and practical activities.

Lastly, teachers, HODs and principals should receive specific training on analysing, summarising and reporting on the ANA data. The link between the findings of the ANA and other common tasks and schools’ academic improvement plans should be emphasised.

References

Black, P. & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education, 5(1): 7-74.

DBE. (2011a). Report on the Annual National Assessment of 2011. Pretoria: Department of Basic Education. 

DBE. (2011b). Action Plan Towards 2014: Towards the Realisation of Schooling 2025. Pretoria: Department of Basic Education. 

DBE. (2011c). Guidelines for the Interpretation of the ANA 2011 Results. Pretoria: Department of Basic Education.

Looney, J.W. (2011). Integrating formative and summative assessment: Progress toward a seamless system? OECD Education Working Papers, No. 58. Paris: OECD.

Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science. 18(2): 119-144.

Wiliam, D. & Black, P. (1996). Meanings and Consequences: a basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5): 537-548.


[1] High-stakes testing could be defined as any test which is perceived by the testee to have important consequences for either the individual or the school. It thus includes any test that is used to make decisions regarding an individual or group, e.g. to determine whether a school is classified as an “underperforming school” in a specific district.

 



In Short

Significant improvements in learning can take place when teachers and learners are able to obtain and use information to help them establish where learners are in their learning.  JET carried out a pilot project to determine whether teachers could analyse and use common task assessment data, leading them to adapt their teaching strategies to improve their learners’ achievement.  In this learning brief they share the findings of their project and also suggest practical strategies to assist teachers further in this regard. 

