The ethics of evidence-informed practice

One of the challenges you will face as an evidence-informed practitioner is that while it is relatively straightforward to understand that working in an educational setting has an ethical component, it is far less clear how to go about making ethical evidence-informed decisions within such a setting.  This lack of clarity is in large part due to the absence of an agreed ethical framework for how schools should be led or managed – although this week’s announcement of ASCL’s Framework for Ethical Leadership in Education is surely to be welcomed.

In addition, although there are clearly established principles for the conduct of research, BERA (2018), evidence-informed practice should not be conflated with research. Put simply, evidence-informed practice is about making practical decisions based on the best available evidence, decisions which may result in changes in practice; whereas research is a deliberate inquiry into a particular issue, which hopefully adds to the evidence base, provides a statement of results and creates new knowledge, Carnwell (2001). As such, the ethical considerations involved in the creation of new knowledge are different from those relevant to making decisions based on that new knowledge.

Ethical evidence-informed decision-making

When first seeking guidance on the ethical principles underpinning evidence-informed decision-making, it makes sense to start with the Center for Evidence-Based Management’s Guiding Principles, CEBMa (2018).  Established in 2011 by an international group of management scholars and practitioners, CEBMa is an independent not-for-profit organisation which promotes the use of evidence-based leadership and management. CEBMa’s Guiding Principles are set out in Figure 1.

Figure 1 The Guiding Principles of the Center for Evidence-Based Management 

Evidence Matters

We will base our practice on the best available critically appraised evidence from multiple sources. Reflective use of high-quality evidence drives better outcomes for organizations, their members and clients, and the general public. Where high-quality evidence is not available, we will work with the limited evidence at hand and supplement it through “learning by doing”, by systematically assessing the outcomes, implications, and consequences of our practice. 

Ethics and Stakeholder Consideration

We recognize the moral obligation to understand the implications that our practice can have for multiple stakeholders, including any who would benefit or be harmed by it in the near or long-term. We seek to overcome the biases associated with a narrow view of stakeholders that contemporary organizations sometimes propagate, and incorporate the values and concerns of all stakeholders in our practice and decision processes. 

Lifelong Learning 

As practitioners in business, academia, government or community organizations, we will remain committed to lifelong learning. We encourage and champion open discussion, feedback, constructive criticism, reflection, and ongoing assessments related to our practice. We appreciate that this may lead us to change our judgment and conclusions.

Independent Critical Thinking

We are open to views from everyone and will weigh them against the best available evidence from multiple sources. We will never be afraid to speak up when the available body of evidence contradicts established practice or political interests. Independent critical thinking is the lifeblood of evidence-based practice.

Center for Evidence-Based Management

These guidelines are particularly useful as they provide generic guidance on the principles underpinning ethical evidence-informed decision-making.  That said, although understanding these principles is relatively straightforward, applying them within a real-life situation where the diverse interests of multiple stakeholders need to be met is far more complex.  In addition, it would be sensible to ensure that these guiding principles are used alongside whatever code of practice or professional guidelines exist for your role.

Applying CEBMa’s guiding principles 

Let’s say that you have been asked to undertake a review of the school’s teacher professional growth policy – including lesson observation – and to make recommendations to the school’s senior leadership team by the end of the school year.

First, it will be necessary to obtain the best available critically appraised evidence on teacher professional growth.  However, this will not be limited to academic research: you will also need to gather evidence from other sources, including stakeholders, practitioners and the school itself.  In other words, making a recommendation solely on the basis that ‘the research evidence says’ is indicative of an incomplete process of acquiring evidence.

Second, there is a requirement to engage in genuine inquiry, Le Fevre, Robinson, et al. (2015).  Gathering evidence from a range of stakeholders and then only paying lip-service to what has been said undermines the authenticity of the evidence-gathering process.  Indeed, when gathering and appraising evidence from stakeholders it is essential to be explicit about why the evidence is being gathered and how it is going to be used.  Failing to do so can damage the perceived legitimacy of any future decision.

Third, and crucially, there needs to be a willingness to change your mind.  There is little or no point in going through a process of gathering, appraising and aggregating evidence if you are not willing to change your mind about how to proceed.  At the start of the process you may have a view about, say, graded lesson observations and how they can generate valid and reliable evidence of a teacher’s effectiveness.  However, if the research or other evidence strongly suggests otherwise, then a willingness to change your position on the matter becomes a prerequisite.

Fourth, and in many situations this may pose a difficult challenge: if, after engaging in your review, you draw conclusions which differ from those of powerful and influential stakeholders – be it senior leaders or staff associations – you will need to find a way of articulating those views which does not unnecessarily alienate those stakeholders.  The work of Heifetz, Grashow, et al. (2009) on adaptive leadership is extremely useful in helping you thrive in such situations.

In addition, there are two other considerations – free and informed consent, and the prevention of harm – that you will also need to take into account, Robinson and Lai (2005).  In this context, given that you are reviewing the school’s professional growth policy as part of your day-to-day responsibilities, and the school is constantly asking teaching staff for feedback on policies, it would be unreasonable to expect formal written consent to be required. However, you will also need to consider whether staff have the right not to participate in the evidence-gathering process, as there may be power differentials between, say, recently qualified teachers and more senior staff.  In these circumstances it would be wrong to compel teachers to participate.  That said, if you do have colleagues who are not willing to participate, this should ‘set off alarm bells’ about the nature of the school’s culture.  Indeed, this may be the most important thing you find out from your evidence-gathering process.

Next, you will need to take into account the prevention of harm.   This is a particularly tricky issue, as there may be circumstances where whatever decision is made will lead to someone being ‘potentially’ harmed.  However, that does not mean you cannot try to minimise any harm that arises from the process of generating evidence.  So in this example of reviewing the school’s teacher professional growth policy, you may decide to use a survey to gather colleagues’ views.  In doing so, you may want to make sure the survey is designed in such a way that it is not possible to identify individual respondents.  If this is not done, respondents may feel that their responses could lead to them being ‘targeted for reprisals’ if their views do not coincide with those of the senior leadership team.

So where does this leave you – the evidence-informed practitioner – when trying to engage in ethical decision-making?  Robinson and Lai (2005) provide some outline guidance for use with practitioner research, which can be easily adapted for use with evidence-informed practice.

·      There are no definitive rules for the conduct of evidence-informed practice – rather, you will need to be aware of both the principles underpinning evidence-informed practice and the professional values relevant to your role in the school.  You will then need to use your professional judgment to apply them to your particular context.

·      You will not be able to get away from the ethical implications of your decision-making.  Whereas ethical research activities set out to prevent harm being done to the participants, the very nature of decision-making in schools may lead to harm being done to certain individuals.  As such, the ethical evidence-informed decision-maker is seeking to minimise harm rather than eliminate it altogether.

·      Try to increase the perceived benefits to others of participating in the process – the greater the perceived benefits, the greater the chance that you may receive feedback which provides a different insight into the issue you are trying to address.

·      Given that evidence-informed practice involves gathering evidence from multiple sources, it makes sense to try to involve others in the decisions that affect them.  Indeed, if we go back to the origins of evidence-based medicine, an essential element involves making clinical decisions which are informed by the patient’s preferences and values.

·      Check your assumptions and do not take anything for granted.  Just because you think a process is designed to be unthreatening, that does not mean that is how it will be perceived by colleagues.  Engage in genuine inquiry to gain a real understanding of stakeholders’ perceptions of the process.

And finally

Sometimes the difference between research and evidence-informed practice may be blurred.  Nevertheless, there is a simple rule of thumb – try to minimise the harm caused by any decision that you may make.

References

BERA. (2018). Ethical Guidelines for  Educational Research (4th Edition). London. British Educational Research Association

Carnwell, R. (2001). Essential Differences between Research and Evidence-Based Practice.

CEBMa. (2018). Our Guiding Principles. Rotterdam, Netherlands. Center for Evidence-Based Management

Heifetz, R. A., Grashow, A. and Linsky, M. (2009). The Practice of Adaptive Leadership: Tools and Tactics for Changing Your Organization and the World. Harvard Business Press.

Le Fevre, D., Robinson, V. and Sinnema, C. (2015). Genuine Inquiry : Widely Espoused yet Rarely Enacted. Educational Management Administration & Leadership. 43. 6. 883 - 899.

Robinson, V. and Lai, M. (2005). Practitioner Research for Educators: A Guide to Improving Classrooms and Schools. Thousand Oaks, CA. Corwin Press.

 

Number Needed to Teach

One of the benefits of the Christmas and New Year’s break is that it gives you the opportunity to do some reading and then have the time to give some thought to what you have just read.  Over the Christmas break I have been reading Professor Steve Higgins’s fascinating new book Improving Learning: Meta-analysis of Intervention Research in Education, in which I came across a statistic which isn’t often seen in educational texts – the Number Needed to Treat (NNT), renamed for this post as the Number Needed to Teach (NNTCH) – which, for a given intervention, shows how many pupils need to be ‘taught’ in the intervention group for there to be one more favourable outcome compared to the control group.

In thinking about the NNTCH, it soon became apparent that it is potentially quite useful, as it can help you quantify how many pupils might benefit from an intervention – which is particularly important if you are trying to make a case for introducing the intervention, as it will make it easier to demonstrate to colleagues the intervention’s potential impact.  The NNTCH allows you to support your argument with reference to the number of pupils who may benefit, rather than referring to some abstract statistic such as effect size.  That said, the NNTCH is not without its problems, which I will discuss later.  So to help you make the most of the NNTCH in your decision-making, the rest of this post will:

  • Explain how to calculate the NNTCH

  • Look at the relationship between effect sizes and NNTCH

  • Examine how the NNTCH can help you calculate the average cost of pupils benefiting from an intervention

  • Make some tentative observations about the usefulness of the NNTCH

How do you calculate the NNTCH? 

The calculation of the NNTCH is relatively straightforward. 

  • Say we have a control group of 100 pupils

  • 50 (50%) of these pupils score higher than the average score for the control group as a whole

  • Let’s say we have an intervention group of 100 pupils

  • 66 (66%) of the pupils in the intervention group score higher than the average score for the control group (an effect size of d = 0.4 SD)

  • The NNTCH is the reciprocal of the difference between the percentage of the intervention group scoring above the control-group average (66%) and the percentage of the control group scoring above its own average (50%)

  • So the NNTCH = 1 / (66% − 50%) = 7 pupils (technically 6.25 pupils, which we round up to the nearest whole number)

That is, we need to teach 7 pupils if we want 1 pupil to benefit from the intervention.

Alternatively, we can think of the NNTCH along these lines: if we put 100 pupils through an intervention, 16 more pupils are likely to experience a favourable outcome – i.e. score higher than the average in the control group – than if they were in the control group.
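To make the arithmetic concrete, here is a minimal sketch in Python of the calculation just described. The 66%/50% figures come from the worked example above; the function name is my own, and this is an illustration rather than a definitive implementation.

```python
import math

def nntch(prop_intervention_above: float, prop_control_above: float) -> int:
    """Number Needed to Teach: the reciprocal of the difference between
    the proportion of the intervention group scoring above the
    control-group average and the proportion of the control group doing
    so (by definition, 50%), rounded up to a whole pupil."""
    diff = prop_intervention_above - prop_control_above
    if diff <= 0:
        raise ValueError("The intervention shows no benefit over the control group")
    return math.ceil(1 / diff)

print(nntch(0.66, 0.50))  # 1 / 0.16 = 6.25, rounded up to 7 pupils
```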

What’s the relationship between effect sizes and the NNTCH?

We can work out the relationship between effect sizes and the NNTCH by ‘mashing’ together two resources. First, we can use the work of Coe (2002), who produced a table showing the relationship between effect size and the percentage of an intervention group scoring above the average of the control group.  Second, we can use this table to create a number of simulated interventions with different effect sizes, and then use an easily accessible online NNT calculator to work out the associated NNTCH for each effect size. This allows us to produce a table which shows the relationship between a particular effect size and the NNTCH.

This means that for a given effect size we can work out how many pupils need to be exposed to the intervention in order to obtain one additional favourable outcome.   For example, say we have an intervention with an effect size of 0.2 SD: this means that 58% of the pupils in the intervention group have a score greater than the average score in the control group.  In other words, if 100 pupils were to go through the intervention, 8 additional pupils are likely to experience a favourable outcome, compared to what would have happened had they been in the control group.  Using the NNT calculator, this converts to a NNTCH of 13 pupils.

[Table: effect sizes and their corresponding NNTCH values]
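As the table itself is only available as an image, here is a hedged sketch of how its values can be reproduced. It assumes (as Coe’s table does) normally distributed outcomes, so that the proportion of the intervention group scoring above the control-group average is the standard normal CDF of the effect size d; the function name and rounding convention are my own, so individual rows may differ by one from the post’s table.

```python
import math
from statistics import NormalDist

def nntch_from_effect_size(d: float) -> int:
    """Convert an effect size d (in standard deviations) to the NNTCH."""
    prop_above = NormalDist().cdf(d)          # e.g. d = 0.2 -> ~0.58 (58%)
    return math.ceil(1 / (prop_above - 0.5))  # reciprocal of the gain over 50%

for d in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"d = {d:.1f}  ->  NNTCH = {nntch_from_effect_size(d)}")
# d = 0.2 gives an NNTCH of 13, matching the example above.
```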

Calculating the cost per pupil of the intervention to aid decision-making

It seems to me that one of the major benefits of calculating the NNTCH is that it can help you make more informed judgments about the costs and benefits of an intervention.  To illustrate this I’m going to use an example based upon the EEF’s Thinking, Doing, Talking Science evaluation report, Hanley, Slavin, et al. (2016).  Needless to say, I won’t go into the full details of the intervention, which can be found here – other than to say its focus was on making Y5 science lessons in primary schools more practical, creative and challenging.  Let’s now look at some of the ‘numbers’ associated with the intervention.

  • Intervention cost per school - £1000

  • Number of pupils per school involved 50 

  • Average cost per pupil £20 

  • Effect size of the intervention 0.22

  • Number Needed to Treat 13 (see table)

  • Number of pupils likely to benefit per school 4 (50/13 rounded up to the nearest whole number)

  • Average cost per pupil who now experiences a more favourable outcome £250

So suddenly an intervention which appears to cost only £20 per pupil actually costs £250 per pupil who benefits – what appears to be a relatively cheap intervention per pupil becomes a lot more costly once you take into account the number of pupils who are likely to benefit from it.

On the other hand, you might have a different intervention which at first glance appears more expensive per pupil – but which may be more cost-effective once you take into account how many pupils benefit from the intervention.

  • Cost per school £1500

  • Number of pupils involved 50

  • Average cost per pupil £30

  • Effect size 0.4

  • Number Needed to Treat 6 (see table)

  • Number of pupils who are likely to benefit per school 8

  • Average cost per pupil who now experiences a more favourable outcome £187.50

The benefit of this calculation is that although the second intervention appears to be £500 more expensive, it works out £62.50 cheaper per pupil who benefits from the intervention.  In other words, the second intervention would appear to be more cost-effective than a cheaper intervention with a larger NNTCH.
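Here is a short sketch of this comparison, using the cost, pupil and NNTCH figures from the two examples above (rounding the number of benefiting pupils to the nearest whole pupil, which reproduces both the 4-pupil and 8-pupil figures; the function name is mine).

```python
def cost_per_benefiting_pupil(cost_per_school: float, pupils: int, nntch: int) -> float:
    """Average cost per pupil who experiences a more favourable outcome."""
    pupils_benefiting = round(pupils / nntch)  # 50/13 -> 4 pupils, 50/6 -> 8 pupils
    return cost_per_school / pupils_benefiting

print(cost_per_benefiting_pupil(1000, 50, 13))  # intervention 1: 250.0 (£250)
print(cost_per_benefiting_pupil(1500, 50, 6))   # intervention 2: 187.5 (£187.50)
```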

Some observations about the usefulness of the NNTCH

The NNTCH is relatively easy to calculate and is quite a simple way of working out how many pupils need to experience an intervention before a single additional pupil experiences a more favourable outcome than if they had been in the control group.  It also helps you make a more realistic estimate of the average cost of the intervention per additional pupil who experiences a more favourable outcome.  As such, it provides an additional reference point when trying to work out the costs and benefits of an intervention.

On the other hand, the NNTCH is not without its problems. It assumes that a favourable outcome means additional pupils doing better than the ‘average’ in the control group; yet a pupil in the intervention group may do better than they would have done in the control group and still not score above the control-group average.  In addition, when calculating the NNTCH, confidence intervals for the NNTCH should also be calculated.  We also need to take into account that for some pupils the intervention may have a detrimental impact, with those pupils performing worse than if they had been in the control group.  Indeed, in medicine attention is also paid to the Number Needed to Harm when introducing new interventions.
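On the confidence-interval point, a common approach in medicine is to take the reciprocals of the confidence limits of the difference in proportions. The sketch below applies this to the 66%/50% groups of 100 pupils used earlier in the post; the normal approximation and function name are assumptions on my part, and a proper statistical tool should be preferred in practice.

```python
import math

def nntch_ci(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """Approximate 95% CI for the NNTCH: take the reciprocals of the
    confidence limits of the difference in proportions (p1 = intervention,
    p2 = control). Reciprocals reverse the order of the limits."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lower_diff, upper_diff = diff - z * se, diff + z * se
    if lower_diff <= 0:
        # The interval for the difference includes zero (or harm),
        # so the NNTCH is not bounded above.
        return 1 / upper_diff, math.inf
    return 1 / upper_diff, 1 / lower_diff

low, high = nntch_ci(0.66, 100, 0.50, 100)
print(f"NNTCH point estimate 6.25, 95% CI roughly {low:.1f} to {high:.1f}")
# Roughly 3.4 to 40 pupils - a reminder of how imprecise a single trial can be.
```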

And finally

The view you take on the NNTCH will depend upon many factors, though one is especially important: your stance on the usefulness of effect sizes and their associated assumptions.

References

Coe, R. (2002). It's the Effect Size, Stupid: What Effect Size Is and Why It Is Important. Paper presented at the British Educational Research Association annual conference, 2002.

Hanley, P., Slavin, R. and Elliott, L. (2016). Thinking, Doing, Talking Science: Evaluation Report and Executive Summary - Updated July 2016. London. Education Endowment Foundation

Higgins, S. (2018). Improving Learning: Meta-Analysis of Intervention Research in Education. Cambridge. Cambridge University Press.

Teacher and school-led research - some ethical considerations

This is the second in a series of posts that looks at the issue of ethics within the context of teacher-led and school-led research.  In the first post I examined senior leaders’ lack of awareness of issues pertaining to research ethics.  In this post, I will pose some questions about ethics and teacher/school-led research and provide some tentative answers. So here goes.

1.     Do I need to take into account ethical issues when conducting teacher/school led research?

Yes – as a professional educator you have an obligation to conduct research in an ethical manner, Robinson and Lai (2005).

2.     Where can I gain guidance on ethical issues?

The best places to start are BERA’s Ethical Guidelines for Educational Research, BERA (2018), alongside the code of conduct of your professional association.

3.     Do I need to get ethical approval to conduct teacher/school research?

Maybe – if the research is being conducted as part of a formal programme of study or in partnership with a higher education institution, then in all likelihood you will require ethical approval from the relevant ‘ethics committee’. If not, it depends upon the activity you are undertaking and the procedures within your school.

4.     If I – or the school - are undertaking research outside of the auspices of a HEI are there occasions where it would make sense for the research to be subject to some appropriate form of internal approval process?

Yes – the following activities are likely to be deemed ‘research’ and should have some form of approval:

·      the testing or development of hypotheses 

·      comparing interventions 

·      collecting quantitative and qualitative data over and above data generally collected within the school - though this could include normally collected data 

·      significant changes to teaching approaches/support strategies – which are in addition to what is normally provided for pupils.

·      generalising findings beyond the setting of the school

5.     Are there any activities which are not going to require ethical approval?

·      the evaluation of existing provision

·      performance reviews, and testing within normal educational requirements if there is no research question involved (used exclusively for assessment, management or improvement purposes).

·      inquiries based on the review of previously published research literature

·      research based solely on the researcher’s personal reflections and self-observation

6.     Can I rely on ‘equipoise’ and ‘informed consent’ to provide sufficient ethical protection for participants?

No – other matters to be considered include: transparency; the right to withdraw; potential harm from participation in the research; privacy; and data storage.

7.     Is the quality of the research undertaken an ethical issue?

Yes, Gorard (2002).  No matter what ethical guidelines are put in place, if the research is of poor quality then participants are being disturbed, schools are wasting their money, and teacher researchers are wasting their time.  What’s more, given the ease with which research findings can these days be disseminated via social media or the plethora of conferences, poor research may lead to teachers and schools being misled, Coe (2008).

And finally

Ethical concerns are not limited to research but should also form an integral part of discussions about school improvement, quality improvement and evidence-based practice.  If you are only discussing ethical issues in the context of teacher or school-led research, then hopefully this blog post will give you pause for thought.

References

BERA. (2018). Ethical Guidelines for  Educational Research (4th Edition). London. British Educational Research Association

Coe, R. (2008). Reflections on Ethics in Educational Research (Draft). University of Durham.

Gorard, S. (2002). Ethics and Equity: Pursuing the Perspective of Non-Participants. Social Research Update, 39 (Winter 2002). University of Surrey.

McNiff, J. (2017). Action Research: All You Need to Know. London. SAGE

Robinson, V. and Lai, M. (2005). Practitioner Research for Educators: A Guide to Improving Classrooms and Schools. Thousand Oaks, CA. Corwin Press.

Are research active schools acting ethically?

A recent conference organised by the Institute for Effective Education - Improving outcomes for children: How do we know we’re succeeding? – covered a range of research and evaluation being carried out in schools. Everything from small-scale evaluations run by schools, where schools were testing innovations they had come up with, to large EEF trials, where schools were scaling up interventions with promise. It prompted me to think about the ethical framework within which these projects operate.  In one session, when I asked what ethical framework and processes the school used before and during the conduct of a randomised controlled trial, the response was that ‘equipoise’ was assumed, parents gave consent, but there was no formal approval process to look at the ethical issues before carrying out a trial.

Now I could go into a long discussion about equipoise and informed consent.  But for want of a better phrase, for me equipoise is the research equivalent of a ‘get out of jail’ card.  In other words, we don’t know what works, which gives us an excuse to do pretty much what we like.  It’s useful for schools but does not provide sufficient ethical protection to pupils.  As for informed consent – this is quite rightly something that needs to be taken into account – but the latest British Educational Research Association ethical guidelines for educational research, BERA (2018), identify a far greater range of ethical issues which need to be addressed.

Now to be fair, a comment made at the end of a session does not provide incontrovertible evidence to support my claim that research-active schools need to give more thought to the ethical component of their work.    So with this in mind, it seemed sensible to look into the available research on school leaders’ views on research ethics.  Fortunately, I came across some recent research by Bryan and Burstow (2017), who engaged with 25 school leaders to explore the ways in which research-active schools were aware of and using ethical guidance in their research practices.

Bryan and Burstow undertook an initial online survey of 520 contacts – with 44 responses (8.5%) – and found some confirmation of their hypothesis that there was an issue around the ethical awareness of school researchers:

·      35.9% of respondents indicating that they always take time to consider ethical issues

·      30.8% of respondents indicating that they haven’t had research ethics at the forefront of their planning

·      20.5% of respondents considering themselves part of an ethically correct organisation – sufficiently informed and demonstrating good practice

·      12.8% of respondents indicating that they always take full consideration of any ethical issues and use published guidelines.

Bryan and Burstow then went on to interview 25 senior leaders – drawn from primary and secondary schools – who managed professional learning within their schools and had research projects taking place, and subsequently identified six themes:

1.     Ethics were not on the radar

2.     Schools were seen as moral high grounds – which would not run projects that potentially harmed children.

3.     Informed consent and right to withdraw – interestingly some respondents felt that children could not refuse to do something, with an expectation that everyone would take part.

4.     A rejection of anonymity – teachers would get back to specific pupils if they felt there was a need to.

5.     Parental permission – a general expectation that parents would be OK with their children participating in research.

6.     Concerns about workload and how research might contribute to overloading teachers.

Bryan and Burstow then go on to make four observations. First, none of the respondents from the 25 schools involved in the interviews had engaged with the BERA guidelines, and they appeared to have limited understanding of the importance of quality, rigour and trustworthiness in research.  Second, there was a general dismissal of informed consent and of the right to withdraw or say no.  Third, teachers saw little difference between research and what they did in classrooms on a day-to-day basis.  Fourth, senior leaders in the schools had a rich understanding of how research within their schools was ethically situated – which went beyond a set of bureaucratic guidelines – with research practices aligned to pedagogic practices.

So where does this leave me and my claim that research-active schools may be acting unethically?  On reflection, research-active schools are probably not acting unethically; that said, my sense is that they are not acting in ways which could be considered best practice (see Stutchbury and Fox, 2009). As Bryan and Burstow note, there is a need for a strategic approach to the issue of research ethics within schools.  The EEF/IEE and Research Schools are well placed to do this – but I am not sure whether this is currently being done.  That said, this is not about creating a bureaucratic approval process; instead it’s about getting school leaders to explicitly think about the ethical issues of any research undertaken.  This is particularly important given what Zhao (2018) has to say about the potential negative side-effects of educational interventions.

And finally, given the publication of this year’s OfSTED Chief Inspector’s report and the comments made about the off-rolling of pupils, unfortunately we cannot assume that all schools are moral high grounds, that school leaders provide acceptable levels of ethical leadership, or that all schools are appropriate environments within which school-led research should be taking place.

References 

BERA (2018). Ethical Guidelines for Educational Research (Fourth Edition). London. British Educational Research Association

Bryan, H. and Burstow, B. (2017). Leaders’ Views on the Values of School-Based Research: Contemporary Themes and Issues. Professional Development in Education. 43. 5. 692-708.

Stutchbury, K. and Fox, A. (2009) Ethics in Educational Research: introducing a methodological tool for effective ethical analysis, Cambridge Journal of Education, 39 (4), 489-504

Zhao, Y. (2018). What Works May Hurt—Side Effects in Education. New York. Teachers College Press.

The school research lead: meta-analysis, effect sizes and the leaking ship

Over the last few weeks I have given considerable thought to the usefulness to school leaders of both effect sizes and meta-analyses as they try to bring about improvement in their schools.  On the one hand, there is the view that effect sizes and meta-analysis have major limitations but remain the best we have, Coe (2018).  On the other hand, there is the view that effect sizes and meta-analysis are fundamentally wrong, do not represent the best that we have, and that there are viable alternatives, Simpson (2017) and Simpson (2018).

So what is the evidence-based school leader or teacher to do when the scientific literature at their disposal is possibly either limited or just plain wrong?  A useful starting point is Otto Neurath’s simile comparing scientists with sailors on a rotting ship: ‘We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.’  This would suggest that if you think effect sizes and meta-analysis may be leaking planks on the good ship ‘evidence-based education’ – but remain the best we have – it would be foolish to rip them out before we have something to put in their place.  Alternatively, if you are of the view that effect sizes and meta-analysis are ‘wrong’ and that you cannot draw useful conclusions from them, then keeping these ‘leaky planks’ in place may well lead to the ‘ship’ taking on water and becoming unstable, going off in the wrong direction or even sinking.   If that is the case, then whoever is steering the ship needs to do at least three things. First, make appropriate adjustments to the course the ship would ‘naturally’ travel.  Second, redouble their efforts to find ‘new planks’ which can be used as replacements for the leaky ones. Third, find other materials which can help plug the leaks while they look for new planks.

However, from the point of view of the evidence-based school leader, it probably does not matter which stance you take on effect sizes and meta-analysis: the actions you need to undertake to address the issues at hand will in large part remain the same.   First, as Kvernbekk (2016) states, when looking at research studies it’s the causal claim and associated support factors that you are after – not the average effect size.  Properly conducted randomised controlled trials, which include an appropriate impact evaluation or qualitative element, may give you several clues as to what you need to do to make an intervention work in your setting.

Second, spend time making sure you are solving the right problem.  Solving the right problem will have a major impact on whether or not you are successful in bringing about favourable outcomes for pupils.  There is no point using high-quality and trustworthy research studies if they are being used to help you solve the wrong problem.  On the other hand, if they are helping you get a better understanding of the issues at hand, then that’s another matter.

Third, as an evidence-based school leader you won’t rely solely on the academic and scientific literature; you will also draw upon a range of other sources of evidence – practitioner expertise, stakeholder views and school/organisational data – to help you come up with a solution which leads to a favourable outcome.  That does not mean these other sources of evidence are without their own problems.  Nevertheless, in making a decision it might be better to use these sources of evidence, while being aware of their limitations, than not to use them at all, Barends and Rousseau (2018).

Fourth, given the ‘softness’ of the evidence available, even if you come up with a plausible solution you will need to give a great deal of thought to the scale of implementation.  In all likelihood, small, fast-moving, iterative pilot studies within your school are more likely to lead to long-term success than school-wide or multi-academy-trust-wide rollouts. Langley, Moen, et al. (2009) and Bryk, Gomez, et al. (2015) provide useful guidance as to what to do given different levels of knowledge, resources and stakeholder commitment.

Fifth, as Pawson (2013) states, it is important to attend extremely closely to the ‘quality of the reasoning in research reports rather than look only to the quality of the data’ (p11).   Moreover, it is necessary to give thought and effort to improving the quality of your own practical reasoning.  This could be done by making sure that, before you make ‘evidence-based decisions’, your thinking is tested by individuals who may well disagree with you.  You may also want to look at the work of Jenicek and Hitchcock (2005), who provide guidance on the nature of critical thinking and strategies you can adopt to improve your own thinking skills.

And finally

This discussion should not be seen as an attempt to dismiss the usefulness of evidence-based practice.  Rather, it should be seen as an attempt to outline what to do when the research evidence is a bit ‘squishy’.  Even if it wasn’t ‘squishy’, effect sizes and systematic reviews would only provide you with a small fraction of the evidence you need when making decisions about which educational interventions to adopt or withdraw within your school, Kvernbekk (2016).

 References

Barends, E. and Rousseau, D. (2018). Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London. Kogan Page.

Bryk, A. S., Gomez, L. M., Grunow, A. and LeMahieu, P. G. (2015). Learning to Improve: How America's Schools Can Get Better at Getting Better. Cambridge, MA. Harvard Education Press.

Coe, R. (2018). What Should We Do About Meta-Analysis and Effect Size? CEM Blog. http://www.cem.org/blog/what-should-we-do-about-meta-analysis-and-effect-size/. 5 December, 2018.

Jenicek, M. and Hitchcock, D. (2005). Evidence-Based Practice: Logic and Critical Thinking in Medicine. United States of America. American Medical Association Press.

Jones, G. (2018). The Ongoing Debate About the Usefulness of Effect Sizes. GaryRJones.com. https://www.garyrjones.com/blog/.

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London. Routledge.

Langley, G. J., Moen, R., Nolan, K. M., Nolan, T. W., Norman, C. L. and Provost, L. P. (2009). The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco. John Wiley & Sons.

Pawson, R. (2013). The Science of Evaluation. London. Sage Publications.

Popper, K. (1992). The Logic of Scientific Discovery (5th Edition). London. Routledge.

Simpson, A. (2017). The Misdirection of Public Policy: Comparing and Combining Standardised Effect Sizes. Journal of Education Policy. 32. 4. 450-466.

Simpson, A. (2018). Princesses Are Bigger Than Elephants: Effect Size as a Category Error in Evidence‐Based Education. British Educational Research Journal. 44. 5. 897-913.