Can the field of bio-ethics help us think about the ethics of teacher-led research and evidence-based practice in schools?

In previous blogs I have tried to get some kind of grip on the ethical issues associated with both school-led research and evidence-based/informed practice within schools. This has led me to do some reading on 'bio-ethics' and the general ethical principles used in the caring professions, to see if they can shed some light on how to conduct ethical evidence-based practice.  Recently, I read an article by Hersch (2018), who borrows two concepts from bioethics - clinical equipoise and therapeutic misconception - and applies them to 'research' conducted by teacher researchers.  The rest of this post will therefore explore in more detail what is meant by the terms educational misconception and educational equipoise, and will then go on to examine the implications for teachers and schools.

Educational misconception

In the context of medicine, therapeutic misconception exists where a research subject/patient has the mistaken belief that decisions about his or her care are based solely on what is best for the patient/subject. So a patient may take part in a clinical trial, be in the control group receiving a placebo, and yet think they are receiving the best possible care.

This type of misconception is not limited to medicine; it can also happen in teacher and school-led research.  As Hersch argues, once a teacher decides to evaluate a teaching strategy or intervention they add an extra aim to their classroom, beyond teaching pupils to the best of the teacher's abilities.  This additional aim involves finding out what teaching methods work best to accomplish this.  As such, the beneficiaries of the research may not be current pupils, but rather the teacher themselves, or future pupils taught by this teacher.

Hersch goes on to argue that educational misconception is a problem and states: Considering the seriousness with which research on human subjects is conducted, and the importance placed on voluntary participation with easy opt-out and informed consent, it would be a significant ethical failing if students were under the misconception that their teachers only have their learning in mind in the classroom.  Of course, we are not dealing with life-and-death issues, nor even with serious consequences in students' health, either physical or mental.  Yet this does not detract from the gravity of letting students continue with their studies under an unnecessary misconception (p. 8).

Educational equipoise

In the context of medicine, clinical equipoise arises where there is honest professional disagreement amongst experts about the most appropriate treatment. In other words, there is no consensus about the pluses and minuses of the various treatments.  As a result, a clinical trial may be designed to resolve, if possible, the disagreement.

This notion of equipoise has found its way into education; for example, Coe, Kime, et al. (2013) – in the EEF's DIY Evaluation Guide – state:

Ethical evaluations start from a position of 'equipoise', where we do not know for certain what works best; it can be argued that it is unethical not to try and establish which is more effective, particularly if the intervention is going to be repeated or rolled-out more widely… It is important to explain that an evaluation of a new possible method is taking place, not an evaluation of a better method, as if we knew it was better, we'd be doing it already.

Hersch then goes on to identify three difficulties in translating the notion of clinical equipoise to educational research.  First, teaching lacks an ethical framework which goes 'beyond the ethical requirements for people in general'. Indeed, one of the ironies of teaching is that if a teacher were to introduce a new approach and did not rigorously evaluate it, this would be within the bounds of acceptable teaching practice.  On the other hand, a teacher who sets out to rigorously evaluate their practice is probably going to be held to a higher ethical standard.

Second, the chances of harm may be smaller in an educational context than in a clinical one, but that does not mean the potential for pupils being harmed should be ignored.

As Hersch states: At worst, students learn a little less than they could, so is that really a worry? But it is a worry when considering the ethics of SoTL (the scholarship of teaching and learning).  Just because there are worse harms to which people can be subjected in other contexts, and just because some teachers do a poor job teaching, does not mean that those who seek to research their teaching need not care about the harms they inflict, minor as they might be.

Third, Hersch raises the question of who the community of experts is, particularly when there are disagreements about the success or otherwise of an intervention.  Is it researchers who work in higher education institutions or other bodies?  Is it the professional bodies or associations of teachers? Are the experts within the senior leadership teams of a school?  Is it the subject experts who work in a department within a school? It is, perhaps, another indication of how far we are from being a research-informed profession that such seemingly simple questions are so difficult to answer.

Implications

So how can schools and teachers meet some of these challenges?

·      Teachers should consider making clear to pupils participating in a study what the aims of the study are and who the intended beneficiaries are.

·      Teachers have an ethical obligation to keep up with the latest research – in particular, where there appears to be an emerging consensus – so as to make sure they are using the most effective interventions; otherwise, there may be an issue around educational equipoise.

·      The school research lead has an important role to play as a potential gatekeeper as to 'what works, for whom, to what extent, where, and for how long'.

And finally

Perhaps, before thinking about undertaking teacher-led or school-led research or practitioner inquiry, time could be spent thinking about the ethical values which inform the work of the school.  If this is done, so many other things might become that much easier to manage.

References

Coe, R., Kime, S., Nevill, C. and Coleman, R. (2013). The DIY Evaluation Guide. London. Education Endowment Foundation.

Hersch, G. (2018). Educational Equipoise and the Educational Misconception: Lessons from Bioethics. Teaching & Learning Inquiry. 6. 2. 3-15.

How to stop doing things or how not to start doing things in the first place - and the APEASE framework

In a recent article in the TES – Keziah Featherstone @keziah70 – advocates  … stopping as many things as you can this term and not introducing new things, even if they seem like a really good idea …. focus on the core business of teaching and learning and making that as easy and effective as you can. Do not exhaust yourself, your students or your staff by trying something new and glittery. Just because it works somewhere else, or in a book, doesn’t mean it’ll work for you. 

Personally, I'm not sure you can totally stop doing new things – though what you can do is make sure that any idea, innovation or intervention is appropriately evaluated before being introduced.  That said, it is necessary to ensure that existing practices are subject to review – particularly if it is not clear whether they have a positive impact on pupil learning.  To help you do this, I'm going to suggest that you use a simple set of six criteria (APEASE) – developed by Michie, Atkins, et al. (2014) – when designing or evaluating a new intervention.  The APEASE criteria are detailed in Table 1, with the descriptions amended to show how the criteria can also be used to evaluate existing practices.

Table 1. The APEASE criteria for designing and evaluating interventions

Affordability

An innovation is affordable if, within the relevant budget, it can be accessed by all those pupils for whom it might be of relevance or benefit.  For existing practices: if you weren't already doing it, would you be able to find the money? And if you could, would you want to use it for this purpose?

Practicability

Can the intervention be implemented by effective teachers as part of their day-to-day practice?  Or does the intervention require the 'best' staff, highly trained and supported by extensive resources?  Are current practices 'soaking up' extensive specialist resources?

Effectiveness and cost-effectiveness

What will be the effect size of the intervention when implemented under normal day-to-day conditions in the school?  How much will it cost per pupil? How much will it cost per pupil who benefits from the intervention?  You may want to have a look at this post on the Number Needed to Teach to help you with this judgment.

For existing practices, do you have any notion of the cost per pupil, or the cost per pupil who benefits from the practice?

Acceptability

Acceptability refers to the extent to which an intervention is judged to be appropriate by relevant stakeholders.  What is acceptable to teachers may not be acceptable to parents. What is acceptable to the senior leadership team may not be acceptable to the school's governing body or trustees.

Are there some existing practices which, at best, are only marginally acceptable to stakeholders? Can these practices be stopped without upsetting key stakeholders?

Side-effects/safety

An intervention may be effective and practicable, but have unwanted side-effects or unintended consequences. These are often overlooked, so they need to be considered when deciding whether or not to proceed.  You might want to have a look at the work of Zhao (2018), who explores the issue of side-effects in education.

Are you aware of the negative side-effects of an existing practice? If so, are these implicit and just taken for granted, or have they been identified and articulated?

Equity

An important consideration is the extent to which an intervention may reduce or increase the disparities between different groups of pupils. For example, Hill, Mellon, et al. (2016) demonstrate the negative impact on Y7, Y8 and Y9 pupils of a 'superhead' concentrating resources on Y10 and Y11 pupils.

And finally

No set of criteria can give you the answer as to whether to introduce a new innovation or bring an existing practice to a halt.  All they can do, hopefully, is help you increase your chances of making decisions which lead to favourable outcomes for your pupils.

References

Hill, A., Mellon, L., Laker, B. and Goddard, J. (2016). The One Type of Leader Who Can Turn Around a Failing School. Harvard Business Review. 20.

Michie, S., Atkins, L. and West, R. (2014). The Behaviour Change Wheel: A Guide to Designing Interventions. 1st ed. Great Britain: Silverback Publishing.

Zhao, Y. (2018). What Works May Hurt—Side Effects in Education. New York. Teachers College Press.

The ethics of evidence-informed practice

One of the challenges you will face as an evidence-informed practitioner is that while it is relatively straightforward to understand that working in an educational setting has an ethical component, it is not so clear how to go about making ethical evidence-informed decisions within such a setting.  This lack of clarity is in large part due to the lack of an agreed ethical framework for how schools should be led or managed – although this week's announcement of ASCL's Framework for Ethical Leadership in Education is surely to be welcomed.

In addition, although there are clearly established principles for the conduct of research – BERA (2018) – evidence-informed practice should not be conflated with research. Put simply, evidence-informed practice is about making practical decisions based on the best available evidence, which may result in changes in practice; whereas research is a deliberate inquiry into a particular issue which, hopefully, increases the evidence-base, provides a statement of results and creates new knowledge – Carnwell (2001). As such, the ethical considerations involved in the creation of new knowledge are different from those relevant to making decisions based on that new knowledge.

Ethical evidence-informed decision-making

When first seeking guidance on the ethical principles underpinning evidence-informed decision-making, it makes sense to start with the Center for Evidence-Based Management's Guiding Principles - CEBMa (2018).  Established in 2011, the CEBMa – an independent not-for-profit organisation – was set up by an international group of management scholars and practitioners to promote the use of evidence-based leadership and management. The CEBMa's Guiding Principles are described in Figure 1.

Figure 1 The Guiding Principles of the Center for Evidence-Based Management 

Evidence Matters

We will base our practice on the best available critically appraised evidence from multiple sources. Reflective use of high-quality evidence drives better outcomes for organizations, their members and clients, and the general public. Where high-quality evidence is not available, we will work with the limited evidence at hand and supplement it through “learning by doing”, by systematically assessing the outcomes, implications, and consequences of our practice. 

Ethics and Stakeholder Consideration

We recognize the moral obligation to understand the implications that our practice can have for multiple stakeholders, including any who would benefit or be harmed by it in the near or long-term. We seek to overcome the biases associated with a narrow view of stakeholders that contemporary organizations sometimes propagate, and incorporate the values and concerns of all stakeholders in our practice and decision processes. 

Lifelong Learning 

As practitioners in business, academia, government or community organizations, we will remain committed to lifelong learning. We encourage and champion open discussion, feedback, constructive criticism, reflection, and ongoing assessments related to our practice. We appreciate that this may lead us to change our judgment and conclusions.

Independent Critical Thinking

We are open to views from everyone and will weigh them against the best available evidence from multiple sources. We will never be afraid to speak up when the available body of evidence contradicts established practice or political interests. Independent critical thinking is the lifeblood of evidence-based practice.

Center for Evidence-Based Management

These guidelines are particularly useful as they provide generic guidance on the principles underpinning ethical evidence-informed decision-making.  That said, although understanding these principles is relatively straightforward, what is far more complex is seeking to apply them within a real-life situation where the diverse interests of multiple stakeholders need to be met.  In addition, it would be sensible to ensure that these guiding principles are used alongside whatever code of practice or professional guidelines exist for your role.

Applying CEBMa’s guiding principles 

Let’s say that you have been asked to undertake a review of the school’s teacher professional growth policy – including lesson observation – and make recommendations to the school’s senior leadership team by the end of the school year.

First, it will be necessary to obtain the best available critically appraised evidence on teacher professional growth.  However, this will not be limited to academic research; you will also need to obtain other sources of evidence, including evidence from stakeholders, practitioners and the school itself.  In other words, making a recommendation solely on the basis that 'the research evidence says' is indicative of an incomplete process of acquiring evidence.

Second, there is a requirement to engage in genuine inquiry – Le Fevre, Robinson, et al. (2015).  Gathering evidence from a range of stakeholders and then only paying lip-service to what has been said undermines the authenticity of the evidence-gathering process.  Indeed, when gathering and appraising evidence from stakeholders it is essential to be explicit about why the evidence is being gathered and how that evidence is going to be used.  If this is not done, it can lead to negative consequences for the perceived legitimacy of any future decision.

Third, and crucially, there needs to be a willingness to change your mind.  There is little or no point in going through a process of gathering, appraising and aggregating evidence if you are not willing to change your mind about how to proceed.  At the start of the process you may have a view about, say, graded lesson observations and how they can generate valid and reliable evidence of a teacher's effectiveness.  However, if the research or other evidence strongly suggests otherwise, then a willingness to change your position on the matter becomes a prerequisite.

Fourth – and in many situations this may pose a difficult challenge – if, after engaging in your review, you draw conclusions which are different from those of powerful and influential stakeholders, be it senior leaders or staff associations, you will need to find a way of articulating those views which does not unnecessarily alienate those stakeholders.  The work of Heifetz, Grashow, et al. (2009) on adaptive leadership is extremely useful in helping you thrive in such situations.

In addition, there are two other considerations – free and informed consent, and the prevention of harm – that you will also need to take into account (Robinson and Lai, 2005).  In this context, given that you are reviewing the school's professional growth policy as part of your day-to-day responsibilities, and the school is constantly asking teaching staff for feedback on policies, it would be unreasonable to expect formal written consent to be required. However, you will also need to take into account whether staff have the right not to participate in the evidence-gathering process, as there may be power differentials between, say, recently qualified teachers and more senior staff.  In these circumstances it would be wrong to compel teachers to participate.  That said, if you do have colleagues who are not willing to participate, this should 'set off alarm bells' about the nature of the school's culture which has led to colleagues not wanting to take part.  Indeed, this may be the most important thing you find out from your evidence-gathering process.

Next, you will need to take into account the prevention of harm.   This is a particularly tricky issue, as there may be circumstances where whatever decision is made will lead to someone being 'potentially' harmed.  However, that does not mean you cannot try to minimise any harm that arises from the process of generating evidence.  So in this example of reviewing the school's teacher professional growth policy, you may decide to use a survey to gather colleagues' views.  In doing so, you may want to make sure the survey is designed in such a way that it is not possible to identify individual respondents.  If this is not done, respondents may feel that their responses could lead to them being 'targeted for reprisals' if their views do not coincide with those of the senior leadership team.

So where does this leave you – the evidence-informed practitioner – when trying to engage in ethical decision-making?  Robinson and Lai (2005) provide some outline guidance for use with practitioner research, which can be easily adapted for use with evidence-informed practice.

·      There are no definitive rules for the conduct of evidence-informed practice – rather, you will need to be aware of both the principles underpinning evidence-informed practice and the professional values relevant to your role in the school.  You will then need to use your professional judgment to apply them to your particular context.

·      You will not be able to get away from the ethical implications of your decision-making.  Whereas ethical research activities set out to prevent harm being done to the participants, the very nature of decision-making in schools may lead to harm being done to certain individuals.  As such, the ethical evidence-informed decision-maker is seeking to minimise harm rather than eliminate it altogether.

·      Try to increase the perceived benefits to others of participating in the process – the greater the perceived benefits, the greater the chance that you may receive feedback which provides a different insight into the issue you are trying to address.

·      Given the nature of evidence-informed practice, which involves gathering evidence from multiple sources, it makes sense to try to involve others in the decisions that affect them.  Indeed, if we go back to the origins of evidence-based medicine, an essential element involves making clinical decisions which are informed by the patient's preferences and values.

·      Check your assumptions and do not take anything for granted.  Just because you think a process is designed to be unthreatening, that does not mean that is how it will be perceived by colleagues.  Engage in genuine inquiry to gain a real understanding of stakeholders' perceptions of the process.

And finally

Sometimes the difference between research and evidence-informed practice may be blurred.  Nevertheless, there is a simple rule of thumb – try to minimise the harm caused by any decision that you may make.

References

BERA. (2018). Ethical Guidelines for Educational Research (4th Edition). London. British Educational Research Association.

Carnwell, R. (2001). Essential Differences between Research and Evidence-Based Practice.

CEBMa. (2018). Our Guiding Principles. Rotterdam, Netherlands. Center for Evidence-Based Management

Heifetz, R. A., Grashow, A. and Linsky, M. (2009). The Practice of Adaptive Leadership: Tools and Tactics for Changing Your Organization and the World. Harvard Business Press.

Le Fevre, D., Robinson, V. and Sinnema, C. (2015). Genuine Inquiry : Widely Espoused yet Rarely Enacted. Educational Management Administration & Leadership. 43. 6. 883 - 899.

Robinson, V. and Lai, M. (2005). Practitioner Research for Educators: A Guide to Improving Classrooms and Schools. Thousand Oaks, CA. Corwin Press.

 

Number Needed to Teach

One of the benefits of the Christmas and New Year break is that it gives you the opportunity to do some reading and then have the time to give some thought to what you have just read.  Over the Christmas break I have been reading Professor Steve Higgins's fascinating new book Improving Learning: Meta-analysis of Intervention Research in Education, in which I came across a statistic which isn't often seen in educational texts – the Number Needed to Treat (NNT), renamed for this post as the Number Needed to Teach (NNTCH) – which, for a given intervention, shows how many pupils need to be 'taught' in the intervention group for there to be one more favourable outcome compared to the control group.

In thinking about the NNTCH it soon became apparent that it is potentially quite useful, as it can help you quantify how many pupils might benefit from an intervention – which is particularly important if you are trying to make a case for the introduction of the intervention, as it will make it easier to demonstrate to colleagues its potential impact.  The NNTCH allows you to support your argument for the intervention with reference to the number of pupils who may benefit, rather than referring to some abstract statistic such as effect size.  That said, the NNTCH is not without its problems, which I will discuss later.  So to help you make the most of the NNTCH in your decision-making, the rest of this post will:

  • Explain how to calculate the NNTCH

  • Look at the relationship between effect sizes and NNTCH

  • Examine how the NNTCH can help you calculate the average cost of pupils benefiting from an intervention.

  • Make some tentative observations about the usefulness of the NNTCH

How do you calculate the NNTCH? 

The calculation of the NNTCH is relatively straightforward. 

  • Say we have a control group of 100 pupils

  • 50 (50%) pupils score higher than the average score for the control group as a whole.

  • Let’s say we have an intervention group of 100 pupils

  • 66 (66%) pupils in the intervention group score higher than the average score for the control group (an effect size of d = 0.4 SD).

  • We now calculate the reciprocal of the percentage of pupils in the intervention group scoring higher than the control-group average (66%) MINUS the percentage of pupils in the control group scoring higher than that average (50%).

  • So the NNTCH = 1 / (66% - 50%) = 7 pupils (technically 6.25 pupils, which we round up to the nearest whole number).

That is, we need to teach 7 pupils if we want 1 pupil to benefit from the intervention.

Alternatively, we can think of the NNTCH along these lines: if we put 100 pupils through an intervention, 16 more pupils are likely to experience a favourable outcome – i.e. score higher than the average in the control group – than if they were in the control group.
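To make the arithmetic concrete, here is a minimal sketch in Python; the function name is my own and purely illustrative, and the figures are taken from the worked example above.

```python
import math

def nntch(p_intervention: float, p_control: float) -> int:
    """Number Needed to Teach: the reciprocal of the difference between
    the proportion of the intervention group scoring above the
    control-group average and the proportion of the control group doing
    so (0.5 by definition), rounded up to the nearest whole pupil."""
    return math.ceil(1 / (p_intervention - p_control))

print(nntch(0.66, 0.50))  # 1 / 0.16 = 6.25, rounded up -> 7 pupils
```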

What’s the relationship between effect sizes and the NNTCH?

We can work out the relationship between effect sizes and the NNTCH by 'mashing' together two resources. First, we can use the work of Coe (2002), who produced a table showing the relationship between effect size and the percentage of an intervention group scoring above the average of the control group.  Second, we can use this table to create a number of simulations of interventions with different effect sizes, and use an easily accessible online NNT calculator to work out the associated NNTCH for a particular effect size. This then allows us to produce a table which shows the relationship between a particular effect size and the NNTCH.

This now means that for a given effect size we can work out how many pupils will need to be exposed to the intervention in order to obtain one additional favourable outcome.   For example, say we have an intervention where the effect size is 0.2 SD; this means that 58% of the pupils in the intervention group have a score greater than the average score in the control group.  In other words, if 100 pupils were to go through the intervention, 8 additional pupils are now likely to experience a favourable outcome, compared to what would have happened had they been in the control group.  If we then use the NNT calculator, this converts to an NNTCH of 13 pupils.

[Table: the relationship between effect size and the NNTCH]
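If you prefer not to rely on an online calculator, the table can be reproduced with a short script. This is a sketch under the normal-distribution assumption that underpins effect sizes, and it assumes the scipy library is available for the normal CDF:

```python
import math
from scipy.stats import norm

# For an effect size d, the proportion of the intervention group scoring
# above the control-group average is the normal CDF evaluated at d; the
# control-group proportion is 0.5 by definition.
for d in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0):
    p_intervention = norm.cdf(d)
    nntch = math.ceil(1 / (p_intervention - 0.5))
    print(f"d = {d:.1f}: {p_intervention:.0%} above control average, NNTCH = {nntch}")
```

For d = 0.2 this prints 58% and an NNTCH of 13, matching the worked example above.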

Calculating the cost per pupil of the intervention to aid decision-making

It seems to me that one of the major benefits of calculating the NNTCH is that it can help you make more informed judgments about the costs and benefits of an intervention.  To help illustrate this, I'm going to use an example based upon the EEF's Thinking, Doing, Talking Science evaluation report – Hanley, Slavin, et al. (2016).  Needless to say, I won't go into the full details of the intervention, which can be found in the report itself – other than to say its focus was on making Y5 science lessons in primary schools more practical, creative and challenging.  Let's now look at some of the 'numbers' associated with the intervention.

  • Intervention cost per school - £1000

  • Number of pupils per school involved 50 

  • Average cost per pupil £20 

  • Effect size of the intervention 0.22

  • Number Needed to Treat 13 (see table)

  • Number of pupils likely to benefit per school 4 (50/13 rounded up to the nearest whole number)

  • Average cost per pupil who now experiences a more favourable outcome £250 (£1000/4)

So suddenly an intervention which appears to cost only £20 per pupil now costs £250 per pupil who benefits – what appears to be a relatively cheap intervention per pupil becomes a lot more costly once you take into account the number of pupils who are likely to benefit from it.

On the other hand, you might have a different intervention which at first glance appears more expensive per pupil, but which may be more cost-effective once you take into account how many pupils benefit from it.

  • Cost per school £1500

  • Number of pupils involved 50

  • Average cost per pupil £30

  • Effect size 0.4

  • Number Needed to Treat 7 (see table)

  • Number of pupils who are likely to benefit per school 8 (50/7 rounded up to the nearest whole number)

  • Average cost per pupil who now experiences a more favourable outcome £187.50 (£1500/8)

The benefit of this calculation is that although the second intervention appears to be £500 more expensive, it works out £62.50 cheaper per pupil who benefits from the intervention.  In other words, this second intervention would appear to be more cost-effective than a cheaper intervention with a larger NNTCH.
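This comparison is easy to reproduce. Here is a minimal sketch, using the figures from the two examples above (the function name is my own, purely illustrative):

```python
import math

def cost_per_benefiting_pupil(cost_per_school: float, pupils: int, nntch: int) -> float:
    """Average cost per pupil who experiences a more favourable outcome."""
    benefiting = math.ceil(pupils / nntch)  # pupils likely to benefit, rounded up
    return cost_per_school / benefiting

print(cost_per_benefiting_pupil(1000, 50, 13))  # 250.0 -> £250 per benefiting pupil
print(cost_per_benefiting_pupil(1500, 50, 7))   # 187.5 -> £187.50 per benefiting pupil
```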

Some observations about the usefulness of the NNTCH

The NNTCH is relatively easy to calculate and is quite a simple way of working out how many pupils need to experience an intervention before a single additional pupil experiences a more favourable outcome than if they had been in the control group.  It also helps you make a more realistic estimate of the average cost of the intervention per additional pupil who experiences a more favourable outcome.  As such, it provides an additional reference point when trying to work out the costs and benefits of an intervention.

On the other hand, the NNTCH is not without its problems. It assumes that a favourable outcome means additional pupils doing better than the 'average' of the control group; yet a pupil in the intervention group may do better than they would have done in the control group and still not score better than that average.  In addition, when calculating the NNTCH, confidence intervals should also be calculated.  We also need to take into account that for some pupils the intervention may have a detrimental impact, so that they perform worse than if they had been in the control group.  Indeed, in medicine attention is also paid to the Number Needed to Harm when introducing new interventions.

And finally

The view you take on the NNTCH will depend upon many factors, though one is especially important: your stance on the usefulness of effect sizes and their associated assumptions.

References

Coe, R. (2002). It's the Effect Size, Stupid: What Effect Size Is and Why It Is Important. Paper presented at the British Educational Research Association annual conference, 2002.

Hanley, P., Slavin, R. and Elliott, L. (2016). Thinking, Doing, Talking Science: Evaluation Report and Executive Summary - Updated July 2016. London. Education Endowment Foundation

Higgins, S. (2018). Improving Learning: Meta-Analysis of Intervention Research in Education. Cambridge: Cambridge University Press.