The school research lead, The Implementation Game and increasing your chance of successfully implementing an intervention

In last week’s blog we looked at how school leaders could use the Hexagon Tool to help them make better decisions about whether a particular intervention is right for their school and setting. In this week’s blog I’m going to look at what comes next – the implementation of the intervention – and how the work of Melanie Barwick and The Implementation Game (TIG) can increase your chance of actually bringing about improvements for your pupils and staff.

Put simply, The Implementation Game is a resource that helps you develop an implementation plan for whatever intervention you are looking to introduce. Based on research evidence from the field of implementation science, TIG is ‘played’ by the group of people who will be helping you implement the intervention. In particular, it gets the implementation team to think about five different stages of implementation:

·     Preparing for practice change – choosing an innovation – for example, questions around your needs, your desired outcomes, and the potential evidence-based practices which could achieve those outcomes

·     Preparing for practice change – readiness – whether the proposed innovation meets your needs, whether it is a good fit, what changes will need to be made, what resources are available, what capacity is available to sustain the innovation, how you will obtain and maintain buy-in, and how you will communicate the goal of the innovation

·     Implementation structure and organisation – what partnerships will be required, what training will be required, what physical space will be needed, how you will maintain fidelity to both the implementation process and the innovation, and what technology will be needed

·     Ongoing implementation support – what staff training will be provided, what technical assistance and coaching will be made available, what data you will collect to evaluate process and outcomes, and how you will go about learning to improve your processes

·     Maintaining fidelity and sustaining the innovation – how you will maintain fidelity and quality over time

In addition, TIG provides a range of other resources which help you think through:

·     The different factors that might be relevant for your intervention – for example, the characteristics of the intervention, the outer setting and external factors, the inner setting and internal factors, characteristics of individuals involved and the process of engaging with them.

·     Implementation strategies – gathering information, building buy-in, developing relationships, developing training materials, financial strategies and incentives, and quality management

·     Implementation outcomes – for example, acceptability, adoption, appropriateness, cost, feasibility and fidelity

A few observations

It seems to me that TIG is a useful tool that helps you engage in a rigorous process of planning the implementation of an intervention. However, using the tool does not guarantee success – that will depend upon many factors, not least your skill both in using TIG and in subsequently implementing the identified actions. Indeed, one thing that I really like about the tool is that, right from the beginning, it gets you to think about the sustainability of the intervention – it’s not just about how to implement an innovation, tick a box and say ‘job done’.

And finally 

This will be my last blog of the academic year – and I intend to return with new resources and material at the end of August.

Reference

Barwick, M. (2018). The Implementation Game Worksheet. Toronto, ON: The Hospital for Sick Children.

The school research lead, the hexagon tool and making good decisions about implementing interventions

As we approach the end of the academic year, you will no doubt be giving some thought to what new practices or interventions you wish to adopt this coming September. Unfortunately, we know that once implemented many of these interventions will not live up to their initial promise – maybe the evidence supporting the intervention was not that robust and the intervention’s benefits were overstated – maybe there isn’t the external or internal expertise available to support the implementation of the intervention – maybe the intervention doesn’t fit with other processes and practices within the setting – maybe the intervention runs counter to the existing school culture and is met with resistance from some of the people who need to implement it.

However, it might be possible to increase your chances of choosing to implement an intervention that not only appears to work in other settings but also has a good chance of working in yours. One way of increasing your chances of successfully implementing an intervention is to make sure that, before the intervention is implemented, you undertake some form of structured evaluation of both the intervention and your setting. To help you do this, I’m going to suggest that you have a look at something known as the Hexagon Tool – Metz and Louison (2019) – which will help you undertake a structured appraisal of: the research evidence to back claims for the intervention’s effectiveness; whether there is a clear and usable intervention which can be adapted to the local context; the support available to help implement the intervention; whether the intervention meets the needs of your school/setting; whether the intervention is a good fit with other processes and practices within your school/setting; and whether your school/setting has the capacity to implement the intervention.

Figure 1 The Hexagon Tool


Metz and Louison go on to provide guidance on when to use the tool – ideally at the early stages of the decision-making process about whether to adopt the intervention. They also provide guidance on how to use the tool – the tasks which need to be completed before the tool is actually used, and what needs to be done as the tool is being used.

Of particular use is that they provide both a set of questions and an associated rating scale to help you make judgements about each of the six elements. For example, for the ‘evidence’ component they pose the following questions.

1. Are there research data available to demonstrate the effectiveness (e.g. randomized trials, quasi-experimental designs) of the program or practice? If yes, provide citations or links to reports or publications.

2. What is the strength of the evidence? Under what conditions was the evidence developed?

3. What outcomes are expected when the program or practice is implemented as intended? How much of a change can be expected?

4. If research data are not available, are there evaluation data to indicate effectiveness (e.g. pre/post data, testing results, action research)? If yes, provide citations or links to evaluation reports.

5. Is there practice-based evidence or community-defined evidence to indicate effectiveness? If yes, provide citations or links.

6. Is there a well-developed theory of change or logic model that demonstrates how the program or practice is expected to contribute to short term and long term outcomes?

7. Do the studies (research and/or evaluation) provide data specific to the setting in which it will be implemented (e.g. has the program or practice been researched or evaluated in a similar context)? If yes, provide citations or links to evaluation reports.

8. Do the studies (research and/or evaluation) provide data specific to effectiveness for culturally and linguistically specific populations? If yes, provide citations or links specific to effectiveness for families or communities from diverse cultural groups.

They suggest you use these questions to make a rating judgement based on the following five-point scale.

5 – High Evidence: The program or practice has documented evidence of effectiveness based on at least two rigorous, external research studies with control groups, and has demonstrated sustained effects at least one year post treatment.

4 – Evidence: The program or practice has demonstrated effectiveness with one rigorous research study with a control group.

3 – Some Evidence: The program or practice shows some evidence of effectiveness through less rigorous research studies that include comparison groups.

2 – Minimal Evidence: The program or practice is guided by a well-developed theory of change or logic model, including clear inclusion and exclusion criteria for the target population, but has not demonstrated effectiveness through a research study.

1 – No Evidence: The program or practice does not have a well-developed logic model or theory of change and has not demonstrated effectiveness through a research study.

A few observations

A framework such as the Hexagon Tool is extremely helpful in getting you to think about the different aspects of implementing an intervention. Not only that, it does so in a way which should allow you to summarise your evaluation in a form which is easily communicable to others, using the rating scale and maybe a ‘spider diagram’ (see the illustrative sketch below). However, before you can make good use of the tool you are probably going to have to make a few adjustments to some of the detailed descriptions of each of the elements and the associated questions – so that they reflect your context and system, rather than the US system in which the tool was devised. In addition, it’s important to remember that the Hexagon Tool does not provide a substitute for your professional judgement: you will still need to make a decision as to whether or not to proceed with the intervention.
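
If it helps to picture what that summary might look like, here is a minimal Python sketch of one way to plot the six ratings as a spider (radar) diagram. The element names follow the six aspects of the appraisal listed above; the scores are entirely hypothetical and are there only to show the shape of the approach, not to suggest this is how Metz and Louison intend the tool to be used.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical 1-5 ratings for the six Hexagon Tool elements
# (evidence, usability, supports, need, fit, capacity).
ratings = {
    "Evidence": 3,
    "Usability": 4,
    "Supports": 2,
    "Need": 5,
    "Fit": 4,
    "Capacity": 3,
}

labels = list(ratings)
values = list(ratings.values())

# Spread the six elements evenly around the circle and close the polygon.
angles = [i / len(labels) * 2 * math.pi for i in range(len(labels))]
angles.append(angles[0])
values.append(values[0])

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.set_title("Hexagon Tool ratings (illustrative)")
plt.show()
```

A spreadsheet or any charting tool would do just as well; the point is simply that a single picture of the six ratings makes the appraisal easier to share and discuss with colleagues.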

And finally

Tools like the Hexagon Tool are extremely useful in helping you organise your thinking but they are not a substitute for thinking about the intervention and whether ‘what worked there’ might in the right circumstances ‘work here.’

Reference

Metz, A. & Louison, L. (2019) The Hexagon Tool: Exploring Context. Chapel Hill, NC: National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill. Based on Kiser, Zabel, Zachik, & Smith (2007) and Blase, Kiser & Van Dyke (2013).

ResearchED and 300,000 words later - some reflections

The first ResearchED event I attended was the London national conference in September 2014. Without doubt, this was some of the most inspiring and influential professional development I had experienced in the thirty years I had been involved in education. It was inspiring because I was taking part in an event with over 1000 teachers who had given up a Saturday morning to speak and listen about something they cared about, i.e. improving teaching and learning through the appropriate use of research evidence. It was influential in that it got me thinking, reading and writing about evidence-based school leadership and management.

ResearchED London 2014 got me thinking about evidence-based school leadership and management for two reasons. First, the vast majority of the sessions at the event had a focus on teaching and learning, and little attention seemed to be paid to the role of research and other sources of evidence in the decision-making of senior leaders in schools. Second, that summer I had by chance read an article by Adrian Furnham which introduced me to the discipline of evidence-based management, and I was intrigued as to whether there was a possible synthesis with evidence-based education. This contributed to me writing a book – Evidence-Based School Leadership and Management: A Practical Guide – and 220 blogposts (www.garyrjones.com/blog).

Having now written around 300,000 words on all things evidence-based, I would like to make the following observations about the current state of evidence-based practice within schools. First, the ‘evidence-based movement’ is not going away anytime soon. We have 22 schools in the Research Schools Network; an increasing number of schools appointing school research leads; hundreds if not thousands of educational bloggers contributing to discussions about how to improve education; social media and eduTwitter providing a forum for the articulation of views; over 20 researchED conferences scheduled for 2019; the Education Endowment Foundation (EEF) spending over £4m in 2017-18 to fund the delivery of 17 projects, involving 3620 schools and other educational settings and reaching approximately 310,000 children and young people; and finally, we have Ofsted using research evidence to inform their inspection framework.

Nevertheless, despite all this time, effort and commitment being put into research and evidence-based practice, there is still much to be done to ensure evidence-based practice contributes to improved outcomes for pupils. First, we need to have an honest conversation about teachers’ research literacy and their consequent ability to make research-informed changes to their practice. Research undertaken by the National Foundation for Educational Research and the EEF suggests that teachers have a weak and variable knowledge of the evidence base relating to teaching and learning, and a particularly weak understanding of research requiring scientific or specialist knowledge (Nelson et al., 2017). Second, there is a distinction between the rhetoric and the reality of evidence-based practice within schools. Research undertaken for the Department for Education (Coldwell et al., 2017) identified a number of schools where headteachers and senior leaders ‘talked a good game’ about evidence-informed teaching within their schools, whereas the reality was that research and evidence were not embedded within the day-to-day practice of the school. Third, it’s important to be aware that there is a major debate taking place amongst educational researchers about randomised controlled trials, effect sizes and meta-analysis. Indeed, as Professor Rob Coe states: ‘Ultimately, the best evidence we currently have may well be wrong; it is certainly likely to change’ (Coe, 2018).

And finally, if I were to offer any advice to teachers, school leaders and governors/trustees who are interested in evidence-based practice, it would be the following. Becoming an evidence-based practitioner is hard work. It doesn’t happen just by reading the latest EEF guidance document or John Hattie’s Visible Learning, or by spending one Saturday morning a year at a researchED conference. It requires a career-long moral commitment to challenging both your own and others’ practice, critically examining ‘what works’ to ensure whatever actions you take bring about improvements in pupil outcomes.

Recommendations for further reading 

Brown, C. (2015). Leading the Use of Research & Evidence in Schools. London. IOE Press

Barends, E. and Rousseau, D. (2018). Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London. Kogan Page.

Cain, T. (2019). Becoming a Research-Informed School: Why? What? How? London. Routledge.

Coe, R. (2018). What should we do about meta-analysis and effect size? CEM Blog. https://www.cem.org/blog/what-should-we-do-about-meta-analysis-and-effect-size/

Coldwell, M., Greany, T., Higgins, S., Brown, C., Maxwell, B., Stiell, B., Stoll, L., Willis, B. and Burns, H. (2017). Evidence-Informed Teaching: An Evaluation of Progress in England. Research Report. London. Department for Education.

Furnham, A. (2014). On Your Head: A Magic Bullet for Motivating Staff. The Sunday Times. Sunday 13 July 2014. London

Jones, G. (2018). Evidence-Based School Leadership and Management: A Practical Guide. London. Sage Publishing.

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London. Routledge.

Nelson, J., Mehta, P., Sharples, J. and Davey, C. (2017). Measuring Teachers’ Research Engagement: Findings from a Pilot Study: Report and Executive Summary. London. Education Endowment Foundation/NFER

This blogpost first appeared as an article in issue 4 of the researchED Magazine, which was published in June 2019.