Guest Post: Unleashing Great Teaching by David Weston and Bridget Clay


This week's post is a contribution from David Weston and Bridget Clay who are the authors of Unleashing Great Teaching: the secrets to the most effective teacher development, published May 2018 by Routledge. David (@informed_edu) is CEO of the Teacher Development Trust and former Chair of the Department for Education (England) CPD Expert Group. Bridget (@bridge89ec) is Head of Programme for Leading Together at Teach First and formerly Director of School Programmes at the Teacher Development Trust.


Unleashing Great Teaching 

What if we were to put as much effort into developing teachers as we do into developing students? How do we find a way to put the collective expertise of our profession at every teacher's fingertips? Why can't we make every school a place where teachers thrive and students succeed? Well, we can, and we wrote Unleashing Great Teaching: the secrets to the most effective teacher development to try and share what we've discovered in five years of working with schools to make it happen.

Quality professional learning needs quality ideas underpinning it. But, by default, we are anything but logical in the way that we select the ideas that we use. A number of psychological biases and challenges cause us to reject the unfamiliar.

We all have existing mental models which we use to explain and predict. To challenge one of these models implies that much of what we have thought and done will have been wrong. We all need to guard against this in case it leads us to reject new ideas and approaches. This is nothing new.
In 1846 a young doctor, Ignaz Semmelweis, suspected that the cause of a 16% maternal mortality rate in one clinic might be the failure of doctors to wash their hands. When he ran an experiment and insisted that doctors wash their hands between patients, deaths from fever plummeted. However, his findings ran so contrary to established practice and norms that they were not only rejected but widely mocked, despite the strength of his evidence. This reactionary short-sightedness gave rise to the term The Semmelweis Reflex: 'the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs or paradigms.'

An idea that contradicts what we already think, which comes from a source that we don't feel aligned to, or which makes us feel uneasy, is highly likely to be rejected for a whole range of reasons, even if there is a huge amount of evidence that it is far better than our current approach.

Reasons for rejection

Confirmation bias is really a description of how our brains work. When we encounter new ideas, we can only make sense of them based on what’s already in our heads, adding or amending existing thinking. This means that anything we encounter that is totally unfamiliar is less likely to stick than something partially familiar. Similarly, an idea that is mostly aligned with our existing thinking is more likely to stick than something completely at odds – the latter is a bit like a weirdly-shaped puzzle-piece: it’s very hard to find a place for it to go.  The effect of all of this is that when we hear an explanation, we remember the ideas that confirm or support our existing thinking and tend to reject or forget the ideas that don’t.

But it's not just the nature of the ideas that affects our ability to learn. If an existing idea is associated with the memory of lots of effort and hard work, it becomes harder to change. This sunk cost bias means that we excessively value things we've worked hard on, whether or not they're actually any good. This bias is also known as the IKEA Effect – everyone is rather more proud of their flatpack furniture than this cheap and ubiquitous item perhaps deserves, owing to the effort (and anger!) that went into its construction.

We also see a number of social effects that mean that we don’t just listen to other people’s ideas in a neutral way. The Halo Effect is the way we tend to want to believe ideas from people we like and discount ideas from people we don’t. Of course, none of that bears any relation to whether the ideas are good. Public speakers smile a lot and make us laugh in order to make the audience feel good and thus become more likely to believe them. Two politicians of different parties can suggest the exact same idea, but supporters of the red party are much more likely to hate the idea if they hear it from the blue politician, and vice versa. A teacher from a very different type of school is much less likely to be believed than someone you can relate to more – though of course none of this necessarily affects whether their ideas are good.

If someone does present an idea that conflicts with our current thinking and beliefs, they run the risk of Fundamental Attribution Error. When we come into conflict with others, we rush to assume that the other person is of bad character. Any driver who cuts you up is assumed to be a terrible driver and a selfish person, but if you cut someone else up and they hoot then you generally get annoyed with them for not letting you in. A speaker or teacher who tells you something you don’t like is easily dismissed as ignorant, annoying or patronising.

Using evidence to support professional learning 

So how do we ensure that we’re using quality ideas to underpin professional learning? In our book we lay out some tools to help you overcome your inevitable biases.

Firstly, it’s very useful to look out for systematic reviews. These are research papers where academics have carefully scoured all of the world’s literature for anything relevant to a topic, then categorised it by type, size and quality of study, putting more weight on findings from bigger, higher quality studies and less on smaller, poorly-executed research. They bring all of the ideas together, summarising what we appear to know with confidence, what is more tentative, and where there are areas where the evidence is conflicting or simply lacking.
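To make that weighting idea concrete, here is a simplified sketch of how many quantitative systematic reviews (meta-analyses) combine studies – the standard fixed-effect, inverse-variance weighted average, offered as an illustration of the general principle rather than the method of any particular review:

$$\hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\mathrm{SE}_i^2}$$

Here $\hat{\theta}_i$ is the effect size reported by study $i$ and $\mathrm{SE}_i$ is its standard error. Bigger, more precise studies have smaller standard errors and therefore larger weights $w_i$, so they dominate the combined estimate $\hat{\theta}$, while small, noisy studies contribute comparatively little.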

If you are interested in a topic, such as 'behaviour management' or 'reading instruction', then it's a really good idea to type it into a search engine and add the words 'systematic review'. Look for any reviews conducted in this area to get a much more balanced view of what is known.

Secondly, raise a big red flag when you can feel yourself getting excited and enthusiastic about an idea. That's your cue to be extra careful about confirmation bias and to actively seek out opposing views. It's very helpful to take any new idea and type it into a search engine with the word 'criticism' after it – e.g. 'reading recovery criticism' or 'knowledge curriculum criticism'.

Thirdly, be a little more cautious when people cite lists of single studies to prove their point. You don’t know what studies they’ve left out or why they’ve only chosen these. Perhaps there are lots of other studies with a different conclusion – only a good systematic review can find this out.

Finally, be cautious of single-study enthusiasm, where newspapers or bloggers get over-excited about one new study which they claim changes everything. That may well be confirmation bias at work – and indeed, if they are criticising the study, confirmation bias could equally be driving the criticism.

To conclude

Of course, good quality ideas are only one ingredient. In our book we also explore the design of the professional learning process, offer a new framework to think about the outcomes you need in order to assist in evaluation, and discuss the leadership, culture and processes needed to bring the whole thing together. There are many moving parts, but if schools can pay the same attention to teachers’ learning as they do to students’ learning, we can truly transform our schools and unleash the best in teachers.



The school research champion and the evidence-rich school

Teachers, middle and senior leaders interested in bringing about greater use of evidence within their schools are exposed to a wide range of terminology. As such, teachers and school leaders interested in evidence have to be able to distinguish, or at least be aware of the possible differences between: research-based practice; research-informed practice; evidence-based practice; and evidence-informed practice. And now there is a 'new kid on the block' – evidence-rich/enriched practice. So in this post I am going to look at: what evidence-rich/enriched practice could mean; what research into evidence-enriched practice looks like in a health-care setting; and the implications of the preceding discussion for those interested in the use of evidence within schools.

Evidence-enriched practice

Stoll (2017) describes evidence-enriched practice as involving teachers and school leaders using external research evidence; collecting and analysing data; and engaging in collaborative enquiry/research and development – with teachers and school leaders very much in the driving seat in the use of evidence.

Reflecting on this definition a number of issues need to be considered.

First, existing definitions of evidence-based practice, such as that of Barends, Rousseau, et al. (2014), already make great play of drawing on different sources of evidence – research evidence, organisational data, stakeholder views and practitioner expertise – and evidence-based practice, if done properly, will already be evidence-rich.

Second, definitions of evidence-based medicine, such as Sackett, Rosenberg, et al. (1996), emphasise the role of patients in making decisions. Indeed, evidence-based medicine is about patients and clinicians making informed decisions about patient care, informed by the patient's values and preferences. Stoll's definition is largely silent on the role of pupils and stakeholders in the decision-making process.

Third, the 'driving seat' metaphor is quite interesting – in the driving seat of what: an evidence-informed pedal-powered go-kart or an evidence-based F1 racing car?

Fourth, evidence-based practice is about making decisions on the basis of the best available evidence, which, for me, is not the same as engaging in collaborative research and development. The outputs of R&D may subsequently feed into future evidence-based decisions, but R&D is a separate process.

Fifth, despite the above criticisms of Stoll's notion of evidence-enriched practice, I welcome the emphasis on the collaborative nature of evidence-based practice, which has particular implications for school leadership: see Jones (2018, forthcoming).

Evidence-enriched practice: lessons from the health and social care sector

Regular readers of this blog will be aware that I often argue that there is much to learn from medicine and health-care about evidence-based practice. Accordingly, it seems sensible to see what research on evidence-enriched practice has been published in those sectors. A search on Google Scholar using the term 'evidence-enriched practice' turned up this paper: Developing Evidence Enriched Practice in Health and Social Care with Older People (Andrews, Gabbay, et al., 2015). This is a fascinating paper, which I will explore in more detail in future posts; however, for the purposes of this post I'm just going to highlight the various elements and sub-elements of evidence-enriched practice which were woven into the project.

Element 1: Valuing and using a range of evidence

  • research evidence
  • practitioner knowledge and experiences
  • the voice of older people and carers
  • organisational knowledge (policy imperatives, embedded systems and resources).

Element 2: Securing senior management buy-in and valuing and empowering participants


  • Appreciation and respect: valuing people and focusing on their strengths and the things that matter to them
  • Honesty: supporting people to ‘say it as it is’
  • Permission: encouraging people to be creatively humane, not just procedurally compliant
  • Mutual trust: developed through respectful conversations
  • Celebration: recognising and building on success, including the importance of ‘ordinary’, often little, things

Element 3: Capturing and presenting relevant evidence in accessible and engaging formats

  • Stories, quotes, pictures, music and poetry
  • Good practice from elsewhere
  • Normative frameworks
  • Provocative statements

Element 4: Facilitating the exploration and purposeful use of evidence

  • A simple approach to support dialogic learning using evidence as the stimulus
  • Working as a community of practice
  • Facilitating serendipity and weaving in evidence as the project developed

Element 5: Recognising and addressing national and local organisational circumstances and obstacles

  • National social policy and financial investment in social care services
  • National regulatory requirements and local policies and procedures
  • Managing relational risk
  • Managing risks to physical safety
  • Developing and using recording that enhances the provision of good care and support and quality assurance
  • Local organisational management culture
  • The problem of feeling ‘left out’

What should be immediately obvious is that in comparison to Stoll (2017) this is a far more comprehensive framework with which to describe an evidence-enriched environment.  In particular, it emphasises the role of senior leadership in creating the environment in which an evidence-enriched practice can flourish.  It also recognises the need to address national and local circumstances, and not to see them as a hindrance but as something which is an integral part of the ‘evidence environment’.  Finally, the role of older people and carers is fully acknowledged.  


What are the implications for those interested in the creation of evidence-enriched practice within schools?

First, education does not need to reinvent the 'evidence-enriched wheel', as there is much to learn from other sectors. That does not mean it will not have to be adapted, but it does mean we can 'stand on the shoulders of others.'

Second, school leaders who think they will automatically build an evidence-enriched school culture by appointing a school research lead/champion need to think again. School leaders need to give real consideration to whether the leadership and management culture and style of the school is consistent with the conditions necessary to create an evidence-enriched environment. If it isn't, but you want to do something about it, the starting point is your own conduct as a school leader. If you are not interested in deeply reflecting upon your own leadership practice, then you may be better off not trying to become evidence-enriched.

Third, 'evidence-enriched' teachers are part of a community of practice. It's not about individual teachers conducting teacher-led randomised controlled trials – it's about deep and profound conversations with colleagues, pupils, parents and other stakeholders, based upon a culture of mutual respect.

Fourth, much of the current research into evidence-informed practice focuses on how teachers and school leaders use research evidence. This is far too narrow a focus, and greater emphasis should be placed on investigating how teachers and school leaders go about aggregating multiple sources of evidence and incorporating that evidence into the decision-making process.

Fifth, knowledge brokers – be it research schools or the individual school research champion – need to consider different ways in which knowledge can be shared. Newsletters are a very basic and safe way of sharing information – though probably not that effective – and we need to find far more ways of communicating ideas in accessible and interesting formats.

And finally

If you are interested in finding out more about what evidence-rich and evidence-enriched may look like in practice, the RSA will later this year be publishing a report, Learning About Culture, which looks at what works in cultural learning and how to support schools and cultural organisations to use evidence from their own work and elsewhere to continuously improve their practice. Indeed, one of the intended key outcomes of the work is something the RSA describes as evidence-rich practice.


References

Andrews, N., Gabbay, J., Le May, A., Miller, E., O'Neill, M. and Petch, A. (2015). Developing Evidence Enriched Practice in Health and Social Care with Older People.
Barends, E., Rousseau, D. and Briner, R. (2014). Evidence-Based Management: The Basic Principles. Amsterdam. Center for Evidence-Based Management.
Bath, N. (2018). Exploring What It Means to Be 'Evidence-Rich' in Practice. IOE London Blog. https://ioelondonblog.wordpress.com/2018/04/12/exploring-what-it-means-to-be-evidence-rich-in-practice/.
Jones, G. (2018, forthcoming). Evidence-Based School Leadership: A Practical Guide. London. SAGE Publishing.
Sackett, D., Rosenberg, W., Gray, J., Haynes, R. and Richardson, W. (1996). Evidence Based Medicine: What It Is and What It Isn't. BMJ. 312(7023). 71-72.
Stoll, L. (2017). Five Challenges in Moving Towards Evidence-Informed Practice. Impact. Interim issue.
Straus, S., Glasziou, P., Richardson, S. and Haynes, B. (2011). Evidence-Based Medicine: How to Practice and Teach It (Fourth Edition). Edinburgh. Churchill Livingstone: Elsevier.


The school research lead and how to avoid being the drunkard under the lamp post

A major challenge for school leaders and teachers interested in evidence-based practice (EBP) is to be constantly seeking out EBP's limitations and weaknesses. To do this, I recommend that headteachers and school research leads pay close attention to the work of Professor Trisha Greenhalgh, who recently authored an article entitled Of Lamp Posts, Keys, and Fabled Drunkards: A Perspectival Tale of 4 Guidelines (Greenhalgh, 2018). In this article Professor Greenhalgh describes her own experience as a patient following a high-impact cycling accident, and how evidence-based guidelines were misused in her treatment.

The use and abuse of guidelines

Without going into the details of Professor Greenhalgh's accident – which involved coming off a bicycle at 20 mph and hitting the road surface, resulting in multiple fractures – there were, according to Professor Greenhalgh's account, a number of occasions where guidelines were either misapplied or not used at all during her treatment:

  • a guideline that existed and was relevant but which was not used 
  • a guideline that was not relevant but which was used 
  • a guideline that was relevant but was misremembered and misapplied by commentators claiming to be giving evidence based advice 
  • a guideline that did not exist but which was quoted by adherents of evidence-based medicine (EBM) as if it had existed (and which was also misremembered and misapplied).

Professor Greenhalgh subsequently identifies three reasons why this misuse of guidelines can happen.

First, we are hard-wired to classify, so when a doctor comes across a patient, the tendency is to classify them as part of a group. Once that is done, the patient tends to be treated on the basis of 'guidelines' which are designed to meet the needs of the 'average' patient, not the individual.

Second, there is bounded rationality – 'the idea that because real-world decisions often involve numerous options, outcomes, and contextual factors, we unconsciously simplify the problem to make it possible to cope with cognitively and manage practically. Indeed, the inexorable pressures of modern clinical work often require us to use such "fast and frugal" reasoning' (p. 6).

Third, there is 'an over-valuing of rationality (doing the thing right – as in following rules and guidelines) over reason (doing the right thing – as in making the right moral choice for this patient at this time, given these contingencies)' (p. 6).

What are the implications for senior school leaders and school research champions?

  • Do 'average' pupils, 'average' classes or 'average' schools exist, or are they all unique with their own special requirements?
  • It is essential to keep up to date with the latest research and guidance provided by the Education Endowment Foundation (EEF), as otherwise you may miss out on something that could have real benefits for your pupils.
  • However, just because the EEF has produced a new set of guidance, added something to the Teaching and Learning Toolkit, or published some promising research findings, this does not make it a priority for your school, as there may be other matters or issues which are far more relevant to your pupils' needs.
  • In these very early days of Research Schools and relatively inexperienced school research leads, there are very real risks that colleagues may get things wrong – misremembering or misapplying research. So it is really important, when someone says 'the research says', that the response is 'OK, what claim are you making and what is the warrant for your claim?' (Wallace and Wray, 2016; Booth, Colomb, et al., 2016).
  • What structures have been put in place to help identify the misapplication or misuse of research evidence or guidelines? What processes are in place to help address the consequences of things going 'wrong'?
  • There is a distinction between clinical judgment and organisational judgment - clinical judgment refers to decisions made about individual patients, whereas organisational judgment is applied at scale, across the organisation.   As such, evidence-based practice may be more useful when applied to the school as a whole, rather than trying to apply it to decisions about individual pupils.
  • The evidence about a particular problem is never set in stone and there is an ongoing need for conversations to continue to unpick the nature of the problem, so the appropriate actions can be taken.

To conclude

Professor Greenhalgh cites Sir John Grimley Evans, who wrote in 1995:

There is a fear that in the absence of evidence clearly applicable to the case in hand a clinician might be forced by guidelines to make use of evidence which is only doubtfully relevant, generated perhaps in a different grouping of patients in another country at some other time and using a similar but not identical treatment. This is evidence biased medicine; it is to use evidence in the manner of the fabled drunkard who searched under the street lamp for his door key because that is where the light was, even though he had dropped the key somewhere else. (page 451)

References

Booth, W., Colomb, G., Williams, J., Bizup, J. and Fitzgerald, W. (2016). The Craft of Research (Fourth Edition). Chicago. The University of Chicago Press.
Greenhalgh, T. (2018). Of Lamp Posts, Keys, and Fabled Drunkards: A Perspectival Tale of 4 Guidelines. Journal of Evaluation in Clinical Practice.
Wallace, M. and Wray, A. (2016). Critical Reading and Writing for Postgraduates (Third Edition). London. Sage.

Senior Leaders and Coaching: Are you doing more harm than good?

A recent article in the May-June 2018 edition of the Harvard Business Review reports on research conducted by Gartner, which found that a certain type of coaching manager – the Always on Manager – does more harm than good, having a negative impact on performance. In addition, the Gartner study found little correlation between the time spent coaching and employee performance.

For the research, Gartner surveyed 7,300 employees and managers across a number of industries, along with interviewing or surveying 325 HR executives, and found four different approaches to coaching:

Teacher Managers coach employees on the basis of their own knowledge and experience, providing advice-oriented feedback and personally directing development. Many have expertise in technical fields and spent years as individual contributors before working their way into managerial roles.

Always on Managers provide continual coaching, stay on top of employees' development and give feedback across a range of skills. Their behaviours closely align with what HR professionals typically idealise. These managers may appear to be the most dedicated of the four types to upgrading their employees' skills – they treat it as part of their daily job.

Connector Managers give targeted feedback in their areas of expertise; otherwise, they connect employees with others on the team or elsewhere in the organisation who are best suited to the tasks.  They spend more time than the other three types assessing the skills, needs, and interests of their employees, and they recognise that many skills are best taught by people other than themselves.

Cheerleader Managers take a hands-off approach, delivering positive feedback and putting employees in charge of their own development. They are available and supportive, but they aren't as proactive as the other types of managers when it comes to developing employees' skills. (Harvard Business Review)

The article goes on to note that:
  • The four types are more or less evenly distributed within organisations, regardless of industry
  • Whether a manager spends 36% or 9% of their time on coaching and employee development did not seem to matter – it's more about the quality than the quantity of coaching
  • Hyper-vigilant Always on Managers appear to do more harm than good.
The article highlights three reasons why Always on Managers have a negative impact on performance:
  1. The continual stream of feedback is often overwhelming
  2. They spend less time focusing on employees' real needs and more time on issues that are less relevant to those needs
  3. They fail to recognise the limits of their own expertise and effectively make it up as they go along
On the other hand, employees managed by Connectors were three times more likely to be high performers than employees managed by the other types of coaches. The article notes that, from the research, this seemed to be explained by Connectors doing four things:
  1. Asking the right questions
  2. Providing tailored feedback
  3. Helping colleagues connect and network with other colleagues who can help them
  4. Recognising the limits of their own skills
The Gartner researchers then go on to recommend that managers take the following actions:
  • Focus on the quality of coaching, not the quantity
  • Find out about your employees' aspirations for the future, and the skills, knowledge and experience they need to achieve those aspirations
  • Have open coaching conversations, shifting the focus from one-to-one conversations to team coaching, where colleagues learn from one another, particularly from those with specific skills
  • Try to extend these activities across the organisation
So what are the implications of these findings for senior leadership teams?

It seems to me that there are several implications.
  • A bit of humility goes a long way – it's OK as a leader to admit that you are not an expert on something and to point colleagues in the direction of others, though this requires a culture of trust and mutual vulnerability.
  • Your most proactive coaches and line managers, who are constantly giving feedback, may inadvertently be making things worse.
  • Give some thought to the types of coaching currently evident in your school and reflect on whether they are doing more harm than good.
  • Focus on the quality of the coaching being given rather than the quantity.
  • Line Managers may not necessarily be in the best position to be coaches, unless they have the appropriate skills
  • When appointing staff to senior roles, you may wish to ask interviewees to give examples of how they have gone about coaching others and look for evidence of ‘connecting’ activities
  • When developing your own career, you may wish to look to work for leaders and managers who have a connecting coaching style.
And finally 

As Professor Steve Higgins of the University of Durham said when commenting on a meta-analysis of coaching by Kraft, Blazar, et al. (2016): 'it ain't what you do, it's the way that you do it'.

PS

It's important to note that this research was not conducted in schools, so there are issues as to its applicability to schools in England. In addition, with both this type of research and this type of reporting, there would be some merit in looking at the original research conducted by Gartner, which, to be honest, I have not been able to do.

References

'Coaching vs Connecting: What the Best Managers Do to Develop Their Employees Today'. Gartner White Paper.
Managers Can't Be Great Coaches All by Themselves. Harvard Business Review. May-June 2018.
Kraft, M. A., Blazar, D. and Hogan, D. (2016). The Effect of Teacher Coaching on Instruction and Achievement: A Meta-Analysis of the Causal Evidence.