Evidence 101

Join us twice a month for our insightful podcasts with leading expert guests, who will look at the latest 'hot topics' in wound care to update and inspire you.

Join us for this insightful podcast to understand a little more about the types of evidence in healthcare, specifically focusing on wounds, and why it's important…

www.smith-nephew.com/education The website may contain information and discussion (including the promotion of) methods, procedures or products that may not be available in certain countries or regions, or may be available under various other trade or service marks, names or brands.

Transcript


SPEAKER:
Welcome to Smith & Nephew's Closer To Zero podcast, a bi-monthly podcast with leading experts in wound care, hosted by Smith & Nephew, helping healthcare professionals to reduce the human and economic costs of wounds.

00:00:04

RUTH TIMMINS:
Hello, I am Ruth Timmins from Smith & Nephew, and welcome to this Evidence 101 podcast with our special guest scientists, Emma Woodmansey and Ben Costa. Emma is a clinical scientist in the global clinical strategy team for advanced wound management at Smith & Nephew, based in the UK. She has a PhD in microbiology and has been at Smith & Nephew for over 18 years. Emma develops and communicates the clinical and scientific evidence around wound care, particularly specialising in infection management. Ben Costa first joined Smith & Nephew in 2018. He has an academic background in biomedical science and clinical microbiology, with interests in computer science and data analytics. Prior to joining Smith & Nephew, he worked as a contractor writing clinical evaluation reports to support regulatory requirements for wound care products. In his current position as a clinical evidence specialist in the evidence analysis team, his role is to utilise clinical evidence to derive commercial and strategic value for the business through conducting activities such as systematic reviews and meta-analyses. So, thank you both for joining us today and welcome.

00:00:19

EMMA WOODMANSEY:
Thanks Ruth. Good to be here.

00:01:31

RUTH TIMMINS:
So, we know that evidence is used in daily life, on the news, in supermarkets, you know, consumer tests and so on. But today we're going to understand a little bit more about the types of evidence in healthcare, specifically focusing on wounds, and why it's so important. So, I guess to start off, perhaps Emma, you know, why do we need evidence?

00:01:33

EMMA WOODMANSEY:
Yeah, thanks Ruth. There are a number of reasons why we need evidence across wound care. Wound care, as we all know, is such a complex mix of care pathways, interventions and treatments, and this needs to be balanced with the management of the wider patient, the holistic care of that patient, including any co-morbidities, for example, so it's a really complicated population to build evidence for because there are so many variables. However, despite the many wound and patient variables, it's still really critical that we provide data to support evidence-based medicine where possible. Some examples of why we need the evidence are, for example, to show that an intervention or treatment actually works, to help a healthcare professional understand the difference between the efficacy of certain devices or products, and to identify the best combination of treatments to provide the best care for that patient or that wound. It's also used to show that products are safe to be used as indicated and as part of a package, for example for regulatory submissions to new countries, to gain new claims or to extend the indications of current treatments to new areas. And finally, clinical evidence can be used, and is critical, in demonstrating the value or cost-effectiveness of a treatment, and we'll talk about that a little bit later. So, these are all really important needs for evidence in wound care.(1)

00:01:55

RUTH TIMMINS:
Yeah. So, you know, Ben, perhaps you can throw some light on the different levels of evidence. We often hear about levels of evidence, so perhaps you can help to explain that a bit more.

00:03:32

BEN COSTA:
Yeah, sure. So that's a really good question. For those that don't know, evidence is typically seen in a sort of hierarchy of tiers. One of the key concepts in evidence-based medicine is the evidence pyramid, and that's exactly what you think it is: literally a pyramid where you've got these different tiers of evidence. At the top of the pyramid you have what we call level one evidence, which is the best type of evidence that you can possibly have, and these tend to be things such as systematic literature reviews and meta-analyses, or sometimes you might see prospective randomised controlled trials at the top of this pyramid.(1) It's worth noting at this point that these levels of evidence are innately aligned with study design, so the more rigorous study designs sit higher up, towards the top of the pyramid, and as you go down the levels of evidence in the pyramid, the quality of the evidence goes down. So, at level three, for example, you have observational cohort trials, while level four would be single-arm case series, and level five in the pyramid is typically expert opinion. Now, there are different representations of this pyramid out in the literature, meaning some types of study design might sit a bit lower or a bit higher in the pyramid; it really just depends on what schema you're looking at. One common representation of these levels of evidence is the OCEBM levels of evidence hierarchy, which is produced by the Oxford Centre for Evidence-Based Medicine.(2) So, if you go and have a look at that resource, you'll see a really good representation of how the different types of evidence sit in the pyramid. Additionally, and finally, I'd point you to the recent publication in 2020 by the World Union of Wound Healing Societies,(1) and the title of that paper is 'Evidence in wound care'. That actually gives a really good breakdown and detailed description of the different levels of evidence, so definitely go check that out as well.
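
For readers who like a compact reference, here is a minimal sketch (not from the podcast) that encodes the tiers exactly as Ben describes them. Level two is not named in the episode, and exact placements vary between published schemes such as the OCEBM tables, so treat this purely as an illustration.

```python
# Illustrative only: the evidence tiers as described in this episode.
# Level 2 is not named here, and placements differ between published schemes
# (e.g. the OCEBM levels of evidence), so this is a sketch, not a standard.
EVIDENCE_PYRAMID = {
    1: ["systematic literature review", "meta-analysis",
        "prospective randomised controlled trial"],
    3: ["observational cohort trial"],
    4: ["single-arm case series"],
    5: ["expert opinion"],
}

def approximate_level(study_design: str):
    """Return the approximate level for a study design named in the episode."""
    for level, designs in EVIDENCE_PYRAMID.items():
        if any(study_design.lower() in d for d in designs):
            return level
    return None  # not covered by this simplified mapping

print(approximate_level("case series"))    # -> 4
print(approximate_level("meta-analysis"))  # -> 1
```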

00:03:45

RUTH TIMMINS:
Yeah, that's great information, thank you. And yes, we do have links available in our resource library that you can have a look at. So, Ben, you mentioned that randomised controlled trials and case studies exist at very different levels on the evidence pyramid. So, what value do these different study designs have for evidence-based medicine?

00:06:02

BEN COSTA:
Yeah, so randomised controlled trials are some of the most desirable types of evidence and studies that we look for when we're trying to understand how a clinical intervention works compared specifically to a comparator group or control. The main reason why they're desirable is because they tend to be more scientifically rigorous, because they account for biases that might influence your findings. One way randomised controlled trials do this is by being randomised, which obviously reduces the influence of confounding variables that might be present in the population that you're investigating, and I'll talk about confounding variables a little bit later. Now, that's not to say all we want is randomised controlled trials when we want a body of evidence, because there are arguments to be made that by using really stringent study designs we end up losing some of the real-world element of clinical data, and that's basically because the real world doesn't act the same as a controlled study environment. This is where real-world evidence comes into play, such as clinical audits and case studies. The idea is that these studies are more representative of the true clinical experience and have more of a qualitative element to them rather than being purely quantitative, which is what you might see with randomised controlled trials. And this allows us to capture and understand more of the subjective or non-measurable aspects of clinical practice, for example a patient's perspective on the emotional impact of having a chronic wound, which is rarely captured in a randomised controlled trial. One thing I didn't mention is the fact that the evidence pyramid usually just focuses on clinical evidence. But it's important to note that preclinical and in vitro evidence is still really valuable, although it is generally used as supportive or supplementary evidence when we're considering evidence-based medicine; it tends to reinforce the results that we've seen from proper clinical studies. So yeah, just be aware that while the evidence pyramid might typically not include a depiction of preclinical evidence, it is still generally used in scientific practice.(1)

00:06:26

RUTH TIMMINS:
Yeah. OK. That's really interesting. So perhaps Emma, you know, what evidence should I be looking for?

00:08:49

EMMA WOODMANSEY:
So that's the key question, Ruth, and it's really important to have a balance of evidence from bench to bedside, let's say, which means, for example, from the laboratory to the clinic. However, lab tests can have quite different endpoints or measures of success compared to clinical trials and, as you just heard from Ben, they can't mimic the complexity of a clinical condition. So it's no good just having lab data, for example, if you've no translation to good clinical outcomes; there needs to be a really good balance of consistent evidence across both areas so that, again, like I said, you have evidence from bench to bedside of why your product or your pathway works. The types of evidence that you should look for really depend on what you want to show. Obviously Ben just touched on that in terms of the different levels of evidence and the types of studies that are part of that, and there are lots of ways you can do clinical studies to provide this evidence. If you want to show an intervention is significantly better than another, then you'd need to do a comparative study, ideally a randomised comparative study if possible, to reduce bias. And bias is something that Ben will explain in a minute in terms of understanding the downsides of certain clinical evidence. In addition, you can compare a new protocol or treatment programme to historic data to show the impact of a new treatment regime, for example, or if both changes are in the past, this can be assessed as a retrospective study. Also, as I mentioned earlier, you can do further analysis of clinical data to help understand the health economic impacts of a new treatment or a new protocol, making the evidence really useful for healthcare systems and for cost-effectiveness justifications for new interventions, for example if you were trying to bring a new product into a formulary or something similar. So, I'd definitely recommend going to have a look at the paper that Ben mentioned earlier; it's a really nice piece of information that details the different levels and the different types of studies that you can do to get what you need from the evidence that you're looking for,(1) and I think there's a link with this podcast as well.

00:08:57

RUTH TIMMINS:
So, Emma do we need new evidence?

00:11:37

EMMA WOODMANSEY:
Well, clinical research and clinical practice really go hand in hand, and they're constantly moving forward. People will always have new questions to answer and new treatments to support, so I guess the answer's yes. The famous scientist Isaac Newton said, 'If I have seen further, it is by standing upon the shoulders of giants.' In essence, it means that each piece of evidence that is produced helps build on the last piece to move understanding and discoveries forward, and these improvements drive change and ultimately provide better care for patients. So absolutely, we do need new evidence, and it helps progress patient care.

00:11:42

RUTH TIMMINS:
Thanks Emma, that's really insightful. But, you know, how would I spot poor evidence? What questions should I ask? Perhaps, Ben, you might have some thoughts on that.

00:12:26

BEN COSTA:
Yeah. So, I'm quite a pedantic person, so you're asking the right person. I think it's worth clarifying what we actually mean by poor evidence. Earlier we said that evidence-based medicine holds that studies higher up the evidence pyramid, such as randomised controlled trials, are inherently more scientifically rigorous compared to studies at the bottom of the pyramid, such as case studies. What we mean by scientifically rigorous is that the studies contain less bias, and that's something that Emma alluded to earlier. We previously mentioned bias, and this concept is extremely important for evaluating evidence. There are actually many types of bias, all of which are influenced by how the researcher or investigator performs their study. Just to name a few examples: there's something called selection bias, and this refers to when the sample population that you've selected for your study, based on your inclusion and exclusion criteria, doesn't actually reflect what your target population would be in the real world. A good example of this would be a sample population that contains patients with a really rare condition at a way higher frequency than in the general population that you intend to apply your treatment to. There's another form of bias called classification bias, and basically this is when you use the wrong measurement tools to collect data on your outcome of interest. And then there's something called confounding bias, and I mentioned this earlier with confounding variables. This is where you get variables present in your sample population of patients, such as co-morbidities, for example, which become a problem when you have two study groups in a trial, because all the patients with co-morbidities might end up, just by sheer chance, in one of the study groups, and this might influence the results of your study, as patients with several co-morbidities might not respond to the interventional treatment, for example. So, what we do is randomise patients when allocating them to study groups, so that the odds of all the patients with co-morbidities ending up in one study group are much lower.
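
To make the randomisation point concrete, here is a minimal sketch, not from the podcast, that simulates randomly allocating a hypothetical population (in which some patients have a co-morbidity) to two study arms; the patient numbers and co-morbidity rate are invented purely for illustration.

```python
# Illustrative only (not from the podcast): a toy simulation of why random
# allocation helps with confounding. With enough patients, randomisation
# tends to split those with a co-morbidity roughly evenly between the arms,
# so the co-morbidity is less likely to confound the comparison.
import random

random.seed(1)  # fixed seed so the example is reproducible

# 200 hypothetical patients; assume roughly 30% have a co-morbidity.
patients = [{"id": i, "comorbid": random.random() < 0.3} for i in range(200)]

# Randomly allocate each patient to study arm A or B.
arms = {"A": [], "B": []}
for patient in patients:
    arms[random.choice("AB")].append(patient)

for arm, group in arms.items():
    n_comorbid = sum(p["comorbid"] for p in group)
    print(f"Arm {arm}: {len(group)} patients, {n_comorbid} with a co-morbidity "
          f"({100 * n_comorbid / len(group):.0f}%)")
# Both arms should show a similar co-morbidity rate, whereas a non-random
# allocation (e.g. by clinic or by convenience) could easily skew one arm.
```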

00:12:38

BEN COSTA:
In general, when looking at a study there are certain things to look out for. One of these is results-dependent reporting, and this is where authors report additional statistically significant outcomes from a study that they originally didn't plan to report on, so it doesn't align with the methods section of the report. The reason they tend to do this is because they think these findings are novel or important, but generally it's poor practice, as it suggests the investigators have done something called data dredging, and I encourage you to go and look up that term because it's really interesting. This essentially isn't what would be considered a scientifically robust approach to conducting research. Beyond the different types of bias, there are also a few general things to look at in any study that you come across. You should look at things like who actually conducted the research. Sometimes it might be an independent organisation, which is the ideal, or you might get research from people who have a vested interest in the study results, so pay attention to that. Again, check if the study is clear and transparent in its presentation. Is the manuscript actually written well? I guess if an author isn't really willing to present their findings clearly, then you can almost assume that the reliability of the results is also diminished, so keep that at the back of your mind at all times. And again, when you're looking at the conclusions section of a manuscript, check if the conclusions actually follow from the results. This can sometimes be quite subtle, but one good example would be if the authors are saying the new treatment that they've developed is absolutely revolutionary for the condition that they've treated in the paper, but the results haven't actually shown much of a difference to a control group. Just to summarise, what I will say is that nearly every study you come across will have faults; even the best studies will have problems with them. So do keep an eye out, be quite pedantic and pessimistic about whatever you read, and be generally critical of what you're actually reading. That will go a long way to spare anyone from supporting poor evidence.

00:15:09

RUTH TIMMINS:
That's really good advice there, Ben. And are there any guidelines out there on how to evaluate studies that we might come across?

00:17:42

BEN COSTA:
Yeah, so a lot of the stuff that I just mentioned is actually covered by some tools,(1,2) and you can find these online, that research organisations use. They're basically a series of prompt questions that force you to think about the aspects of study design, to find those flaws and weaknesses that I mentioned earlier. One good example of this type of tool is the RoB 2.0 tool, which stands for Risk of Bias, and that was made by Cochrane, a really famous organisation that performs a lot of systematic literature reviews. So, I'd encourage you to go and look at those sorts of tools; they will really help when you're looking at aspects of study quality. There's quite a large array of tools out there, and one thing to know about them is that the prompt questions are specific to the study design type. The RoB 2.0 tool I mentioned is specific to randomised controlled trials, but you'll see different tools, with different questions, for things like cohort studies, for example.

00:17:51

RUTH TIMMINS:
Great. OK. So how do we deal with contradictory or controversial evidence?

00:18:58

BEN COSTA:
Yeah, so as Emma mentioned earlier, evidence is cumulative; we had that nice quote from Isaac Newton. So, evidence is cumulative, but not all evidence is favourable for a particular treatment or therapy. For example, you might have the claim that drug A is more effective than drug B for condition X, but not all studies might show that, and in fact you might get the exact opposite, with studies showing drug B is better for condition X. The way that I like to think of it is as a set of scales: you have the supportive evidence for your therapy on one side of the scale and the contradictory evidence on the other side, and if the weight of evidence on one side is greater than the other, then the current consensus falls on that side, either in favour of drug A or not. One of the best ways to determine whether an intervention is better or worse than another treatment with regard to a particular outcome is by performing a systematic literature review or meta-analysis. Just to explain what those types of studies are: they are sort of the ultimate type of study design for evidence-based medicine, and their sole purpose is to aggregate all the evidence we've currently spoken about, such as randomised controlled trials, including both the positive and the controversial evidence, to produce an overall estimate of the efficacy or safety of an interventional treatment. Another reason why we use systematic literature reviews and meta-analyses is that they aim to eliminate those biases that we spoke about earlier by using specific methods of data aggregation. So, these types of study designs are really effective for providing recommendations and developing guidelines for clinical practice, and that's why we tend to use them.
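
As a rough illustration of the 'weighing' Ben describes, here is a minimal sketch, not taken from the episode or the cited papers, of inverse-variance (fixed-effect) pooling, one common way a meta-analysis combines study results; the effect estimates and standard errors below are invented purely for illustration.

```python
# Illustrative only: inverse-variance, fixed-effect pooling, the basic
# arithmetic behind many meta-analyses. Each study's effect estimate is
# weighted by 1 / (standard error)^2, so larger, more precise studies
# contribute more to the pooled result. All numbers are made up.
import math

# (effect estimate, standard error) for three hypothetical studies comparing
# drug A with drug B; negative values favour drug A.
studies = [(-0.40, 0.20), (-0.15, 0.10), (0.05, 0.25)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```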

00:19:05

RUTH TIMMINS:
OK. So, could you give me an example of how systematic literature reviews and meta-analyses reduce bias?

00:21:08

BEN COSTA:
Yeah, so that's a really good question. I'll probably best illustrate this with an example from some of the work that Smith & Nephew has recently published. Recently we published a couple of systematic literature reviews and meta-analyses: one on PICO in closed surgical incisions(3) and one on IODOSORB in chronic wounds.(4) I encourage you to go and look at those studies, and what you'll see in those papers is that we have used established methods for conducting systematic literature reviews, such as those written in the PRISMA standard. What this means is that we have followed strict guidelines to ensure the input into our studies is fair, representative and unbiased. We also clearly state the conflicts of interest in those papers, because obviously it's an industry-sponsored publication by employees of Smith & Nephew. But the premise is that the science is no different to what you'd see published independently, and that's because we are using the same stringent scientific methods to produce our research as you'd see from leading independent organisations like Cochrane. So, I'd actually encourage you to go and read those two papers and see what I mean by the methods used in there, where we are actively trying to look at the body of literature in a balanced and systematic way to reduce the opportunity for bias.

00:21:16

RUTH TIMMINS:
OK, thanks for explaining that, Ben. So maybe Emma, who should be involved in evidence generation?

00:22:47

EMMA WOODMANSEY:
Well, the simple answer is everyone who's interested in progressing healthcare, so people who want to identify more effective treatments or pathways to benefit patients. As we said earlier, you know, evidence builds on evidence, so everybody should be involved: if you've got a question or a theory that you want to prove, you need to test it by doing a study and developing evidence to prove or disprove it.

00:22:55

RUTH TIMMINS:
And how does Smith & Nephew support generation of new evidence? Perhaps you can explain that for us.

00:23:23

EMMA WOODMANSEY:
So, Smith & Nephew work really closely with healthcare professionals globally to support clinical research and the development of high-quality clinical evidence, with the ultimate goal, obviously, of optimal care for patients. We perform many different types of studies across the world, ranging from high-level comparative studies supporting new product development, to more practical real-world data supporting improved decision-making in practice, therefore helping healthcare professionals to make the most of the resources they have and to get the best outcomes for their patients. The studies we do can either be led by Smith & Nephew and the clinical team that we have globally, or they can be research projects that are initiated independently of Smith & Nephew and follow the investigator-initiated studies route. It's worth mentioning that we have a dedicated website for healthcare professionals to submit independent proposals for investigator-initiated studies, and listeners can find the link to this page on the podcast website. When Smith & Nephew receive a proposal through this independent route, it's evaluated on the basis of its scientific merit and its alignment with the Smith & Nephew research portfolio and business strategy. I think overall, Ruth, from an evidence perspective, the key thing for anyone involved in healthcare is always to ask questions. The chances are that the answer will ultimately help many other people who will be thinking or asking the same things, and of course that ultimately supports the goal of optimal care for the patient, allowing them to live a life unlimited.

00:23:31

RUTH TIMMINS:
Well, thank you both, Ben and Emma, for your valuable insights and for sharing your knowledge and expertise on this very important topic. I'm sure it's been really useful to our listeners. So, thanks again, and thank you to our listeners. Don't forget to join us for our next podcast, and we hope to see you soon. Thanks, Ben. Thanks, Emma.

00:25:21

EMMA WOODMANSEY:
Thank you.

00:25:44

BEN COSTA:
Thank you.

00:25:45

SPEAKER:
Smith & Nephew's Education and Evidence is a powerful e-learning platform for healthcare professionals to access and share peer-to-peer educational resources. The member-based service already hosts more than a thousand clinical videos, scientific literature and learning modules. It provides search facilities and the ability to customise content, so you can decide when and what you study; the choice is entirely up to you. It is optimised to work on computers, tablets and smartphones. Visit now at www.smith-nephew.com/education. Helping you get closer to zero human and economic consequences of wounds. The information presented in this podcast is for educational purposes only; it is not intended to serve as medical advice. Products listed and outlines of care are examples only. Product selection and management should always be based on a comprehensive clinical assessment. For detailed product information, including indications for use, contraindications, precautions and warnings, please consult the product's applicable instructions for use prior to use.

00:25:47

References

1. Dissemond, J. et al. WUWHS position document. Evidence in wound care. Wounds Int. 1–28 (2020).

2. Oxford Centre for Evidence-Based Medicine. OCEBM Levels of Evidence. https://www.cebm.ox.ac.uk/resources/levels-of-evidence/oxford-centre-for-evidence-based-medicine-levels-of-evidence-march-2009 (2009).

3. Saunders, C., Nherera, L. M., Horner, A. & Trueman, P. Single-use negative-pressure wound therapy versus conventional dressings for closed surgical incisions: systematic literature review and meta-analysis. BJS Open 5, 1–8 (2021).

4. Woo, K. et al. Efficacy of topical cadexomer iodine treatment in chronic wounds: systematic review and meta-analysis of comparative clinical trials. Int. Wound J. e-pub, 1–12 (2021).



Speakers

Emma Woodmansey

PhD

Clinical Science Director Infection,

Clinical Affairs, Smith+Nephew

Emma Woodmansey is a clinical scientist and part of the global clinical strategy team for Advanced Wound Management (AWM) at Smith+Nephew.

Emma has a PhD in Microbiology and has been at S+N for over 17 years.

A microbiologist by background, Emma is focused on developing and communicating clinical and scientific evidence around infection, particularly antimicrobial resistance, antimicrobial treatments and biofilms in wound care.

Emma loves educating people about bacteria and infection and how we can use our knowledge to make sure we manage them appropriately ultimately ensuring the best outcomes for our patients with infected wounds.

Ben Costa

Clinical Evidence Specialist

Clinical Affairs

Smith+Nephew UK

Ben Costa first joined Smith and Nephew in 2018. He has an academic background in Biomedical Science and Clinical Microbiology, with interests in Computer Science and Data Analytics. Prior to joining Smith and Nephew, he worked as a contractor writing Clinical Evaluation Reports to support regulatory requirements for wound care products. In his current position as a Clinical Evidence Specialist in the Evidence Analysis team, his role is to utilise clinical evidence to derive commercial and strategic value to the business through conducting activities such as systematic literature reviews and meta-analyses.
