What are the best practices? (Has Best Practice)

From The Embassy of Good Science
Available and relevant practice examples (max. 400 words)


A prominent example of the distortion of research findings by the media relates to an article published in ''PLoS One'' in 2009. It presented a study that examined whether NHS hospitals in England have a higher mortality rate in the first week of August than in the last week of July, given that newly qualified doctors begin working in hospitals on the first Wednesday of August. The study used hospital admissions data from 2000 to 2008 for all emergency patients admitted in the last week of July and the first week of August. Taking into account year, patient gender, socio-economic deprivation and co-morbidity, the study showed that patients admitted on the first Wednesday of August had 6% higher odds of death than those admitted on the last Wednesday of July. In addition, clinical patients admitted on the first Wednesday of August had 8% higher odds of death than surgical patients. Even though the confidence intervals for these odds ratios included the value 1, and the researchers stressed that further studies were needed, the media distorted the study's findings. Under the sensationalist headline “Killing Season”, ''The Daily Mail'' reported that death rates are 8% higher in this period because newly qualified doctors had started their jobs. It claimed that the “number of mistakes are so notoriously high that day of the week” that the day should be called “Black Wednesday”. Other media outlets reprised the phrase “killing season”, and some even called it “the worst day of the year to go to hospital”.

Sometimes researchers and reporters together contribute to sensationalism and the exaggeration of research findings. One of the studies that caused an uproar in 2015 was written by Tomasetti and Vogelstein and published in ''Science''. The media, along with some experts, including the authors, oversimplified the interpretation of the results, claiming that the vast majority of cancers are caused by random mutations or “bad luck”. However, both experts and the media paid insufficient attention to the study design. It was an observational study, so no definitive or reliable inferences could be made about cause and effect; conclusions could only be based on associations between different cancer-occurrence factors, which do not reliably support claims of direct causation.
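The statistical caveat in the first example, that an odds ratio whose confidence interval includes 1 is compatible with no effect at all, can be illustrated with a short calculation. The 2x2 counts below are invented for illustration and are not the figures from the PLoS One study.

```python
import math

# Hypothetical 2x2 table (invented counts, not the study's data):
# deaths / survivors among admissions on the two Wednesdays
deaths_aug, survivors_aug = 530, 9470   # first Wednesday of August
deaths_jul, survivors_jul = 500, 9500   # last Wednesday of July

# Odds ratio and its 95% confidence interval (Woolf method)
or_ = (deaths_aug / survivors_aug) / (deaths_jul / survivors_jul)
se_log_or = math.sqrt(1/deaths_aug + 1/survivors_aug
                      + 1/deaths_jul + 1/survivors_jul)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# With these counts: OR = 1.06, 95% CI 0.94 to 1.21. The interval contains 1,
# so the higher odds are compatible with chance, the caveat the headlines dropped.
```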
The ASA statement on P-values gives instructions on the correct use of P-values, with the goal of improving interpretation in quantitative science. The overall conclusion of the ASA is that scientific inferences should not be based exclusively on a P-value threshold, because a P-value by itself does not provide substantial evidence about a model or hypothesis, measure the size of an effect, or determine the importance of a result. Researchers should use P-values within a proper context, because otherwise their use can lead to selective reporting. Good scientific inference requires the full and transparent reporting of data and methods. There are other methods that researchers can use alongside or instead of P-values, most of which focus on estimation rather than testing. These include confidence, credibility or prediction intervals, Bayesian methods, decision-theoretic modeling and false discovery rates. Since its release in 2016, the ASA statement has been cited about 1,700 times and downloaded nearly 300,000 times. In 2017, the ASA organized a symposium on statistical methods, which resulted in 43 articles on the responsible use of P-values. Statisticians and scientists are currently considering “a world beyond p<0.05”, suggesting a wide spectrum of solutions and possibilities. One solution involves changing the P-value threshold for statistical significance from 0.05 to 0.005. By contrast, others argue that reproducibility of results and pre-registration are the best means of preventing selection bias. Others still recommend including more information when reporting P-values, such as the researcher's confidence in the P-value or an assessment of the likelihood that a statistically significant finding is in fact a false positive. These critiques, initiatives and recommendations require not only further academic discussion, but also significant educational reforms in statistics.
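As a minimal sketch of the estimation-oriented reporting the ASA statement encourages, the snippet below (simulated data; NumPy and SciPy assumed available) reports an effect estimate with a 95% confidence interval rather than a bare P-value alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, 50)   # simulated control group
treated = rng.normal(10.8, 2.0, 50)   # simulated treated group

# Bare hypothesis test: a single P-value, no effect size
res = stats.ttest_ind(treated, control)

# Estimation: mean difference with a 95% confidence interval
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1)/len(treated) + control.var(ddof=1)/len(control))
ci_lo, ci_hi = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {res.pvalue:.3f}")  # what tends to get over-reported on its own
print(f"mean difference = {diff:.2f}, 95% CI {ci_lo:.2f} to {ci_hi:.2f}")
```

The second print line carries the information the ASA statement says a P-value cannot: the size of the effect and the precision of the estimate.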
Despite the importance and necessity of fully reporting study limitations, in practice researchers still need to be encouraged to report their limitations and to describe them properly and thoroughly. The following example demonstrates that scientists in medicine do not fully discuss and present the limitations of their research. A study was conducted on 400 articles published in 2005 in journals with the highest numbers of citations, among them two open-access journals. The full texts of these articles were searched electronically for the words ‘limitation’, ‘caveat’ or ‘caution’. The results showed that only 67 articles (17%) used at least one of these words when presenting their own research. Furthermore, only four articles (1%) used the word ‘limitation’ in their abstract, and not one article mentioned limitations that had an impact on its conclusions. Researchers may fail to present their study limitations because they do not fully understand the significance, outcomes and implications of those limitations for the study results, or because they believe their work is more likely to be published if limitations are not addressed. Journals also bear great responsibility in this matter, because word limits can prevent authors from reporting and thoroughly describing their limitations. When researchers do mention their study limitations, they usually provide only a list and do not fully describe them.

There are several things researchers and journals can do to responsibly report study flaws and limitations. When describing them, researchers should clearly classify the type of limitation so that readers can interpret the research findings correctly. They should not only describe the limitations but also explain their implications. Assessing the impact of limitations on the conclusions and validity of the research is also very important and can help to avoid bias. Researchers should explain why they did not take alternative approaches, or provide alternative explanations of their findings. Finally, researchers should describe the efforts taken to mitigate the implications of study limitations. Journals, on the other hand, should encourage authors to present their study limitations and provide them with guidelines for doing so. Reporting study flaws and limitations should become everyday research practice. The only way to deal with such uncertainties is to present data, methodology, limitations and study deficiencies transparently, so that decision makers can be fully aware of the quality of the research and of potential errors of inference.
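The electronic full-text search described above amounts to a simple keyword screen. The sketch below is a hypothetical illustration of that kind of screen, not the study's actual code; the article texts are invented placeholders.

```python
import re

# Hypothetical in-memory stand-ins for article full texts; the cited study
# searched the electronic full texts of 400 published articles.
articles = {
    "article_1": "... one caveat of our approach is the small sample size ...",
    "article_2": "... our results show a strong association with outcome ...",
}

# The three words the study looked for, matched case-insensitively,
# including inflected forms such as "limitations".
KEYWORDS = re.compile(r"\b(limitation|caveat|caution)\w*\b", re.IGNORECASE)

flagged = {aid for aid, text in articles.items() if KEYWORDS.search(text)}
print(f"{len(flagged)} of {len(articles)} articles mention their limitations")
```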
If a study’s methodology is valid, it is important to publish all of the results, including negative ones. The International Committee of Medical Journal Editors has stated that researchers should publish negative data in order to prevent publication bias and the potential waste of time and money through duplication. The World Health Organization, in 2005, called for the publication of previously unreported negative findings. The Committee on Publication Ethics states in its guidelines that journals should not refuse to publish negative findings. Some journals are dedicated to publishing null results only, such as the Journal of Negative Results in the field of ecology and evolutionary biology; BioMed Central’s Journal of Negative Results in BioMedicine ceased publication in 2017. To assess publication bias when conducting a meta-analysis, researchers use a funnel plot: a type of scatter plot in which both the treatment effect and the study precision are shown. If the plot is not symmetrical, there is a high chance of either publication bias or a small-study effect. This is especially important when conducting a meta-analysis of clinical trials, as such results often end up being used as the strongest evidence in the making of clinical practice guidelines.
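A funnel plot of the kind described above can be sketched in a few lines. The study effects below are simulated under the assumption of no publication bias, so the funnel should look roughly symmetrical (NumPy and Matplotlib assumed available).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Simulated meta-analysis: each study estimates a true log odds ratio of 0.2,
# with noise proportional to its standard error (smaller studies, larger SE).
true_effect = 0.2
se = rng.uniform(0.05, 0.5, 40)        # study precision varies across studies
effect = rng.normal(true_effect, se)   # observed per-study effect estimates

plt.scatter(effect, se)
plt.gca().invert_yaxis()               # convention: precise studies at the top
plt.axvline(true_effect, linestyle="--")
plt.xlabel("Treatment effect (log odds ratio)")
plt.ylabel("Standard error")
plt.title("Funnel plot: asymmetry would suggest publication bias")
plt.show()
```

If small negative studies had been suppressed, the lower-left region of this plot would be visibly emptier than the lower-right, which is the asymmetry the funnel plot is designed to reveal.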
The use of appropriate statistical analyses and the full publication of results (whether they support or reject the study’s hypothesis) are among the best practices. Preregistering research is another way of communicating research plans and improving the credibility of results. In the case of clinical trials, the ICMJE advises research groups to register clinical trials and to link the analysis of a trial to the same registration. They are also advised to ensure there is no discrepancy between the methodology recorded in the registries and what is published in journals, and to publish the registration number at the end of the abstract. Since July 2018, groups have also been asked to include a data sharing plan and data sharing statement. Furthermore, in order to encourage the impartial and clear use of statistical methods, the ICMJE asks groups to: “Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to judge its appropriateness for the study and to verify the reported results. When possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals). Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size and precision of estimates. References for the design of the study and statistical methods should be to standard works when possible (with pages stated). Define statistical terms, abbreviations, and most symbols. Specify the statistical software package(s) and versions used. Distinguish pre-specified from exploratory analyses, including subgroup analyses” (pp. 16-17).
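One lightweight way to honor the advice to distinguish pre-specified from exploratory analyses is to freeze the analysis plan in a machine-readable record before the data are analysed. The sketch below is purely illustrative: the field names and the hashing idea are assumptions for illustration, not an ICMJE or registry format.

```python
import hashlib
import json

# Hypothetical preregistration record (field names invented for illustration).
plan = {
    "registry_id": "NCT00000000",      # placeholder trial registration number
    "primary_outcome": "30-day mortality",
    "prespecified_analyses": ["logistic regression adjusted for age and sex"],
    "exploratory_analyses": ["subgroup analysis by admission weekday"],
    "alpha": 0.05,
}

# Hashing the frozen plan makes later silent changes to it detectable,
# which supports checking the registry entry against the published paper.
frozen = json.dumps(plan, sort_keys=True).encode()
print("plan fingerprint:", hashlib.sha256(frozen).hexdigest()[:16])
```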
It is difficult to address the issue of P-value hacking, especially since there are few incentives to replicate research. However, some steps can be taken to prevent it. Cross-validation, or out-of-sample testing, is a statistical method in which the data are split into two sets: the first set is used for exploratory analysis, to develop new models or hypotheses, and the second, independent set is then used to verify them. A number of statistical corrections are also available to guard against p-value hacking, such as the Bonferroni correction, Scheffé's method and the false discovery rate. Many journals now ask for raw data to be published, or have shifted to the registered report format: a publication process in which journals accept publications based on theoretical justification and methodology only, without looking at the results.
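A minimal sketch of two of the safeguards named above, splitting the data into an exploratory and an independent confirmatory set and applying a Bonferroni correction to the confirmatory tests, is shown below on simulated data (all measures are truly null, so most "discoveries" should fail confirmation; NumPy and SciPy assumed available).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, (200, 20))   # 200 subjects, 20 candidate measures

# Out-of-sample testing: explore on one half, confirm on the held-out half.
explore, confirm = data[:100], data[100:]

# "Discover" the measures that look non-null in the exploratory half...
p_explore = stats.ttest_1samp(explore, 0.0).pvalue
candidates = np.flatnonzero(p_explore < 0.05)

# ...then test only those on the independent half, Bonferroni-corrected.
p_confirm = stats.ttest_1samp(confirm[:, candidates], 0.0).pvalue
alpha = 0.05 / max(len(candidates), 1)   # Bonferroni-adjusted threshold
confirmed = candidates[p_confirm < alpha]
print(f"{len(candidates)} candidates, {len(confirmed)} survive confirmation")
```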
Everybody who has ever performed research has probably experienced the disappointment of obtaining a p-value greater than 0.05. Instead of looking critically at the data and results in the light of present knowledge and trying to work out their implications, the hypothesis usually gets abandoned and another set of experiments is initiated. There are efforts to change this practice. An editorial policy that considers only methodological rigor, and not the direction of the results, was set by PLOS some 20 years ago. Another praiseworthy initiative is the Journal of Negative Results, which likewise treats methodological rigor as the only criterion for publication of a manuscript.
The correct use of previously published material does not involve selective citation to enhance one’s own findings or to please editors, reviewers or colleagues. References to published material should not be used to promote self-interests. Nuanced use of previously published material involves using resources in a neutral and unbiased way.

'''Examples of citation in the scientific and popular literature'''

Journal articles:
*[https://bmjopen.bmj.com/content/9/2/e026518 Selective citation in the literature on the hygiene hypothesis: a citation analysis on the association between infections and rhinitis]
*[https://jamanetwork.com/journals/jamaophthalmology/fullarticle/267954 Selective Citation of Evidence Regarding Photoreceptor Loss in Glaucoma]
*[https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-017-0041-z Selective citation in the literature on swimming in chlorinated water and childhood asthma: a network analysis]

Blogosphere:
*[https://www.embassy.science/theme/Citation%20bias%20favoring%20positive%20clinical%20trials Citation bias favoring positive clinical trials]

News outlets:
*[https://theconversation.com/how-time-poor-scientists-inadvertently-made-it-seem-like-the-world-was-overrun-with-jellyfish-61564 How time-poor scientists inadvertently made it seem like the world was overrun with jellyfish]
*[http://www.israelnationalnews.com/News/News.aspx/167812 'Stunningly Offensive' Paper 'Negates Judaism']
Open data practices can help increase transparency, allowing other researchers and interested parties to undertake their own analyses. A technique to identify and classify spin in RCT reports has been developed by Boutron et al., focusing on RCTs that report statistically nonsignificant primary outcomes, because the interpretation of such results is more likely to be subject to prior beliefs about effectiveness, leading to potential bias in reporting. Similar approaches are available to systematically assess the explicit presentation of nonsignificant results in trial reports in various subspecialties, such as those described by Lockyer et al. and Turrentine.
Institutions and journals need to have clear guidelines on publication and authorship in place. Guidelines should include a section on gaining consent from all authors before submitting a manuscript or grant proposal. The COPE Forum suggests that journals should send acknowledgements to all listed authors upon receiving a manuscript, not just to the corresponding author.
Concern for research collaborators and those involved in research forms an important tenet of the ECoC.<sup>4</sup> In the spirit of respect and collegiality, it is essential that decisions regarding benefits and burdens be made after sufficient deliberation with the different teams. According to the ECoC, all involved partners should agree in advance on important aspects of the research, such as its goals and outcomes.<sup>4</sup> The attribution of credit (such as authorship) is also an important benefit and should be decided in consultation with all collaborators. The Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations<sup>5</sup> states that all involved partners should reach agreement at the outset, and later as needed, on how the outcomes of the research, the research data, and authorship and publication responsibilities will be handled. The Committee on Publication Ethics (COPE) also offers best practice guidelines on how to handle authorship disputes, should they arise.<sup>6</sup>
Regarding fake reviews, there are several strategies journals can implement to overcome the challenges. A first strategy is not to accept authors’ suggestions for peer reviewers: the reviewers are chosen by the journal editors, which ensures there are no ‘fake reviewers’. However, many journals cannot find (enough) peer reviewers, and granting such requests can save journals time. At times, journals need to rely on authors’ suggestions to find peer reviewers at all. A second strategy is implementing a simple system for verifying reviewers. One online platform created to facilitate verification is Publons. There, journal editors can do background checks on reviewers and easily check their contributions to the field. In addition, reviewers get recognition for their reviews, even when these are anonymous.
A working paper by [https://www.leru.org/files/LERU-PPT_Bias-paper_Jadranka_Gvozdanovic_January_19_18.pdf LERU] sets out the following recommendations:
#"Universities and other research institutions need to have regular '''monitoring''' in place to examine whether their organisational structures and processes are susceptible to a potentially biased access to resources that cannot be justified by the meritocratic principle. If so, they should develop and implement a plan to mitigate any identified bias. It is crucial that the university’s leadership commits to this plan, sees it through with appropriate encouragement, support and initiatives, throughout the organisation. Clear '''accountability''' should be assigned, with final responsibility for action resting with the President/Rector and the governing body.
#Universities and other research institutions should examine crucial areas of potential bias and define '''measures''' for countering bias. Progress needs to be monitored and, if necessary, measures re-examined and adjusted.
#Universities and other research institutions should gather expertise and organise '''gender bias training''' in various formats, including the possibility of anonymous training. There is no shortage of national and international resources which organisations can use.
#'''Recruitment''' and/or '''funding processes''' should be as open and transparent as possible and be genuinely merit-based. This includes measures such as briefing selection committees about bias pitfalls, deciding on clear selection criteria at the outset, letting '''external observers''' monitor the selection process and involving external evaluators.
#There should be close monitoring of potential '''bias in language''' used in recruitment processes.
#Universities should undertake action towards eliminating the '''pay gap''' and monitor progress, examining bias as a contributing factor to the pay gap.
#Employees should be compensated for '''parental leave''', making sure the process is bias-free, for example by extending fixed-term positions or calculating the leave administratively as active service, yet exempt from publication expectations.
#Universities and other research institutions should monitor '''precarious contracts''' and '''part-time positions''' for any gender-based differences and correct any inequalities. Universities should examine conditions for part-time positions for professors and their gendered division.
#Universities and other research institutions should undertake '''positive action''' towards a proper representation of women in all leading positions, making sure that leadership and processes around leadership are free from bias."
The International Committee of Medical Journal Editors (ICMJE) provides recommendations for defining the roles of authors and contributors. The ICMJE recommends four main criteria that should be taken into account for authorship: a) substantial contribution to the study design, data collection, data analysis and data interpretation; b) drafting or critically revising the work; c) approval of the final version for publication; and d) accountability for all aspects of the work, including its integrity. The ICMJE emphasizes that those who meet all four criteria should be listed as authors, and it provides guidance on acknowledging those who do not meet all of the above-mentioned criteria but still contributed to the study and whose contribution should be acknowledged. The Contributor Roles Taxonomy (CRediT) is another example of guidance for avoiding authorship malpractices and disputes. A CRediT statement covers 14 contributor roles, including, for example, conceptualization, methodology, analysis, writing and editing the manuscript, visualization and supervision. Many publishers have already adopted the CRediT taxonomy and encourage authors to use it when declaring author contributions during the manuscript submission process.
It is difficult to cope with negative criticism, especially when it is hostile in nature. Always keep in mind that every reviewer is a person, just like you. Maybe they were burdened with work, or maybe they had a bad day at the office. It is nothing personal, and it can happen to anybody. Think about anything useful you can take from such a review: maybe there is advice hidden under the unnecessary criticism. Speak with your superior, or talk to your mentor. If you both consider the review insulting, consider raising the matter with the editor.
A lot has been said about authorship. One of the milestones in tackling authorship questions is the famous set of four criteria from the International Committee of Medical Journal Editors. Those who fulfil the ICMJE criteria should be listed as authors (to ensure credit is given where credit is due and to avoid ghost-writing), and all authors should fulfil all of those criteria (to avoid guest and honorary authorship). Researchers who fulfil some, but not all, of the four criteria should be acknowledged in the manuscript. When a research manuscript is submitted, journals will often ask for a statement of authorship signed by the authors. In this way, journal editors make sure that all authors have been informed and that they can be held accountable if any problem arises.
A variety of journals, such as [https://journals.plos.org/plosone/s/submission-guidelines PLOS ONE], [https://thelancet.com/pb/assets/raw/Lancet/authors/tlrm-info-for-authors.pdf The Lancet] and [https://www.nature.com/nature/for-authors/supp-info Nature], request complete disclosure and transparency from authors, so by not acknowledging your contributors you are disregarding the principle of transparency. It also means you are not being completely honest, because you do not acknowledge that someone has done a certain amount of work for you. Some authors even use the help of professional writers who may, for example, contribute substantially to drafting or write a full first draft of the manuscript. In such cases, authors should acknowledge the contribution and obtain written permission from those named in the acknowledgments.
For successful collaboration it is necessary to:
*''Address mutual expectations.'' Each team member may have different expectations about their contribution and the recognition they will receive. If you discuss these expectations openly, it will be easier for each team member to contribute effectively to the project.
*''Clearly divide and define who is responsible for what task.'' As with expectations, a clear division of labor makes each team member's role in the project clear. This facilitates conversations about authorship.
*''Determine authorship.'' In a collaborative effort, it may appear that each person has a clear role. However, this assumption can lead to confusion and disagreement about authorship. Agree on authorship at the beginning of the project.
*''Communicate frequently.'' Ensure open communication with the team. Without a clear timeline and research goals, it can be easy to lose sight of each other.
*''Agree on access to data.'' Not all parties may have access to all data. A clear conversation at the beginning of the project is necessary to determine who will have access to what information.
*Remember that collaboration in research also means ''a shared responsibility for the integrity of the research.''
Different fields take different stances on self-plagiarism. For example, legal research has much more tolerance for the reuse of one's own work than biomedical science. In 1969, the ''New England Journal of Medicine'' announced that it would no longer publish already published work. This is called the Ingelfinger rule, and it became a norm for high-quality scientific journals. Because of the rise of preprint servers (such as arXiv), journals now tend to loosen that policy. Secondary publications are a different issue, as they clearly state that the work has been previously published; they are produced with the goal of reaching a bigger (and sometimes different) audience, often through translation into other languages. Keep in mind that many scientific journals use computer software to check whether your text is similar to anything already published. Most such software works by screening available online databases for similarities.
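As a crude illustration of how such similarity screening can work (real plagiarism-detection services are far more sophisticated and index vast databases), the sketch below compares word n-gram fingerprints of a submission against a tiny invented corpus.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams, a common fingerprinting unit for text matching."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the two texts' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical corpus standing in for an online database of published texts.
published = {"paper_A": "the mitochondria is the powerhouse of the cell and ..."}
submission = "as is well known the mitochondria is the powerhouse of the cell"

for pid, text in published.items():
    print(pid, round(similarity(submission, text), 2))
```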