What are the best practices? (Has Best Practice)

From The Embassy of Good Science
Available and relevant practice examples (max. 400 words)


The ECoC states that all partners involved in research take full responsibility for the overall integrity of the project. All partners are also expected to have agreed at the outset on the standards of research integrity that will be maintained. <sup>1</sup> This can cover all aspects of the research, from conception to publication, in order to prevent ambiguity at a later stage. The Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations <sup>4</sup> states that all partners involved should openly discuss their customary practices and expectations, including those concerning research integrity. While every individual is fully responsible for their own contribution, there should also be a collective responsibility for the integrity of the project. <sup>4</sup>
*'''A zero-tolerance culture''' towards putative breaches of research integrity. When institutions or individuals turn a blind eye to misbehaviour, they fail to foster a culture of research integrity. A zero-tolerance culture encourages people to report suspicions of malpractice.
*'''A clear reporting system''', including clear procedures and access to guidance and help (e.g. ombudspersons). A scheme has to be in place to handle expressions of concern and actual allegations of potential errors.
*'''Protection of whistleblowers''', with clarity about the rights of both whistleblowers and the persons accused.
Being a reviewer comes with the responsibility of reviewing others fairly. One way to promote fair processes is ''transparent'' peer review. For example, Nature, BMC and EMBO now publish peer review and editorial comments after a manuscript has been accepted for publication, when both reviewers and authors agree to this. In the words of Nature: “in adopting transparent peer review, we are taking a step towards supporting increased openness, accountability and trust in the publishing process.” Transparent peer review is one initiative to encourage fair reviewing and to appreciate the contribution of reviewers. Moreover, having a bullying and harassment policy in place sends a signal that bullying, including unfair reviewing, is inappropriate, thereby promoting good behaviour among scientists. Lastly, conflicts of interest should always be disclosed when professional or personal interests collide with the review process.
'''Data organization'''

Data should be organized in a logical and structured way. Within research groups, consensus on naming and organizing data and files can help in structuring data. The University of Cambridge provides [https://www.data.cam.ac.uk/data-management-guide/organising-your-data this resource], which gives a good overview of what to keep in mind when naming files, organizing folders and more. In addition, they have collected various resources [https://www.data.cam.ac.uk/support/external that can support data management].

'''Pseudonymization'''

When performing research involving human subjects, participants should be pseudonymized or anonymized. Pseudonymization removes the information that would allow identification of individuals from a dataset; the main researchers retain access to an encryption key that links codes back to participants. When John Smith, aged 31, and Jane Doe, aged 25, are in your dataset, you should not pseudonymize them as ‘JS31’ and ‘JD25’. Correct pseudonymization is naming them, for example, participants 001 and 002. According to the GDPR, “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.
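The pseudonymization rule above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a production tool: the `pseudonymize` function and its field names are hypothetical, and in practice the linkage key must be stored separately from the dataset (e.g. on an encrypted drive), accessible only to the main researchers.

```python
def pseudonymize(records):
    """Split a dataset into (pseudonymized records, linkage key).

    Participants receive sequential codes such as '001' and '002',
    which carry no information about the person, instead of codes
    derived from initials or age (such as 'JS31').
    """
    key = {}
    out = []
    for i, rec in enumerate(records, start=1):
        code = f"{i:03d}"                       # '001', '002', ...
        key[code] = rec["name"]                 # kept only in the key
        clean = {k: v for k, v in rec.items() if k != "name"}
        clean["participant"] = code
        out.append(clean)
    return out, key

data, key = pseudonymize([
    {"name": "John Smith", "age": 31},
    {"name": "Jane Doe", "age": 25},
])
print(data)  # [{'age': 31, 'participant': '001'}, {'age': 25, 'participant': '002'}]
```

Only the `data` list is shared within the wider team; the `key` mapping stays with the main researchers so that individuals can be re-identified when legitimately needed.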
'''Laboratory notes'''

When performing experiments, researchers should carefully and comprehensively report their experimental design, the materials and techniques used, and all of their outcomes. Here, reproducibility and transparency are paramount. This is important within the biomedical sciences, physics, chemistry, computer science, and beyond. Laboratory notebooks are important when performing experiments of any kind. The ten rules, as adapted below, can help in organizing and providing complete information in your notebook.
#'''Learn your institution’s or laboratory’s notebook policy''' or create one in your research team.
#'''Select the right medium for your lab notebook''', either electronic or hardcopy.
#'''Make a habit of keeping the lab notebook at your desk''' so that it is at hand while working.
#'''Record all scientific activities in your lab notebook''', including thoughts during meetings, theorizing about problems, etc.
#'''Record every entry with a date, subject and protocol''' to organize your lab journal or notebook.
#'''Keep a record of how every result was produced''' to ensure the reproducibility of your experiments.
#'''Keep an overview of the different study protocols you use''', including adaptations from standard (laboratory) protocols.
#'''Keep a lab notebook that can serve as a legal record of your work''' so you can take ownership of ideas, show you deserve authorship or protect intellectual property rights.
#'''Create a table of contents in your lab notebook''' to ensure your notebook is organized and easily searchable.
#'''Protect your lab notebook.''' Your lab notebook belongs to your institution, since you are funded through your institution. “Your lab notebook is part of the scientific legacy of your laboratory. Therefore, you need to protect your lab notebook.”

'''Fieldnotes'''

For qualitative researchers it is important to record: 1) descriptive information related to data generation; and 2) reflections on the process of data generation and interpretation. Both types of notes help in the interpretation and contextualization of findings. Descriptive information should focus on observations related to the research problem. For the reflective content, the purpose of note taking is to place observations in the perspective of the researcher’s “personal, cultural and situational experiences”. Here, a critical attitude is important, with notes focusing on initial impressions, assumptions, concerns, and surprises.
The VIGOR (Vioxx Gastrointestinal Outcomes Research) study is one of the best-known cases in which researchers ignored safety risks. The study aimed to examine whether a new drug, Vioxx, produced by drug maker Merck, would cause fewer gastrointestinal side effects than naproxen in the treatment of rheumatoid arthritis. Over 8,000 patients participated in the study, half of them taking Vioxx and the other half naproxen. As it turned out, the risk of serious heart problems and death was twice as high for patients using Vioxx as for those using naproxen; however, the researchers decided to ignore the risks and obscure the results. Five years after Vioxx’s launch, Merck withdrew the drug from the market, but by that time it had already sold billions of dollars’ worth of the drug. A study published in the medical journal The Lancet estimates that 88,000 Americans had heart attacks from taking Vioxx, and that 38,000 of them died.
Observational studies, such as cohort or case-control studies, are sometimes overinterpreted in terms of cause-and-effect relationships. Correlation between a factor and an outcome does not necessarily mean causation. When it comes to experimental studies, randomization is sometimes not possible for ethical reasons, which should be taken into account when interpreting the results of such studies. Sometimes outcome measures do not correspond completely to the questions asked in the study, i.e. they are only indirectly connected. All of this is usually addressed in research methodology classes. When planning, doing and reporting research, you can rely on the appropriate EQUATOR reporting guidelines to make sure you have everything accounted for.
There are several ways to deal with this questionable research practice. The first is disclosing the changes made to the research design. The second is preregistration of studies.

'''Disclosing changes'''

Deviating from the research design is allowed in certain instances. For example, new sub-questions can surface as the project progresses, which can only be answered by performing extra analyses or different tests. In those cases, disclosing changes to the research design is considered good practice. These analyses or changes should be presented as exploratory, rather than final.

'''Preregistration'''

Preregistration is the process of submitting the research design before performing the study. It can be seen as an effective way to keep researchers from being lured into changing methods to present results more spectacularly. Some journals also publish protocols and/or accept studies based on their design, proposed methods and relevance, and make a commitment to publish the results. This makes the outcome of the study less important, and the relevance of the study and the rigor of the study design more important.
In a series in The Lancet on research waste, the following steps were suggested for setting research priorities and diminishing research waste (as cited from p. 158):
#“Include objectives in research groups’ strategic plans and define the stakeholders whose opinions and priorities will be considered
#Draw on an existing summary of previous priority-setting exercises in the specialty before undertaking own exercise
#Use available methodological reviews of research priority setting as guidance about how to meet priority-setting objectives
#Ensure that the priority-setting team has the necessary data, information about context, and skill set for their exercise
#Pilot, assess, revise, and update the priority-setting exercise at intervals
#Participate in discussions within the community of interest to share findings and experiences”
How to reform the incentive structure of science is a subject of ongoing research and debate. See, e.g.:
*Bornmann, L., & Williams, R. (2017). Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data. Journal of Informetrics, 11(3), 788–799. doi:[https://doi.org/10.1016/j.joi.2017.06.001 10.1016/j.joi.2017.06.001]
*Krimsky, S. (2004). Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? Rowman & Littlefield.
*Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12(1), 365–384. doi:[https://doi.org/10.1016/j.joi.2018.01.007 10.1016/j.joi.2018.01.007]
Holm and Ploug suggest that researchers should address the following two questions:
#If the results of your current (well planned and well conducted) project point in the opposite direction of the results of your previous research on this topic, would your first reaction be to reanalyse the data and reconsider your methods, or to reconsider your previous conclusions?
#If your findings were exactly the same as those of the opposing researchers in this field, would your policy recommendations be any different from the recommendations of the opposing group?

Four questions about polarized research:
*'''Why does polarized research exist?''' Because researchers have different perspectives and interests.
*'''Is polarized research fraud?''' No, because it is based on valid scientific methods.
*'''How does polarized research occur?''' Researchers may use different definitions, indexes, end-points, models, statistical methods, interpretations, etc., making their results come out very differently.
*'''How can we avoid polarized research?''' One suggestion is to require authors to declare a “polarized conflict of interest” when submitting papers. Another is to have editors and publishers check for polarized conflicts of interest.
Lists of predatory publishers (blacklists) as well as lists of high-quality open access publishers (whitelists) are of great value to researchers and decision makers.

===Blacklists===
The University of Colorado librarian Jeffrey Beall developed a list of potential predatory journals in 2008, which has since been taken offline because of certain flaws in its methodology.
*[https://beallslist.weebly.com/ Beall's list]
*[https://www2.cabells.com/blacklist Cabells' lists]
*[https://predatoryjournals.com/ Stop Predatory Journals]

===Whitelists===
*[https://doaj.org/ Directory of Open Access Journals (DOAJ)]

===Choosing a journal===
Stefan Eriksson and Gert Helgesson have identified 25 signs of predatory publishing, and argue that the more points on the list apply to the journal at hand, the more skeptical you should be:
#The publisher is not a member of any recognized professional organization committed to best publishing practices (like COPE or EASE)
#The journal is not indexed in well-established electronic databases (like MEDLINE or Web of Science)
#The publisher claims to be a "leading publisher" even though it just got started
#The journal and the publisher are unfamiliar to you and all your colleagues
#The papers of the journal are of poor research quality, and may not be academic at all (for instance allowing for obvious pseudo-science)
#There are fundamental errors in the titles and abstracts, or frequent and repeated typographical or factual errors throughout the published papers
#The journal website is not professional
#The journal website does not present an editorial board or gives insufficient detail on names and affiliations
#The journal website does not reveal the journal's editorial office location or uses an incorrect address
#The publishing schedule is not clearly stated
#The journal title claims a national affiliation that does not match its location (such as "American Journal of ..." while being located on another continent) or includes "International" in its title while having a single-country editorial board
#The journal mimics another journal title or the website of said journal
#The journal provides an impact factor in spite of the fact that the journal is new (which means that the impact cannot yet be calculated)
#The journal claims an unrealistically high impact based on spurious alternative impact factors (such as 7 for a bioethics journal, which is far beyond the top notation)
#The journal website posts non-related or non-academic advertisements
#The publisher of the journal has released an overwhelmingly large suite of new journals at one occasion or during a very short period of time
#The editor in chief of the journal is editor in chief also for other journals with widely different focus
#The journal includes articles (very far) outside its stated scope
#The journal sends you an unsolicited invitation to submit an article for publication, while making it blatantly clear that the editor has absolutely no idea about your field of expertise
#Emails from the journal editor are written in poor language, include exaggerated flattering (everyone is a leading profile in the field), and make contradictory claims (such as "You have to respond within 48 h" while later on saying "You may submit your manuscript whenever you find convenient")
#The journal charges a submission or handling fee, instead of a publication fee (which means that you have to pay even if the paper is not accepted for publication)
#The types of submission/publication fees and what they amount to are not clearly stated on the journal's website
#The journal gives unrealistic promises regarding the speed of the peer review process (hinting that the journal's peer review process is minimal or non-existent)—or boasts an equally unrealistic track-record
#The journal does not describe copyright agreements clearly or demands the copyright of the paper while claiming to be an open access journal
#The journal displays no strategies for handling misconduct or conflicts of interest, or for securing the archiving of articles when no longer in operation

A number of other initiatives have put together criteria for journal selection:
*[https://thinkchecksubmit.org/ Guideline to choose the right journal for research]
*[https://guides.mclibrary.duke.edu/beinformed Be iNFORMEd: Checklist] - A checklist to assess the quality of a journal or publisher

==Other information==
*[http://www.wame.org/identifying-predatory-or-pseudo-journals The World Association of Medical Editors (WAME) statement on predatory publishing]
*[https://www.tandfonline.com/doi/full/10.1080/03007995.2019.1646535 The American Medical Writers Association (AMWA), European Medical Writers Association (EMWA), and International Society for Medical Publication Professionals (ISMPP) Joint Position Statement on Predatory Publishing]
*[http://www.icmje.org/news-and-editorials/fake_predatory_pseudo_journals_dec17.html ICMJE document on predatory publishing]
==QRPs==
According to research integrity experts who participated in a survey, there are a number of QRPs that occur frequently and have a high impact on science. In relation to study design, for instance, QRPs include presenting misleading information in a grant application or ignoring the risks of unexpected findings, or safety risks to study participants, workers or the environment. Under data collection falls behaviour such as collecting more data after noticing that statistical significance is almost reached, or keeping inadequate notes of the research process. In relation to reporting, examples of QRPs are hypothesizing after the results are known (HARKing), concealing results that contradict earlier findings, or not publishing a study with negative results. Moreover, selective citing to enhance your own findings or to please editors and colleagues is reported to occur often. QRPs that fall under collaboration are demanding or accepting authorship for which you do not qualify and reviewing your own papers. In addition, the misbehaviour that is estimated to occur the most and to have a high impact on truth is insufficiently supervising junior coworkers. The misbehaviour that occurs the most and has the highest impact on trust is using the published ideas of others without referencing them.

=='''Prevention'''==
A way to counter QRPs could be to create awareness about research integrity issues and to alter the current reward system. Instead of rewarding the number of publications, alternative aspects that could be rewarded include a researcher's commitment to pre-registration, data sharing and open science.
One technique for detecting the fabrication of numbers is to check the “rightmost digits” of the collected data. The “rightmost digit” is the digit a number ends in. It is considered “the most random digit of a number,” which means that the rightmost digits in a data set should be uniformly distributed, as in a lottery. Since the rightmost digits in each study should be unpredictable, the appearance of any pattern is a reason to suspect data fabrication. Research conducted by Mosimann et al. in 1995 showed that most people cannot generate random numbers when fabricating data, which makes it possible to detect potentially fabricated data. They also developed a program implementing a “chi-square test for uniformity of the digit distributions”, which measures how random the produced digits are. If the distribution of digits is not uniform, the numbers may have been fabricated. There are other methods that can be used to detect the fabrication of rightmost digits. For example, some journals have adopted a policy of statistical review for all papers containing numerical data. In addition, published graph data can be compared with “raw” notebook or computer data to determine whether the numbers have been reported correctly. Authors should present the raw data that supports their findings, and journals, universities and granting agencies should promote this practice. Some argue that the use of statistical methods will significantly reduce the fabrication of numerical data.
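The terminal-digit check described above can be sketched in Python. This is an illustrative sketch, not Mosimann et al.'s original program: the function name is our own, and the 5% critical value for 9 degrees of freedom (about 16.92, from standard chi-square tables) is supplied here for illustration.

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of rightmost digits.

    If terminal digits are 'as random as a lottery', each digit 0-9 is
    equally likely, giving 9 degrees of freedom; statistics above ~16.92
    are significant at the 5% level and a reason to inspect the data
    more closely (not proof of fabrication).
    """
    digits = [int(str(abs(int(v)))[-1]) for v in values]
    counts = Counter(digits)
    expected = len(digits) / 10.0  # uniform expectation per digit
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# A uniform spread of terminal digits gives a statistic of 0, while a
# suspicious set in which every value ends in the same digit does not.
print(terminal_digit_chi2(range(100)))   # digits 0-9 appear 10x each -> 0.0
print(terminal_digit_chi2([15] * 100))   # every value ends in 5 -> 900.0
```

A real screening workflow would apply such a test per study or per table and treat large statistics only as a prompt for closer review, e.g. comparison against the raw notebook data mentioned above.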
The best practice is to preregister study protocols online in a registry. When describing their study designs, researchers should be as transparent and complete as possible. To date, only two reliable animal study registries are available: [https://preclinicaltrials.eu/ Preclinicaltrials.eu] and the [https://www.animalstudyregistry.org/asr_web/index.action Animal Study Registry]. It is also possible to use general registries, e.g. the Open Science Framework; however, the registration forms will not be tailored to animal studies specifically. If a study could not be preregistered, it is still worthwhile to register its protocol at a later stage, especially if the study could not be published. Nevertheless, prospective registration (i.e., registration before the experiments) should be encouraged.
In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that clinical trials beginning after July 1, 2005, would fall under a new trial registration policy. To be published in member journals, trials would have to be registered in an approved trial registry prior to the enrollment of the first participant. Since 2005, the ICMJE has reiterated that registering a prospective study should be a condition of publication, and after the announcement several journals endorsed this policy. Registration must occur, prior to enrollment of the first study participant, in a trial registry that meets the quality criteria developed by the WHO. However, adherence to this requirement remains low among both researchers and journal editors and, unfortunately, not all clinical trials are registered before they start. Recent findings suggest that the reasons for low adherence among researchers include a lack of awareness of the criteria, misunderstandings regarding the ICMJE's definition of a clinical trial, and difficulties with registration. On the part of journal editors, the main reason is that not all journals are equally committed to enforcing the registration requirement. The Committee on Publication Ethics has suggested that "it is probably best to judge each paper on a case by case basis."
The most relevant examples are studies needed for drug approval. The approval procedure usually requires a series of clinical trials divided into three phases. According to the European Medicines Agency (EMA), phase I and II trials can involve model building; however, phase III trials are always designed as confirmatory trials. Both the EMA and the Food and Drug Administration (FDA) require statistical pre-registration before the beginning of a trial. Exploratory trials aim to produce evidence of the effectiveness of new drugs; their results then lead to confirmatory trials. The need for pre-registration of trials is demonstrated by the following case. In 2004, the New York attorney general’s office filed a lawsuit against the pharmaceutical company GlaxoSmithKline. Four unpublished clinical trials showed evidence that use of the antidepressant Paxil increases the risk of suicidal tendencies among young people. This lawsuit helped raise awareness that studies need to be pre-registered. That same year, the International Committee of Medical Journal Editors began to require pre-registration of clinical trials. US law also requires clinical trials to be pre-registered. Journals and research funders support pre-registration, and some organizations promote it as an important step towards openness and transparency in research. There have been other interesting efforts to promote pre-registration, such as the Preregistration Challenge, sponsored by the Center for Open Science. This campaign offered $1,000 awards to researchers who pre-registered their studies and published their results within a deadline. These initiatives have led to a certain “cultural shift”: there are now more than 8,000 pre-registrations on the Open Science Framework for research in different disciplines. APS journals also began to promote pre-registration in 2014. From 2014 to 2019, 43 of 154 articles published in Psychological Science earned a “Preregistered badge” for pre-registering the design and analysis plan of their studies. Although pre-registration has demonstrated benefits for the trustworthiness of research, the practice still needs to be widely adopted across the scientific community.
In 2012, ORCID launched its Registry, through which researchers can be assigned a unique identifier, a 16-character code composed of the digits 0-9 (with a possible final "X" as a checksum character), and thus distinguish themselves from other researchers. As of 2019, there were more than 7 million ORCID accounts registered to individual researchers, universities, scientific publishers and commercial companies. Increasingly, funding organisations require that their applicants provide an ORCID identifier.
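The structure of an ORCID iD can be checked programmatically: the final character is a checksum computed over the first 15 digits with the ISO 7064 MOD 11-2 algorithm. The sketch below illustrates this; the function names are our own for illustration, not part of any official ORCID library, and the check is purely structural.

```python
def orcid_check_char(base_digits: str) -> str:
    """ISO 7064 MOD 11-2 check character for the first 15 digits
    of an ORCID iD (hyphens removed)."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Structural validation only (length and checksum); it does not
    verify that the iD is actually registered with ORCID."""
    chars = orcid.replace("-", "")
    if len(chars) != 16 or not chars[:15].isdigit():
        return False
    return orcid_check_char(chars[:15]) == chars[15]

# ORCID's documentation uses 0000-0002-1825-0097 as a sample iD.
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0096"))  # False (wrong check digit)
```

A checksum of this kind catches most transcription errors, which is why funders and publishers can safely reject malformed identifiers before ever querying the registry.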
One of the best examples of the application of digital tools within the humanities is the collaborative, interdisciplinary research project [http://republicofletters.stanford.edu/index.html Mapping the Republic of Letters], developed by Stanford University in 2010 and funded by the National Endowment for the Humanities (NEH). The aim of the project is to map the 17<sup>th</sup>- and 18<sup>th</sup>-century correspondence of prominent and influential intellectuals in the Age of Enlightenment. The “Republic of Letters” was a self-proclaimed community of scholars who exchanged their ideas via handwritten letters across Europe and the Americas. The researchers on the project used metadata to produce maps, charts and other visual tools. These modern visualization tools provide a greater understanding of the distribution of the letters over hundreds of years and help identify geographic “hot-spots” in the archive. They shed light on, for example, Voltaire’s correspondence, which consists of about 15,000 letters. The visualization of the letter exchanges on a map shows the places where Voltaire traveled and reveals patterns in his writing at specific times and in specific places. These maps of correspondence raise new questions and facilitate new interpretations of the letters and related documents. The project also provides a basis for further research, not only concerning the Republic of Letters but also on related topics. The use of digital tools in the humanities has led to the formation of organizations that foster research in the digital humanities. One of them is the European Association for Digital Humanities (EADH), established in 1973 under the name of the Association for Literary and Linguistic Computing. This organization is one of the constituent organizations of the Alliance of Digital Humanities Organizations (ADHO), formed in 2005, which supports and promotes digital research and education in all the arts and humanities disciplines. In addition, numerous universities now offer undergraduate and graduate courses and programs in the digital humanities.
The main options analysed are the establishment of permanent European bodies to support institutions in investigating, overseeing or advising on research misconduct investigations. A European body to carry out investigations on behalf of institutions would ensure that investigations are carried out consistently, reduce the risk of conflicts of interest, allow expertise to develop, and professionalize the handling of cases. It would be particularly helpful for institutions that do not yet have any structures or experience in handling research misconduct allegations. Some obstacles would have to be overcome for such a body to be effective. Institutions might be reluctant to expose internal problems for fear of damaging their reputation and losing their autonomy; national regulations might limit access to data; and some countries might not recognize its legitimacy. A further option would be to set up an oversight body that would not conduct investigations but only review investigations carried out by institutions, to make sure that they have followed appropriate procedures previously agreed on internationally. This might motivate institutions to follow those procedures, and so bring more homogeneity to the handling of allegations across Europe. In addition, an external check would help control and lower the risk of conflicts of interest. On the other hand, depending on its status, such a body might not be able to require an institution to redo a poorly conducted investigation, and if it could, this would require more resources for each investigation. Another role a European body could play is advisory. It could advise institutions on how to create structures and policies to prevent research misconduct and protect integrity, and it could even set up a database of experts to assist investigation committees. The main concern about such a body is that it might appear redundant or in conflict with existing national advisory bodies.